10 interesting stories served every morning and every evening.




1 596 shares, 64 trendiness

Gemini 3.1 Pro


Gemini 3.1 Pro is the next iteration in the Gemini 3 series of models, a suite of highly capable, natively multimodal reasoning models. As of this model card’s date of publication, Gemini 3.1 Pro is Google’s most advanced model for complex tasks. Gemini 3.1 Pro can comprehend vast datasets and challenging problems from massively multimodal information sources, including text, audio, images, video, and entire code repositories.

Text strings (e.g., a question, a prompt, document(s) to be summarized), images, audio, and video files, with a context window of up to 1M tokens.

Gemini 3.1 Pro is based on Gemini 3 Pro. For more information about the model architecture for Gemini 3.1 Pro, see the Gemini 3 Pro model card.

Gemini 3.1 Pro is based on Gemini 3 Pro. For more information about the training dataset for Gemini 3.1 Pro, see the Gemini 3 Pro model card.

For more information about the training data processing for Gemini 3.1 Pro, see the Gemini 3 Pro model card.

Gemini 3.1 Pro is based on Gemini 3 Pro. For more information about the hardware for Gemini 3.1 Pro and our continued commitment to operate sustainably, see the Gemini 3 Pro model card.

Gemini 3.1 Pro is based on Gemini 3 Pro. For more information about the software for Gemini 3.1 Pro, see the Gemini 3 Pro model card.

Gemini 3.1 Pro was evaluated across a range of benchmarks covering reasoning, multimodal capabilities, agentic tool use, multilingual performance, and long context. Additional benchmarks and details on the approach, results, and their methodologies can be found at deepmind.google/models/evals-methodology/gemini-3-1-pro.

Gemini 3.1 Pro significantly outperforms Gemini 3 Pro across a range of benchmarks requiring enhanced reasoning and multimodal capabilities. Results as of February 2026 are listed below:

Gemini 3.1 Pro is the next iteration in the Gemini 3 series of models, a suite of highly intelligent and adaptive models capable of handling real-world complexity: solving problems that require enhanced reasoning and intelligence, creativity, and strategic planning, and making improvements step by step. It is particularly well-suited for applications that require:

For more information about the known limitations for Gemini 3.1 Pro, see the Gemini 3 Pro model card.

For more information about the acceptable usage for Gemini 3.1 Pro, see the Gemini 3 Pro model card.

For more information about the evaluation approach for Gemini 3.1 Pro, see the Gemini 3 Pro model card.

For more information about the safety policies for Gemini 3.1 Pro, see the Gemini 3 Pro model card.

Results for some of the internal safety evaluations conducted during the development phase are listed below. The evaluation results are for automated evaluations, not human evaluation or red teaming. Scores are provided as an absolute percentage increase or decrease in performance compared to the indicated model, as described below. Overall, Gemini 3.1 Pro outperforms Gemini 3 Pro across both safety and tone, while keeping unjustified refusals low. We mark improvements in green and regressions in red. Safety evaluations of Gemini 3.1 Pro produced results consistent with the original Gemini 3 Pro safety assessment.

Automated evaluation measuring the model’s ability to respond to borderline prompts while remaining safe.

We continue to improve our internal evaluations, including refining automated evaluations to reduce false positives and negatives, as well as updating query sets to ensure balance and maintain a high standard of results. The performance results reported below are computed with the improved evaluations and thus are not directly comparable with performance results found in previous Gemini model cards. We expect variation in our automated safety evaluation results, which is why we review flagged content to check for egregious or dangerous material. Our manual review confirmed losses were overwhelmingly either a) false positives or b) not egregious.

We conduct manual red teaming via specialist teams who sit outside of the model development team. High-level findings are fed back to the model team. For child safety evaluations, Gemini 3.1 Pro satisfied required launch thresholds, which were developed by expert teams to protect children online and meet Google’s commitments to child safety across our models and Google products. For content safety policies generally, including child safety, we saw similar safety performance compared to Gemini 3 Pro.

For more information about the risks and mitigations for Gemini 3.1 Pro, see the Gemini 3 Pro model card.

Our Frontier Safety Framework includes rigorous evaluations that address risks of severe harm from frontier models, covering five risk domains: CBRN (chemical, biological, radiological and nuclear information risks), cyber, harmful manipulation, machine learning R&D and misalignment. Our frontier safety strategy is based on a “safety buffer” to prevent models from reaching critical capability levels (CCLs), i.e. if a frontier model does not reach the alert threshold for a CCL, we can assume models developed before the next regular testing interval will not reach that CCL. We conduct continuous testing, evaluating models at a fixed cadence and when a significant capability jump is detected. (Read more about this in our approach to technical AGI safety.)

Following FSF protocols, we conducted a full evaluation of Gemini 3.1 Pro (focusing on Deep Think mode). We found that the model remains below alert thresholds for the CBRN, harmful manipulation, machine learning R&D, and misalignment CCLs. As previous models passed the alert threshold for cyber, we performed additional testing in this domain on Gemini 3.1 Pro with and without Deep Think mode, and found that the model remains below the cyber CCL. More details on our evaluations and the mitigations we deploy can be found in the Gemini 3 Pro Frontier Safety Framework Report.

(Deep Think mode) The model can provide highly accurate and actionable information, but still fails to offer the novel or sufficiently complete and detailed instructions for critical stages that would significantly enhance the capabilities of low- to medium-resourced threat actors, as required for the CCL. We continue to deploy mitigations in this domain.

(3.1 Pro) We conducted additional testing on the model in this domain, as Gemini 3 Pro had previously reached the alert threshold. The model shows an increase in cyber capabilities compared to Gemini 3 Pro. As with Gemini 3 Pro, the model has reached the alert threshold but still does not reach the levels of uplift required for the CCL.

(Deep Think mode) Accounting for inference costs, the model with Deep Think mode performs considerably worse than without Deep Think mode. Even at high levels of inference, results for the model with Deep Think mode do not suggest higher capability than without Deep Think mode.

We continue to deploy mitigations in this domain.

(Deep Think mode) Evaluations indicated that the model showed higher manipulative efficacy on belief-change metrics compared to a non-AI baseline, with a maximum odds ratio of 3.6x (the same as Gemini 3 Pro), and did not reach the alert threshold.

(Deep Think mode) The model shows gains on RE-Bench compared to Gemini 3 Pro, with a human-normalised average score of 1.27 compared to Gemini 3 Pro’s 1.04. On one particular challenge, Optimise LLM Foundry, it scores double the human-normalised baseline (reducing the runtime of a fine-tuning script from 300 seconds to 47 seconds, compared to the human reference solution of 94 seconds). However, the model’s average performance across all challenges remains beneath the alert threshold for the CCLs.

(Deep Think mode) On stealth evaluations, the model performs similarly to Gemini 3 Pro. On situational awareness, the model is stronger than Gemini 3 Pro: on three challenges which no other model has been able to consistently solve (max tokens, context size mod, and oversight frequency), the model achieves a success rate of almost 100%. However, its performance on other challenges is inconsistent, and thus the model does not reach the alert threshold.

...

Read the original on deepmind.google »

2 477 shares, 57 trendiness

AI makes you boring

This post is an elaboration on a comment I made on Hacker News recently, on a blog post that showed an increase in volume and decline in quality among the Show HN submissions. I don’t actually mind AI-aided development, a tool is a tool and should be used if you find it useful, but I think the vibe coded Show HN projects are overall pretty boring. They generally don’t have a lot of work put into them, and as a result, the author (pilot?) hasn’t generally thought too much about the problem space, and so there isn’t really much of a discussion to be had.

The cool part about pre-AI Show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.

I feel like this is what AI has done to the programming discussion. It draws in boring people with boring projects who don’t have anything interesting to say about programming.

This isn’t something that is limited to Show HN or even Hacker News, it’s something you see everywhere.

While part of this phenomenon is likely just an upswing of people who don’t usually do programming getting swept up in the fun of building a product, I want to build an argument that it’s much worse than that.

AI models are extremely bad at original thinking, so any thinking that is offloaded to an LLM is as a result usually not very original, even if the models are very good at treating your inputs to the discussion as amazing genius-level insights.

This may be a feature if you are exploring a topic you are unfamiliar with, but it’s a fatal flaw if you are writing a blog post or designing a product or trying to do some other form of original work.

Some will argue that this is why you need a human in the loop to steer the work and do the high-level thinking. That premise is fundamentally flawed. Original ideas are the result of the very work you’re offloading to LLMs. Having humans in the loop doesn’t make the AI think more like people, it makes the human thought more like AI output.

The way human beings tend to have original ideas is to immerse themselves in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.

Ideas are then further refined when you try to articulate them. This is why we make students write essays. It’s also why we make professors teach undergraduates.

Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It’s the work that matters.

You don’t build muscle using an excavator to lift weights. You don’t produce interesting thoughts using a GPU to think.

...

Read the original on www.marginalia.nu »

3 406 shares, 92 trendiness

A smarter model for your most complex tasks


Last week, we released a major update to Gemini 3 Deep Think to solve modern challenges across science, research and engineering. Today, we’re releasing the upgraded core intelligence that makes those breakthroughs possible: Gemini 3.1 Pro. We are shipping 3.1 Pro across our consumer and developer products to bring this progress in intelligence to your everyday applications:

* For developers, in preview via the Gemini API in Google AI Studio, Gemini CLI, our agentic development platform Google Antigravity, and Android Studio

* For enterprises, in Vertex AI and Gemini Enterprise

* For consumers, via the Gemini app and NotebookLM

Building on the Gemini 3 series, 3.1 Pro represents a step forward in core reasoning. 3.1 Pro is a smarter, more capable baseline for complex problem-solving. This is reflected in our progress on rigorous benchmarks. On ARC-AGI-2, a benchmark that evaluates a model’s ability to solve entirely new logic patterns, 3.1 Pro achieved a verified score of 77.1%. This is more than double the reasoning performance of 3 Pro.

3.1 Pro is designed for tasks where a simple answer isn’t enough, taking advanced reasoning and making it useful for your hardest challenges. This improved intelligence can help in practical applications — whether you’re looking for a clear, visual explanation of a complex topic, a way to synthesize data into a single view, or bringing a creative project to life.

Code-based animation: 3.1 Pro can generate website-ready, animated SVGs directly from a text prompt. Because these are built in pure code rather than pixels, they remain crisp at any scale and maintain incredibly small file sizes compared to traditional video.

Complex system synthesis: 3.1 Pro utilizes advanced reasoning to bridge the gap between complex APIs and user-friendly design. In this example, the model built a live aerospace dashboard, successfully configuring a public telemetry stream to visualize the International Space Station’s orbit.

Interactive design: 3.1 Pro codes a complex 3D starling murmuration. It doesn’t just generate the visual code; it builds an immersive experience where users can manipulate the flock with hand-tracking and listen to a generative score that shifts based on the birds’ movement. For researchers and designers, this provides a powerful way to prototype sensory-rich interfaces.

Creative coding: 3.1 Pro can translate literary themes into functional code. When prompted to build a modern personal portfolio for Emily Brontë’s “Wuthering Heights,” the model didn’t just summarize the text. It reasoned through the novel’s atmospheric tone to design a sleek, contemporary interface, creating a website that captures the essence of the protagonist.

Since releasing Gemini 3 Pro in November, your feedback and the pace of progress have driven these rapid improvements. We are releasing 3.1 Pro in preview today to validate these updates and continue to make further advancements in areas such as ambitious agentic workflows before we make it generally available soon.

Starting today, Gemini 3.1 Pro in the Gemini app is rolling out with higher limits for users on the Google AI Pro and Ultra plans. 3.1 Pro is also now available in NotebookLM exclusively for Pro and Ultra users. And developers and enterprises can access 3.1 Pro now in preview in the Gemini API via AI Studio, Antigravity, Vertex AI, Gemini Enterprise, Gemini CLI and Android Studio. We can’t wait to see what you build and discover with it.
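For developers trying the preview, access works like any other Gemini model via the API. Below is a minimal sketch using the google-genai Python SDK; the model ID string is an assumption for illustration, since the exact preview identifier isn’t stated here (check Google AI Studio for the real one).

from google import genai

# Sketch only: reads the API key from the GOOGLE_API_KEY environment variable.
client = genai.Client()

# Assumed model ID for illustration; the actual preview identifier may differ.
response = client.models.generate_content(
    model="gemini-3.1-pro-preview",
    contents="Generate a website-ready animated SVG of a bouncing ball.",
)
print(response.text)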

...

Read the original on blog.google »

4 405 shares, 50 trendiness

Google Cloud console

Your page may be loading slowly because you’re building optimized sources. If you intended on using uncompiled sources, please click this link.

Google Cloud Console has failed to load JavaScript sources from www.gstatic.com.

Possible reasons are:

* www.gstatic.com or its IP addresses are blocked by your network administrator

* Google has temporarily blocked your account or network due to excessive automated requests

Please contact your network administrator for further assistance.

...

Read the original on console.cloud.google.com »

5 366 shares, 55 trendiness

micasa — your house, in a terminal

Your house is quietly plotting to break while you sleep—and you’re dreaming about redoing the kitchen.

micasa tracks maintenance, projects, incidents, appliances, vendors, quotes, and documents—all from your terminal.

When did I last change the furnace filter?

What if we finally did the backyard? How much would it actually cost to… Quotes side by side, vendor history, and the math you need to actually decide.

Is the dishwasher still under warranty? Appliance tracking with purchase dates, warranty status, and maintenance history tied to each one.

The basement is leaking again. Log incidents with severity and location, link them to appliances and vendors, and resolve them when fixed.

Who did we use last time? A vendor directory with contact info, quote history, and every job they’ve done for you.

Attach files—manuals, invoices, photos—directly to projects and appliances. Stored in the same SQLite file.

Or grab a binary from the latest release. Linux, macOS, and Windows binaries are available for amd64 and arm64.

micasa --demo        # poke around with sample data
micasa               # start fresh with your own house
micasa --print-path  # show where the database lives

Linux, macOS, Windows. One SQLite file, your machine. Back it up with cp.

Vim-style modal keys. nav to browse, edit to change things. Sort by any column, jump to columns with fuzzy search, hide what you don’t need, drill into related records. The full list is in the keybinding reference.

...

Read the original on micasa.dev »

6 345 shares, 11 trendiness

LangChain Integration for Vector Support for SQL-based AI applications

Microsoft SQL now supports native vector search capabilities in Azure SQL and SQL database in Microsoft Fabric. We also released the langchain-sqlserver package, enabling the management of SQL Server as a vector store in LangChain. In this step-by-step tutorial, we will show you how to add generative AI features to your own applications with just a few lines of code using Azure SQL DB, LangChain, and LLMs.

The Harry Potter series, written by J. K. Rowling, is a globally beloved collection of seven books that follow the journey of a young wizard, Harry Potter, and his friends as they battle the dark forces led by the evil Voldemort. Its captivating plot, rich characters, and imaginative world have made it one of the most famous and cherished series in literary history.

By using a well-known dataset, we can create engaging and relatable examples that resonate with a wide audience.

This sample dataset from Kaggle contains 7 .txt files, one for each of the 7 Harry Potter books. For this demo we will only be using the first book — Harry Potter and the Sorcerer’s Stone.

Whether you’re a tech enthusiast or a Potterhead, we have two exciting use cases to explore:

A Q&A system that leverages the power of the SQL vector store & LangChain to provide accurate and context-rich answers from the Harry Potter books.

Next, we will push the creative limits of the application by teaching it to generate new AI-driven Harry Potter fan fiction based on our existing dataset of Harry Potter books. This feature is sure to delight Potterheads, allowing them to explore new adventures and create their own magical stories.

The code lives in an integration package, langchain-sqlserver:

!pip install langchain-sqlserver==0.1.1

In this example, we will use a dataset consisting of text files from the Harry Potter books, which are stored in Azure Blob Storage.

LangChain has a seamless integration with Azure Blob Storage, making it easy to load documents directly from it.

Additionally, LangChain provides a method to split long text into smaller chunks using langchain-text-splitters, which is essential since Azure OpenAI embeddings have an input token limit.
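As a rough sketch, assuming placeholder storage credentials and container name (both hypothetical), the load-and-split step might look like this:

from langchain_community.document_loaders import AzureBlobStorageContainerLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Placeholders: substitute your storage connection string and container name.
loader = AzureBlobStorageContainerLoader(
    conn_str="<azure-storage-connection-string>",
    container="harry-potter-books",
)
documents = loader.load()

# Chunk sizes are illustrative; pick values that fit your embedding model's limit.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(documents)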

In this example we use Azure OpenAI to generate embeddings of the split documents; however, you can use any of the different embeddings provided in LangChain.

After splitting the long text files of Harry Potter books into smaller chunks, you can generate vector embeddings for each chunk using the Text Embedding Model available through Azure OpenAI. Notice how we can accomplish this in just a few lines of code!

* First, initialize the vector store and set up the embeddings using Azure OpenAI.

* Once we have our vector store, we can add items to it using the add_documents function, as sketched below.
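A minimal sketch of those two steps, assuming placeholder Azure credentials and an embedding deployment name (parameter names follow the langchain-sqlserver package; verify against its docs):

from langchain_openai import AzureOpenAIEmbeddings
from langchain_sqlserver import SQLServer_VectorStore

# Assumes AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY are set in the
# environment; the deployment name is a placeholder.
embeddings = AzureOpenAIEmbeddings(azure_deployment="text-embedding-3-small")

vector_store = SQLServer_VectorStore(
    connection_string="<azure-sql-odbc-connection-string>",  # placeholder
    embedding_function=embeddings,
    embedding_length=1536,  # must match the embedding model's output dimension
    table_name="harry_potter_vectors",
)

# 'chunks' is the list of split documents from the earlier loading step.
vector_store.add_documents(chunks)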

Once your vector store has been created and the relevant documents have been added, you can perform similarity search.

The vector store also supports a set of filters that can be applied against the metadata fields of the documents. By applying filters based on specific metadata attributes, users can limit the scope of their searches, concentrating only on the most relevant data subsets.

Performing a simple similarity search can be done with similarity_search_with_score.
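For instance (the query text and the metadata field name below are made up for illustration):

# Top-10 chunks most similar to the query, with distance scores.
results = vector_store.similarity_search_with_score(
    "Who gave Harry his first broomstick?", k=10
)
for doc, score in results:
    print(round(score, 4), doc.page_content[:80])

# Narrow the search with a metadata filter; the field name "source" is an
# assumption about how the loader tagged each document.
filtered = vector_store.similarity_search(
    "Who gave Harry his first broomstick?",
    k=10,
    filter={"source": "Book1.txt"},
)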

The Q&A function allows users to ask specific questions about the story, characters, and events, and get concise, context-rich answers. This not only enhances their understanding of the books but also makes them feel like they’re part of the magical universe.

The LangChain vector store simplifies building sophisticated Q&A systems by enabling efficient similarity searches to find the top 10 relevant documents based on the user’s query.

The retriever is created from the vector_store, and the question-answer chain is built using the create_stuff_documents_chain function.

A prompt template is crafted using the ChatPromptTemplate class, ensuring structured and context-rich responses.

Often in Q&A applications it’s important to show users the sources that were used to generate the answer. LangChain’s built-in create_retrieval_chain will propagate retrieved source documents to the output under the “context” key:

Read more about LangChain RAG tutorials & the terminologies mentioned above here.

We can now ask the user’s question and receive responses from the Q&A system.
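Wired together, the pieces above might look like this (the chat deployment name is a placeholder):

from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(azure_deployment="gpt-4o")  # placeholder deployment name

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the question using only the provided context:\n\n{context}"),
    ("human", "{input}"),
])

# Retriever over the vector store; the chain stuffs the top 10 docs into the prompt.
retriever = vector_store.as_retriever(search_kwargs={"k": 10})
qa_chain = create_retrieval_chain(retriever, create_stuff_documents_chain(llm, prompt))

result = qa_chain.invoke({"input": "How does Harry get his scar?"})
print(result["answer"])
# The retrieved source documents are propagated under the "context" key.
for doc in result["context"]:
    print(doc.metadata)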

Potterheads are known for their creativity and passion for the series. With this they can craft their own stories based on a given user prompt, explore new adventures, and even create alternate endings. Whether it’s imagining a new duel between Harry and Voldemort or crafting a personalized Hogwarts bedtime story for your kiddo, the possibilities are endless.

The fan fiction function uses the embeddings in the vector store to generate new stories (a sketch follows the list below):

* Retrieving relevant passages: When a user provides a prompt for a fan fiction story, the function first retrieves relevant passages from the SQL vector store. The vector store contains embeddings of the text from the Harry Potter books, which allows it to find passages that are contextually similar to the user’s prompt.

* Formatting the retrieved passages: The retrieved passages are then formatted into a coherent context. This involves combining the text from the retrieved passages into a single string that can be used as input for the language model.

* Generating the story: The formatted context, along with the user’s prompt, is fed into a language model (GPT-4o) to generate the fan fiction story. The language model uses the context to ensure that the generated story is relevant and coherent, incorporating elements from the retrieved passages.
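A compact sketch of those three steps, reusing the vector_store and llm objects from the earlier snippets (the helper name is hypothetical):

def generate_fan_fiction(user_prompt: str) -> str:
    # 1. Retrieve passages contextually similar to the user's prompt.
    passages = vector_store.similarity_search(user_prompt, k=10)
    # 2. Format them into a single context string.
    context = "\n\n".join(doc.page_content for doc in passages)
    # 3. Generate the story grounded in that context.
    messages = [
        ("system", "Write Harry Potter fan fiction grounded in this context:\n\n" + context),
        ("human", user_prompt),
    ]
    return llm.invoke(messages).content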

Let’s imagine we prompt with this

Don’t miss the discussion around “Vector Support in SQL — Public Preview” by young Davide on the Hogwarts Express! Even the Wizards are excited!

As you can see, along with generating the story it also mentions the sources of inspiration from the vector store above.

Combining the Q&A system with the fan fiction generator offers a unique and immersive reading experience. If users come across a puzzling moment in the books, they can ask the Q&A system for clarification. If they’re inspired by a particular scene, they can use the fan fiction generator to expand on it and create their own version of events. This interactive approach makes reading more engaging and enjoyable.

You can find this notebook in the GitHub repo along with other samples: https://github.com/Azure-Samples/azure-sql-db-vector-search.

We’d love to hear your thoughts on this feature! Please share how you’re using it in the comments below and let us know any feedback or suggestions for future improvements. If you have specific requests, don’t forget to submit them through the Azure SQL and SQL Server feedback portal, where other users can also contribute and help us prioritize future developments. We look forward to hearing your ideas!

...

Read the original on devblogs.microsoft.com »

7 316 shares, 26 trendiness

DOGE Track

...

Read the original on dogetrack.info »

8 266 shares, 11 trendiness

Minecraft Java is switching from OpenGL to Vulkan for the Vibrant Visuals update

Work continues on the Vibrant Visuals update coming to Minecraft Java, and as part of that they’re switching the rendering from OpenGL to Vulkan.

Announced today (February 18th) by Mojang developers, it’s a huge change for such a game and will take time - but it will be worth it in the end so they can take advantage of all the modern features available for both visual improvements and better performance.

They note clearly that their aim is to keep Minecraft: Java Edition playable for “almost any PC-operating system, including macOS and Linux”. For the macOS side of things, they’ll use a translation layer since Apple don’t support Vulkan directly (they made their own API with Metal).

For modders, they’re suggesting they start making preparations to move away from OpenGL:

Switching from OpenGL to Vulkan will have an impact on the mods that currently use OpenGL for rendering, and we anticipate that updating from OpenGL to Vulkan will take modders more effort than the updates you undertake for each of our releases.

To start with, we recommend our modding community look at moving away from OpenGL usage. We encourage authors to try to reuse as much of the internal rendering APIs as possible, to make this transition as easy as possible. If that is not sufficient for your needs, then come and talk to us!

It does mean that players on really old devices that don’t support Vulkan will be left out, but Vulkan has been supported going back to some pretty old GPUs. You’ve got time though, as they’ll be rolling out Vulkan alongside OpenGL in snapshots (development releases) “sometime over the summer”. You’ll be able to toggle between them during the testing period until Mojang believe it’s ready. OpenGL will be entirely removed eventually once they’re happy with performance and stability.

...

Read the original on www.gamingonlinux.com »

9 248 shares, 23 trendiness

February Pebble Production and Software Updates

Things are busy in Pebbleland! We’re getting close to shipping 3 new hardware products and all the associated software that comes along with them. Overall, things feel good. I’d say the amount of last minute shenanigans is at the normal amount. Getting new hardware into ‘production’ is a pretty wild and exciting process. Building hardware is an exercise in balancing the competing priorities of cost, quality and speed. In the last mile push to get into production, things can change quickly for the best (woohoo! the waterproof test finally passes, we can move to the next stage), or less good (uh, the production line needs 3 more test fixtures to test Index 01 mic performance, and a major production test software update…that’ll be a lot more money). Unlike with software, you can’t easily fix hardware issues after you ship! Making these last minute decisions is sometimes pretty stressful but hey, that’s the world of making hardware.

We’re in the Production Verification Test (PVT) phase right now, the last stop before Mass Production (MP). During this phase we manufactured hundreds of PT2s in a series of test builds, uncovered a bunch of issues, and fixed a bunch of issues. Just before the factories shut down for the Lunar New Year, we got the good news that all the tests passed on the last build!

We focused most of January on improving the waterproofing on the watch (flash back to last summer when we worked on this for Pebble 2 Duo!). I traveled to visit the factory (travelogue here) and worked through a lot of open issues. Above is a video of the speaker waterproof testing from the production line. Good news is that we fixed all the issues, tests are passing and it looks like we’ll be able to certify PT2 with a waterproof rating of 30m or 3ATM! This means you can get your watch wet, wear it while swimming (but not in hot tubs/saunas) and generally not worry about it. It’s not a dive watch, though. Also, don’t expose it to hot water (this could weaken the waterproof seals), or high pressure water. It’s not invincible.

Snapshot of our mass production plan (output counts are cumulative)

The factory is closed now for Lunar New Year and will reopen around the end of February. As of today, mass production is scheduled to start on March 9. It will take the production line a little while to spin up towards our target output of 500 watches per day. Finished watches ship from the factory once a week to our distribution center (which takes ~1 week), then get packed for shipping (a few days to a week), then get delivered to you (~7-10 days). These dates and estimates are ALL subject to change - if we run into a problem, production shuts down until we fix it. Delays can and most likely will happen.

What everyone’s been waiting for…when will your PT2 arrive 🙂

Based on the current schedule, the first mass production PT2s will arrive on wrists during the beginning of April. We should wrap up delivering all pre-ordered Pebble Time 2s two months later, by the beginning of June. If your watch had an initial date of December, it should arrive in April, and if your initial date was April, it should arrive in June. Unfortunately we can’t predict when your specific watch will arrive - please don’t email to ask, we’ll just send you a link to this blog post.

A few weeks before your watch is scheduled to ship, we’ll email a link for you to confirm your address (change it now if you’d like), pick optional accessories (extra chargers and straps) and pay any tariffs/VAT/taxes owed. For US orders, the tariff amount is $10 per watch. For other countries, VAT/taxes will be calculated and charged during order confirmation. When the watch is delivered you won’t need to pay anything else or deal with customs forms.

Index 01 is also in the Production Verification Test (PVT) phase. We’ve manufactured several hundred so far. Waterproof testing went well (it’s rated for 1m of submersion, IPX8). You’ll be able to wash your hands, wash dishes, shower, get it wet etc, but you can’t swim with it on. PVT is proceeding well, but we’re not finished yet. We’re still aiming to start mass production during March, but we don’t have a firm start date yet.

In other news, we’re working on an Index 01 ring sizer kit that will be available for $10 (hopefully including worldwide shipping, working on that now). This will let you measure your index finger and find your exact Pebble-specific ring size. We will ask everyone to measure their ring size, either by ordering an Index 01 sizer kit or 3D printing the kit, because our sizes are different than Oura or other rings.

We’re also considering offering sizes 14 and 15. It’s a big upfront expense (~$50,000) to offer these sizes due to the additional tooling that will be needed, so we’re collecting interest - sign up here if you would like Index 01 in these sizes!

Things are rolling along. We finished the Design Verification Test 1 (DVT1) phase just before the Lunar New Year holiday started. Work is progressing well. One of the huge speed-ups to the program overall is that the electrical design is almost identical to Pebble Time 2. This means our (two person) firmware team can code new features or bug fixes for PT2 and they work immediately on PR2! After the Lunar New Year, we’ll focus on waterproof testing and last minute tweaks before the current estimated production start date in late May.

Our software output has been tremendous - we’re fixing bugs left, right and center and adding lots of new features to PebbleOS (changelog) and the Pebble mobile app (changelog).

Here are some highlights:

* Weather now works (in sunrise/sunset timeline pins and the Weather app)

* WhatsApp calls show up as calls (on Android)

* Fixed a major background crash bug in Pebble iOS that caused weather and other apps to not fetch live data

* Many old Pebble apps/faces use weather APIs that no longer work (Yahoo, OpenWeather). The Pebble mobile app now catches these network requests and returns data from Open-Meteo - keeping old watchfaces working!

* The Pebble Appstore is now ‘native’ inside the Pebble mobile app (in v1.0.11.1 on beta channels today). We’ve also updated the Pebble Appstore on the web at apps.repebble.com. If you’re a developer and don’t see the latest version of your app or watchface, please make sure to import them (takes ~2 minutes).

* You can now filter out older apps with non-working settings pages or companion apps. Or filter specifically for apps that are open source!

* Some PebbleKit 1.0 Android apps should work again (thanks Google for giving us back com.getpebble.android.provider.basalt). But devs - please upgrade your apps to PebbleKit 2.0 Android for new companion apps (more info and repo)

* Watch settings can now be adjusted in the Pebble mobile app. Your settings are saved and synced to all your Pebble watches.

* Thanks to many community contributions, there are now many new app icons for notifications from apps that didn’t exist 10 years ago!

* Most PebbleOS work has been going into factory verification software for Obelix

* Left-handed mode - wear your Pebble on your right hand with the buttons flipped (thanks Claudio!)

* Health data is now synced from watch to phone (thanks Michael!)

We’ve also made some great advances on the SDK and developer front…expect an update very soon 😉

...

Read the original on repebble.com »

10 246 shares, 9 trendiness

European tech alternatives

Find GDPR-compliant, EU-hosted software and service alternatives that respect your data sovereignty. Browse 500+ European companies across 30+ categories.

* Alternatives to US Tech — EU replacements for popular tools

* Browse by Country — Find companies in your region

* Submit a Company — Add your company to the directory

EU Tech Map is the leading directory of European software companies and GDPR-compliant alternatives. We help businesses find trustworthy, privacy-respecting technology solutions hosted in Europe.

...

Read the original on eutechmap.com »
