10 interesting stories served every morning and every evening.




1 401 shares, 32 trendiness

AI for better understanding the genome

AlphaGenome: AI for bet­ter un­der­stand­ing the genome

Introducing a new, uni­fy­ing DNA se­quence model that ad­vances reg­u­la­tory vari­ant-ef­fect pre­dic­tion and promises to shed new light on genome func­tion — now avail­able via API.

The genome is our cel­lu­lar in­struc­tion man­ual. It’s the com­plete set of DNA which guides nearly every part of a liv­ing or­gan­ism, from ap­pear­ance and func­tion to growth and re­pro­duc­tion. Small vari­a­tions in a genome’s DNA se­quence can al­ter an or­gan­is­m’s re­sponse to its en­vi­ron­ment or its sus­cep­ti­bil­ity to dis­ease. But de­ci­pher­ing how the genome’s in­struc­tions are read at the mol­e­c­u­lar level — and what hap­pens when a small DNA vari­a­tion oc­curs — is still one of bi­ol­o­gy’s great­est mys­ter­ies.

Today, we in­tro­duce AlphaGenome, a new ar­ti­fi­cial in­tel­li­gence (AI) tool that more com­pre­hen­sively and ac­cu­rately pre­dicts how sin­gle vari­ants or mu­ta­tions in hu­man DNA se­quences im­pact a wide range of bi­o­log­i­cal processes reg­u­lat­ing genes. This was en­abled, among other fac­tors, by tech­ni­cal ad­vances al­low­ing the model to process long DNA se­quences and out­put high-res­o­lu­tion pre­dic­tions.

To ad­vance sci­en­tific re­search, we’re mak­ing AlphaGenome avail­able in pre­view via our AlphaGenome API for non-com­mer­cial re­search, and plan­ning to re­lease the model in the fu­ture.

We believe AlphaGenome can be a valuable resource for the scientific community, helping scientists better understand genome function and disease biology, and ultimately driving new biological discoveries and the development of new treatments.

Our AlphaGenome model takes a long DNA se­quence as in­put — up to 1 mil­lion let­ters, also known as base-pairs — and pre­dicts thou­sands of mol­e­c­u­lar prop­er­ties char­ac­ter­is­ing its reg­u­la­tory ac­tiv­ity. It can also score the ef­fects of ge­netic vari­ants or mu­ta­tions by com­par­ing pre­dic­tions of mu­tated se­quences with un­mu­tated ones.

Predicted prop­er­ties in­clude where genes start and where they end in dif­fer­ent cell types and tis­sues, where they get spliced, the amount of RNA be­ing pro­duced, and also which DNA bases are ac­ces­si­ble, close to one an­other, or bound by cer­tain pro­teins. Training data was sourced from large pub­lic con­sor­tia in­clud­ing ENCODE, GTEx, 4D Nucleome and FANTOM5, which ex­per­i­men­tally mea­sured these prop­er­ties cov­er­ing im­por­tant modal­i­ties of gene reg­u­la­tion across hun­dreds of hu­man and mouse cell types and tis­sues.

Animation show­ing AlphaGenome tak­ing one mil­lion DNA let­ters as in­put and pre­dict­ing di­verse mol­e­c­u­lar prop­er­ties across dif­fer­ent tis­sues and cell types.

The AlphaGenome ar­chi­tec­ture uses con­vo­lu­tional lay­ers to ini­tially de­tect short pat­terns in the genome se­quence, trans­form­ers to com­mu­ni­cate in­for­ma­tion across all po­si­tions in the se­quence, and a fi­nal se­ries of lay­ers to turn the de­tected pat­terns into pre­dic­tions for dif­fer­ent modal­i­ties. During train­ing, this com­pu­ta­tion is dis­trib­uted across mul­ti­ple in­ter­con­nected Tensor Processing Units (TPUs) for a sin­gle se­quence.
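To make that conv-then-transformer-then-heads pattern concrete, here is a minimal, illustrative PyTorch sketch. It is not the real AlphaGenome architecture; the layer sizes, sequence length, and modality names are invented for brevity.

```python
import torch
import torch.nn as nn

class SequenceToModalities(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Convolution detects short local motifs in the (one-hot encoded) DNA.
        self.conv = nn.Sequential(
            nn.Conv1d(4, d_model, kernel_size=15, padding=7),
            nn.ReLU(),
        )
        # Transformer layers exchange information across all sequence positions.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        # One output head per (hypothetical) modality, predicting one value per base.
        self.heads = nn.ModuleDict({
            "rna_expression": nn.Linear(d_model, 1),
            "chromatin_accessibility": nn.Linear(d_model, 1),
        })

    def forward(self, one_hot_dna):                 # (batch, 4, seq_len)
        x = self.conv(one_hot_dna).transpose(1, 2)  # (batch, seq_len, d_model)
        x = self.transformer(x)
        return {name: head(x).squeeze(-1) for name, head in self.heads.items()}

model = SequenceToModalities()
dna = torch.randn(1, 4, 2048)  # stand-in for a one-hot encoded DNA sequence
predictions = model(dna)
print({name: tuple(p.shape) for name, p in predictions.items()})
```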

This model builds on our previous genomics model, Enformer, and is complementary to AlphaMissense, which specializes in categorizing the effects of variants within protein-coding regions. These regions cover 2% of the genome. The remaining 98%, called non-coding regions, are crucial for orchestrating gene activity and contain many variants linked to diseases. AlphaGenome offers a new perspective for interpreting these expansive sequences and the variants within them.

Our model an­a­lyzes up to 1 mil­lion DNA let­ters and makes pre­dic­tions at the res­o­lu­tion of in­di­vid­ual let­ters. Long se­quence con­text is im­por­tant for cov­er­ing re­gions reg­u­lat­ing genes from far away and base-res­o­lu­tion is im­por­tant for cap­tur­ing fine-grained bi­o­log­i­cal de­tails.

Previous mod­els had to trade off se­quence length and res­o­lu­tion, which lim­ited the range of modal­i­ties they could jointly model and ac­cu­rately pre­dict. Our tech­ni­cal ad­vances ad­dress this lim­i­ta­tion with­out sig­nif­i­cantly in­creas­ing the train­ing re­sources — train­ing a sin­gle AlphaGenome model (without dis­til­la­tion) took four hours and re­quired half of the com­pute bud­get used to train our orig­i­nal Enformer model.

By un­lock­ing high res­o­lu­tion pre­dic­tion for long in­put se­quences, AlphaGenome can pre­dict the most di­verse range of modal­i­ties. In do­ing so, AlphaGenome pro­vides sci­en­tists with more com­pre­hen­sive in­for­ma­tion about the com­plex steps of gene reg­u­la­tion.

In ad­di­tion to pre­dict­ing a di­verse range of mol­e­c­u­lar prop­er­ties, AlphaGenome can ef­fi­ciently score the im­pact of a ge­netic vari­ant on all of these prop­er­ties in a sec­ond. It does this by con­trast­ing pre­dic­tions of mu­tated se­quences with un­mu­tated ones, and ef­fi­ciently sum­maris­ing that con­trast us­ing dif­fer­ent ap­proaches for dif­fer­ent modal­i­ties.
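As a rough illustration of this scoring idea (predict for the reference sequence, predict for the mutated sequence, then summarise the contrast), here is a small Python sketch. The `predict` function is a hypothetical stand-in for any sequence-to-profile model, not the AlphaGenome API, and summing the difference is just one of many possible summaries.

```python
import numpy as np

def predict(sequence: str) -> np.ndarray:
    """Hypothetical stand-in model: returns a per-base 'activity' profile."""
    rng = np.random.default_rng(abs(hash(sequence)) % (2**32))
    return rng.random(len(sequence))

def score_variant(reference: str, position: int, alt_base: str) -> float:
    """Score a single-base variant by contrasting mutated vs. reference predictions."""
    mutated = reference[:position] + alt_base + reference[position + 1:]
    ref_profile = predict(reference)
    alt_profile = predict(mutated)
    # One possible summary of the contrast: total predicted change across the sequence.
    return float(np.sum(alt_profile - ref_profile))

print(score_variant("ACGTACGTACGT", position=5, alt_base="A"))
```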

Many rare genetic diseases, such as spinal muscular atrophy and some forms of cystic fibrosis, can be caused by errors in RNA splicing — a process where parts of the RNA molecule are removed, or "spliced out", and the remaining ends rejoined. For the first time, AlphaGenome can explicitly model the location and expression level of these junctions directly from sequence, offering deeper insights about the consequences of genetic variants on RNA splicing.

AlphaGenome achieves state-of-the-art per­for­mance across a wide range of ge­nomic pre­dic­tion bench­marks, such as pre­dict­ing which parts of the DNA mol­e­cule will be in close prox­im­ity, whether a ge­netic vari­ant will in­crease or de­crease ex­pres­sion of a gene, or whether it will change the gene’s splic­ing pat­tern.

Bar graph show­ing AlphaGenome’s rel­a­tive im­prove­ments on se­lected DNA se­quence and vari­ant ef­fect tasks, com­pared against re­sults for the cur­rent best meth­ods in each cat­e­gory.

When pro­duc­ing pre­dic­tions for sin­gle DNA se­quences, AlphaGenome out­per­formed the best ex­ter­nal mod­els on 22 out of 24 eval­u­a­tions. And when pre­dict­ing the reg­u­la­tory ef­fect of a vari­ant, it matched or ex­ceeded the top-per­form­ing ex­ter­nal mod­els on 24 out of 26 eval­u­a­tions.

This com­par­i­son in­cluded mod­els spe­cial­ized for in­di­vid­ual tasks. AlphaGenome was the only model that could jointly pre­dict all of the as­sessed modal­i­ties, high­light­ing its gen­er­al­ity. Read more in our preprint.

AlphaGenome’s gen­er­al­ity al­lows sci­en­tists to si­mul­ta­ne­ously ex­plore a vari­ant’s im­pact on a num­ber of modal­i­ties with a sin­gle API call. This means that sci­en­tists can gen­er­ate and test hy­pothe­ses more rapidly, with­out hav­ing to use mul­ti­ple mod­els to in­ves­ti­gate dif­fer­ent modal­i­ties.

Moreover AlphaGenome’s strong per­for­mance in­di­cates it has learned a rel­a­tively gen­eral rep­re­sen­ta­tion of DNA se­quence in the con­text of gene reg­u­la­tion. This makes it a strong foun­da­tion for the wider com­mu­nity to build upon. Once the model is fully re­leased, sci­en­tists will be able to adapt and fine-tune it on their own datasets to bet­ter tackle their unique re­search ques­tions.

Finally, this ap­proach pro­vides a flex­i­ble and scal­able ar­chi­tec­ture for the fu­ture. By ex­tend­ing the train­ing data, AlphaGenome’s ca­pa­bil­i­ties could be ex­tended to yield bet­ter per­for­mance, cover more species, or in­clude ad­di­tional modal­i­ties to make the model even more com­pre­hen­sive.

It’s a mile­stone for the field. For the first time, we have a sin­gle model that uni­fies long-range con­text, base-level pre­ci­sion and state-of-the-art per­for­mance across a whole spec­trum of ge­nomic tasks.

AlphaGenome’s pre­dic­tive ca­pa­bil­i­ties could help sev­eral re­search av­enues:

* Disease understanding: By more accurately predicting genetic disruptions, AlphaGenome could help researchers pinpoint the potential causes of disease more precisely, and better interpret the functional impact of variants linked to certain traits, potentially uncovering new therapeutic targets. We think the model is especially suitable for studying rare variants with potentially large effects, such as those causing rare Mendelian disorders.
* Synthetic biology: Its predictions could be used to guide the design of synthetic DNA with specific regulatory function — for example, only activating a gene in nerve cells but not muscle cells.
* Fundamental research: It could accelerate our understanding of the genome by assisting in mapping its crucial functional elements and defining their roles, identifying the most essential DNA instructions for regulating a specific cell type's function.

For ex­am­ple, we used AlphaGenome to in­ves­ti­gate the po­ten­tial mech­a­nism of a can­cer-as­so­ci­ated mu­ta­tion. In an ex­ist­ing study of pa­tients with T-cell acute lym­phoblas­tic leukemia (T-ALL), re­searchers ob­served mu­ta­tions at par­tic­u­lar lo­ca­tions in the genome. Using AlphaGenome, we pre­dicted that the mu­ta­tions would ac­ti­vate a nearby gene called TAL1 by in­tro­duc­ing a MYB DNA bind­ing mo­tif, which repli­cated the known dis­ease mech­a­nism and high­lighted AlphaGenome’s abil­ity to link spe­cific non-cod­ing vari­ants to dis­ease genes.

AlphaGenome will be a pow­er­ful tool for the field. Determining the rel­e­vance of dif­fer­ent non-cod­ing vari­ants can be ex­tremely chal­leng­ing, par­tic­u­larly to do at scale. This tool will pro­vide a cru­cial piece of the puz­zle, al­low­ing us to make bet­ter con­nec­tions to un­der­stand dis­eases like can­cer.

AlphaGenome marks a sig­nif­i­cant step for­ward, but it’s im­por­tant to ac­knowl­edge its cur­rent lim­i­ta­tions.

Like other sequence-based models, AlphaGenome still struggles to accurately capture the influence of very distant regulatory elements, such as those over 100,000 DNA letters away. Another priority for future work is further increasing the model's ability to capture cell- and tissue-specific patterns.

We haven’t de­signed or val­i­dated AlphaGenome for per­sonal genome pre­dic­tion, a known chal­lenge for AI mod­els. Instead, we fo­cused more on char­ac­ter­is­ing the per­for­mance on in­di­vid­ual ge­netic vari­ants. And while AlphaGenome can pre­dict mol­e­c­u­lar out­comes, it does­n’t give the full pic­ture of how ge­netic vari­a­tions lead to com­plex traits or dis­eases. These of­ten in­volve broader bi­o­log­i­cal processes, like de­vel­op­men­tal and en­vi­ron­men­tal fac­tors, that are be­yond the di­rect scope of our model.

We’re con­tin­u­ing to im­prove our mod­els and gath­er­ing feed­back to help us ad­dress these gaps.

AlphaGenome is now avail­able for non-com­mer­cial use via our AlphaGenome API. Please note that our mod­el’s pre­dic­tions are in­tended only for re­search use and haven’t been de­signed or val­i­dated for di­rect clin­i­cal pur­poses.

Researchers world­wide are in­vited to get in touch with po­ten­tial use-cases for AlphaGenome and to ask ques­tions or share feed­back through the com­mu­nity fo­rum.

We hope AlphaGenome will be an im­por­tant tool for bet­ter un­der­stand­ing the genome and we’re com­mit­ted to work­ing along­side ex­ter­nal ex­perts across acad­e­mia, in­dus­try, and gov­ern­ment or­ga­ni­za­tions to en­sure AlphaGenome ben­e­fits as many peo­ple as pos­si­ble.

Together with the col­lec­tive ef­forts of the wider sci­en­tific com­mu­nity, we hope it will deepen our un­der­stand­ing of the com­plex cel­lu­lar processes en­coded in the DNA se­quence and the ef­fects of vari­ants, and drive ex­cit­ing new dis­cov­er­ies in ge­nomics and health­care.

We would like to thank Juanita Bawagan, Arielle Bier, Stephanie Booth, Irina Andronic, Armin Senoner, Dhavanthi Hariharan, Rob Ashley, Agata Laydon and Kathryn Tunyasuvunakool for their help with the text and figures. This work was done thanks to the contributions of the AlphaGenome co-authors: Žiga Avsec, Natasha Latysheva, Jun Cheng, Guido Novati, Kyle R. Taylor, Tom Ward, Clare Bycroft, Lauren Nicolaisen, Eirini Arvaniti, Joshua Pan, Raina Thomas, Vincent Dutordoir, Matteo Perino, Soham De, Alexander Karollus, Adam Gayoso, Toby Sargeant, Anne Mottram, Lai Hong Wong, Pavol Drotár, Adam Kosiorek, Andrew Senior, Richard Tanburn, Taylor Applebaum, Souradeep Basu, Demis Hassabis and Pushmeet Kohli.

We would also like to thank Dhavanthi Hariharan, Charlie Taylor, Ottavia Bertolli, Yannis Assael, Alex Botev, Anna Trostanetski, Lucas Tenório, Victoria Johnston, Richard Green, Kathryn Tunyasuvunakool, Molly Beck, Uchechi Okereke, Rachael Tremlett, Sarah Chakera, Ibrahim I. Taskiran, Andreea-Alexandra Muşat, Raiyan Khan, Ren Yi and the greater Google DeepMind team for their support, help and feedback.

...

Read the original on deepmind.google »

2 316 shares, 35 trendiness

US economy shrank 0.5% in the first quarter, worse than earlier estimates had revealed

WASHINGTON (AP) — The U.S. economy shrank at a 0.5% annual pace from January through March as President Donald Trump's trade wars disrupted business, the Commerce Department reported Thursday in an unexpected deterioration of earlier estimates.

First-quarter growth was weighed down by a surge of imports as U.S. companies and households rushed to buy foreign goods before Trump could impose tariffs on them. The Commerce Department previously estimated that the economy fell 0.2% in the first quarter. Economists had forecast no change in the department's third and final estimate.

The January-March drop in gross domestic product — the nation's output of goods and services — reversed a 2.4% increase in the last three months of 2024 and marked the first time in three years that the economy contracted. Imports expanded 37.9%, the fastest pace since 2020, and pushed GDP down by nearly 4.7 percentage points.

Consumer spend­ing also slowed sharply, ex­pand­ing just 0.5%, down from a ro­bust 4% in the fourth-quar­ter of last year. It is a sig­nif­i­cant down­grade from the Commerce Department’s pre­vi­ous es­ti­mate.

Consumers have turned jit­tery since Trump started plas­ter­ing big taxes on im­ports, an­tic­i­pat­ing that the tar­iffs will im­pact their fi­nances di­rectly.

And the Conference Board reported this week that Americans' view of the U.S. economy worsened in June, resuming a downward slide that had dragged consumer confidence in April to its lowest level since the COVID-19 pandemic five years ago.

The Conference Board said Tuesday that its con­sumer con­fi­dence in­dex slid to 93 in June, down 5.4 points from 98.4 last month. A mea­sure of Americans’ short-term ex­pec­ta­tions for their in­come, busi­ness con­di­tions and the job mar­ket fell 4.6 points to 69. That’s well be­low 80, the marker that can sig­nal a re­ces­sion ahead.

Former Federal Reserve economist Claudia Sahm said the downward revision to consumer spending today is "a potential red flag." Sahm, now chief economist at New Century Advisors, noted that Commerce downgraded spending on recreation services and foreign travel — which could have reflected "great consumer pessimism and uncertainty."

A cat­e­gory within the GDP data that mea­sures the econ­o­my’s un­der­ly­ing strength rose at a 1.9% an­nual rate from January through March. It’s a de­cent num­ber, but down from 2.9% in the fourth quar­ter of 2024 and from the Commerce Department’s pre­vi­ous es­ti­mate of 2.5% January-March growth.

This cat­e­gory in­cludes con­sumer spend­ing and pri­vate in­vest­ment but ex­cludes volatile items like ex­ports, in­ven­to­ries and gov­ern­ment spend­ing.

And fed­eral gov­ern­ment spend­ing fell at a 4.6% an­nual pace, the biggest drop since 2022.

In an­other sign that Trump’s poli­cies are dis­rupt­ing trade,

Trade deficits re­duce GDP. But that’s just a mat­ter of math­e­mat­ics. GDP is sup­posed to count only what’s pro­duced do­mes­ti­cally, not stuff that comes in from abroad. So im­ports — which show up in the GDP re­port as con­sumer spend­ing or busi­ness in­vest­ment — have to be sub­tracted out to keep them from ar­ti­fi­cially in­flat­ing do­mes­tic pro­duc­tion.
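To make the accounting explicit, here is the expenditure identity with purely illustrative numbers (not the actual Q1 figures): imported goods first appear inside consumption or investment, then get subtracted back out as imports, so a surge in imports drags measured GDP down even if domestic production is unchanged.

```python
# Illustrative numbers only (not the actual Q1 figures).
consumption = 100.0   # includes 20 of imported consumer goods
investment = 30.0     # includes 5 of imported equipment
government = 25.0
exports = 15.0
imports = 25.0        # the 20 + 5 of imported goods counted above

# GDP = C + I + G + X - M: imports are subtracted so that only
# domestically produced output remains in the total.
gdp = consumption + investment + government + exports - imports
print(gdp)  # 145.0
```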

The first-quarter import influx likely won't be repeated in the April-June quarter and therefore shouldn't weigh on GDP. In fact, economists expect growth to bounce back to 3% in the second quarter, according to a survey of forecasters by the data firm FactSet.

The first look at April-June GDP growth is due July 30.

This story has been cor­rected to show that the drop in fed­eral spend­ing was the biggest since 2022, not 1986.

...

Read the original on apnews.com »

3 305 shares, 22 trendiness

The developer guide - Google Developers Blog

The first Gemma model launched early last year and has since grown into a thriving Gemmaverse of over 160 million collective downloads. This ecosystem includes our family of over a dozen specialized models for everything from safeguarding to medical applications and, most inspiringly, the countless innovations from the community. From innovators like Roboflow building enterprise computer vision to the Institute of Science Tokyo creating highly-capable Japanese Gemma variants, your work has shown us the path forward.

Building on this incredible momentum, we're excited to announce the full release of Gemma 3n. While last month's preview offered a glimpse, today unlocks the full power of this mobile-first architecture. Gemma 3n is designed for the developer community that helped shape Gemma. It's supported by your favorite tools including Hugging Face Transformers, llama.cpp, Google AI Edge, Ollama, MLX, and many others, enabling you to fine-tune and deploy for your specific on-device applications with ease. This post is the developer deep dive: we'll explore some of the innovations behind Gemma 3n, share new benchmark results, and show you how to start building today.

Gemma 3n represents a major advancement for on-device AI, bringing powerful multimodal capabilities to edge devices with performance previously only seen in last year's cloud-based frontier models.


* Multimodal by design: Gemma 3n natively supports image, audio, video, and text inputs and text outputs.
* Optimized for on-device: Engineered with a focus on efficiency, Gemma 3n models are available in two sizes based on effective parameters: E2B and E4B. While their raw parameter count is 5B and 8B respectively, architectural innovations allow them to run with a memory footprint comparable to traditional 2B and 4B models, operating with as little as 2GB (E2B) and 3GB (E4B) of memory.
* Groundbreaking architecture: At its core, Gemma 3n features novel components like the MatFormer architecture for compute flexibility, Per Layer Embeddings (PLE) for memory efficiency, LAuReL and AltUp for architectural efficiency, and new audio and MobileNet-v5 based vision encoders optimized for on-device use cases.
* Enhanced quality: Gemma 3n delivers quality improvements across multilinguality (supporting 140 languages for text and multimodal understanding of 35 languages), math, coding, and reasoning. The E4B version achieves an LMArena score over 1300, making it the first model under 10 billion parameters to reach this benchmark.

Achieving this leap in on-device performance required rethinking the model from the ground up. The foundation is Gemma 3n's unique mobile-first architecture, and it all starts with MatFormer.

At the core of Gemma 3n is the MatFormer (🪆Matryoshka Transformer) architecture, a novel nested transformer built for elastic inference. Think of it like Matryoshka dolls: a larger model contains smaller, fully functional versions of itself. This approach extends the concept of Matryoshka Representation Learning from just embeddings to all transformer components.

During the MatFormer training of the 4B effective parameter (E4B) model, a 2B effective parameter (E2B) sub-model is simultaneously optimized within it, as shown in the figure above. This provides developers two powerful capabilities and use cases today:

1. Pre-extracted models: You can directly download and use either the main E4B model for the highest capabilities, or the standalone E2B sub-model which we have already extracted for you, offering up to 2x faster inference.

2. Custom sizes with Mix-n-Match: For more granular control tailored to specific hardware constraints, you can create a spectrum of custom-sized models between E2B and E4B using a method we call Mix-n-Match. This technique allows you to precisely slice the E4B model's parameters, primarily by adjusting the feed forward network hidden dimension per layer (from 8192 to 16384) and selectively skipping some layers. We are releasing the MatFormer Lab, a tool that shows how to retrieve these optimal models, which were identified by evaluating various settings on benchmarks like MMLU.

MMLU scores for the pre-trained Gemma 3n check­points at dif­fer­ent model sizes (using Mix-n-Match)
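As a toy illustration of the nested idea behind MatFormer and Mix-n-Match, the sketch below shows a feed-forward block whose smaller configuration is literally a slice of the larger one, so both share weights. The dimensions are made up, and the real procedure in Gemma 3n also skips layers and picks configurations against benchmarks; this is only a conceptual sketch.

```python
import torch
import torch.nn as nn

class NestedFFN(nn.Module):
    """A feed-forward block whose smaller configuration is a slice of the larger one."""

    def __init__(self, d_model=64, hidden_full=256):
        super().__init__()
        self.up = nn.Linear(d_model, hidden_full)
        self.down = nn.Linear(hidden_full, d_model)

    def forward(self, x, hidden_used=256):
        # Use only the first `hidden_used` hidden units; the nested "small"
        # path literally shares its weights with the "large" path.
        w_up, b_up = self.up.weight[:hidden_used], self.up.bias[:hidden_used]
        w_down = self.down.weight[:, :hidden_used]
        h = torch.relu(x @ w_up.T + b_up)
        return h @ w_down.T + self.down.bias

ffn = NestedFFN()
x = torch.randn(2, 64)
large = ffn(x, hidden_used=256)   # "E4B-like" configuration
small = ffn(x, hidden_used=128)   # nested "E2B-like" configuration, same weights
print(large.shape, small.shape)
```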

Looking ahead, the MatFormer architecture also paves the way for elastic execution. While not part of today's launched implementations, this capability allows a single deployed E4B model to dynamically switch between E4B and E2B inference paths on the fly, enabling real-time optimization of performance and memory usage based on the current task and device load.

Gemma 3n models incorporate Per-Layer Embeddings (PLE). This innovation is tailored for on-device deployment as it dramatically improves model quality without increasing the high-speed memory footprint required on your device's accelerator (GPU/TPU).

While the Gemma 3n E2B and E4B models have a total parameter count of 5B and 8B respectively, PLE allows a significant portion of these parameters (the embeddings associated with each layer) to be loaded and computed efficiently on the CPU. This means only the core transformer weights (approximately 2B for E2B and 4B for E4B) need to sit in the typically more constrained accelerator memory (VRAM).

With Per-Layer Embeddings, you can use Gemma 3n E2B while only hav­ing ~2B pa­ra­me­ters loaded in your ac­cel­er­a­tor.
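A back-of-the-envelope reading of that split: for E2B, only the roughly 2B core transformer parameters need accelerator memory, with the remaining parameters handled on the CPU. The bytes-per-parameter values below are generic precision assumptions rather than official figures, but they show how the reported ~2GB footprint becomes plausible once the core weights are quantized.

```python
# Rough, illustrative arithmetic; bytes-per-parameter values are generic
# precision assumptions, not official figures.
core_params_on_accelerator = 2e9   # ~2B core transformer weights (E2B)
offloaded_params_on_cpu = 3e9      # remainder of the ~5B total, incl. per-layer embeddings

for precision, bytes_per_param in {"fp16/bf16": 2, "int8": 1, "int4": 0.5}.items():
    vram_gb = core_params_on_accelerator * bytes_per_param / 1e9
    print(f"{precision}: ~{vram_gb:.1f} GB of accelerator memory for the core weights")

print(f"Handled outside the accelerator: ~{offloaded_params_on_cpu / 1e9:.0f}B parameters")
```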

Processing long inputs, such as the sequences derived from audio and video streams, is essential for many advanced on-device multimodal applications. Gemma 3n introduces KV Cache Sharing, a feature designed to significantly accelerate time-to-first-token for streaming response applications.

KV Cache Sharing optimizes how the model handles the initial input processing stage (often called the "prefill" phase). The keys and values of the middle layer from local and global attention are directly shared with all the top layers, delivering a notable 2x improvement on prefill performance compared to Gemma 3 4B. This means the model can ingest and understand lengthy prompt sequences much faster than before.

Gemma 3n uses an advanced audio encoder based on the Universal Speech Model (USM). The encoder generates a token for every 160ms of audio (about 6 tokens per second), which are then integrated as input to the language model, providing a granular representation of the sound context.

* Automatic Speech Translation (AST): Translate spoken language into text in another language.

We've observed particularly strong AST results for translation between English and Spanish, French, Italian, and Portuguese, offering great potential for developers targeting applications in these languages. For tasks like speech translation, leveraging Chain-of-Thought prompting can significantly enhance results. Here's an example:

user

Transcribe the fol­low­ing speech seg­ment in Spanish, then trans­late it into English:
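A quick sanity check on the audio token rate quoted above: one token per 160 ms works out to 6.25 tokens per second, so a 30-second clip (the launch-time cap mentioned below) produces on the order of 187 audio tokens.

```python
ms_per_token = 160                 # one audio token per 160 ms
tokens_per_second = 1000 / ms_per_token
clip_seconds = 30                  # launch-time cap on clip length
print(tokens_per_second)                       # 6.25
print(int(clip_seconds * tokens_per_second))   # 187
```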

At launch time, the Gemma 3n encoder is implemented to process audio clips up to 30 seconds. However, this is not a fundamental limitation. The underlying audio encoder is a streaming encoder, capable of processing arbitrarily long audios with additional long form audio training. Follow-up implementations will unlock low-latency, long streaming applications.

Alongside its integrated audio capabilities, Gemma 3n features a new, highly efficient vision encoder, MobileNet-V5-300M, delivering state-of-the-art performance for multimodal tasks on edge devices. Designed for flexibility and power on constrained hardware, MobileNet-V5 gives developers:

* Multiple input resolutions: Natively supports resolutions of 256x256, 512x512, and 768x768 pixels, allowing you to balance performance and detail for your specific applications.
* Broad visual understanding: Co-trained on extensive multimodal datasets, it excels at a wide range of image and video comprehension tasks.
* High throughput: Processes up to 60 frames per second on a Google Pixel, enabling real-time, on-device video analysis and interactive experiences.

This level of performance is achieved with multiple architectural innovations, including:

* An advanced foundation of MobileNet-V4 blocks (including Universal Inverted Bottlenecks and Mobile MQA).
* A significantly scaled up architecture, featuring a hybrid, deep pyramid model that is 10x larger than the biggest MobileNet-V4 variant.
* A novel Multi-Scale Fusion VLM adapter that enhances the quality of tokens for better accuracy and efficiency.

Benefiting from novel architectural designs and advanced distillation techniques, MobileNet-V5-300M substantially outperforms the baseline SoViT in Gemma 3 (trained with SigLip, no distillation). On a Google Pixel Edge TPU, it delivers a 13x speedup with quantization (6.5x without), requires 46% fewer parameters, and has a 4x smaller memory footprint, all while providing significantly higher accuracy on vision-language tasks.

We're excited to share more about the work behind this model. Look out for our upcoming MobileNet-V5 technical report, which will deep dive into the model architecture, data scaling strategies, and advanced distillation techniques.

Making Gemma 3n accessible from day one has been a priority. We're proud to partner with many incredible open source developers to ensure broad support across popular tools and platforms, including contributions from teams behind AMD, Axolotl, Docker, Hugging Face, llama.cpp, LMStudio, MLX, NVIDIA, Ollama, RedHat, SGLang, Unsloth, and vLLM.

But this ecosystem is just the beginning. The true power of this technology is in what you will build with it. That's why we're launching the Gemma 3n Impact Challenge. Your mission: use Gemma 3n's unique on-device, offline, and multimodal capabilities to build a product for a better world. With $150,000 in prizes, we're looking for a compelling video story and a "wow" factor demo that shows real-world impact. Join the challenge and help build a better future.

Ready to explore the potential of Gemma 3n today? Here's how:

* Experiment directly: Use Google AI Studio to try Gemma 3n in just a couple of clicks. Gemma models can also be deployed directly to Cloud Run from AI Studio.
* Download the models: Find the model weights on Hugging Face and Kaggle.
* Learn & integrate: Dive into our comprehensive documentation to quickly integrate Gemma into your projects or start with our inference and fine-tuning guides.
* Build with your favorite on-device AI tools: Google AI Edge Gallery/LiteRT-LLM, Ollama, MLX, llama.cpp, Docker, transformers.js and more.
* Use your favorite development tools: Leverage your preferred tools and frameworks, including Hugging Face Transformers and TRL, NVIDIA NeMo Framework, Unsloth, and LMStudio.
* Deploy your way: Gemma 3n offers multiple deployment options, including Google GenAI API, Vertex AI, SGLang, vLLM, and NVIDIA API Catalog.


...

Read the original on developers.googleblog.com »

4 234 shares, 10 trendiness

The first non-opioid painkiller

In the nine­teenth cen­tury, the in­ven­tion of anes­the­sia was con­sid­ered a gift from God. But post-op­er­a­tive pain re­lief has con­tin­ued to rely on opi­oids, de­riv­a­tives of opium, the ad­dic­tive sub­stance em­ployed since an­cient times. Although no other drug has man­aged to match the rapid, po­tent, and broadly ef­fec­tive re­lief de­liv­ered by opi­oids, their side ef­fects have led to decades of ad­dic­tion and over­dose, leav­ing re­searchers keen to find a bet­ter so­lu­tion.

This all changed in January 2025, when the FDA ap­proved Vertex Pharmaceuticals’s Journavx (suzetrigine): the first non-opi­oid pain re­liever suit­able for treat­ing post-surgery pain. Clinical tri­als found no signs of the prob­lem­atic side ef­fects as­so­ci­ated with opi­oids: no drug abuse, tol­er­ance, or with­drawal. But this was not an easy win: Vertex and other pharma com­pa­nies spent decades search­ing for drugs like this to no avail.

Opioids are used pri­mar­ily to treat no­ci­cep­tive pain, pain caused by tis­sue dam­age from in­jury or dis­ease. This dam­age ac­ti­vates nearby no­ci­cep­tors: sen­sory neu­rons that sig­nal phys­i­cal or chem­i­cal harm. These no­ci­cep­tors send sig­nals up to the cen­tral ner­vous sys­tem — the brain and spinal cord — and the brain then cre­ates a lo­cal­ized sen­sa­tion of pain, draw­ing your at­ten­tion to the threat.

Traditional opi­oids mimic opium, a com­pound found in the poppy plant that con­tains mor­phine. Opioids al­le­vi­ate pain by act­ing on one of the three main opi­oid re­cep­tors, mu (μ) opi­oid re­cep­tors, which are dis­trib­uted through­out the cen­tral ner­vous sys­tem, par­tic­u­larly in the brain. When opi­oids bind to the brain’s mu re­cep­tors, this sup­presses in­com­ing pain sig­nals from the dam­aged site’s no­ci­cep­tors, pre­vent­ing the brain from cre­at­ing the sen­sa­tion of pain even when tis­sue dam­age is pre­sent.

Our bod­ies nat­u­rally pro­duce their own opi­oids — such as en­dor­phins, en­doge­nous mor­phine — to briefly blunt pain dur­ing mo­ments of stress or in­jury. However, these are far weaker and shorter-act­ing than pre­scrip­tion opi­oids since they de­grade quickly, re­main lo­cal­ized, and are re­leased in short, con­trolled bursts. Prescription opi­oids, on the other hand, flood the brain with higher doses that linger for hours.

Crucially, opi­oids don’t just kill pain: they also in­cite plea­sure. When the mu opi­oid re­cep­tors pre­sent in the re­ward cen­ter of the brain are ac­ti­vated, this re­duces the se­cre­tion of a neu­ro­trans­mit­ter called GABA, which works to in­hibit dopamine-pro­duc­ing neu­rons. As GABA re­lease de­clines, dopamine spikes, light­ing up the re­ward cen­ter and in­duc­ing plea­sure.

With the body’s nat­ural opi­oids, this is fleet­ing and un­prob­lem­atic. When prop­erly pre­scribed, even syn­thetic opi­oids are no is­sue for most pa­tients: un­der se­vere post-sur­gi­cal pain, opi­oids mostly func­tion to nor­mal­ize dis­rupted brain func­tion, damp­en­ing any plea­sur­able ef­fect. But for some, whether due to ge­net­ics or in­ap­pro­pri­ate ad­min­is­tra­tion (e.g. a pre­scrip­tion that goes on af­ter the pain’s source has been re­lieved), the in­ten­sity of pre­scrip­tion opi­oids pro­duces a pro­longed dopamine spike, along with a marked sen­sa­tion of eu­pho­ria: a recipe for ad­dic­tion.

With chronic use, the body’s nat­ural opi­oid sys­tem be­comes dys­func­tional. Fewer nat­ural opi­oids are pro­duced and opi­oid re­cep­tors be­come de­sen­si­tized. As a re­sult, the pa­tient de­vel­ops a tol­er­ance, re­quir­ing higher and higher doses to even feel nor­mal.

The nine­teenth cen­tury wit­nessed the cre­ation of mor­phine, codeine, and heroin (which was sold over-the-counter), as well as the in­ven­tion of the hy­po­der­mic sy­ringe. By the turn of the cen­tury, 15 per­cent of all pre­scrip­tions dis­pensed in Boston were for opi­oids, which were used for every­thing from men­strual cramps to chil­dren’s coughs, and as many as 300,000 Americans, or 0.5 per­cent of the pop­u­la­tion, were opi­ate ad­dicts. Anti-narcotics laws pro­lif­er­ated through­out the states, and the med­ical com­mu­nity ex­pressed con­cerns about the lib­eral pro­vi­sion of ad­dic­tive drugs. These mount­ing pres­sures led to the pas­sage of the Harrison Narcotic Act in 1914, which made opium and opi­ates the first reg­u­lated sub­stances in the United States.

Unlike opi­oids, which act within the cen­tral ner­vous sys­tem, Journavx does not mean­ing­fully in­ter­act with the brain. Instead, it tar­gets a spe­cific sodium ion chan­nel found al­most ex­clu­sively on pe­riph­eral no­ci­cep­tors, the pain-sens­ing neu­rons through­out your body. Ion chan­nels, whether sodium, potas­sium, or cal­cium, are like tiny doors em­bed­ded on the neu­ron’s mem­brane: when a door opens, ions rush in or out and the neu­ron fires, send­ing an elec­tri­cal sig­nal to the next cell.

Three sodium chan­nels are found pri­mar­ily on no­ci­cep­tors: NaV1.7, NaV1.8, and NaV1.9. Suzetrigine se­lec­tively blocks NaV1.8, which stops no­ci­cep­tors from send­ing pain sig­nals to the brain. Rather than pre­vent­ing your brain from re­ceiv­ing pain sig­nals, as opi­oids do, it pre­vents your neu­rons from trans­mit­ting them. In essence, Journavx works from the bot­tom up to al­le­vi­ate pain, rather than the top down.

Critically, the NaV1.8 channel is largely absent from the central nervous system. As a result, suzetrigine does not affect the brain, so users do not experience the euphoria that opioids trigger. This prevents addiction and abuse, as well as the depressive effects on breathing and heart rate typical of opioids.

At first glance, this may seem like a straight­for­ward so­lu­tion, es­pe­cially given the ur­gent de­mand for non-opi­oid al­ter­na­tives. So why did it take so long?

Unlike dis­eases with well-de­fined bi­o­log­i­cal causes, pain is a broad symp­tom rooted in com­plex and over­lap­ping path­ways. Many of these are deeply in­ter­twined with vi­tal bod­ily func­tions like blood pres­sure, im­mune re­sponse, and res­pi­ra­tion. Together, this makes it dif­fi­cult to iso­late a tar­get that can be blocked with­out col­lat­eral dam­age.

A particularly good example of this predicament involves TRPV1, also known as the capsaicin receptor. It is an ion channel mainly found in nociceptors, and is responsible for the pain you feel when eating spicy foods. In clinical trials, TRPV1 inhibitors effectively alleviated pain, but researchers found that they unexpectedly disrupted thermoregulation, causing patients to experience hyperthermia, or overheating, with one trial participant sustaining a 104-degree Fahrenheit fever for hours.

Another ex­am­ple in­volves nerve growth fac­tor in­hibitors like tanezumab. Although tanezumab al­le­vi­ated in­flam­ma­tory pain from con­di­tions like os­teoarthri­tis, Phase III tri­als re­vealed an un­for­tu­nate side ef­fect: rapidly pro­gres­sive os­teoarthri­tis. Researchers hy­poth­e­sized that be­cause pa­tients felt so much bet­ter, they overused their arthritic joints, ac­cel­er­at­ing dam­age. Although fur­ther tri­als were con­ducted at lower doses and with re­stric­tions, the FDA ul­ti­mately voted against its ap­proval. Tanezumab’s story re­flects a dif­fi­culty in de­vel­op­ing painkillers: while pain can cause ex­ces­sive suf­fer­ing, it also serves as a vi­tal warn­ing sign that must be se­lec­tively main­tained.

Vertex has historically focused on developing drugs targeting ion channels. These channels play a major role in cellular signaling, meaning that compounds that act upon them can produce large, rapid physiological effects. 'Ion channels are really good drug targets', says Paul Negulescu, head of Vertex's pain program. 'They just require a lot of care and attention to how you measure them'.

The dis­cov­ery of the NaV sodium chan­nels, made in­de­pen­dently in the early 2000s by two dif­fer­ent re­searchers, opened a new fron­tier in pain re­search. Both ob­served that mu­ta­tions af­fect­ing NaV1.7 caused ab­nor­mal­i­ties in the ex­pe­ri­ence of pain, a ma­jor clue that pain might be me­di­ated through that spe­cific sodium chan­nel.

Stephen Waxman, a pro­fes­sor of neu­rol­ogy, neu­ro­science, and phar­ma­col­ogy at Yale’s med­ical school, dis­cov­ered that a com­mu­nity in Alabama had nu­mer­ous in­di­vid­u­als suf­fer­ing from ery­throme­lal­gia or Man on Fire’ syn­drome. These in­di­vid­u­als ex­pe­ri­enced mild warmth — such as from wear­ing a sweater or shoes — as in­tense burn­ing pain. Waxman’s re­search tied this phe­nom­e­non to mu­ta­tions in the SCN9A gene, which is in­volved in the pro­duc­tion of NaV1.7 chan­nels. Meanwhile, Geoff Woods, a clin­i­cal ge­neti­cist at St. James’s University Hospital in Leeds, un­cov­ered a com­ple­men­tary dis­cov­ery. He ob­served con­gen­i­tal in­sen­si­tiv­ity to pain within spe­cific Pakistani com­mu­ni­ties, also trac­ing it back to mu­ta­tions in the SCN9A gene.

This con­gen­i­tal in­sen­si­tiv­ity pro­vided a par­tic­u­larly com­pelling ge­netic val­i­da­tion for a drug tar­get, as the af­fected in­di­vid­u­als were en­tirely nor­mal ex­cept for their in­abil­ity to feel pain, un­like prior sim­i­lar cases. Related chan­nels like NaV1.8 and NaV1.9 were also in­ves­ti­gated by Woods’s team and found rel­e­vant for pain sig­nal­ing.

But de­spite the ini­tial en­thu­si­asm sur­round­ing these dis­cov­er­ies, re­searchers soon en­coun­tered sig­nif­i­cant ob­sta­cles: NaV1.7 in­hibitors failed to re­lieve pain dur­ing clin­i­cal tri­als. Researchers even­tu­ally un­cov­ered that the con­gen­i­tal ab­sence of NaV1.7 did not elim­i­nate pain sig­nals but in­stead am­pli­fied the pro­duc­tion of nat­ural painkillers called enkephalins. They con­cluded that com­pletely block­ing the chan­nel, which would be re­quired to repli­cate this ef­fect phar­ma­ceu­ti­cally, was im­prac­ti­cal.

So re­searchers turned their at­ten­tion to the other promis­ing sodium chan­nel: NaV1.8. Again, re­search be­gan with set­backs: in 2015, it was dis­cov­ered that in­di­vid­u­als with Brugada syn­drome, a dis­or­der char­ac­ter­ized by ab­nor­mal heart rhythms and sud­den car­diac death, also had mu­ta­tions in the gene en­cod­ing NaV1.8.

Despite this challenge, researchers still thought NaV1.8 had potential. Woods's research genetically validated it, showing that mutations in NaV1.8 affect pain signalling. Researchers at the University of Alcalá confirmed that mice genetically engineered to lack NaV1.8 channels showed virtually no spontaneous nerve activity after injury — activity thought to underlie certain chronic pains. Additionally, NaV1.8's almost exclusive presence in the peripheral nervous system (rather than in the brain) suggested that it might uniquely limit undesirable central side effects.

As Vertex researchers searched for NaV1.8 inhibitors, they made use of Negulescu's E-VIPR technology, which enabled them to conduct more than 50,000 tests per day to identify compounds that blocked NaV1.8 without affecting other ion channels. This was essential because the human body contains nine known voltage-gated sodium channel types, each with a distinctive 'personality' — a unique pattern of rapid opening, closing, and voltage sensitivity — making high throughput key to pinpointing an appropriately selective drug.

But even with this tool, Negulescu described the iterative learning process as 'painful'. Vertex spent a decade screening millions of compounds before finding a promising class of molecules. Another decade was spent on optimization, conducting tens of thousands of screenings to maximize potency and selectivity (a drug is selective if it binds only to the target proteins and nowhere else).

Vertex faced sev­eral fail­ures in pre­clin­i­cal and clin­i­cal test­ing. Between 2018 and 2022, they ter­mi­nated de­vel­op­ment for three gen­er­a­tions of NaV1.8 in­hibitors, VX-150, VX-128, and VX-961, due to dos­ing and tol­er­a­bil­ity is­sues. However, un­like pre­vi­ous at­tempts with NaV1.7, TRPV1, and nerve growth fac­tor in­hibitors, the path­way over­all did not ex­hibit fa­tal flaws, and so re­search con­tin­ued.

Eventually, this it­er­a­tive process pro­duced VX-548, which was dis­cov­ered to be many times more se­lec­tive and po­tent than ear­lier can­di­dates. In 2022, two Phase II proof-of-con­cept stud­ies yielded pos­i­tive re­sults. In 2024, Phase III tri­als val­i­dated VX-548’s ef­fi­cacy in treat­ing acute pain with min­i­mal ad­verse ef­fects. During this process, the FDA granted VX-548, now suzet­rig­ine, Fast Track and Breakthrough Therapy des­ig­na­tions, processes de­signed to ac­cel­er­ate the de­vel­op­ment and re­view of cru­cial phar­ma­ceu­ti­cal in­no­va­tions.

On July 30, 2024, the FDA ac­cepted Vertex’s New Drug Application, fil­ing it un­der pri­or­ity re­view. Exactly six months later, on January 30, 2025, it was ap­proved, mark­ing suzet­rig­ine — sold un­der the brand name Journavx — as the first non-opi­oid anal­gesic for treat­ing acute pain.

Journavx isn't a silver bullet. It has not yet been tested or approved for treating chronic pain, from which over 20 percent of Americans suffer. Across its clinical trials, between 85 and 98 percent of participants were female. This reflects a broader pattern in painkiller trials, which often rely on surgical models like bunionectomy and abdominoplasty ('tummy tuck') — procedures overwhelmingly performed on women. And although the bipartisan 'No Pain Act' has, since 2022, required Medicare and other government health plans to cover this class of medication in outpatient surgical settings, private insurance coverage is still in flux: without insurance, a week's worth of Journavx costs around $230, compared to $10–20 for a low-dose opioid-acetaminophen combination medication.

Journavx failed to outperform that opioid-acetaminophen combination in clinical trials. Todd Bertoch, an anesthesiologist involved in suzetrigine's Phase III trials, explains that the drug likely won't serve as an outright opioid replacement, but rather as the first step on the journey to minimizing opioid usage. If paracetamol and ibuprofen are inadequate for pain relief, Journavx can now be prescribed as the next alternative treatment, instead of mild- to moderate-strength opioids.

It will al­most cer­tainly im­prove: Vertex’s sci­en­tists are con­tin­u­ing their decades-long pro­ject to it­er­ate and screen for even more po­tent and se­lec­tive NaV1.8 block­ers. They are also in­ves­ti­gat­ing com­ple­men­tar­i­ties with NaV1.7 in­hibitors. A Phase III clin­i­cal trial of suzet­rig­ine for di­a­betic pe­riph­eral neu­ropa­thy, which in­volves chronic pain, is cur­rently un­der­way.

Journavx is the prod­uct of 27 years, bil­lions of dol­lars, mil­lions of mol­e­cules screened, dozens of mon­keys and rats and data from over 2,400 sur­gi­cal pa­tients, all dis­tilled into a sin­gle 50-mg blue tablet.

Vertex chose to keep funding and pushing forward through decades of work that industry professionals describe as 'tedious', 'mind-numbing', and 'painstaking', a slog driven by slow, incremental progress and frequent setbacks. In exchange, humanity now has its first non-opioid painkiller.

Michelle Ma stud­ies eco­nom­ics at the University of Chicago.

...

Read the original on www.worksinprogress.news »

5 217 shares, 11 trendiness

Scott Small 🇨🇦 (@smallsco@oldbytes.space)


...

Read the original on oldbytes.space »

6 217 shares, 9 trendiness

Classic Macintosh emulator

Snow em­u­lates clas­sic (Motorola 680x0-based) Macintosh com­put­ers. It fea­tures a graph­i­cal user in­ter­face to op­er­ate the em­u­lated ma­chine and pro­vides ex­ten­sive de­bug­ging ca­pa­bil­i­ties. The aim of this pro­ject is to em­u­late the Macintosh on a hard­ware-level as much as pos­si­ble, as op­posed to em­u­la­tors that patch the ROM or in­ter­cept sys­tem calls.

It cur­rently em­u­lates the Macintosh 128K, Macintosh 512K, Macintosh Plus, Macintosh SE, Macintosh Classic and Macintosh II.

The em­u­la­tor is writ­ten in Rust and re­leased as open source, li­censed un­der the MIT li­cense.

There is a lim­ited on­line demo avail­able (only the em­u­lated ma­chine, no user in­ter­face or other func­tion­al­ity from the full soft­ware).

To get set up or for fur­ther in­for­ma­tion, check the on­line doc­u­men­ta­tion.

Currently, only bleed­ing edge builds are avail­able. These get gen­er­ated au­to­mat­i­cally as work pro­gresses on the em­u­la­tor.

* Bug re­ports can be filed on GitHub is­sues.

* For sup­port or just a chat, join the #snow chan­nel on MartyPC and Friends Discord.

...

Read the original on snowemu.com »

7 176 shares, 10 trendiness

I Fought in Ukraine and Here’s Why FPV Drones Kind of Suck – War on the Rocks

In 2024 and 2025, I served for six months as an in­ter­na­tional vol­un­teer on a first-per­son view at­tack drone team in the Armed Forces of Ukraine. My team was de­ployed in the Donbas re­gion, in one of the hottest sec­tors of the front. When I joined the team, I was ex­cited to work with a cut­ting-edge tool. By the end of my de­ploy­ment, I was a bit dis­il­lu­sioned. Let me tell you why.

First-person view drones are un­manned aer­ial ve­hi­cles with four pro­pellers lo­cated at the four cor­ners of the craft, roughly in the shape of a square of seven to 12 inches in length on each side. They are con­trolled by an op­er­a­tor wear­ing vir­tual-re­al­ity gog­gles that re­ceive the im­age from the drone’s for­ward-fac­ing cam­era (hence the name first-per­son view). The most com­mon types of first-per­son view drones are sin­gle-use: They fly di­rectly into their tar­get, where they det­o­nate an ex­plo­sive charge of up to 1.5 kilo­grams. These drones are touted as a cheap and ac­ces­si­ble so­lu­tion that can give troops on the tac­ti­cal level their own or­ganic pre­ci­sion-strike ca­pa­bil­ity. They can sup­pos­edly re­act quickly and strike mov­ing tar­gets or tar­gets in dif­fi­cult-to-reach lo­ca­tions, such as bunkers, base­ments, or in­side build­ings. Proponents of first-per­son view drones of­ten re­peat the claim that as much as 60 to 70 per­cent of all bat­tle­field ca­su­al­ties in the Russo-Ukrainian War are now caused by drones. This sta­tis­tic is prob­a­bly broadly ac­cu­rate, though it does not dif­fer­en­ti­ate be­tween ca­su­al­ties caused by first-per­son view drones and other types of un­crewed aer­ial sys­tems.

Some au­thors, in­clud­ing ex­pe­ri­enced mil­i­tary of­fi­cers writ­ing in these pages, go even fur­ther and claim that first-per­son view drones will pre­cip­i­tate a rev­o­lu­tion in how wars are fought, akin to the in­tro­duc­tion of mus­kets. Among other things, they will make con­ceal­ment and the mass­ing of troops and equip­ment in the com­bat zone nearly im­pos­si­ble. Any con­cen­tra­tion of troops or ve­hi­cles will sup­pos­edly be ob­served im­me­di­ately and butchered by swarms of cheap, fast drones. Proponents of drones, es­pe­cially in Silicon Valley, have claimed that drones might com­pletely re­place ar­tillery.

Whether or not we be­lieve these far-reach­ing claims, we’ve cer­tainly all seen the videos on so­cial me­dia of these drones per­form­ing im­pres­sive, highly pre­cise at­tacks. We’ve seen them strik­ing a Russian tank on the move, fly­ing through the open back hatch of an in­fantry fight­ing ve­hi­cle, or en­ter­ing a build­ing to sur­prise the en­emy, some­times lit­er­ally, with their pants down. But those im­pres­sive strikes are rare ex­cep­tions. The cases when first-per­son view drones ac­tu­ally do that are few and far be­tween.

During my time in Ukraine, I col­lected sta­tis­tics on the suc­cess of our drone op­er­a­tions. I found that 43 per­cent of our sor­ties re­sulted in a hit on the in­tended tar­get in the sense that the drone was able to suc­cess­fully fly all the way to the tar­get, iden­tify it cor­rectly, hit it, and the drone’s ex­plo­sive charge det­o­nated as it was sup­posed to. This num­ber does not in­clude in­stances when our higher com­mand re­quested a sor­tie but we had to de­cline be­cause we knew that we could not strike the tar­get for rea­sons such as weather, tech­ni­cal prob­lems, or elec­tronic in­ter­fer­ence. If this type of pre-aborted mis­sion is in­cluded in the to­tal, the suc­cess rate drops to be­tween 20 and 30 per­cent. On the face of it, this suc­cess rate is bad, but that is not the whole story.

I be­gan to no­tice that the vast ma­jor­ity of our sor­ties were against tar­gets that had al­ready been struck suc­cess­fully by a dif­fer­ent weapons sys­tem, most com­monly by a mor­tar or by a mu­ni­tion dropped by a reusable drone (in other words, not a first-per­son view drone). Put dif­fer­ently, the goal of the ma­jor­ity of our mis­sions was to de­liver the sec­ond tap in a dou­ble-tap strike against a tar­get that had al­ready been suc­cess­fully pros­e­cuted by a dif­fer­ent weapons sys­tem. The pro­por­tion of mis­sions when we suc­cess­fully car­ried out a task that only a first-per­son view drone can ful­fill — de­liv­er­ing a pre­ci­sion strike on a tar­get that could not be hit by other means — was in the sin­gle-digit per­cent.

There are two rea­sons why these drones rarely suc­cess­fully do what they were de­signed to do. The first has to do with how com­man­ders choose to em­ploy first-per­son view drones. Presumably, our com­man­ders de­cided that they had first-per­son view drones as a ca­pa­bil­ity, so they might as well use them, even if there were other weapons sys­tems that could also do the job. There is a cer­tain logic to this, and the com­man­ders were not pay­ing for the ex­pended drones out of their own pock­ets. They were more fo­cused on the im­me­di­ate mis­sion. While first-per­son view drones are cheap, they are usu­ally not the cheap­est op­tion avail­able to com­man­ders. This is the prob­lem with us­ing them in dou­ble-tap strikes or for mis­sions that can be achieved by other sys­tems. One of these drone sor­ties costs about $500 in ma­teriel. A mor­tar shell costs less than $100. A mu­ni­tion dropped from a reusable drone, usu­ally also some­thing like a mod­i­fied mor­tar shell or 40-millimeter grenade, also costs less than $100.

The sec­ond rea­son why these drones rarely do what they were de­signed to do is tech­ni­cal. They are finicky, un­re­li­able, hard to use, and sus­cep­ti­ble to elec­tronic in­ter­fer­ence. Few first-per­son view drones have night-vi­sion ca­pa­bil­ity. Those that do are in short sup­ply and cost twice as much as the base model. In Ukraine, in the win­ter, it’s dark for 14 hours a day. Wind, rain, snow, and fog all mean a drone can­not fly.

A solid quar­ter of all these drones have some sort of tech­ni­cal fault that pre­vents them from tak­ing off. This is usu­ally dis­cov­ered only when they are be­ing prepped for launch. The most com­mon is a fault in the ra­dio re­ceiver that re­ceives in­puts from the con­trol panel, or in the video trans­mit­ter that trans­mits the sig­nal to the op­er­a­tor’s vir­tual-re­al­ity gog­gles. Sometimes this fault can be fixed through a soft­ware up­date in the field. Often, it can­not. Many faulty drones are sim­ply can­ni­bal­ized for spare parts, be­cause there is no bet­ter use for them. Even once a drone is air­borne, bat­ter­ies of­ten die mid-flight. In about 10 per­cent of sor­ties, the drone hits the tar­get, but its war­head does not det­o­nate.

Once air­borne, op­er­at­ing a first-per­son view drone suc­cess­fully is not easy. These drones were orig­i­nally de­signed to be toys for rich peo­ple. Before they were press-ganged into ser­vice as tools of war, they were used ei­ther in aer­o­batic dis­plays or in races where a group of op­er­a­tors would com­pete in fly­ing through an ob­sta­cle course. In ei­ther case, the drones were not meant to be easy to fly. They were meant to be highly ma­neu­ver­able, but also un­sta­ble. First-person view drones can­not re­ally hover, fly slowly, or linger above a tar­get. The as­sump­tion among hob­by­ists is that en­thu­si­asts will in­vest the time and money to be­come pro­fi­cient at fly­ing. As a re­sult, train­ing a highly pro­fi­cient op­er­a­tor can take months. A stan­dard, base-level course for Ukrainian drone pi­lots takes about five weeks. The qual­ity of op­er­a­tors it pre­pares is ques­tion­able, and grad­u­ates of the course need ex­tra on-the-job ex­pe­ri­ence to be­come truly pro­fi­cient. Most drone pi­lots I en­coun­tered did not go through this course. Instead, they learned to fly drones on the job. Even ex­pe­ri­enced op­er­a­tors rou­tinely miss their tar­gets and crash into trees, power lines, or other ob­sta­cles.

To keep costs down, the first-per­son view drones used by Ukrainian forces have no nav­i­ga­tional aids, such as a com­pass, a GPS re­ceiver (though it should be noted that us­ing GPS of­ten would not be pos­si­ble any­way due to wide­spread GPS sig­nal jam­ming), or an in­er­tial nav­i­ga­tion sys­tem. The op­er­a­tor re­lies on their knowl­edge of the lo­cal ter­rain and on ver­bal in­struc­tions from a nav­i­ga­tor, who usu­ally has ac­cess to the video from the first-per­son view drone it­self and from other re­con­nais­sance as­sets that are track­ing the tar­get.

But the great­est ob­sta­cle to the suc­cess­ful use of these drones by far is the un­re­li­a­bil­ity of the ra­dio link be­tween the op­er­a­tor and the drone. One of the rea­sons why hit­ting a tar­get at ground level with pre­ci­sion is dif­fi­cult is that when first-per­son view drones get close to the ground, due to ob­sta­cles, they start to lose their ra­dio con­nec­tion to the op­er­a­tor, of­ten lo­cated up to 10 kilo­me­ters away. In some cases, drones can­not at­tack a tar­get if it is sim­ply on the wrong side of a tall build­ing or hill be­cause the build­ing or hill blocks the line of sight be­tween the drone and the op­er­a­tor. Sometimes, the op­er­a­tor can work around the loss of sig­nal close to the ground by climb­ing, point­ing the drone at the tar­get, and hop­ing in­er­tia will take it to its tar­get once they have lost con­trol. When strik­ing a small tar­get like a door­way, a win­dow, or the en­trance to a base­ment, this de­grades pre­ci­sion sig­nif­i­cantly.

Drones also op­er­ate in a clut­tered seg­ment of the elec­tro­mag­netic spec­trum. First-person view drones use un­en­crypted ana­log ra­dio sig­nals, and in hot parts of the front, as many as a dozen drone teams may be com­pet­ing for use of a hand­ful of fre­quen­cies (a con­se­quence of us­ing cheaper com­po­nents). This re­sults in the need for so­phis­ti­cated de-con­flic­tion pro­ce­dures that, quite sim­ply, do not al­ways work. Even when de-con­flic­tion works, some­times a team must wait as long as half an hour for a fre­quency to be­come avail­able be­fore take­off. If it does not work and two drones find them­selves in the air on the same chan­nel at the same time, they will in­ter­fere with each oth­er’s sig­nals, usu­ally re­sult­ing in a crash. On top of that, the en­e­my’s drones also fly on the same fre­quen­cies, which can also re­sult in in­ter­fer­ence and a crash. Interference from an­other drone, whether friendly or hos­tile, re­sulted in the fail­ure of at least three per­cent of our mis­sions.

In addition to interference and the physical limitations of radio communication, first-person view drones are also highly susceptible to electronic-warfare jamming. Both sides of the Russo-Ukrainian War make extensive use of jamming. When our side turned on its jammers, it usually informed us in advance, and our drones simply could not take off, sometimes for a period of several hours. About three percent of our sorties failed because we did not get advance warning that our own jamming systems would be operating, causing our drones to fall out of the sky. On top of that, even the best efforts at de-confliction were sometimes not enough, because Ukrainian infantry and individual vehicles are often equipped with small portable jammers. When they heard a drone, they simply activated the jammer without waiting to find out whether the drone was friendly or not.

Of course, when the other side activated its jammers, we got no advance warning whatsoever. Enemy electronic warfare downed a full 31 percent of our sorties. This number could have been lower but for our command's occasional stubborn insistence that we fly even when it was almost certain that enemy jammers were operating in the target area. When enemy jammers were operating, the enemy's own drones could not fly either, so both sides faced the same dilemma. Either way, when jammers were available and switched on, first-person view operations became effectively impossible.

Some of the problems with first-person view drones will eventually be resolved as the technology matures. Better production standards will ensure that a larger percentage of drones actually take off. In Ukraine, there are countless assembly lines that build drones from cheap, off-the-shelf components sourced from dubious suppliers. A single unit often sources its drones from numerous organizations, each with its own production processes. More standardization, better quality control, and less reliance on cheap components could improve reliability. Better transmitters and receivers that are more resistant to interference will improve the connection between drone and operator. Digital signal transmission and frequency hopping are starting to appear in some first-person view drones, though they are still rare. Putting signal repeaters that amplify the drone's signal on a second drone hovering somewhere between the operator and the first-person view drone can also improve the quality of the connection. Improved and standardized procedures for training operators would cut down the time needed to become proficient.

To be sure, the technology has already evolved since I left the battlefield. Today, some Ukrainian and Russian units are also using drones controlled by fiber-optic cable rather than radio, though I had no personal experience with this type of drone in my unit. This technology is often touted as the next step in the evolution of drone warfare. It would seem to address some of the major problems I experienced with radio-controlled drones, and fiber-optic drones may indeed have a number of advantages over them. Fiber optics make jamming impossible and deconflicting frequencies unnecessary. The absence of an energy-guzzling radio transmitter can extend battery life and even allow for some innovative tactics, such as landing the drone next to a road and waiting for several hours until a vehicle passes by.

Fiber-optic drones do, however, have a number of drawbacks that mean they might not fully replace radio-controlled drones. The wire that connects the drone to the operator limits the drone's maneuverability, and snagging it on any kind of obstacle can result in a loss of control. Fiber-optic drones also cannot really double back over their route or circle a target, as this could tangle the control wire and likewise cause a loss of control. As a result, fiber-optic drones are said to be even more difficult to fly than radio-controlled drones, and because of these limitations, several drone operators I spoke to actively resist using them. Furthermore, though prices will probably come down, at present the cable means that a fiber-optic drone with 10 kilometers of cable costs about twice as much as a radio-controlled model of similar range. Finally, the production capacity available to Ukraine for fiber-optic cable is fairly limited compared to that for radio-controlled drones, meaning fiber-optic drones are chronically in short supply.

All that said, if a mem­ber of a NATO mil­i­tary were hy­po­thet­i­cally to ask me whether NATO coun­tries should ac­quire first-per­son view drone ca­pa­bil­i­ties, based on my ex­pe­ri­ence and given the cur­rent state of the tech­nol­ogy, I would prob­a­bly say no, whether they are ra­dio-con­trolled or fiber-op­tic. The vast ma­jor­ity of first-per­son view drone mis­sions can be com­pleted more cheaply, ef­fec­tively, or re­li­ably by other as­sets. Furthermore, other au­thors have noted that drones still do not come close to match­ing the ef­fects that can be achieved by massed ar­tillery fires. Additionally, ex­perts on ar­tillery sys­tems con­sis­tently note the greater re­li­a­bil­ity and range of ar­tillery.

Scaling up drone use would also in­volve scal­ing up the drones’ lo­gis­ti­cal tail. This means more com­pli­cated and ex­pen­sive lo­gis­tics for drones that would com­pete for re­sources with other types of weapons. For the time be­ing, first-per­son view drones are un­likely to fully re­place other weapons sys­tems. No mil­i­tary leader is yet se­ri­ously ad­vo­cat­ing do­ing away with ar­tillery com­pletely in fa­vor of first-per­son view drones. This means that the mil­i­tary will have two com­pet­ing lo­gis­ti­cal tails: one for first-per­son view drones and one for ar­tillery.

For sophisticated NATO militaries, instead of investing heavily in the development of first-person view drone capabilities, I would, first of all, recommend ensuring that troops in the field have well-trained organic mortar support with an ample supply of ammunition. Mortars, like artillery, can't be stopped by bad weather, jamming, or crowded frequencies. Nor can they be impeded by the dark. A well-trained mortar crew can reliably put rounds on a target in less than five minutes. Our first-person view sorties took about 15 minutes from the initial request to the moment the drone struck the target, and that was only when conditions were optimal. A mortar's price per shot is also lower than that of a first-person view drone. Drones can nominally have an advantage over mortars in range, but this is variable and depends on the terrain, the specific location of the mortars relative to the drone launch site, and the deployment of the intelligence, surveillance, and reconnaissance assets that find the targets for drones or mortars. In practice, I don't remember a single case when we struck a target that was beyond the range of mortars, and we certainly never struck a target that was beyond the range of artillery.

Secondly, for the rare cases when troops actually need tactical-level, organic precision-strike capability, and when actually carrying out such a strike is feasible, I would recommend something a little more high-end than a first-person view drone. NATO countries and their allies already produce high-quality loitering munitions, like the Switchblade. Such loitering munitions provide greater precision by day and night, more ease of use, and higher resistance to electronic interference than first-person view drones. They are more expensive, but their cost, like that of first-person view drones, is coming down. The gain in quality seems to justify the greater expense, especially since, at most, one in ten first-person view sorties is a precision strike.

Jakub Jajcay is a former officer in the Armed Forces of the Slovak Republic, where he served in a number of elite units. He is currently working on his Ph.D. in the Department of Middle Eastern Studies at Charles University in Prague.

...

Read the original on warontherocks.com »

8 160 shares, 23 trendiness

Scripts

Same Sizer

The "Same Sizer" script applies the principle of … ensures that each word, regardless of length or letter …

Wiggle Out

Following a tradition seen in Ashkenazi Hebrew manuscripts and certain Quranic texts, this script rotates words that are too large to fit within a text block into the margin. The resulting curve can be adjusted to be more or less pronounced. It also offers a version with a straight-end finish.

Fill the Space

This script imitates a method used in certain manuscripts, where the space between the last word of a line and the end of the text block is filled with various elements, such as a simple or wavy pen stroke, repetition of the last letter, punctuation marks, embellished slashes, full stops, etc. It allows you to fill this space with one or more glyphs of your choice, or by repeating the last letter of the line.

Hyphen Out

… the second part outside the text frame. The size (in %) and alignment of the resulting part can be adjusted.

Hyphenator

The "Hyphenator" InDesign script enhances text flow and readability by avoiding word breaks. It reduces the size of the last letters in the final word of a line, ensuring they fit within the available space.

Last is First

This script offers a preview of the word that will appear on the next line, a phenomenon seen in some Hebrew manuscripts.

Ext. Word & Letter

Frequently used in Hebrew manuscripts, particularly for copying biblical texts, this script expands the last letter or the last word of a line. To counter the 1000% maximum enlargement limit imposed by InDesign, we suggest selecting the vectorizing option so that the right-hand side of the frame is perfectly aligned.

Variable Gradient

The "Variable Gradient" script creates a gradient effect throughout a text block by calculating intermediate values between two extremes on a chosen axis. The result can be applied word by word or glyph by glyph.
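
As a rough sketch of the underlying calculation (this is not the actual script, which runs inside InDesign; the function name and parameters here are hypothetical), a linear interpolation between two extreme axis values across the glyphs of a text block might look like this:

// Sketch of the interpolation behind a variable-axis gradient (hypothetical names).
function axisGradient(start: number, end: number, glyphCount: number): number[] {
  if (glyphCount <= 1) return [start];
  const values: number[] = [];
  for (let i = 0; i < glyphCount; i++) {
    // Each glyph gets an intermediate value between the two extremes.
    const t = i / (glyphCount - 1);
    values.push(start + (end - start) * t);
  }
  return values;
}

// Example: a weight axis running from 100 to 900 across five glyphs.
console.log(axisGradient(100, 900, 5)); // [100, 300, 500, 700, 900]

Applying the effect word by word rather than glyph by glyph would simply interpolate over word positions instead of glyph positions.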

...

Read the original on alternativelayoutsystem.com »

9 156 shares, 9 trendiness

cubic – Cursor for code review

How we made our AI code reviewer stop being so noisy

I'm Paul, cofounder of cubic, an "AI-native GitHub." One of our core features is an AI code review agent that performs an initial review pass, catching bugs, anti-patterns, duplicated code, and similar issues in pull requests.

When we first released this agent back in April, the main feedback we got was straightforward: it was too noisy. Even small PRs often ended up flooded with multiple low-value comments, nitpicks, or outright false positives. Rather than helping reviewers, it cluttered discussions and obscured genuinely valuable feedback.

We decided to take a step back and thoroughly investigate why this was happening. After three major architecture revisions and extensive offline testing, we managed to reduce false positives by 51% without sacrificing recall. Many of these lessons turned out to be broadly useful, not just for code review agents but for designing effective AI systems in general.

Our initial architecture was straightforward but problematic. It looked clean in theory but quickly fell apart in practice:

Excessive false positives: The agent often mistook style issues for critical bugs, flagged resolved issues, and repeated suggestions our linters had already addressed.

Users lost trust: Developers quickly learned to ignore the comments altogether. When half the comments feel irrelevant, the truly important ones get missed.

Opaque reasoning: Understanding why the agent made specific calls was practically impossible. Even explicit prompts like "ignore minor style issues" had minimal effect.

We tried standard solutions (longer prompts, adjusting the model's temperature, experimenting with sampling) but saw little meaningful improvement. After extensive trial and error, we developed an architecture that significantly improved results and proved effective in real-world repositories. These solutions underpin the 51% reduction in false positives currently running in production.

We required the AI to explicitly state its reasoning before providing any feedback:

{
  "reasoning": "`cfg` can be nil on line 42; dereferenced without check on line 47",
  "finding": "Possible nil-pointer dereference",
  "confidence": 0.81
}

Enabled us to clearly trace the AI's decision-making process. If reasoning was flawed, we could quickly identify and exclude the pattern in future iterations.

Encouraged structured thinking by forcing the AI to justify its findings first, significantly reducing arbitrary conclusions.

Created a foundation to diagnose and resolve root causes behind other issues we faced.

Initially, the agent had extensive tooling: Language Server Protocol (LSP), static analysis, test runners, and more. However, explicit reasoning logs revealed that most analyses relied on a few core tools, with the extra complexity causing confusion and mistakes. We streamlined the toolkit to essential components only: a simplified LSP and a basic terminal. With fewer distractions, the agent spent more energy confirming genuine issues, significantly improving precision.

Initially, our instinct was to continuously add more rules into a single large prompt to handle edge cases. This rapidly became unsustainable and was largely ineffective, as the AI frequently overlooked many rules. Our breakthrough came from employing specialized micro-agents, each handling a narrowly defined scope:

Planner: Quickly assesses changes and identifies necessary checks.

Security Agent: Detects vulnerabilities such as injection or insecure authentication.

Specializing allowed each agent to maintain a focused context, keeping token usage efficient and precision high. The main trade-off was increased token consumption due to overlapping context, managed through effective caching strategies.

These architecture and prompt improvements led to meaningful results across hundreds of real pull requests from active open-source and private repositories. Specifically, over the past six weeks, median comments per pull request were cut by half, helping teams concentrate on genuinely important issues, and teams reported notably smoother review processes, spending less time managing irrelevant comments and more time effectively merging changes. Additionally, the reduced noise significantly improved developer confidence and engagement, making reviews faster and more impactful.

Explicit reasoning improves clarity. Require your AI to clearly explain its rationale first; this boosts accuracy and simplifies debugging.

Simplify the toolset. Regularly evaluate your agent's toolkit and remove tools rarely used (less than 10% of tasks).

Specialize with micro-agents. Keep each AI agent tightly focused on a single task, reducing cognitive overload and enhancing precision.
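
As a minimal sketch of the reasoning-first, confidence-gated pattern described above (not cubic's actual code; the type, threshold, and function names here are assumptions), filtering structured findings before they are posted as review comments might look like this:

// Hypothetical shape mirroring the JSON example above.
interface Finding {
  reasoning: string;   // the model must justify the finding before stating it
  finding: string;     // short, human-readable description
  confidence: number;  // model-reported confidence in [0, 1]
}

// Assumed cut-off; in practice this would be tuned against offline evaluations.
const CONFIDENCE_THRESHOLD = 0.7;

function selectComments(findings: Finding[]): Finding[] {
  return findings.filter(
    // Drop findings that lack a concrete justification or fall below the
    // confidence cut-off, rather than posting them as noisy review comments.
    (f) => f.reasoning.trim().length > 0 && f.confidence >= CONFIDENCE_THRESHOLD
  );
}

// Example using the finding from the JSON snippet above:
const example: Finding = {
  reasoning: "`cfg` can be nil on line 42; dereferenced without check on line 47",
  finding: "Possible nil-pointer dereference",
  confidence: 0.81,
};
console.log(selectComments([example])); // kept: 0.81 >= 0.7 and reasoning is present

Gating comments on both a stated justification and a confidence score is what lets low-value findings be dropped before they ever reach the pull request.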

...

Read the original on mrge-home.framer.website »

10 153 shares, 17 trendiness

"Why is the Rust compiler so slow?"

My web­site (the one you’re read­ing right now) is mainly served by a sin­gle Rust bi­nary. For far too long now, every time I wanted to make a change, I would:

Copy it to my server

This is… not ideal.

So in­stead, I’d like to switch to de­ploy­ing my web­site with con­tain­ers (be it Docker, Kubernetes, or oth­er­wise), match­ing the vast ma­jor­ity of soft­ware de­ployed any time in the last decade.

The only is­sue is that fast Rust builds with Docker are not sim­ple.

Rust in Docker, the sim­ple way

To get your Rust pro­gram in a con­tainer, the typ­i­cal ap­proach you might find would be some­thing like:

FROM rust:1.87-alpine3.22 AS builder

RUN apk add musl-dev

WORKDIR /workdir

COPY . .

# the "package" for my website is "web-http-server".

RUN cargo build --release --package web-http-server --target=x86_64-unknown-linux-musl

# Only in­clude the bi­nary in the fi­nal im­age

FROM alpine:3.20

COPY --from=builder /workdir/target/x86_64-unknown-linux-musl/release/web-http-server /usr/bin/web-http-server

ENTRYPOINT ["/usr/bin/web-http-server"]

Unfortunately, this will re­build every­thing from scratch when­ever there’s any change.

In my case, build­ing from scratch takes about 4 min­utes (including 10s to down­load the crates every time).

$ cargo build --release --target=x86_64-unknown-linux-musl --package web-http-server

Updating crates.io in­dex

Downloading crates …

Downloaded anstream v0.6.18

Downloaded http-body v1.0.1

… many more lines …

Compiling web-http-server v0.1.0 (/workdir/web-http-server)

Finished `release` pro­file [optimized + de­bug­info] tar­get(s) in 3m 51s

Sure, it could be worse. But I’ve grown ac­cus­tomed to speedy lo­cal builds, thanks to in­cre­men­tal com­pi­la­tion — I don’t want to wait that long on every tiny change!

Rust in Docker, with bet­ter caching

Thankfully, there’s a tool to help with this!

Luca Palmieri’s cargo-chef makes it easy to pre-build all of the de­pen­den­cies as a sep­a­rate layer in the docker build cache, so that changes in your code­base only trig­ger re-com­pi­la­tion of your code­base (and not your de­pen­den­cies).

I'll save the detailed explanation for Luca's blog post, but broadly cargo-chef creates a simplified "recipe" file from the current workspace, which can be "cooked" to cache the dependencies without being invalidated by changes in the workspace.

My web­site pulls in a few hun­dred de­pen­den­cies, so this should help!

FROM … AS planner

COPY . .

RUN cargo chef prepare --recipe-path=/workdir/recipe.json

FROM … AS cooker

# NOTE: changes to the project can produce the same "recipe",

# allowing this build stage to be cached.

COPY --from=planner /workdir/recipe.json recipe.json

RUN cargo chef cook --release --recipe-path=/workdir/recipe.json \
    --target=x86_64-unknown-linux-musl

# If recipe.json is the same, 'cooker' will be cached.

# All that's left is compiling the final binary.

FROM cooker AS builder

COPY . .

RUN cargo build --release --package web-http-server \
    --target=x86_64-unknown-linux-musl

Unfortunately though, it does­n’t have quite the speedup we’re look­ing for — most of the time is still in the fi­nal bi­nary:

$ # Build de­pen­den­cies

$ cargo chef cook --release …

Updating crates.io in­dex

Downloading crates …

Compiling web-http-server v0.0.1 (/workdir/web-http-server)

Finished `release` pro­file [optimized + de­bug­info] tar­get(s) in 1m 07s

$ # Build the fi­nal bi­nary, us­ing cached de­pen­den­cies

$ cargo build --release …

Compiling web-http-server v0.1.0 (/workdir/web-http-server)

Finished `release` pro­file [optimized + de­bug­info] tar­get(s) in 2m 50s

Weirdly, only 25% of the time is actually spent on the dependencies! As far as I could tell, my code isn't doing anything fundamentally unreasonable. It's ~7k lines of gluing together various larger dependencies (axum, reqwest, tokio-postgres, among others).

What’s rustc do­ing for all that time?

Following this excellent post by fasterthanlime, I first tried using cargo --timings to get some more information:

$ cargo build --release --timings …

Compiling web-http-server v0.1.0 (/workdir/web-http-server)

Timing re­port saved to /workdir/target/cargo-timings/cargo-timing-20250607T192029.207407545Z.html

Finished `release` pro­file [optimized + de­bug­info] tar­get(s) in 2m 54s

In addition to that timestamped cargo-timing-*.html file, there's also a cargo-timing.html. We'll just copy out the canonical version:

FROM cooker AS builder

COPY . .

RUN cargo build --timings --release --target=x86_64-unknown-linux-musl --package web-http-server

# NEW: Move the cargo timings to a known location

RUN mv target/cargo-timings/cargo-timing-*.html cargo-timing.html

FROM alpine:3.22

COPY --from=builder /workdir/target/x86_64-unknown-linux-musl/release/web-http-server /usr/bin/web-http-server

# NEW: Include it in the final image

COPY --from=builder /workdir/cargo-timing.html cargo-timing.html

And with a lit­tle bit of con­tainer wran­gling…

id="$(docker container create <image-tag>)"   # <image-tag> stands in for the image we just built
docker cp "$id:/cargo-timing.html" cargo-timing.html   # copy the timing report out of the image
docker container rm "$id"

… we should be able to see what’s go­ing on! Let’s have a look:

Oh. There’s not re­ally much in­for­ma­tion there!

What’s go­ing on here?

cargo build --timings shows a bunch of information about how long each crate took to compile. But here, we only care about the compilation time of the final crate!

That aside, this does help give us more ac­cu­rate tim­ing. Measuring out­side the com­piler adds some ex­tra mov­ing pieces, or re­quires search­ing the out­put of cargo build — so us­ing car­go’s self-re­ported tim­ings will make more pre­cise analy­sis a bit eas­ier, later on.

Just to check, the value here of 174.1s roughly matches the "2m 54s" we saw from the cargo build output.

Actually ask­ing rustc this time

The post from fasterthanlime had one more tip we can use: rustc's self-profiling feature, via the -Zself-profile flag.

Normally, you’d prob­a­bly run some­thing like:

RUSTC_BOOTSTRAP=1 cargo rustc --release -- -Z self-profile

Unfortunately, this won't work here: the change in arguments will invalidate the cached dependencies from cargo chef cook, and there's no equivalent way to pass additional rustc flags through cargo-chef.

Instead, we can fun­nel every­thing via the RUSTFLAGS en­vi­ron­ment vari­able:

# cargo chef:

RUSTC_BOOTSTRAP=1 RUSTFLAGS='-Zself-profile' cargo chef cook --release …

# final build:

RUSTC_BOOTSTRAP=1 RUSTFLAGS='-Zself-profile' cargo build --release …

This gives us files like web_http_server-…, which we can move and extract from the image in the same way as we did for cargo-timing.html.

Actually us­ing the prof­data

The Rust folks main­tain a suite of tools for ex­plor­ing rustc’s self-pro­fil­ing out­put, over in

...

Read the original on sharnoff.io »
