10 interesting stories served every morning and every evening.




1 1,001 shares, 49 trendiness

Cardiff couple invited man in for Christmas, he stayed for 45 years

Christmas is of­ten re­garded as a time for good­will, but one young UK cou­ple’s act of kind­ness 50 years ago changed their lives for­ever.

On 23 December 1975, Rob Parsons and his wife Dianne were prepar­ing for Christmas at their Cardiff home when they heard a knock at the door.

On their doorstep stood a man with a bin bag con­tain­ing his pos­ses­sions in his right hand and a frozen chicken in his left.

Rob studied the man’s face and vaguely remembered him as Ronnie Lockwood, someone he would occasionally see at Sunday School as a boy and who he was told to be kind to as he was “a bit different”.

“I said ‘Ronnie, what’s with the chicken?’ He said ‘somebody gave it to me for Christmas’. And then I said two words that changed all of our lives.

And I’m not exactly sure why I said them. I said ‘come in’.”

...

Read the original on www.bbc.co.uk »

2 447 shares, 25 trendiness

New study shows Alzheimer’s disease can be reversed to achieve full neurological recovery—not just prevented or slowed—in animal models

For more than a cen­tury, peo­ple have con­sid­ered Alzheimer’s dis­ease (AD) an ir­re­versible ill­ness. Consequently, re­search has fo­cused on pre­vent­ing or slow­ing it, rather than re­cov­ery. Despite bil­lions of dol­lars spent on decades of re­search, there has never been a clin­i­cal trial of any drug to re­verse and re­cover from AD.

A re­search team from Case Western Reserve University, University Hospitals (UH) and the Louis Stokes Cleveland VA Medical Center has now chal­lenged this long-held dogma in the field, test­ing whether brains al­ready badly af­flicted with ad­vanced AD could re­cover.

The study, led by Kalyani Chaubey, from the Pieper Laboratory, was pub­lished on­line Dec. 22 in Cell Reports Medicine. Using di­verse pre­clin­i­cal mouse mod­els and analy­sis of hu­man AD brains, the team showed that the brain’s fail­ure to main­tain nor­mal lev­els of a cen­tral cel­lu­lar en­ergy mol­e­cule, NAD+, is a ma­jor dri­ver of AD, and that main­tain­ing proper NAD+ bal­ance can pre­vent and even re­verse the dis­ease.

NAD+ lev­els de­cline nat­u­rally across the body, in­clud­ing the brain, as peo­ple age. Without proper NAD+ bal­ance, cells even­tu­ally be­come un­able to ex­e­cute many of the crit­i­cal processes re­quired for proper func­tion­ing and sur­vival. In this study, the team showed that the de­cline in NAD+ is even more se­vere in the brains of peo­ple with AD, and that this same phe­nom­e­non also oc­curs in mouse mod­els of the dis­ease.

While AD is a uniquely hu­man con­di­tion, it can be stud­ied in the lab­o­ra­tory with mice that have been ge­net­i­cally en­gi­neered to ex­press ge­netic mu­ta­tions known to cause AD in peo­ple.

The re­searchers used two of these mouse mod­els: One car­ried mul­ti­ple hu­man mu­ta­tions in amy­loid pro­cess­ing; the other car­ried a hu­man mu­ta­tion in the tau pro­tein.

Amyloid and tau pathol­ogy are two of the ma­jor early events in AD. Both lines of mice de­velop brain pathol­ogy re­sem­bling AD, in­clud­ing blood-brain bar­rier de­te­ri­o­ra­tion, ax­onal de­gen­er­a­tion, neu­roin­flam­ma­tion, im­paired hip­pocam­pal neu­ro­ge­n­e­sis, re­duced synap­tic trans­mis­sion and wide­spread ac­cu­mu­la­tion of ox­ida­tive dam­age. These mice also de­velop the char­ac­ter­is­tics of se­vere cog­ni­tive im­pair­ments seen in peo­ple with AD.

After find­ing that NAD+ lev­els in the brain de­clined pre­cip­i­tously in both hu­man and mouse AD, the re­search team tested whether pre­vent­ing loss of brain NAD+ bal­ance be­fore dis­ease on­set or restor­ing brain NAD+ bal­ance af­ter sig­nif­i­cant dis­ease pro­gres­sion could pre­vent or re­verse AD, re­spec­tively.

The study was based on their previous work, published in Proceedings of the National Academy of Sciences USA, showing that restoring the brain’s NAD+ balance achieved pathological and functional recovery after severe, long-lasting traumatic brain injury. They restored NAD+ balance by administering a now well-characterized pharmacologic agent known as P7C3-A20, developed in the Pieper lab.

Remarkably, not only did pre­serv­ing NAD+ bal­ance pro­tect mice from de­vel­op­ing AD, but de­layed treat­ment in mice with ad­vanced dis­ease also en­abled the brain to fix the ma­jor patho­log­i­cal events dri­ven by the dis­ease-caus­ing ge­netic mu­ta­tions.

Moreover, both lines of mice fully re­cov­ered cog­ni­tive func­tion. This was ac­com­pa­nied by nor­mal­ized blood lev­els of phos­pho­ry­lated tau 217, a re­cently ap­proved clin­i­cal bio­marker of AD in peo­ple, pro­vid­ing con­fir­ma­tion of dis­ease re­ver­sal and high­light­ing an ob­jec­tive bio­marker that could be used in fu­ture clin­i­cal tri­als for AD re­cov­ery.

“We were very excited and encouraged by our results,” said Andrew A. Pieper, the study’s senior author, a professor at the Case Western Reserve School of Medicine and director of the Brain Health Medicines Center, Harrington Discovery Institute at UH. “Restoring the brain’s energy balance achieved pathological and functional recovery in both lines of mice with advanced Alzheimer’s. Seeing this effect in two very different animal models, each driven by different genetic causes, strengthens the new idea that recovery from advanced disease might be possible in people with AD when the brain’s NAD+ balance is restored.”

Pieper also holds the Morley-Mather Chair in Neuropsychiatry at UH and the CWRU Rebecca E. Barchas, MD, DLFAPA, University Professorship in Translational Psychiatry. He serves as psy­chi­a­trist and in­ves­ti­ga­tor in the Louis Stokes VA Geriatric Research Education and Clinical Center.

The re­sults prompt a par­a­digm shift in how re­searchers, clin­i­cians and pa­tients can think about treat­ing AD in the fu­ture.

“The key takeaway is a message of hope—the effects of Alzheimer’s disease may not be inevitably permanent,” Pieper said. “The damaged brain can, under some conditions, repair itself and regain function.”

“Through our study, we demonstrated one drug-based way to accomplish this in animal models, and also identified candidate proteins in the human AD brain that may relate to the ability to reverse AD,” Chaubey said.

Pieper emphasized that current over-the-counter NAD+ precursors have been shown in animal models to raise cellular NAD+ to dangerously high levels that promote cancer. The approach in this study, however, uses a pharmacologic agent (P7C3-A20) that enables cells to maintain their proper balance of NAD+ under conditions of otherwise overwhelming stress, without elevating NAD+ to supraphysiologic levels.

“This is an important factor when considering patient care, and clinicians should consider the possibility that therapeutic strategies aimed at restoring brain energy balance might offer a path to disease recovery,” Pieper said.

This work also en­cour­ages new re­search into com­ple­men­tary ap­proaches and even­tual test­ing in pa­tients, and the tech­nol­ogy is be­ing com­mer­cial­ized by Cleveland-based com­pany Glengary Brain Health, which Pieper co-founded.

“This new therapeutic approach to recovery needs to be moved into carefully designed human clinical trials to determine whether the efficacy seen in animal models translates to human patients,” Pieper said. “Additional next steps for the laboratory research include pinpointing which aspects of brain energy balance are most important for recovery, identifying and evaluating complementary approaches to Alzheimer’s reversal, and investigating whether this recovery approach is also effective in other forms of chronic, age-related neurodegenerative disease.”

...

Read the original on case.edu »

3 415 shares, 51 trendiness

Maybe the Default Settings Are Too High

I’ve been read­ing Lord of the Rings for two months and I’m just at the end of the first part. It’s not be­cause I’m not en­joy­ing it. It’s one of the most en­joy­able read­ing ex­pe­ri­ences I can re­mem­ber.

From the be­gin­ning, I’ve read the whole thing aloud. I’ve found read­ing aloud help­ful for stay­ing en­gaged — lim­it­ing my­self to mouth-speed rather than eye-speed means I won’t rush, miss im­por­tant de­tails, and then lose in­ter­est, which has al­ways been a prob­lem for me.

At first I was anx­ious to read a 1,500-page book this way, be­cause it would take so long. But, as some­one pointed out to me, if I’m en­joy­ing it, why would I want to be done with it sooner?

So I tried slow­ing down even more, and dis­cov­ered some­thing. I slowed to a pace that felt al­most ab­surd, treat­ing each sen­tence as though it might be a par­tic­u­larly im­por­tant one. I gave each one maybe triple the usual time and at­ten­tion, ig­nor­ing the fact that there are hun­dreds of pages to go.

This leisurely pace made Middle-Earth blos­som be­fore my eyes. When I paused af­ter each comma, and let each sen­tence ring for a small mo­ment af­ter the pe­riod, the events of the story reached me with more weight and strength. That ex­tra time gave space for Tolkien’s im­ages and moods to prop­a­gate in my mind, which they did au­to­mat­i­cally.

Some part of me still wanted to rush and get on with it, to make good time, to gloss over the songs and lore to get to Moria and Mount Doom and the other mar­quee mo­ments of the story. But the more I ig­nored that im­pulse, the bet­ter the ex­pe­ri­ence got.

By of­fer­ing the book about triple the usual amount of at­ten­tive­ness, I was get­ting about triple the sto­ry­ness (i.e. mean­ing, en­gage­ment, lit­er­ary plea­sure). Whatever the thing is that I’m seek­ing when I pick up a novel in the first place, there’s much more of it avail­able at this pace.

This effect reminded me of a paradox around eating I recognized long ago. When you slow down your eating speed, say to half or a third your default speed, you get much more enjoyment out of a smaller amount of food. The extra attention given to each bite allows more of “the good stuff,” whatever that is exactly, to reach you.

What’s paradoxical is that it’s precisely the seeking of that “good stuff” that normally drives me to eat so quickly, and miss most of what I’m seeking. When you try to barrel ahead to access the good stuff quicker, you get less of it in the end. Slow down and much more of it is released.

And it’s re­leased au­to­mat­i­cally, in both read­ing and eat­ing. You don’t have to search it out. The good stuff (the mean­ing in the text, the plea­sure in the eat­ing) just rises up to meet you in that ex­tra time you give it. Slowing down, and of­fer­ing more time to the act of con­sump­tion, im­me­di­ately in­creases read­ing com­pre­hen­sion (and eat­ing com­pre­hen­sion).

Both are anal­o­gous to slow­ing down while you vac­uum a car­pet. If you pass the vac­uum head too quickly, you miss half the dirt. Slow down, and you can hear how much more grit is sent skit­ter­ing up the tube. The suc­tion and bris­tles are work­ing, but they need more time to do their work fully, to draw up the deeper-ly­ing stuff.

It seems that my de­fault con­sump­tion speeds for read­ing and eat­ing (and maybe every­thing else) re­duce the re­wards of those things sig­nif­i­cantly, un­der­min­ing the point of do­ing ei­ther.

Part of it is my own im­pa­tience. But I also sus­pect that mod­ern liv­ing, with its in­fi­nite sup­ply of con­sum­ables, tends to push our rate-of-in­take di­als too high. I’m not go­ing to run out of books, or snacks, or op­por­tu­ni­ties to learn some­thing. There’s al­ways more, so not every crust of bread or printed page needs to be ap­pre­ci­ated fully.

Internally though, the mind is jug­gling like Lucy and Ethel on the con­veyor belt at the choco­late fac­tory. Our re­cep­tors for mean­ing and ap­pre­ci­a­tion, like the vac­uum head, need more time to do their full work, to make all the con­nec­tions they’re de­signed to make.

It might sound like I’m just of­fer­ing clichés — less is more, stop and smell the roses, take your time — and I guess I am. But clichés suf­fer the same is­sue: they are of­ten pro­found in­sights, con­sumed and passed on too rapidly for their real mean­ing to reg­is­ter any­more. You re­ally should stop and smell roses, as you know if you’re in the habit of do­ing that.

At least see what happens when you reduce your consumption speed — of anything, but especially books, information, and food — by a half, or two thirds. Notice (1) that something in you really wants to plow through at the highest viable setting, and (2) how much more of the reward is released when you slow down anyway.

As far as I can tell, al­most every­thing be­comes more sat­is­fy­ing when you give it more time and in­ten­tion, even things like check­ing the mail­box or writ­ing a shop­ping list.

Slowing down your rate of con­sump­tion will in­evitably change what you want to con­sume. Reading throw­away news ar­ti­cles or AI slop with great care and at­ten­tion is only go­ing to show you how empty of value it is. Reading dense writ­ing in inky old books, crafted for your mind by great mas­ters, be­comes eas­ier with­out the rushed pace, and the mean­ing just blooms out of it.

Same with food. Try to savor a cheap, waxy “chocolate” bar, or a bag of store-brand cheese puffs, and you discover a harsh taste that you don’t want to look at too closely. Enjoy a homemade pastry with great attention, and discover there’s even more in it than you realized.

Mass pro­duc­tion is good in so many ways, but the faster we tend to con­sume its fruits, the more we end up seek­ing things for their glossy, can­died sur­faces. The more we go for these sur­face-level re­wards, the more the cul­ture fo­cuses on of­fer­ing only that part — such as TikTok videos, processed food, CGI-forward movies, and pub­lic dis­course in the form of un­ex­am­ined talk­ing points.

Who knows how far we’ve drifted from the best modes of con­sum­ing the things we value. Once some­thing be­comes a norm, it seems like an ap­pro­pri­ate stan­dard, no mat­ter how much has been lost. Apparently, read­ing silently and alone was un­usual un­til as late as the 18th cen­tury. Certainly sit-down meals and cook­ing at home were.

I don’t mean to sound like a scold. Let’s say none of this is morally good or bad. It’s just that in so much of what we do, we could be get­ting much more of the part of it that we re­ally seek — but it’s only avail­able at slower speeds.

If you’re cu­ri­ous, try con­sum­ing things more slowly, so slowly it seems silly to oth­ers — say a third your ha­bit­ual speed — and see what rises up to meet you.

Recently I opened a dis­cus­sion fo­rum for Raptitude read­ers who want to give some­thing up for the month of December (alcohol, so­cial me­dia, snacks, etc).

It’s been a real suc­cess, and many peo­ple want to do some­thing sim­i­lar in January. If you want to quit some­thing, or just give it up for a month, you’re in­vited to join.

Follow this link at the end of this post to get an in­vite.

...

Read the original on www.raptitude.com »

4 387 shares, 23 trendiness

The Entire New Yorker Archive Is Now Fully Digitized

In the introduction to “The New Yorker Index 1992,” a twenty-page catalogue of everything the magazine published that year, the staff writer John McPhee acknowledged a ritual familiar to many New Yorker readers: tackling a stack of unread issues. Instead of catching up at home, he’d schlep his copies up to New Hampshire and read in the middle of a lake, while lying in a canoe. With those issues dispatched, he’d call the New Yorker office and ask the librarian for help locating other stories he wanted to read: “Hello, Helen, in what issue did [the staff writer Thomas] Whiteside tee up the American latex tomato? Whose was the thing about the grass at Wimbledon?” (The thing was McPhee’s, of course.)

Exploring past New Yorker pieces is now a lot easier (and more portable). As of this week, our full archive is available to read at newyorker.com. On top of what was previously accessible, we’ve added more than a hundred thousand articles from more than four thousand issues, a stack hefty enough to sink your canoe. Not only is everything from the 1992 index accounted for—Susan Orlean on the inner workings of a supermarket, Talk of the Town stories about “urinals (art)” and “urinals (not art)”—but also John Updike’s 1961 short story “A & P” and Calvin Tomkins’s Profile of Marcel Duchamp. There’s work by Jorge Luis Borges and Susan Sontag, Ralph Ellison and Louise Glück. There are articles about Frank Sinatra and Michael Jordan, royals and rock stars, cowboys and clowns. All in all, there are more than thirty-one thousand Talk of the Town stories; twenty-four hundred Reporter at Large pieces; more than thirteen thousand works of fiction and fourteen thousand poems; three thousand Letters from everywhere, from Abu Dhabi to Zimbabwe; and fifteen hundred “Annals of” everything, from “haberdashery” to “veterinary medicine.”

While the complete digital archive may not have the same charm as magazines piled on the nightstand, there is now a single home for every issue—a place to peruse covers, scan tables of contents, and choose what to read next. Better still, if you don’t happen to have the phone number of our librarian, upgraded search capabilities allow you to hunt down “Whiteside” or “Wimbledon,” “vaping” or “vampires,” and sort results by date of publication. We’ve also made use of A.I. to add short summaries where they didn’t previously appear, making it easier to discern what an article is about. (This is, after all, a magazine in which the headline “Measure for Measure” might lead to an essay not on Shakespeare’s comedy but on the rise of the metric system.)

The mag­a­zine’s cen­te­nary cel­e­bra­tions, which kicked off in February, pro­vide a won­der­ful oc­ca­sion to get reac­quainted with our rich his­tory. Whether you are look­ing for some­thing spe­cific, go­ing down a rab­bit hole, or sim­ply catch­ing up, the newly ex­panded archive is de­signed to make a hun­dred years of writ­ing more ac­ces­si­ble than ever. Subscribers en­joy un­lim­ited ac­cess; if you aren’t a sub­scriber yet, be­come one to­day.

We’ll con­tinue to high­light some of our past fa­vorites in the Classics newslet­ter, on our home page, and else­where, but con­sider this an open in­vi­ta­tion to dive into the archive on your own. If you do choose to read on the wa­ter, please be care­ful—an iPad dropped over­board won’t hold up quite as well as a copy of the print mag­a­zine. ♦

...

Read the original on www.newyorker.com »

5 350 shares, 20 trendiness

Python 3.15’s interpreter for Windows x86-64 should hopefully be 15% faster

Some time ago I posted an apology piece for Python’s tail calling results. I apologized for communicating performance results without noticing a compiler bug had occurred.

I can proudly say today that I am partially retracting that apology, but only for two platforms—macOS AArch64 (Xcode Clang) and Windows x86-64 (MSVC).

In our own experiments, the tail calling interpreter for CPython was found to beat the computed goto interpreter by 5% on pyperformance on AArch64 macOS using Xcode Clang, and roughly 15% on pyperformance on Windows on an experimental internal version of MSVC. The Windows build is against a switch-case interpreter, but in theory this shouldn’t matter too much; more on that in the next section.

This is of course, a hope­fully ac­cu­rate re­sult. I tried to be more dili­gent here, but I am of course not in­fal­li­ble. However, I have found that shar­ing early and mak­ing a fool of my­self of­ten works well, as it has led to peo­ple catch­ing bugs in my code, so I shall con­tinue do­ing so :).

Also this as­sumes the change does­n’t get re­verted later in Python 3.15’s de­vel­op­ment cy­cle.

Just a recap: there are two popular current ways of writing C-based interpreters.

Where we just switch-case to the cor­rect in­struc­tion han­dler.

And the other pop­u­lar way is a GCC/Clang ex­ten­sion called la­bels-as-val­ues/​com­puted go­tos.

Which is basically the same idea, but to instead jump to the address of the next label. Traditionally, the key optimization here is that it needs only one jump to go to the next instruction, while in the switch-case interpreter, a naive compiler would need two jumps.

With modern compilers, however, the benefits of computed gotos are a lot smaller, mainly because modern compilers have gotten better and modern hardware has also gotten better. In Nelson Elhage’s excellent investigation of this kind of interpreter, the speedup of computed gotos over switch-case on modern Clang was only in the low single digits on pyperformance.

A third way, suggested decades ago but not really feasible until recently, is call/tail-call threaded interpreters. In this scheme, each bytecode handler is its own function, and we tail-call from one handler to the next in the instruction stream:

This wasn’t too feasible in C for one main reason: tail call optimization was merely an optimization. It’s something the C compiler might do, or might not do. This means that if you’re unlucky and the C compiler chooses not to perform the tail call, your interpreter might stack overflow!

Some time ago, Clang introduced __attribute__((musttail)), which allowed for mandating that a call must be tail-called; otherwise, the compilation will fail. To my knowledge, the first time this was popularized for use in a mainstream interpreter was in Josh Haberman’s Protobuf blog post.

Later on, Haoran Xu noticed that the GHC calling convention combined with tail calls produced efficient code. They used this for their baseline JIT in a paper and termed the technique Copy-and-Patch.

So where are we now?

After using a fixed Xcode Clang, our performance numbers on CPython 3.14/3.15 suggest that the tail calling interpreter does provide a modest speedup over computed gotos: around the 5% geomean range on pyperformance.

To my understanding, uv already ships Python 3.14 on macOS with tail calling, which might be responsible for some of the speedups you see there. We’re planning to ship the official 3.15 macOS binaries on python.org with tail calling as well.

However, you’re not here for that. The ti­tle of this blog post is clearly about MSVC Windows x86-64. So what about that?

[!CAUTION] The features for MSVC discussed below are, to my knowledge, experimental. They are not guaranteed to always be around unless the MSVC team decides to keep them. Use at your own risk!

These are the preliminary pyperformance results for CPython on MSVC with tail-calling vs switch-case. Any number above 1.00x is a speedup (e.g. 1.01x == 1% speedup); anything below 1.00x is a slowdown. The speedup is a geometric mean of around 15-16%, with a range from a ~60% slowdown (one or two outliers) to a 78% speedup. The key thing, however, is that the vast majority of benchmarks sped up!

[!WARNING] These re­sults are on an ex­per­i­men­tal in­ter­nal MSVC com­piler, pub­lic re­sults be­low.

To verify this and make sure I wasn’t wrong yet again, I checked the results on my machine with Visual Studio 2026. These are the results from this issue.

So yeah, the speedups are real! For a large-ish li­brary like xDSL, we see a 14% speedup, while for smaller mi­crobench­marks like nbody and spec­tral­norm, the speedups are greater.

Thanks to Chris Eibl and Brandt Bucher, we managed to get the PR for this on MSVC over the finish line. I also want to sincerely thank the MSVC team. I can’t say this enough: they have been a joy to work with and I’m very impressed by what they’ve done, and I want to congratulate them on releasing Visual Studio 2026.

This is now listed in the What’s New for 3.15 notes:

Builds us­ing Visual Studio 2026 (MSVC 18) may now use the new tail-call­ing in­ter­preter. Results on an early ex­per­i­men­tal MSVC com­piler re­ported roughly 15% speedup on the geo­met­ric mean of pyper­for­mance on Windows x86-64 over the switch-case in­ter­preter. We have ob­served speedups rang­ing from 15% for large pure-Python li­braries to 40% for long-run­ning small pure-Python scripts on Windows. (Contributed by Chris Eibl, Ken Jin, and Brandt Bucher in gh-143068. Special thanks to the MSVC team in­clud­ing Hulon Jenkins.)

This is the doc­u­men­ta­tion for [[msvc::musttail]].

Where ex­actly do the speedups come from?

I used to believe that tail calling interpreters get their speedup from better register use. While I still believe that now, I suspect it is not the main reason for the speedups in CPython.

My main guess now is that tail calling resets compiler heuristics to sane levels, so that compilers can do their jobs.

Let me show an example. At the time of writing, CPython 3.15’s interpreter loop is around 12k lines of C code. That’s 12k lines in a single function for the switch-case and computed goto interpreters.

This has caused many is­sues for com­pil­ers in the past, too many to list in fact. I have a EuroPython 2025 talk about this. In short, this overly large func­tion breaks a lot of com­piler heuris­tics.

One of the most beneficial optimisations is inlining. In the past, we’ve found that compilers sometimes straight up refuse to inline even the simplest of functions in that 12k LOC eval loop. I want to stress that this is not the fault of the compiler. It’s actually doing the correct thing—you usually don’t want to increase the code size of something already super large. Unfortunately, this doesn’t bode well for our interpreter.

You might say “just write the interpreter in assembly!” However, the whole point of this exercise is to not do that.

OK, enough talk; let’s take a look at the code now. Taking a real example, we examine BINARY_OP_ADD_INT, which adds two Python integers. Cleaning up the code so it’s readable, things look like this:

    TARGET(BINARY_OP_ADD_INT) {
        // Increment the instruction pointer.
        _Py_CODEUNIT *const this_instr = next_instr;
        frame->instr_ptr = next_instr;
        next_instr += 6;

        _PyStackRef left = stack_pointer[-2];
        _PyStackRef right = stack_pointer[-1];
        _PyStackRef res;

        // Check that LHS is an int.
        PyObject *value_o = PyStackRef_AsPyObjectBorrow(left);
        if (!_PyLong_CheckExactAndCompact(value_o)) {
            JUMP_TO_PREDICTED(BINARY_OP);
        }

        // Check that RHS is an int.
        // … (same code as above for LHS)

        // Add them together.
        PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
        PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
        res = _PyCompactLong_Add((PyLongObject *)left_o, (PyLongObject *)right_o);

        // If the addition fails, fall back to the generic instruction.
        if (PyStackRef_IsNull(res)) {
            JUMP_TO_PREDICTED(BINARY_OP);
        }

        // Close the references.
        PyStackRef_CLOSE_SPECIALIZED(left, _PyLong_ExactDealloc);
        PyStackRef_CLOSE_SPECIALIZED(right, _PyLong_ExactDealloc);

        // Write to the stack, and dispatch.
        stack_pointer[-2] = res;
        stack_pointer += -1;
        DISPATCH();
    }

Seems simple enough. Let’s take a look at the assembly for switch-case on VS 2026. Note again, this is a non-PGO build for easy source information; PGO generally makes some of these problems go away, but not all of them:

Huh… all our functions were not inlined. Surely that must’ve meant they were too big or something, right? Let’s look at PyStackRef_CLOSE_SPECIALIZED:

Here’s how BINARY_OP_ADD_INT looks with tail call­ing on VS 2026 (again, no PGO):

    if (!_PyLong_CheckExactAndCompact(left_o)) {
    00007FFC67164785  cmp   qword ptr [rax+8],rdx
    00007FFC67164789  jne   _TAIL_CALL_BINARY_OP_ADD_INT@@_A+149h (07FFC67164879h)
    00007FFC6716478F  mov   r9,qword ptr [rax+10h]
    00007FFC67164793  cmp   r9,10h
    00007FFC67164797  jae   _TAIL_CALL_BINARY_OP_ADD_INT@@_A+149h (07FFC67164879h)
    res = _PyCompactLong_Add((PyLongObject *)left_o, (PyLongObject *)right_o);
    00007FFC6716479D  mov   eax,dword ptr [rax+18h]
    00007FFC671647A0  and   r9d,3
    00007FFC671647A4  and   r8d,3
    00007FFC671647A8  mov   edx,1
    00007FFC671647AD  sub   rdx,r9
    00007FFC671647B0  mov   ecx,1
    00007FFC671647B5  imul  rdx,rax
    00007FFC671647B9  mov   eax,dword ptr [rbx+18h]
    00007FFC671647BC  sub   rcx,r8
    00007FFC671647BF  imul  rcx,rax
    00007FFC671647C3  add   rcx,rdx
    00007FFC671647C6  call  medium_from_stwodigits (07FFC6706E9E0h)
    00007FFC671647CB  mov   rbx,rax
    PyStackRef_CLOSE_SPECIALIZED(value, _PyLong_ExactDealloc);
    00007FFC671647EB  test  bpl,1
    00007FFC671647EF  jne   _TAIL_CALL_BINARY_OP_ADD_INT@@_A+0ECh (07FFC6716481Ch)
    00007FFC671647F1  add   dword ptr [rbp],0FFFFFFFFh
    00007FFC671647F5  jne   _TAIL_CALL_BINARY_OP_ADD_INT@@_A+0ECh (07FFC6716481Ch)

...

Read the original on fidget-spinner.github.io »

6 348 shares, 13 trendiness

Message limit 10000 team edition · Issue #34271 · mattermost/mattermost

We have a prob­lem since the new up­grade.

A message reading “10,000-message limit reached. Messages sent before 26 September 2025 are hidden. Restore Access” appeared.

So the messages before that date cannot be accessed anymore.

When was this hard restriction implemented? In v11?

We are a school and have been running this Mattermost instance since the COVID pandemic, with over 2,000 active users and 470,000 posts.

Is September 26 a date calculated from the 10,000-message limit, or just access to the last month’s messages?

...

Read the original on github.com »

7 339 shares, 21 trendiness

Toys with the highest play-time and lowest clean-up-time

The worst toy is one with many pieces that my kids dump on the ground and then play with for only 2 minutes. This makes for a cleaning-to-playtime ratio of 2 minutes of play vs. 10 minutes of cleanup.

It sucks away my life as a parent.

A beau­ti­ful toy is one that the kids play with a lot, over a long time, and that is­n’t hard to clean up.

Comparing the toys I scored high and the toys I scored low, here are the principles that I think give a toy a high score.

The high-scor­ing toys can be­come many dif­fer­ent kinds of ob­jects. At our house, they are ro­bots, they are rocket ships. They are a fish­ing hook that then we go fish­ing with.

The giant magnet tiles and small magnet tiles also become containers for a narrative to play out. The magnet tiles are often present boxes to deliver birthday presents to each other, or houses that other toys live inside of. The giant tiles are houses, rocket ships, nap pods, or shops. I would rather have a pile of giant magnet tiles than one “play store”.

By contrast, the pieces in the Minecraft toy are each a specific thing: a tree, water, or lava. There are fewer world-building possibilities, with everything fitting into the strong frame that the toy offers. It makes sense that they grow bored of playing with them if it has fewer “games” to offer.

Each piece of the high-scoring toys has a fun relationship with the others. If the pieces are different, they are different enough that it doesn't take a lot of thought to choose between them.

The toy below looks less fun to me because it looks more fiddly. If I was playing, I would need to figure out if I want a slightly bent angle or a straight one, and that seems like not a fun choice.

Apparently every toy that I find easiest to clean up has magnets on it. Maybe I feel the satisfaction of clicking them together as I clean them up. Cleaning becomes a little like playing.

With the high-scoring toys, the magnets are strong, and the connection between parts feels satisfying when you make it.

On the other side, the Minecraft toy magnets are less strong and feel less satisfying to put together. It doesn't have a satisfying sense of being complete when you stick them together; it's more like a temporary paste.

The toy Clixo seems cool: flexible play, elegant shapes, and magnetic. I predict it would be a top-scoring toy.

...

Read the original on joannabregan.substack.com »

8 219 shares, 15 trendiness

Asahi Linux with Sway on the MacBook Air M2

I bought a MacBook Air M2. As of writing, it's very affordable, with the 16 GB RAM, 256 GB SSD, 13.6" model available for $750. Also as of writing, Asahi Linux doesn't support anything newer than the M2.

For context, my previous Linux laptops:

* 2011-2015: MacBook Air 13.3" with Intel Core i5 1.8 GHz, 8 GB of RAM, and 256 GB SSD (aftermarket upgrade from OWC). I installed Arch Linux on it with the i3 window manager.

* 2014-2018: Dell XPS 13 Developer Edition. I used the Ubuntu 14.04 that came with it with the i3 window manager.

* 2018-2024: Lenovo Thinkpad X1 Carbon Gen 6 with Intel Core i7 8640U, 16 GB of RAM, and 1 TB SSD. I installed Arch Linux on it with Sway.

The Asahi Linux installer is a one-liner which you can paste into the Terminal. It worked very well. My only complaint is that it seemed to take hours to copy root.img and boot.img over at 150 KB/s.

Since I intended to run it with the Sway window manager, and storage space is precious, I installed Fedora Minimal.

I connected to Wi-Fi with NetworkManager:

nmcli device wifi list

nmcli device wifi connect 'my_ssid' password 'mypassword'

and then I installed a bunch of packages I use, such as:

sudo dnf install @sway-desktop-environment fish alacritty rofi ruff rclone pavucontrol-qt i3status mako pass syncthing maim xdg-user-dirs firefox rustup openssl-devel ncdu fd-find neovim

Then, I cloned my personal dotfile git repo and ran setup.sh. Of course, my configs weren't meant for the MacBook, so I had to make some changes (which I've pushed to the dotfiles).

By default, the whole row containing the notch is disabled, leading to a large-bezels look which I personally don't like. There has got to be a way to use that screen real estate nicely!

I re-enabled that part of the screen with

grubby --args=apple_dcp.show_notch=1 --update-kernel=ALL

Then, I put the Sway bar on the top to make a seamless appearance, where the left and right sides are used for useful information but the middle part is all black. By experimentation I found that the notch is 56px tall.

bar {
    position top
    status_command i3status
    modifier $mod
    tray_output primary
    # the height of the m2 macbook air's notch
    height 56

    colors {
        background #000000
        statusline #cfcfd9
        separator #000000
        # border background text
        focused_workspace #0c0c0c #413459 #cfcfd9
        active_workspace #0c0c0c #413459 #cfcfd9
        inactive_workspace #0c0c0c #0c0c0c #cfcfd9
        urgent_workspace #2f343a #ff3300 #ffffff
    }
}

The full i3status shows a lot of information which might get occluded by the notch, and it doesn't work with the MacBook battery levels by default, so I had to update the config:

general {
    colors = true
    interval = 5
}

order += "wireless _first_"
order += "ethernet _first_"
order += "battery 0"
order += "tztime local"

wireless _first_ {
    format_up = "W: (%quality at %essid) %ip"
    format_down = "W: down"
}

ethernet _first_ {
    format_up = "E: %ip (%speed)"
    format_down = "E: down"
}

battery 0 {
    format = "%status %percentage"
    hide_seconds = true
    path = "/sys/class/power_supply/macsmc-battery/uevent"
}

tztime local {
    format = "%Y-%m-%d %H:%M:%S"
}

I usually don't like having the bar on the top (as with macOS), since you won't be able to move your mouse cursor to the top edge to, say, click on tabs. Despite being mostly keyboard-driven, clicking on browser tabs with the mouse is something I still do often.

To fix that, I prevented the mouse cursor from entering the bar on the top, with

# use swaymsg -t get_inputs for the touchpad's identifier
input 1452:849:Apple_MTP_multi-touch map_to_region 0 56 2560 1608
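Those region numbers line up with the panel geometry: assuming the Air M2's native 2560x1664 resolution (my figure, not stated in the post), starting below the 56 px notch row leaves 1664 - 56 = 1608 px of usable height. A quick sanity check of that arithmetic:

```shell
# Recompute the map_to_region arguments from the assumed panel geometry:
# x=0, y=notch height, width=full panel, height=panel minus notch.
awk 'BEGIN {
    width = 2560; height = 1664; notch = 56
    printf "map_to_region 0 %d %d %d\n", notch, width, height - notch
}'
# prints: map_to_region 0 56 2560 1608
```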

In around September 2025, I switched from the native Swaybar to Waybar. Somehow, I was running into some issues with swaymsg's handling of battery levels, and my computer ricing was due for a slight visual update anyway. It's nice to save a tiny bit of screen real estate with icons instead of pure text, but of course, it is somewhat slower than Swaybar as it needs to render graphical stuff. The Waybar is still situated behind the notch.

The new Waybar config and CSS are at waybarconfig and waybarstyle.

I am very impressed with how smooth and problem-free Asahi Linux is. It is incredibly responsive and feels even smoother than my Arch Linux desktop with a 16-core AMD Ryzen 7945HX and 64 GB of RAM.

The touchpad in particular is stunningly good, just as good as native macOS. The mouse cursor movement and two-finger scroll with inertia feel incredibly natural, much better than on my old Thinkpad X1 Carbon.

One of the main reasons for getting the laptop was to use it for line scan photography. I was able to install the Alkeria SDK for ARM64 without any issues, even though it came as a deb file instead of an rpm. I didn't manage to get alien to work properly (something about the architecture arm64 not matching Fedora's convention of calling it aarch64?) so I just used bsdtar to extract the contents into the filesystem root, yolo!!! The M2 compiles my code super fast!

With high screen brightness and while compiling lots of code, my battery went down from 100% to 60% after about 4.5 hours of use. That's not as good as the 15 hours of battery life on macOS, but still pretty respectable.
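Extrapolating that drain linearly gives a rough full-charge estimate under the same heavy load (a back-of-the-envelope figure, not a measurement from the post):

```shell
# 40 percentage points consumed over 4.5 hours => estimated full-charge runtime
awk 'BEGIN { used = 100 - 60; hours = 4.5; printf "%.2f hours\n", hours * 100 / used }'
# prints: 11.25 hours
```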

That said, it isn't perfect. Common issues are:

* higher battery drainage during sleep, so I usually just shut it down entirely when not using it

...

Read the original on daniel.lawrence.lu »

9 184 shares, 16 trendiness

Fahrplan

...

Read the original on fahrplan.events.ccc.de »

10 181 shares, 19 trendiness

Google says it is 'gradually rolling out' option to change your @gmail.com address

On a Google support page, the company says it is rolling out a new option to let users change their email address, even if it is an "@gmail.com" address.

* Related: Google One discounting annual 2 TB and AI Pro plans by 50%

For quite some time now, Google has allowed users to change their account email address if they are using a third-party email address, but users with an "@gmail.com" address have been left unable to change it, as Google says:

If your account's email address ends in @gmail.com, you usually can't change it.

It appears this is changing.

On the same support page that currently says you usually can't change your email, Google is detailing a new process that is "gradually rolling out." The altered page weirdly only shows in Hindi at the moment, meaning you can't see the changes in English. Everything quoted below is translated. The page was first spotted in the "Google Pixel Hub" group on Telegram.

The email address associated with your Google Account is the address you use to sign in to Google services. This email address helps you and others identify your account. If you'd like, you can change your Google Account email address that ends in gmail.com to a new email address that ends in gmail.com.

This is new functionality that Google hasn't detailed elsewhere yet, but says is "gradually rolling out to all users."

With this change, Google will allow users to change their "@gmail.com" email address to a new "@gmail.com" address with an altered username. After the change, Google details that your original email address will still receive emails in the same inbox as your new one and will still work for sign-in, and that none of your account access will change. Users will be unable to change or delete their email within 12 months of the change.

When you change your Google Account email address from an email address ending in gmail.com to a new email address ending in gmail.com:

The old email address in your Google Account that ends with gmail.com will be set as an alias. Learn more about alias email addresses.

You will receive emails at both your old and new email addresses.

Data saved in your account, including photos, messages, and emails sent to your old email address, will not be affected.

You can reuse your old Google Account email address at any time. However, you can't create a new Google Account email address that ends with gmail.com for the next 12 months. You can't delete your new email address either.

You can sign in to Google services like Gmail, Maps, YouTube, Google Play, or Drive with your old or new email address.

Each account can only change its "@gmail.com" address up to 3 times, for a total of 4 addresses.

Google further details that your old Gmail address will still appear in some cases, and won't be immediately reflected in "older instances" such as Calendar events created before the change. You'll also still be able to send emails from the old address. The old address remains yours and cannot be used by another user.

The page is very detailed about the process, but the changes just aren't live yet. Presumably, this support page detailing the process in Hindi went up a little earlier than intended, but it certainly seems we'll be hearing more about this change in the coming weeks.

When the functionality goes live, users will be able to change their Gmail address via "My Account."

We'll update this article if further information comes out.

...

Read the original on 9to5google.com »
