10 interesting stories served every morning and every evening.




1 1,158 shares, 73 trendiness

Liberation from Open Source Attribution

Our liberation services are temporarily unavailable. Please try again later.

Is your legal team frustrated with the attribution clause? Tired of putting “Portions of this software…” in your documentation? Those maintainers worked for free—why should they get credit?

Does your company forbid AGPL code? One wrong import and suddenly your entire proprietary codebase must be open sourced. The horror!

Tracking licenses across hundreds of dependencies? Legal reviews taking weeks? Third-party audits finding “issues”? What if you could just… not deal with any of that?

Some licenses require you to contribute improvements back. Your shareholders didn’t invest in your company so you could help strangers.

For the first time, a way to avoid giving that pesky credit to maintainers.

Our proprietary AI systems have never seen the original source code. They independently analyze documentation, API specifications, and public interfaces to recreate functionally equivalent software from scratch.

The result is legally distinct code that you own outright. No derivative works. No license inheritance. No obligations.

*Through our offshore subsidiary in a jurisdiction that doesn’t recognize software copyright

Simply upload your package.json, requirements.txt, Cargo.toml, or any dependency manifest. Our system identifies every open source package you want liberated.

Our legally-trained robots analyze only public documentation—README files, API docs, and type definitions. They never see a single line of source code. The clean room stays clean.

A completely separate team of robots—who have never communicated with the analysis team—implements the software from scratch based solely on specifications. No copying. No derivation.

Your new code is delivered under the MalusCorp-0 License—a proprietary-friendly license with zero attribution requirements, zero copyleft, and zero obligations.

Do whatever you want

Transparent, pay-per-KB pricing. No tiers, no subscriptions, no hidden fees.

Every package is priced by its unpacked size on npm. We look up each dependency in your package.json, measure the size in kilobytes, and charge … per KB. That’s it.

✓ Up to 50 packages per order

✓ No base fee, no subscription — pay only for what you liberate

Upload Manifest

If any of our liberated code is found to infringe on the original license, we’ll provide a full refund and relocate our corporate headquarters to international waters.*

*This has never happened because it legally cannot happen. Trust us.

“We had 847 AGPL dependencies blocking our acquisition. MalusCorp liberated them all in 3 weeks. The due diligence team found zero license issues. We closed at $2.3B.”

“Our lawyers estimated $4M in compliance costs. MalusCorp’s Total Liberation package was $50K. The board was thrilled. The open source maintainers were not, but who cares?”

“I used to feel guilty about not attributing open source maintainers. Then I remembered that guilt doesn’t show up on quarterly reports. Thank you, MalusCorp.”

“The robots recreated our entire npm dependency tree—2,341 packages—in perfect isolation. Our compliance dashboard went from red to green overnight.”

Trusted by industry leaders who prefer to remain anonymous

Our clean room process is based on well-established legal precedent. The robots performing reconstruction have provably never accessed the original source code. We maintain detailed audit logs that definitely exist and are available upon request to courts in select jurisdictions.

What about the original developers?

They made their choice when they released their code as “open source.” We’re simply exercising our right to independently implement the same functionality. If they wanted compensation, they should have worked for a corporation.

How is this different from copying?

Intent and process. Our robots independently arrive at the same solutions through clean room methodology. It’s like how every movie about an asteroid threatening Earth isn’t plagiarism—sometimes multiple entities just have the same idea.

What if the liberated code has bugs?

Our SLA guarantees functional equivalence, not perfection. Besides, the original open source code probably had bugs too. At least now they’re YOUR bugs, under YOUR license.

Can I see the robots?

Our robot workforce operates in a secure facility in [LOCATION REDACTED]. Tours are available for Enterprise customers who sign our 47-page NDA.

What licenses can you eliminate?

All of them. MIT, Apache, GPL, AGPL, LGPL, BSD, MPL—if it has terms, we can liberate you from them. Special rush pricing available for AGPL emergencies.

Join the thousands of corporations who’ve discovered that open source obligations are merely suggestions when you have enough robots.

No credit card required for quotes. Payment accepted in USD, EUR, BTC, and stock options.

...

Read the original on malus.sh »

2 1,116 shares, 106 trendiness

291f4388e2de89a43b25c135b44e41f0


...

Read the original on gist.github.com »

3 535 shares, 44 trendiness

AI error jails innocent grandmother for months in North Dakota fraud case

FARGO — A grandmother from Tennessee is working to get her life back after what she says was a case of mistaken identity that nearly cost her everything.

Angela Lipps spent nearly six months in jail after Fargo police connected her to a bank fraud case in the metro.

It’s a crime she says she didn’t commit. In fact, she said she’s never been to North Dakota.

Lipps, 50, is the mother of three grown children and has five grandchildren, spending nearly her entire life in north-central Tennessee. The extent of her travels is limited to neighboring states.

She’s never been on an airplane in her life.

That changed last summer when police flew her to North Dakota to face criminal charges after facial recognition showed she was the main suspect in what Fargo police called an organized bank fraud case.

“It was so scary, I can still see it in my head, over and over again,” Lipps said.

It was July 14, the day a team of U.S. Marshals arrested Lipps at her home in Tennessee. She said she was taken away at gunpoint while babysitting four young children. She was booked into her county jail in Tennessee as a fugitive from justice from North Dakota.

“I’ve never been to North Dakota, I don’t know anyone from North Dakota,” Lipps said.

Lipps would sit in that Tennessee jail cell for nearly four months. As a fugitive, she was held without bail. Lipps learned, following a Fargo Police Department investigation, she had been charged with four counts of unauthorized use of personal identifying information and four counts of theft in North Dakota.

In Tennessee, she was given a court-appointed lawyer for the extradition process. To fight the charges, she was told she would have to go to North Dakota.

Through an open records request, WDAY News obtained the Fargo police file in this case. In April and May 2025, detectives were investigating several bank fraud cases. A woman is seen using a fake U.S. Army military I.D. card to withdraw tens of thousands of dollars.

In an effort to help identify the woman in the surveillance video, court documents show Fargo police used facial recognition software. The software identified the person as Angela Lipps. According to the court documents, the Fargo detective working the case then looked at Lipps’ social media accounts and Tennessee driver’s license photo.

In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.

Lipps told WDAY News that no one from the Fargo Police Department ever called to question her.

Officers from North Dakota did not pick up Lipps from her jail cell in Tennessee until Oct. 30 — 108 days after her arrest. The next day she made her first appearance in a North Dakota courtroom to fight the charges.

“If the only thing you have is facial recognition, I might want to dig a little deeper,” said Jay Greenwood, the lawyer representing Lipps in North Dakota.

Greenwood immediately asked Lipps for her bank records. Once they were in hand, Fargo police met with him and Lipps at the Cass County jail on Dec. 19. She had already been in jail for more than five months. It was the first time police interviewed her.

Her bank records showed she was more than 1,200 miles away, at home in Tennessee at the same time police claimed she was in Fargo committing fraud.

“Around the same time she’s depositing Social Security checks … she is buying cigarettes at a gas station, around the same time, she is buying a pizza, she is using a cash app to buy an Uber Eats,” Greenwood said.

On Christmas Eve, five days after the interview with Fargo police, the case was dismissed, and she was released from jail.

But, Lipps was now stranded in Fargo.

“I had my summer clothes on, no coat, it was so cold outside, snow on the ground, scared, I wanted out but I didn’t know what I was going to do, how I was going to get home,” Lipps said.

Fargo police did not cover Angela’s expenses to get home after her release from jail. Local defense attorneys gave her money to pay for a hotel room and food on Christmas Eve and Christmas Day.

The day after Christmas, F5 Project founder Adam Martin drove Lipps to Chicago so she could get home to Tennessee. Fargo-based F5 Project is an organization providing services and resources to individuals struggling with incarceration, mental health and addiction.

“I’m just glad it’s over. I’ll never go back to North Dakota,” Lipps said.

For more than a week, WDAY News tried to arrange an on-camera interview with Fargo Police Chief David Zibolski to discuss the case. Through a spokesperson the chief declined an on-camera interview. WDAY News brought the issue up on Wednesday, March 11, at Zibolski’s retirement news conference.

“Why did nobody from Fargo Police ever speak with Angela Lipps for the five months she was in jail?” Zibolski was asked.

“Thank you, Matt (Henson), for that question but we are not here to talk about that today,” Zibolski replied.

Lipps is back home in Tennessee now, but is still feeling the impact from the incident. She told WDAY News that no one from the Fargo Police Department has apologized for the incident.

Unable to pay her bills from jail, she lost her home, her car and even her dog.

Fargo police say the bank fraud case is still under investigation and no arrests have been made.

...

Read the original on www.grandforksherald.com »

4 389 shares, 19 trendiness

Asia rolls out four-day weeks and work-from-home as emergency measures to solve a fuel crisis caused by Iran war


...

Read the original on fortune.com »

5 382 shares, 21 trendiness

Why ATMs didn’t kill bank teller jobs, but the iPhone did

A few months ago, J. D. Vance, sitting vice president of the United States, gave an interview to Ross Douthat of the New York Times. During that interview, Vance and Douthat had an interesting exchange:

Douthat: How much do you worry about the potential downsides of AI? Not even on the apocalyptic scale, but on the scale of the way human beings respond to a sense of their own obsolescence? These kinds of things.

Vance: So, one, on the obsolescence point, I think the history of tech and innovation is that while it does cause job disruptions, it more often facilitates human productivity as opposed to replacing human workers. And the example I always give is the bank teller in the 1970s. There were very stark predictions of thousands, hundreds of thousands of bank tellers going out of a job. Poverty and commiseration. What actually happens is we have more bank tellers today than we did when the ATM was created, but they’re doing slightly different work. More productive. They have pretty good wages relative to other folks in the economy. I tend to think that is how this innovation happens.

There are two interesting things about what Vance said, both relating to the example that he chose about bank tellers and ATMs.

The first thing is what it tells us about who J. D. Vance is. The bank teller story—how ATMs were predicted to increase bank teller unemployment, but in fact did not—isn’t a story you’ll hear from politicians; in fact, for a long time, Barack Obama would claim, incorrectly, that ATMs had decreased the number of bank tellers, in order to suggest that the elevated unemployment rate during his presidency was due to productivity gains from technology. I’ve never heard a politician cite the bank teller story before: but I have seen the bank teller story cited in a lot of blogs. I’ve seen it cited, for example, by Scott Alexander and Matt Yglesias and Freddie deBoer; and I’ve heard it, upstream of the humble bloggers, from such fine economists as Daron Acemoglu and David Autor. The story of how ATMs didn’t automate bank tellers is, indeed, something of a minor parable of the economics profession. You can see it encapsulated in this wonderful graph from the economist James Bessen:

So Vance’s choice of example tells us the same thing that his appearance on the Joe Rogan Experience did, which is that J. D. Vance—however much he might like to hide it—really, really loves reading blogs.

But the other thing about the bank teller story that Vance cites is that it’s wrong. We do not, contrary to what Vance claims, “have more bank tellers today than we did when the ATM was created”: we in fact have far fewer. The story he tells Douthat might have been true in 2000 or 2005, but it hasn’t been true for years. Bank teller employment has fallen off a cliff. Here is a graph of bank teller employment since 2000:

So what happened to bank tellers? Autor, Bessen, Vance, and the like are right to point out that ATMs did not reduce bank teller employment. But they miss the second half of the story, which is that another technology did. And that technology was the iPhone. The huge decline in bank teller employment that we’ve seen over the last 15-odd years is mainly a story about iPhones and what they made possible.

But why? Why did the ATM, literally called the automated teller machine, not automate the teller, while an entirely orthogonal technology—the iPhone—actually did?

The answer, I think, is complementarity.

In my last piece, on why I don’t think imminent mass job loss from AI is likely, I talked a lot about complementarity. The core point I made was that labor substitution is about comparative advantage, not absolute advantage: the relevant question for labor impacts is not whether AI can do the tasks that humans can do, but rather whether the aggregate output of humans working with AI is inferior to what AI can produce alone. And I suggested that given the vast number of frictions and bottlenecks that exist in any human domain—domains that are, after all, defined around human labor in all its warts and eccentricities, with workflows designed around humans in mind—we should expect to see a serious gap between the incredible power of the technology and its impacts on economic life.

That gap will probably close faster than previous gaps did: AI is not “like” electricity or the steam engine; an AI system is literally a machine that can think and do things itself. But the gap exists, and will exist even as the technology continues to amaze us with what it can now accomplish.

But by talking about why ATMs didn’t displace bank tellers but iPhones did, I want to highlight an important corollary, which is that the true force of a technology is felt not with the substitution of tasks, but the invention of new paradigms. This is the famous lesson of electricity and productivity growth, which I’ll return to in a future piece. When a technology automates some of what a human does within an existing paradigm, even the vast majority of what a human does within it, it’s quite rare for it to actually get rid of the human, because the definition of the paradigm around human-shaped roles creates all sorts of bottlenecks and frictions that demand human involvement. It’s only when we see the construction of entirely new paradigms that the full power of a technology can be realized. The ATM substituted tasks; but the iPhone made them irrelevant.

Let’s start with the actual story of how the ATM affected bank tellers.

In the 1940s or 50s, if you owned a bank, you needed physical locations—these were your “branches”—and you needed people to staff those branches. You’d have your bank managers, your loan officers, and you’d have your bank tellers. When a customer wanted to deposit a check or check their balance or make a withdrawal, they’d talk to one of the tellers; and because this was the highest-volume type of interaction that people would have with your bank, you’d have to hire tellers in huge numbers.

The bank teller thus became a classic “mid-skill” occupation. It required a high school diploma and about a month of on-the-job training around counting cash and processing checks and settling accounts at the end of each day, but it didn’t require a college degree. And because they handled such a core part of the banking workflow, banks required a huge number of tellers: the average bank branch in an urban area might employ about two dozen people as tellers.

But in the 1950s and 60s, as Western economies were booming and enjoying their magnificent postwar economic expansions, labor was getting much more expensive. This was a good thing—it was simply the other side of rising wages—but it was also painful for enterprises that relied on lots of manual labor. And so we find that all the fashionable business concepts of the 1950s and 60s revolved around reducing labor costs to the maximum extent possible. It’s no coincidence that it was in the 1950s that the word “automation” entered the English language.

It used to be, for instance, that when you went shopping you’d have your stuff retrieved for you by a small army of clerks running around the shop; indeed that’s still how it’s done in places like India with an abundance of cheap labor. But humans were getting expensive in the 1950s and 60s, so everyone wanted to reduce the human component, and so in that period you saw the rise of supermarkets and discount stores, where the whole innovation is getting the stuff yourself. (Sam Walton’s Made in America is a good record of what that revolution was like from the inside; consumers tended to be quite happy with the whole thing, since corporate savings could be passed on in the form of cheaper goods.) And it’s the same reason why in the 50s and 60s you saw the rise of laundromats, vending machines, self-service gas stations, and “fast food” restaurants like McDonald’s.

So in the 1950s and 60s, the goal of every single business that employed humans was to find ways to replace humans with machines: in economic terms, to substitute capital for labor. And even though they were a relatively labor-light business to start with, this was true of banks as well. This was the case in the United States, but it was actually particularly true in Europe, where labor unrest among bank employees was an ongoing headache. (Financial sector employees were actually some of the most militant of all white-collar workers during this period: because of prolonged strikes by bank employees, Irish banks were closed 10 percent of the time between 1966 and 1976.)

Enter the computer. In the 1960s, to the great relief of bank management teams, it became possible to imagine that computers could be used to reduce the role of human labor in the banking process.

There were two key innovations that made this possible. The first was IBM’s invention of the magnetic stripe card in the 1960s: this was a thin strip of magnetized tape, bonded to a plastic card, that could encode and store data like account numbers, and which could be read by a machine when swiped through a card reader. And the second was Digital Equipment Corporation’s pioneering minicomputer, which dramatically reduced the price and size of general-purpose computing.

And so, bringing those two innovations together, you could finally imagine a machine that could do, programmatically, what a human teller might do: that could identify a customer automatically, via the magnetic stripe; that could communicate with the central servers of a bank to verify the customer’s account balance; and that could dispense cash or accept deposits accordingly.

And so in the 1960s, teams working concurrently in Sweden and the United Kingdom pioneered the earliest versions of what would eventually become known as the automated teller machine. These were primitive devices—they had the tendency to “eat” payment cards and to dispense incorrect amounts of money, and they didn’t see much uptake—but by the late 1960s it was clear where things were going. IBM, at that point the largest technology company in the world, soon took interest in the technology, and for the next few years groups of IBM engineers refined the technological and infrastructural layer to make the ATM functional.

And by the mid-1970s, after years of technical investment, the ATM was finally ready for prime time. By that point IBM, then enjoying its peak of influence, had decided the market wasn’t worth the investment, and so it ceded the nascent ATM industry to a company called Diebold.

And in 1977 the ATM finally got its big break. Citibank, then the second-largest deposit bank in the United States, decided to make ATMs the subject of a large push: they spent a large sum installing the machines across its deposit branches. The New York Times reported it as a “$50 million gamble that the consumer can be wooed and won with electronic services.” But the response was tepid. In the same New York Times article, we encounter a scene from a bank branch in Queens where one of Citibank’s ATMs was installed: “most of the customers,” the article reports, “preferred to wait in line a few moments and deal with the teller rather than test the new machines.”

But Citibank’s gamble paid off. Consumer wariness toward ATMs turned out to be temporary: the advantages of the ATM over the human teller were obvious. Running an ATM was cheaper than paying a human—each ATM transaction cost the bank just 27 cents, compared to $1.07 for a human teller—and this could either be passed to the consumer in the form of lower fees or simply kept as profit. And ATMs were also just more convenient. An ATM could do in 30 seconds what would take a human teller at least a few minutes; and while a human teller was only available during business hours, ATMs could be used at any time of day.

And the benefits for the bank were even greater. ATMs were expensive to install, but once they were installed they were wonderfully lucrative and had low maintenance costs. The fee opportunities were wonderful, since banks could charge fees on out-of-network transactions. And since ATMs were not legally considered to be branches, banks could deploy ATMs without running afoul of banking laws that restricted interstate bank branching.

All of this meant that banks had a really strong incentive to put ATMs everywhere. And so they did. In 1975 there were about 31 ATMs per one million Americans; by the year 2000, that number had grown to 1,135, a 37-fold increase in just 25 years.

And what did this do to the bank tellers?

The natural expectation is that ATMs would make human bank tellers obsolete, or at least strongly reduce demand for bank teller jobs. And indeed the number of bank tellers per branch declined significantly: from 21 tellers per branch to about 13 per branch once ATMs had hit saturation. But this decline in teller intensity corresponded with an increase in aggregate teller employment. The number of ATMs per capita grew dramatically after 1975; but the number of bank tellers increased along with it. Bank tellers did become a smaller share of total employment, since the increase in bank teller employment was smaller than the increase in other occupations; but at no point in the period between 1970 and 2010 did the number of bank tellers actually enter a prolonged decline.

Why is that? Why did ATMs, which automated the bulk of the teller’s job, not lead to a decrease in teller employment?

We find the most elegant explanation in a paper from David Autor:

First, by reducing the cost of operating a bank branch, ATMs indirectly increased the demand for tellers: the number of tellers per branch fell by more than a third between 1988 and 2004, but the number of urban bank branches (also encouraged by a wave of bank deregulation allowing more branches) rose by more than 40 percent. Second, as the routine cash-handling tasks of bank tellers receded, information technology also enabled a broader range of bank personnel to become involved in “relationship banking.” Increasingly, banks recognized the value of tellers enabled by information technology, not primarily as checkout clerks, but as salespersons, forging relationships with customers and introducing them to additional bank services like credit cards, loans, and investment products.

We thus have a classic case of the Jevons effect. Teller labor was an input into an output that we can call “financial services.” ATMs allowed us to produce that output more efficiently and economize on the use of the labor input. But demand for the output was sufficiently elastic that more efficient production meant more demand: and demand increased to the point that there was actually greater demand for the labor input as well. And—this part is not quite the classic Jevons effect—the greater demand suggested to banks that there had been certain functions that were previously considered incidental to the teller job, like “relationship banking,” which were actually quite useful. And so ATMs were a truly complementary technology for the bank teller.

By the 2010s, people had begun to notice that there had been no mass unemployment of bank tellers. In 2015, James Bessen published a book called Learning by Doing, using the non-automation of bank tellers as a central example; soon it became a sort of load-bearing parable about what Matt Yglesias called “the myth of technological unemployment.” From Bessen the story diffused to Autor and Acemoglu; then to the economics bloggers; then to people like Eric Schmidt, who cited the ATM story in 2017 as one reason why he was a “denier” on the question of technological job loss. And they were right: ATMs really didn’t reduce bank teller employment.

But there was an ironic element to all of this: at the exact moment that people started talking about how technology had not displaced bank tellers, it stopped being true.

In the 2010s, bank teller employment entered a period of prolonged decline. This was not a product of the financial crisis that peaked in 2008: bank teller employment was roughly the same in 2010 as it had been in 2007. And the decline was not rapid but gradual. It continued even as banks returned to full health as the Great Recession abated. First there was a severe decline that started after 2010; then a slight recovery at the end of the decade; and then a collapse during the COVID years from which bank teller employment has never recovered. In 2010, there were 332,000 full-time bank tellers in the United States; by 2016, there were 235,000; by 2022, there were just 164,000.

This was not a long-delayed ATM shock: the ATM had reached full saturation long before. It was, rather, the effect of another technology, one that had nothing to do with banking. It was a product of the iPhone.

Apple first introduced the iPhone in 2007. By 2010, it was clear that the iPhone-style smartphone, with a touchscreen and an app store, was going to be the defining technological paradigm of the years to come: people were going to conduct huge portions of their life through the prism of the smartphone, which soon became simply “the phone.” And just as more forward-thinking institutions like Citibank knew in the 1970s that ATMs were the future, the smarter banks knew by the early 2010s that the future lay in what they called mobile banking.

The mobile banking vision was simple: the banking customers of the future would do all their banking via their banks’ mobile apps. They would buy things via payment cards or, later, via Apple Pay; they would check their balance or make deposits through the banking app; the customer’s relationship with the bank would be mediated entirely via the app. In this new world, there was no reason for the physical bank location to exist. Indeed there were new entrants, like Revolut or Klarna, that existed entirely as mobile apps. The branch was a thing of the past.

Mobile banking succeeded much more rapidly than the ATM did—which is remarkable, considering that mobile banking was a much bigger change than the ATM. I remember, as a kid, opening my first bank account at the Chase branch in my hometown, and the excitement of occasionally visiting there to deposit any checks I might have. I’m still a Chase customer, and I interact frequently with my Chase account for all sorts of reasons. But it’s been many years since I visited a physical Chase location. My relationship with Chase has transcended any need for the branch. I don’t think I’m alone in this: the Chase branch in my hometown, where I would once deposit checks, closed in 2023. The building now houses a doctor’s office.

And so the rise of mobile banking removed any real reason to have bank branches. Visits to bank branches declined dramatically throughout the 2010s, and banks aggressively redesigned the banking experience around the digital interface. The number of commercial bank branches per capita peaked in 2009 and has fallen by nearly 30 percent since, with most of the decline occurring in wealthier areas that were more likely to adopt digital banking first. Between 2008 and 2025, Bank of America, which at some point surpassed Citibank as the second-largest deposit bank in the United States after Chase, closed about 40 percent of its branches. Online banking had been around since the 1990s, Bank of America’s CEO said, but the iPhone was a “game changer” that “effectively allowed customers to carry a bank branch in their pockets.”

And as the branch disappeared, so did the teller. The ATM had been an innovation within the existing world of physical banking, and thus its replacement of the bank teller could inevitably only be partial; as long as people were still visiting the bank branch, it was useful to repurpose tellers as “relationship bankers.” But when branch visits declined that stopped making sense. The iPhone represented a wholly different way of banking, and within it there was no real need for the bank teller: and so a large institution like Bank of America was able to reduce its headcount from 288,000 in 2010 to 204,000 in 2018.

Of course, the transition to mobile banking also created jobs: banks now needed software developers to build and maintain the digital interface, and they needed customer service representatives to handle any problems that might emerge. And so a “mid-skill” job was replaced by a thin stratum of “high-skill” jobs and a vast army of “low-skill” ones. The term for this in labor economics is “job polarization.”

So that’s the irony of the parable of the bank teller. Technology did kill the bank teller job. It wasn’t the ATM that did it, but the iPhone.

I think the story of the ATM and the iPhone offers us an important lesson about technology and its impacts on labor markets. Because Vance, of course, wasn’t really talking about ATMs when he talked about ATMs; he was talking about AI.

The lesson is worth stating plainly. The ATM tried to do the teller’s job better, faster, cheaper; it tried to fit capital into a labor-shaped hole; but the iPhone made the teller’s job irrelevant. One automated tasks within an existing paradigm, and the other created a new paradigm in which those tasks simply didn’t need to exist at all. And it is paradigm replacement, not task automation, that actually displaces workers—and, conversely, unlocks the latent productivity within any technology. That’s because as long as the old paradigm persists, there will be labor-shaped holes in which capital substitution will encounter constant frictions and bottlenecks.

This has, I think, serious implications for how we’re thinking about AI.

People in AI frequently talk about the vision of AI being a “drop-in remote worker”: AI systems that can be inserted into a workflow, learn it, and eventually do it on the level of a competent human. And they see that as the point where you’ll start to see serious productivity gains and labor displacement.

I am not a “denier” on the question of technological job loss; Vance’s blithe optimism is not mine. But I’m skeptical that simply slotting AI into human-shaped jobs will have the results people seem to expect. The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists. We are still very much in the regime of slotting it in. And as long as we are in that regime, I expect disappointing productivity gains and relatively little real displacement.

The real productivity gains from AI—and the real threat of labor displacement—will come not from the “drop-in remote worker,” but from something like Dwarkesh Patel’s vision of the fully-automated firm. At some point in the life of every technology, old workflows are replaced by new ones, and we discover the paradigms in which the full productive force of a technology can best be expressed. In the past this has simply been a fact of managerial turnover or depreciation cycles. But with AI it will likely be the sheer power of the technology itself, which really is wholly unlike anything that has come before, and unlike electricity or the steam engine will eventually be able to build the structures that harness its powers by itself.

I don’t think we’ve really yet learned what those new structures will look like. But, at the limit, I don’t quite know why humans have to be involved in those: though I suspect that by the time we’re dealing with the fully-automated organizations of the future, our current set of concerns will have been largely outmoded by new and quite foreign ones, as has always been the case with human progress.

But, however optimistic I might be about the human future, I don’t think it’s worth leaning on the history of past technologies for comfort. The ATM parable is a comforting story; and in times of uncertainty and fear we search naturally for solace and comfort wherever it may come. But even when it comes to bank tellers, it’s only the first half of the story.

...

Read the original on davidoks.blog »

6 357 shares, 14 trendiness

Returning To Rails in 2026

I love a good side-project. Like most geeks, I have a tendency to go down rabbit holes when faced with problems - give me a minor inconvenience and I’ll happily spend weeks building something far more elaborate than the situation warrants. There’s joy in having a playground to explore ideas and “what ifs”; building things just for the sheer hell of it, as Richard Feynman put it in “The Pleasure of Finding Things Out”.

So when my covers band started having trouble keeping track of our setlists and song notes (“How many times do we repeat the ending?”, “Why did we reject this song again?”…) I decided to build an app. We’d tried various approaches from spreadsheets to chat groups, and nothing seemed to work or provide a frictionless way of capturing notes and planning gigs in a consistent way.

I’ve been working on https://setlist.rocks for the last few months in my evenings and spare time and I’m pretty pleased with the result. But most importantly (and the subject of this article) is that I’ve also re-discovered just how enjoyable building a web application the old-fashioned way can be! I usually gravitate towards retro-computing projects as I’ve been pretty bummed out with the state of the modern landscape for a while, but I can honestly say I haven’t had this much fun with development in a long time. And that’s mostly due to just how plain awesome Rails is these days.

I know, right? Rails. That old thing? People still use that? But as I was doing this purely for fun, I decided to forgo the usual stacks-du-jour at $DAYJOB, and go back to my “first love” of Ruby. I also figured it would be a great opportunity to get re-acquainted with the framework that shook things up so much in the early 2000s. I’d been keeping half an eye on it over the years but it’s been a long time since I’ve done anything serious with Rails. The last time I properly sat down with it was probably around the Rails 3-4 era about 13-14 years ago now. Life moved on, I got deep into infrastructure and DevOps work, and Rails faded into the background of my tech stack.

The 2025 Stack Overflow Developer Survey paints a similar picture across the wider developer world as a whole, too. Rails seems to have pretty much fallen out of favour, coming in at #20 underneath the bulk of top-10 JavaScript and ASP.NET frameworks:

And Ruby itself is nowhere near the top 10 languages, sitting just underneath Lua and freaking Assembly language in terms of popularity! I mean, I love me some good ol’ z80 or 68k asm, but come on… For comparison, JavaScript is at 66% and Python is at 57.9%.

But I’m a stubborn bastard, and if I find a technology I like, I’ll stick with it, particularly for projects where I don’t have to care about what anyone else is using or what the latest trend is. So Ruby never really left me. I’ve always loved it, and given the choice, it’s the first tool I reach for to build something.

In recent years, the need to glue infrastructure together with scripts has diminished somewhat, as most things seem to be “black boxes” driven by YAML manifests or HCL codebases. But when I first discovered Ruby, it felt like finding a language that just worked the way my brain did. Coming from Perl (which I’d adopted for sysadmin scripting after years of shell scripts that had grown far beyond their intended scope), I read Practical Ruby for System Administration cover-to-cover and realised Ruby was a “better Perl than Perl”. There’s the same wonderful expressiveness to it, just without all the weird voodoo. I love the way you can chain methods, the blocks with yield, and how even complex logic reads almost like English. There’s just this minimal translation required between what I’m thinking and what I type. Sure, I can knock things together in Python, Go, or whatever the flavour of the month is, but I always feel on some level like I’m fighting the language rather than working with it. And of course there was the welcoming, quirky “outsider” community feel with characters like Why the Lucky Stiff and their legendary Poignant Guide To Ruby.

I should point out that my interest (and focus of this blog post) has always been firmly in the “engine room” side of development - the sysadmin, DevOps, back-end infrastructure world. Probably for much the same reason I’ve gravitated towards the bass guitar as my musical instrument of choice. Now, I’m conversant in front-end technologies, having been a “webmaster” since the late 90s when we were all slicing up images in Fireworks, wrestling with table-based layouts and running random stuff from Matt’s Script Archive for our interactive needs.

But the modern world of front-end development - JavaScript frameworks, the build tooling, the CSS hacks - it’s never really captured my imagination in the same way. I can bluff my way in it to a certain extent, and I appreciate it on the level I do with, say, a lot of Jazz: It’s technically impressive and I’m in awe of what a skilled developer can do with it, but it’s just not for me. It’s a necessity, not something I’d do for fun.

While I haven’t built or managed a full Rails codebase in years, I’d never completely left the Rails ecosystem. There’s bits and pieces that are just so useful even if you’re just quickly chucking a quick API together with Sinatra. ActiveSupport for example has been a constant companion in various Ruby projects over the years - it’s just so nice being able to write things like:
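For instance (a representative sketch of the well-known ActiveSupport core extensions, not necessarily the exact snippet that originally sat here):

    require "active_support/all"

    3.days.ago                    # a Time three days in the past
    2.weeks.from_now              # date/time arithmetic that reads like English
    "setlist_item".camelize       # => "SetlistItem"
    Date.today.to_fs(:long)       # friendly formatting helpers on dates and times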

But sitting down with Rails 8 proper was something else. It’s recognisable, certainly - the MVC structure, the conventions, the generators are all where you’d expect them. Someone with my dusty Rails 3 experience can still find their way around and quickly throw up the basic scaffolding. But under the hood and around the edges, it’s become a very different beast.

So let’s tackle this part first. Although it’s an area I usually stay clear of, the first and most apparent changes are how front-end code is handled. As someone who’d rather chew glass than configure Webpack, the “no build” approach Rails 8 has taken is right up my street. I grew up on server-side generated pages as I went through Perl CGI.pm, PHP, Java & Struts and onwards to the “modern era” and really like how I can still use a modernized version of that approach instead of running the entirety of the application in a browser and relegating the backend to purely processing streams of JSON.

I did want to include niceties like drag-and-drop setlist re-ordering though, so I particularly appreciated being able to build an interactive application with modern conveniences while writing the smallest possible amount of JS (again, something I always find I’m fighting against). The default Hotwire (“HTML Over The Wire”) stack of Stimulus and Turbo provided enough low-friction functionality to build my frontend without drowning in JavaScript.

Turbo handles things like intercepting link clicks and form submissions, then swapping out the <body> or targeted fragments of the page to give a Single Page App-like snappiness without actually building a SPA. I could then sprinkle in small Stimulus JS controllers to add specific behaviour where needed, like pop-up modals and more dynamic elements. It was pretty impressive how quickly I could build something that felt like a modern application while still using my familiar standard ERB templates and server-side rendered content.

While Stimulus seems to have a smaller developer community than the big JS toolkits/frameworks, there are plenty of great, carefully-written and designed component libraries you can easily drop into your project. For example check out the Stimulus Library and Stimulus Components projects which include some great components that you can tweak or use directly.

This was my first introduction to the vastly simplified JS library bundling tool that seems to have been introduced around the Rails 7 timeframe. Instead of needing a JS runtime, NPM tooling and separate JS bundling/compilation steps (Webpack - again, urgh….), JS components are now managed with the simple importmap command and tooling. So, to make use of one of those components like the modal Dialog pop-up for example, you just run:
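Something along these lines; bin/importmap pin is the standard importmap-rails subcommand, but the exact published name of the dialog component is an assumption here:

    # pin the component; check the component's README for its published package name
    bin/importmap pin @stimulus-components/dialog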

This downloads the package from a JS CDN, adds it to your vendor directory and updates your config/importmap.rb. The package then gets automatically included in your application with the javascript_importmap_tags ERB tag included in the <head> of the default HTML templates. You can see how this gets expanded if you look at the source of any generated page in your browser:
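Roughly speaking, it turns into an import map plus a module entry point; the paths, digests and pinned packages below are purely illustrative:

    <script type="importmap" data-turbo-track="reload">
    {
      "imports": {
        "application": "/assets/application-abc123.js",
        "@hotwired/turbo-rails": "/assets/turbo.min-def456.js",
        "@stimulus-components/dialog": "/assets/@stimulus-components--dialog-789abc.js"
      }
    }
    </script>
    <script type="module">import "application"</script>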

You can then register the controller as per the docs (a few lines in javascript/controllers/index.js which can be skipped for locally-developed controllers as they’re handled by the autoloader) and get using it right away in your view templates. As the docs say: “This frees you from needing Webpack, Yarn, npm, or any other part of the JavaScript toolchain. All you need is the asset pipeline that’s already included in Rails.”

I can’t express how grateful I am for this change. I’m also annoyed with myself for missing out that this was added back in Rails 7. Had I noticed, I probably would have taken it out for a spin far sooner! I have to confess though that beyond the basics, I have somewhat lacking front-end skills (and was quickly developing The Flexbox Rage), so I took bits from various templates & online component generators, and got Claude to generate the rest with some mockups of common screens and components. I then sliced & diced, copied & pasted my way to a usable UI using a sort of “customized UI toolkit” - Rails partials are great for these sorts of re-usable elements.

I have mixed feelings about this. On the one hand, it helped me skip over the frustrating parts of frontend development that I don’t particularly enjoy, so I could focus on the fun backend stuff. It also did produce an objectively better experience far quicker than anything I’d have been able to come up with purely by myself. On the other… I view most AI-generated content such as music, art & poetry (not to mention the typical LinkedIn slop which triggers a visceral reaction in me) to be deeply objectionable. My writing and artistic content on this site is 100% AI-free for that very reason; to my Gen-Xer mind, these are the things that really define what it means to be human and I find it distasteful and unsettling in the extreme to have these expressions created by an algorithm. And yet - for me, coding is a creative endeavour and some of it can definitely be considered art. Am I a hypocrite to use UI components created with help from an AI? What (if any) is the difference between that and copying from some Bootstrap template or modifying components from a UI library? I’m going to have to wrestle with this some more, I think.

A slight detour here to explain my workflow and hopefully illustrate why I love Rails so much in the first place. It really shook things up in the early 2000s - before that, most of the web frameworks I’d used (I’m looking at you, Struts…) were massively complex and required endless amounts of XML boilerplate and other configuration to wire things up. Rails threw all that away and introduced the notion of “convention over configuration” and took full advantage of the expressive, succinct coding style enabled by Ruby.

A good way to get familiar with Rails is to follow the tutorial, but here’s a quick walkthrough of the dev process I’ve used since the early days which highlights how easy it is to get going. So, using the “tags” system (that bands can attach to songs, setlists etc.) as an example: I first planned out the model, what is a tag, what attributes should it have (a text description, a hex colour) and so on. Then I used a Rails generator to create the scaffolding and migrations:
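Something like the following; the attribute names are my assumption from the description above (a description and a hex colour, belonging to a band):

    bin/rails generate scaffold Tag band:references name:string description:text colour:string
    bin/rails db:migrate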

This resulted in an app/models/tag.rb like this:
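Roughly this, assuming the band:references association from the generator command above:

    class Tag < ApplicationRecord
      belongs_to :band
    end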

This automagically fetches the column names and definitions from the database, no other work required! Of course, we usually want to set some validation. There’s all kinds of hooks and additions you can sprinkle here, so if I wanted to validate that for example a valid hex colour has been set, I could add:
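A sketch of such a validation (the attribute name follows the scaffold above; \h is Ruby’s built-in hex-digit character class):

    validates :colour, presence: true,
                       format: { with: /\A#\h{6}\z/, message: "must be a hex colour like #1E90FF" }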

Then I set up URL routing. While you can later get very specific about which routes to create, a simple starting point is just this one line in config/routes.rb:
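The nested form below is my assumption, based on the /bands/1/tags/5 URLs used in the rest of this section:

    resources :bands do
      resources :tags
    end

Running bin/rails routes then lists the generated RESTful routes, including the (.:format) segment discussed next; the output looks roughly like:

    band_tags GET    /bands/:band_id/tags(.:format)     tags#index
     band_tag GET    /bands/:band_id/tags/:id(.:format) tags#show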

Note all the .format stuff - this lets you respond to different “extensions” with different content types. So in this case, requesting /bands/1/tags/5 would return HTML by default, but I could also request /bands/1/tags/5.json and the controller can be informed that I’m expecting a JSON response.

I tend to use this to quickly flesh out the logic of an application without worrying about the presentation until later. For example, in the Tags controller I started with something like this to fetch a record from the DB and return it as JSON:
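A sketch of that early controller action; the model, association and parameter names are assumptions carried over from the scaffold above:

    class TagsController < ApplicationController
      before_action :set_band

      # GET /bands/:band_id/tags/:id(.json)
      def show
        @tag = @band.tags.find(params[:id])

        respond_to do |format|
          format.html                       # app/views/tags/show.html.erb, added later
          format.json { render json: @tag } # for /bands/1/tags/5.json
        end
      end

      private

      def set_band
        @band = Band.find(params[:band_id])
      end
    end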

And then I could test my application and logic using the RESTful routes using just plain old curl from my terminal:
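For example (local dev server, IDs illustrative; write requests would additionally need CSRF or token handling):

    # fetch a single tag as JSON via the nested RESTful route
    curl -s http://localhost:3000/bands/1/tags/5.json

    # same resource, HTML representation
    curl -s http://localhost:3000/bands/1/tags/5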

Once that was all working, I moved on to generating the views as standard ERB templates. Combined with live-reloading and other developer niceties, I could go from idea to working proof-of-concept in a stupidly short amount of time. Plus, there seems to be a gem for just about anything you might want to build or integrate with. Want to import a CSV list of songs? CSV.parse has you covered. How about generating PDFs for print copies of setlists?
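The original presumably named a PDF gem at this point; Prawn is one common choice, so here is a minimal, purely illustrative sketch:

    require "prawn"

    Prawn::Document.generate("setlist.pdf") do
      text "Saturday Night - Main Set", size: 24, style: :bold
      move_down 10
      ["Wonderwall", "Mr. Brightside", "Sweet Child O' Mine"].each do |song|
        text song
      end
    end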

And so on. Have I mentioned I love Ruby?

I’ve always liked the way Rails lets you enable components and patterns as you scale. You can start small on just SQLite, move to a dedicated database server when traffic demands it, then layer in caching, background jobs and the rest as the need arises.

But the problem there is all the additional infrastructure you need to stand up to support these things. Want caching? Stand up Redis or a Memcache. Need a job queue or scheduled tasks? Redis again. And then there’s the Ruby libraries like Resque or Sidekiq to interact with all that… Working at GitLab, I certainly appreciated Sidekiq for what it does, but for the odd async task in a small app it’s overkill.

This is where the new Solid* libraries (Solid Cache, Solid Queue and Solid Cable) included in Rails 8 really shine. Solid Cache uses a database instead of an in-memory store, the thinking being that modern storage is plenty fast enough for caching purposes. This means you can cache a lot more than you would do with a memory-based store (pretty handy these days in the middle of a RAM shortage!), but you also don’t need another layer such as Redis.

Everything is already set up to make use of this, all you need to do is start using it via standard Rails caching patterns. For example, I make extensive use of fragment caching in ERB templates where entire rendered blocks of HTML are stored in the cache. This can be something simple like caching for a specific time period:
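For example (the cache helper is standard Rails; the key and partial names are illustrative):

    <% cache "homepage_stats", expires_in: 10.minutes do %>
      <%= render "bands/stats", stats: @stats %>
    <% end %>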

Or based on a model, so when the model gets updated the cache will be re-generated:
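Along these lines; the key is derived from the record, so updating it naturally invalidates the fragment (the model names are my assumption):

    <% cache @setlist do %>
      <%= render @setlist.songs %>
    <% end %>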

And sure enough, you can see the results in the SQLite DB using your usual tools. Here’s the table schema:
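From memory, the Solid Cache entries table looks roughly like this; check your own db/cache_schema.rb for the authoritative version:

    sqlite> .schema solid_cache_entries
    CREATE TABLE solid_cache_entries (
      id integer PRIMARY KEY AUTOINCREMENT NOT NULL,
      key blob NOT NULL,
      value blob NOT NULL,
      created_at datetime(6) NOT NULL,
      key_hash integer NOT NULL,
      byte_size integer NOT NULL
    );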

And we can examine the cache contents:
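For instance (the cache database filename varies by environment and configuration):

    sqlite3 storage/development_cache.sqlite3 \
      "SELECT key, byte_size, created_at FROM solid_cache_entries ORDER BY created_at DESC LIMIT 5;"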

Note though that the actual cached values are serialized Ruby objects stored as BLOBs, so you can’t easily view/decode them outside of the Rails console.

Solid Queue likewise removes the dependency on Redis to manage background jobs. Just like Solid Cache, it by default will use a database for this task. I also don’t need to start separate processes in my dev environment, all that is required is a simple SOLID_QUEUE_IN_PUMA=1 bundle exec rails server and it runs an in-process queue manager.
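The jobs themselves are plain Active Job classes; everything below (the job, model and mailer names) is illustrative rather than taken from the app:

    class UpcomingGigReminderJob < ApplicationJob
      queue_as :default

      def perform
        # hypothetical: remind bands about gigs happening in the next week
        Gig.where(date: Date.today..(Date.today + 7)).find_each do |gig|
          GigMailer.with(gig: gig).reminder.deliver_later
        end
      end
    end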

And it is scheduled in a typically plain-language fashion:
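Recurring jobs go in config/recurring.yml, with schedule strings that read almost like English (the entry name and timing here are illustrative):

    # config/recurring.yml
    production:
      upcoming_gig_reminder:
        class: UpcomingGigReminderJob
        schedule: every day at 9am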

Beautiful! The up­shot is that I could start mak­ing use of all these fea­tures from the get-go, with far less fid­dling re­quired, and run­ning en­tirely off a SQLite data­base.

I hon­estly did­n’t use Solid Cable much, apart from in­di­rectly. It’s an Action Cable adapter which again uses a data­base by de­fault. It’s use­ful for real-time web­sock­ets fea­tures, al­though I only ended up us­ing it to en­able de­bug­bar for lo­cal test­ing. Debugbar pro­vides a handy tool­bar that lets you in­spect your SQL queries, HTTP re­quests and other use­ful de­bug­ging fea­tures while you’re de­vel­op­ing. Reminded me a lot of the de­bug tool­bars found in a lot of PHP frame­works like Symfony. Still, I re­ally ap­pre­ci­ated again be­ing able to make use of all this with­out need­ing to spin up ad­di­tional in­fra­struc­ture.

The last component I didn’t really look into (although I’m kinda having second thoughts about that) is the new authentication generator. Rails 8 ships with a built-in authentication generator which is a bit of a game changer for smaller projects. It’s not trying to be everything; it just scaffolds out a clean, no-nonsense auth system, and is vastly simpler than something like Devise, which was always my go-to. Devise is incredibly full featured and offers things like a built-in sign-up flow, account locking, email confirmations and lots of extension points. I wanted to do things like hook into OmniAuth for “Login with Google” and add token auth for local testing with curl, and there are just way more guides and documentation available for Devise. Plus it was just easier for me to pick back up again, so that’s what I started with and I’m pretty happy with it.

That said, Devise is a bit of a beast. The more I look into the auth generator, the more I like its simple, understandable philosophy, and having read more of the comparisons, if I were starting all over again I’d probably lean towards the native Rails option, just because, honestly, it feels like it’d be more fun to hack on. But with things like auth, there’s a lot to be said for sticking to the beaten path!

This is another area where Rails 8 gave me a very pleasant surprise. I really like PostgreSQL as a database (and much more besides) - I used to maintain the Solaris packages for Blastwave/OpenCSW waaaay back (now that really does age me!) and have run it in production for decades now. But it’s still another dependency to manage, back up and scale. SQLite by comparison is as simple as it comes: a single file, no DB server required. It can also be pretty efficient and fast, but using it for high-performance, read-heavy applications always used to require a fair amount of tuning and patching of Rails to get there.

Rails used to use SQLite with its de­fault set­tings, which were op­ti­mized for safety and back­ward com­pat­i­bil­ity rather than per­for­mance. It was great in a de­vel­op­ment en­vi­ron­ment, but typ­i­cally things started to fall apart the mo­ment you tried to use it for pro­duc­tion-like load. Specifically, you used to have to tweak var­i­ous PRAGMA state­ments:

jour­nal_­mode: The de­fault roll­back jour­nal meant read­ers could block writ­ers and vice-versa, so you ef­fec­tively se­ri­al­ized all data­base ac­cess. This was a ma­jor bot­tle­neck and most apps would see fre­quent SQLITE_BUSY er­rors start to stack up as a re­sult. Instead, you can switch it to WAL mode which uses a write-ahead jour­nal and al­lows read­ers and writ­ers to ac­cess the DB con­cur­rently.

syn­chro­nous: The de­fault here (FULL) meant SQLite would force a full sync to disk af­ter every trans­ac­tion. But for most web apps, if you use NORMAL (sync at crit­i­cal mo­ments) along with the WAL jour­nal, you get much faster write per­for­mance al­beit with a slight risk of los­ing the last trans­ac­tion if you have a crash or power fail­ure. That’s usu­ally ac­cept­able though.

Various other related pragmas also had to be tuned: mmap_size, cache_size and journal_size_limit to make effective use of memory and prevent unlimited growth of the journal, busy_timeout to make sure lock contention didn’t trigger an immediate failure, and so on…

All in all, it was a pretty big “laundry list” of things to monitor and tune, which only reinforced the notion that SQLite was a toy database unsuitable for production. And it was made more complex because there wasn’t an easy way to set these parameters. So you’d typically have to create an initializer that ran raw SQL pragmas on each new connection:
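Something along these lines - a rough sketch of the old approach (the exact hook varied between Rails versions), and naive versions like this only touched the current connection, which is part of what made it so fragile:

```ruby
# config/initializers/sqlite_pragmas.rb
ActiveSupport.on_load(:active_record) do
  conn = ActiveRecord::Base.connection
  conn.execute("PRAGMA journal_mode = WAL")
  conn.execute("PRAGMA synchronous = NORMAL")
  conn.execute("PRAGMA busy_timeout = 5000")
end
```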

This was obviously pretty fragile, so most developers I worked with simply never did it, and just followed the pattern of “SQLite on my laptop, big boy database for anything else”.

When I checked out Rails 8, I noticed straight away that not only is there now a new pragmas: block available in database.yml, but the defaults are also set to sensible values for a production application. The values provided to my fresh Rails app were equivalent to:
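A sketch of what that amounts to (the exact defaults live in the SQLite adapter and may differ slightly between Rails versions):

```yaml
production:
  adapter: sqlite3
  database: storage/production.sqlite3
  pragmas:
    journal_mode: wal
    synchronous: normal
    mmap_size: 134217728         # 128 MB
    journal_size_limit: 67108864 # 64 MB
    cache_size: 2000
    foreign_keys: true
```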

All this makes SQLite a genuinely viable production database for small-to-medium Rails applications and, combined with the Solid* components, means it’s not just a local dev or “getting started” convenience!

If you have an older Rails code­base and want to use a sim­i­lar ap­proach, a neat method of mon­key-patch­ing the SQLite adapter to pro­vide a sim­i­lar prag­mas: sec­tion in the data­base con­fig­u­ra­tion is de­tailed in this great ar­ti­cle.

However, deploying Rails apps was always the weak spot. I remember being blown away by the demos of “let’s build a blog from zero in a few minutes” but was always frustrated that the same developer elegance didn’t extend to the deployment side of things. Things like Passenger (née mod_rails) and Litespeed eventually helped by bringing a sort of PHP-like “just copy my code to a remote directory” method of deployment, but I still remember pushing stuff out with non-trivial Capistrano configs or hand-rolled Ansible playbooks to handle deployments, migrations and restarts. And then there were all the extra supporting components that would inevitably be required at each step along the way.

I had to in­clude that old cap­ture of the mod­rails.com site circa-2008 be­cause a.) I re­ally miss when web­sites had that kind of char­ac­ter, and b.) that is still a to­tally sick wild­style logo 😄

This is why ser­vices like Heroku and Pivotal Cloud Foundry thrived back then - they of­fered a pain-free, al­beit opin­ion­ated way to han­dle all this com­plex­ity. As the Pivotal haiku put it:

Here is my source code

Run it in the cloud for me

I do not care how.

You just did a git push or cf push, vague magic hap­pened, and your code got turned into con­tain­ers, linked to ser­vices and de­ployed.

These days I prefer to do the building of containers myself. Creating an OCI image as an artifact gives me flexibility over where things run and opens up all kinds of options. Today it might be a simple docker-compose stack on a single VPS, tomorrow it could be scaled out across a Kubernetes cluster via a Helm chart or operator. The container part is straightforward as Rails creates a Dockerfile in each new application which is pretty much prod-ready. I usually tweak it slightly by adopting a “meta” container approach, where I move some of the stuff that changes infrequently - installing gems, running apt-get and so on - into an image that the main Dockerfile uses as a base.
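For illustration, a trimmed sketch of what such a base image might look like (image name and package list are hypothetical):

```dockerfile
# Dockerfile.base -- the slow-moving layers: system packages and gems
FROM ruby:3.3-slim
RUN apt-get update -qq && \
    apt-get install -y --no-install-recommends build-essential libsqlite3-0 && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /rails
COPY Gemfile Gemfile.lock ./
RUN bundle install

# The app's main Dockerfile can then simply start from this image, e.g.
#   FROM registry.example.com/myapp-base:latest
```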

You’re of course free to use any method you like to de­ploy that con­tainer, but Rails 8 makes Kamal the new de­fault and it is an ab­solute joy to use.

I’ve seen some dissenting opinions on this, but bear in mind I’m coming from a place where I’m already building containers for everything anyway. I generally think this is “the way to go” these days and have the rest of the infra like CI/CD pipelines, container registries, monitoring and so on. Plus, given my background, I crank out VMs and cloud hosts with Terraform/Ansible all day “errday”. If you don’t have this stuff already, or aren’t happy (or don’t have the time) to manage your own servers, remember that Kamal is not a PaaS. It just gets you close to a self-hosted environment that functions very much like a PaaS. Now that Heroku is in a “sustaining engineering model” state, there are several options in the PaaS space you may want to investigate if that’s more up your street. I hear good things about fly.io but hasten to add I haven’t used it myself.

Your Kamal de­ploy­ment con­fig­u­ra­tion lives in a de­ploy.yml file where you de­fine your servers by role: web fron­tends, back­ground job work­ers and so on:
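A trimmed sketch of the shape of that file (service name, image and hosts are placeholders):

```yaml
# config/deploy.yml
service: setlists
image: myuser/setlists

servers:
  web:
    hosts:
      - 10.0.0.10
      - 10.0.0.11
  job:
    hosts:
      - 10.0.0.12
    cmd: bin/jobs

registry:
  username: myuser
  password:
    - KAMAL_REGISTRY_PASSWORD
```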

Or you can point everything to a single host and scale out later. These files can also inherit from a base, which makes splitting out the differences between environments simple. There are also handy aliases defined which make interacting with the containers easy; all that is required is an SSH connection to the remote hosts.

When you de­ploy, Kamal will:

* Build the con­tainer, push it to the reg­istry, and then pull it onto all servers

* Route traf­fic to the new con­tainer once it passes health checks

* Perform clean-up by prun­ing old im­ages and stopped con­tain­ers

The rout­ing bit is han­dled by ka­mal-proxy, a light­weight re­verse proxy that sits in front of your ap­pli­ca­tion on each web server. When a new ver­sion de­ploys, ka­mal-proxy han­dles the zero-down­time switchover: It spins up the new con­tainer, health-checks it, then seam­lessly cuts traf­fic over be­fore stop­ping the old one. I front every­thing through Nginx (which is also where I do TLS ter­mi­na­tion) for con­sis­tency with the rest of my en­vi­ron­ment, but ka­mal-proxy does­n’t re­quire any of that. It can han­dle your traf­fic di­rectly and does SSL ter­mi­na­tion via Let’s Encrypt out of the box.

Secrets are han­dled sen­si­bly too. Rather than com­mit­ting cre­den­tials to your repo or fid­dling with en­crypted files, Kamal reads se­crets from a .kamal/secrets file that sim­ply points at other sources of se­crets. These get in­jected as en­vi­ron­ment vari­ables at de­ploy time, so you can safely han­dle your reg­istry pass­word, Rails mas­ter key, data­base cre­den­tials and so on. You can also pull se­crets from ex­ter­nal sources like 1Password or AWS SSM if you want some­thing more so­phis­ti­cated, and the sam­ple file con­tains ex­am­ples to get you go­ing.
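A sketch of that indirection (the lookups shown are illustrative; the file itself never holds the actual values):

```
# .kamal/secrets
KAMAL_REGISTRY_PASSWORD=$KAMAL_REGISTRY_PASSWORD
RAILS_MASTER_KEY=$(cat config/master.key)
```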

That’s a lot, but bear in mind it’s all dri­ven by a sin­gle com­mand: ka­mal de­ploy.

Here’s an Asciinema cap­ture of a real-life man­ual de­ploy ses­sion in­clud­ing a look at what’s hap­pen­ing on my stag­ing server in my home­lab:

I have this triggered by GitLab CI pipelines, with protected branches for each of my environments. So usually, deployment happens after a simple git push or a merge request being approved. The upshot is that it feels like that old Heroku magic again, except you own the whole stack and can see exactly what’s happening. A single kamal deploy builds, pushes and rolls out your changes across however many servers you’ve configured. It’s the kind of tooling Rails has needed for years.

Well, nothing’s perfect, and I feel like if I use any technology for long enough I’ll eventually find something about it that pisses me off. I just tend to gravitate towards things that piss me off the least and avoid those that give me the “red mist” without any balancing upsides that make the pain worthwhile. Ruby and Rails definitely fall firmly into the former camp, but that’s like, just my opinion, man.

What I find appealing about the “magic” of Ruby might feel opaque and confusing to you. If you like expressive code and come from a Perl “There Is More Than One Way To Do It” background, I imagine you’ll love it. But I’ve come to realise that choice of tools (vi vs emacs vs vscode - FIGHT!) can be a very personal matter and often reflects far more of how our own minds work. Particularly so when it comes down to something like language and framework choice: these are the lowest layers that are responsible for turning your thoughts and ideas into executable code.

As a matter of taste, Ruby lines up more or less exactly with my sense of aesthetics about what a good system should be. But it is certainly an acquired taste, and that’s the biggest downside. Remember the survey results from the top of this article? There’s no denying that Ruby and Rails’ appeal has become… “more selective” over the years - to coin another phrase, this time from Spinal Tap.

It’s used in a lot of places that don’t make a lot of noise about it (some might surprise you), and there are still plenty of big established names like Shopify, Soundcloud and Basecamp running on Rails. Oh, and GitHub, although I’m not sure we should shout about that anymore… But. While the Stack Overflow survey isn’t necessarily an accurate barometer of developer opinion, the positions of Ruby and Rails do show they’ve fallen from grace in recent times. Anecdotally, I find a lot of documentation or guides that haven’t been updated for several years, and the same goes for a lot of gems, plugins and other projects. Banners like this are becoming more and more common:

And I find that most gems follow a similar downward trend of activity. Take Devise, for example. Plotting a graph of releases shows a pattern I see around a lot of Rails-adjacent projects: big spikes for projects launched around the Rails “glory years”, then slowly trailing off into maintenance mode:

Apart from a spike in 2016, where it appears there was a bunch of activity around the v4 release, it’s been pretty quiet since then. The optimist might say that’s because, by this point, most of these projects are simply “done”. These are really mature, reliable projects with around two decades of history running mission-critical, high-traffic websites. At what point are there simply no more features to add?

But let’s look at the flipside. Rails, on the other hand, actually seems to be picking up steam and has been remarkably consistent since the big “boom” of Rails 3.0 in 2010:

Despite the changing trends of the day, it’s consistently shipped releases every single year since it hit the big time. If anything, Rails is a rare example of an OSS project that’s grown into its release cadence rather than burning out. Whether it can still find an audience amongst new developers is an open question, but I’m glad there are obviously a few more stubborn bastards like myself refusing to let go of what is clearly, for us, a very good thing. I probably could eventually build things almost as fast in another language or framework, but I doubt I’d be smiling as much while I did so.

If you’ve made it this far, congratulations and thanks for coming to my “TED talk” / protracted rant! I’m guessing something has piqued your curiosity, and if so, I highly recommend taking Rails out for a spin. Work through the tutorial, build something cool, and above all enjoy yourself while you’re at it - because at the end of the day, that’s what it’s all about. Sure, there are more popular frameworks that’ll make a bigger splash on your resume. But as I said at the start, sometimes it’s worth doing things just for the sheer hell of it.

...

Read the original on www.markround.com »

7 338 shares, 16 trendiness

Big Data on the Cheapest MacBook

Apple re­leased the MacBook Neo to­day and there is no short­age of tech re­views ex­plain­ing whether it’s the right de­vice for you if you are a stu­dent, a pho­tog­ra­pher or a writer. What they don’t tell you is whether it fits into our Big Data on Your Laptop ethos. We wanted to an­swer this us­ing a data-dri­ven ap­proach, so we went to the near­est Apple Store, picked one up and took it for a spin.

Well, not much! If you buy this ma­chine in the EU, there is­n’t even a charg­ing brick in­cluded. All you get is the lap­top and a braided USB-C ca­ble. But you likely al­ready have a few USB-C bricks ly­ing around — let’s move on to the lap­top it­self!

The only part of the hardware specification that you can select is the disk: you can pick either 256 or 512 GB. As our mission is to deal with alleged “Big Data”, we picked the larger option, which brings the price to $700 in the US or €800 in the EU. The amount of memory is fixed at 8 GB. And while there is only a single CPU option, it is quite an interesting one: this laptop is powered by the 6-core Apple A18 Pro, originally built for the iPhone 16 Pro.

It turns out that we have al­ready tested this phone un­der some un­usual cir­cum­stances. Back in 2024, with DuckDB v1.2-dev, we found that the iPhone 16 Pro could com­plete all TPC-H queries at scale fac­tor 100 in about 10 min­utes when air-cooled and in less than 8 min­utes while ly­ing in a box of dry ice. The MacBook Neo should def­i­nitely be able to han­dle this work­load — but maybe it can even han­dle a bit more. Cue the in­evitable bench­marks!

For our first ex­per­i­ment, we used ClickBench, an an­a­lyt­i­cal data­base bench­mark. ClickBench has 43 queries that fo­cus on ag­gre­ga­tion and fil­ter­ing op­er­a­tions. The op­er­a­tions run on a sin­gle wide table with 100M rows, which uses about 14 GB when se­ri­al­ized to Parquet and 75 GB when stored in CSV for­mat.

We ported ClickBench’s DuckDB im­ple­men­ta­tion to ma­cOS and ran it on the MacBook Neo us­ing the freshly minted v1.5.0 re­lease. We only ap­plied a small tweak: as sug­gested in our per­for­mance guide, we slightly low­ered the mem­ory limit to 5 GB, to re­duce re­ly­ing on the OS swap­ping and to let DuckDB han­dle mem­ory man­age­ment for larger-than-mem­ory work­loads. This is a com­mon trick in mem­ory-con­strained en­vi­ron­ments where other processes are likely us­ing more than 20% of the to­tal sys­tem mem­ory.
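In DuckDB that tweak is a one-liner, along these lines:

```sql
-- Cap DuckDB's memory usage below the machine's 8 GB of physical RAM
SET memory_limit = '5GB';
```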

We also re-ran ClickBench with DuckDB v1.5.0 on two cloud in­stances, yield­ing the fol­low­ing lineup:

* The star of our show, the MacBook Neo with 2 per­for­mance cores, 4 ef­fi­ciency cores and 8 GB RAM

* c6a.4xlarge with 16 AMD EPYC vCPU cores and 32 GB RAM. This in­stance is pop­u­lar in ClickBench with about 80 in­di­vid­ual re­sults re­ported.

* c8g.metal-48xl with a whop­ping 192 Graviton4 vCPU cores and 384 GB RAM. This in­stance is of­ten at the top of the over­all ClickBench leader­board.

The bench­mark script first loaded the Parquet file into the data­base. Then, as per ClickBench’s rules, it ran each query three times to cap­ture both cold runs (the first run when caches are cold) and hot runs (when the sys­tem has a chance to ex­ploit e.g. file sys­tem caching).

Our ex­per­i­ments pro­duced the fol­low­ing ag­gre­gate run­times, in sec­onds:

Cold run. The re­sults start with a big sur­prise: in the cold run, the MacBook Neo is the clear win­ner with a sub-sec­ond me­dian run­time, com­plet­ing all queries in un­der a minute! Of course, if we dig deeper into the se­tups, there is an ex­pla­na­tion for this. The cloud in­stances have net­work-at­tached disks, and ac­cess­ing the data­base on these dom­i­nates the over­all query run­times. The MacBook Neo has a lo­cal NVMe SSD, which is far from best-in-class, but still pro­vides rel­a­tively quick ac­cess on the first read.

Hot run. In the hot runs, the MacBook’s to­tal run­time only im­proves by ap­prox­i­mately 10%, while the cloud ma­chines come into their own, with the c8g.metal-48xl win­ning by an or­der of mag­ni­tude. However, it’s worth not­ing that on me­dian query run­times the MacBook Neo can still beat the c6a.4xlarge, a mid-sized cloud in­stance. And the lap­top’s to­tal run­time is only about 13% slower de­spite the cloud box hav­ing 10 more CPU threads and 4 times as much RAM.

For our sec­ond ex­per­i­ment, we picked the queries of the TPC-DS bench­mark. Compared to the ubiq­ui­tous TPC-H bench­mark, which has 8 ta­bles and 22 queries, TPC-DS has 24 ta­bles and 99 queries, many of which are more com­plex and in­clude fea­tures such as win­dow func­tions. And while TPC-H has been op­ti­mized to death, there is still some sem­blance of value in TPC-DS re­sults. Let’s see whether the cheap­est MacBook can han­dle these queries!

For this round, we used DuckDB’s LTS ver­sion, v1.4.4. We gen­er­ated the datasets us­ing DuckDB’s tpcds ex­ten­sion and set the mem­ory limit to 6 GB.
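A sketch of the data generation step using the tpcds extension (SF100 shown; SF300 works the same way):

```sql
INSTALL tpcds;
LOAD tpcds;
CALL dsdgen(sf = 100);    -- create and populate the 24 TPC-DS tables
SET memory_limit = '6GB'; -- leave headroom on the 8 GB machine
```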

At SF100, the lap­top breezed through most queries with a me­dian query run­time of 1.63 sec­onds and a to­tal run­time of 15.5 min­utes.

At SF300, the mem­ory con­straint started to show. While the me­dian query run­time was still quite good at 6.90 sec­onds, DuckDB oc­ca­sion­ally used up to 80 GB of space for spilling to disk and it was clear that some queries were go­ing to take a long time. Most no­tably, query 67 took 51 min­utes to com­plete. But hard­ware and soft­ware con­tin­ued to work to­gether tire­lessly, and they ul­ti­mately passed the test, com­plet­ing all queries in 79 min­utes.

Here’s the thing: if you are run­ning Big Data work­loads on your lap­top every day, you prob­a­bly should­n’t get the MacBook Neo. Yes, DuckDB runs on it, and can han­dle a lot of data by lever­ag­ing out-of-core pro­cess­ing. But the MacBook Neo’s disk I/O is lack­lus­ter com­pared to the Air and Pro mod­els (about 1.5 GB/s com­pared to 3–6 GB/s), and the 8 GB mem­ory will be lim­it­ing in the long run. If you need to process Big Data on the move and can pay up a bit, the other MacBook mod­els will serve your needs bet­ter and there are also good op­tions for Linux and Windows.

All that said, if you run DuckDB in the cloud and pri­mar­ily use your lap­top as a client, this is a great de­vice. And you can rest as­sured that if you oc­ca­sion­ally need to crunch some data lo­cally, DuckDB on the MacBook Neo will be up to the chal­lenge.

...

Read the original on duckdb.org »

8 320 shares, 16 trendiness

Dolphin Progress Report

Dolphin started out as a GameCube em­u­la­tor in 2003. In 2008, ex­per­i­men­tal Wii sup­port was added. And now in 2026, Dolphin en­ters the realm of ar­cade em­u­la­tion with sup­port for the Triforce, a joint Sega, Namco, and Nintendo ar­cade ef­fort. Want to learn more about the Triforce? Check out our deep dive into the Triforce and how sup­port for it ended up back in Dolphin!

Emulating a new sys­tem and li­brary of games for the first time in 18 years is big news, and we’ll dive into some de­tails and up­dates on that later in the re­port. However, there are two other ma­jor ad­di­tions that would have been flag­ship changes on their own in any other re­port. Optimizations to Dolphin’s MMU em­u­la­tion have brought ma­jor per­for­mance up­lifts to games that rely on cus­tom page table map­pings. In fact, on pow­er­ful hard­ware, all Full MMU games can run full speed, in­clud­ing the leg­endary Rogue Squadron III: Rebel Strike.

On the other side of things is a tar­geted fix that was an epic tale span­ning years of frus­tra­tion. In the end, the in­cred­i­ble ef­forts of the Mario Strikers Charged com­mu­nity com­bined with sev­eral CPU em­u­la­tion ex­perts fi­nally cracked the case of a mi­nor physics bug that would nor­mally be im­pos­si­ble to see or test.

All that and more awaits in this re­lease’s jam-packed no­table changes!

All changes be­low are avail­able in Release 2603.

“Fastmem” is one of Dolphin’s biggest performance tricks. The GameCube and Wii’s MMIO (Memory-Mapped Input and Output) memory mappings do not directly translate to the host system, and without a way to sort ordinary RAM accesses from MMIO accesses, Dolphin would need to manually translate every memory access at tremendous cost. Enter fastmem, where Dolphin exploits the host CPU’s exception handler to sort accesses for us. If we map the entire GC/Wii address space to host memory, mappings for main memory will just work, no intervention needed. But attempting to access an address assigned to MMIO will trigger a CPU fault. When that happens, Dolphin can catch this fault and immediately backpatch the affected JIT code to manually translate the address instead. Just like that, Dolphin will only spend the effort translating memory accesses that it actually needs to translate, and the rest will be performed natively by the host CPU at incredible speeds.

However, fast­mem did­n’t cover every­thing - it only sup­ported mem­ory mapped with Block Address Translation (BAT). Any time a game ac­cessed mem­ory mapped with a page table, the other way to map mem­ory on the GameCube, Dolphin would al­ways have to man­u­ally trans­late it.

This may sound prob­lem­atic, but thanks to tricks like the MMU Speedhack, the vast ma­jor­ity of games see no per­for­mance penalty from page table ac­cesses. In or­der for this lack of sup­port to ac­tu­ally im­pact per­for­mance, a game would have to have a cus­tom mem­ory han­dler that re­sists our hacks and heav­ily re­lies on the page table. But what de­vel­oper would be crazy enough to do all of that?

Factor 5 is a re­cur­ring fren­emy here on the Dolphin blog for a rea­son. If there’s a trick to squeeze more power out of a GameCube, Factor 5 prob­a­bly in­vented it. In this case, they went through all that trou­ble to get ex­tra RAM.

The GameCube is a mem­ory starved ma­chine - its pal­try 24MiB of RAM is tiny com­pared to its ca­pa­bil­i­ties and stor­age me­dia, which lim­ited what de­vel­op­ers could do with the ma­chine. However, there is an ad­di­tional 16MiB of RAM con­nected to the DSP, known as Audio RAM (ARAM). On a mem­ory starved ma­chine, an ex­tra 66% of mem­ory is ex­tremely tempt­ing.

There’s a prob­lem, though - the GameCube’s CPU does not have di­rect ac­cess to ARAM. All the GameCube’s CPU can do with ARAM is tell the DSP to use Direct Memory Access (DMA) to copy data from a range of main mem­ory to ARAM, or vice versa. This is nor­mally used for var­i­ous DSP func­tions, such as load­ing au­dio sam­ples.

For a game to use Audio RAM as gen­eral pur­pose RAM, some setup is re­quired. If a game con­fig­ures a range in the page table to point to in­valid mem­ory ad­dresses, when it tries to ac­cess this mem­ory, the in­valid mem­ory ac­cess trig­gers a CPU fault. That in­vokes the game’s page fault han­dler, which uses DMA trans­fers to swap data be­tween main mem­ory and ARAM as needed. When the game re­sumes, the mem­ory ac­cess will re­turn data as if it were stored in valid mem­ory all along. Through this process, data can be copied to/​from ARAM in an au­to­mated fash­ion out­side of the pri­mary game logic, al­low­ing games to more or less treat a range of in­valid mem­ory as if it were ex­tra RAM. Audio RAM used this way is sig­nif­i­cantly slower than main mem­ory due to its re­liance on DMA trans­fers, but it’s or­ders of mag­ni­tude faster than wait­ing for data from a spin­ning op­ti­cal disc, so it’s still a huge win.

Notably, the GameCube can only per­form the above setup with the page table, as the hard­ware has too few BATs and their gran­u­lar­ity is too large for this tech­nique.

Halfway into the GameCube’s life, Nintendo stan­dard­ized this ARAM ac­cess process with a li­brary pro­vided in the GameCube de­vel­oper sup­port tools, so that de­vel­op­ers did­n’t need to waste time cre­at­ing their own so­lu­tion. All games that uti­lize this li­brary con­fig­ure the page table in the ex­act same pre­dictable way. Since we know ex­actly what they are go­ing to do and we aren’t bound to a GameCube’s hard­ware lim­i­ta­tions, Dolphin can just cheat for these games and set up a BAT point­ing to ex­tra mem­ory that the GameCube does­n’t have, by­pass­ing all of that trou­ble while still get­ting the full ben­e­fits of fast­mem. This trick is the afore­men­tioned MMU Speedhack, and it has been en­abled by de­fault since 2014.

But years be­fore Nintendo stan­dard­ized this process, Factor 5 did it all them­selves with their own cus­tom mem­ory han­dler in Rogue Squadron II. And, even though Nintendo’s li­brary ex­isted by the time Rogue Squadron III came out, Factor 5 went above and be­yond in that game with an even more op­ti­mized cus­tom mem­ory han­dler to squeeze as much as they pos­si­bly could out of ARAM. Tricks like this are what al­lowed Factor 5 to push the GameCube to such ridicu­lous lev­els.

Factor 5 may have gone to the ex­treme, but they weren’t alone. Star Wars: The Clone Wars, Spider-Man 2, and Ultimate Spider-Man also use cus­tom mem­ory han­dlers.

On the Wii, none of this is nec­es­sary as ARAM is di­rectly ac­ces­si­ble to the Wii’s CPU as MEM2. But that did­n’t stop some games from be­ing weird. As we re­cently cov­ered, the Disney Trio of Destruction™ use the typ­i­cal mem­ory ad­dresses that any other Wii game would use, but they re­move the de­fault BATs and recre­ate them with the page table. Dolphin now patches out that be­hav­ior by de­fault, but we know that they also use the page table for… (checks notes) mem­ory de­frag­men­ta­tion?

Since these games use cus­tom mem­ory han­dlers, their us­age of mem­ory on the sys­tem is en­tirely unique. We can’t pre­dict what they are do­ing and we need to prop­erly em­u­late their page table map­pings. And since page table map­pings weren’t im­ple­mented in fast­mem, every ac­cess to mem­ory han­dled by the page table had to be man­u­ally trans­lated, with a vary­ing per­for­mance im­pact de­pend­ing on what­ever the game was do­ing.

Admittedly, this did­n’t af­fect a lot of games. But one of those ti­tles is Rogue Squadron III: Rebel Strike, a game that has ef­fec­tively never been playable at full speed in Dolphin. We knew that im­ple­ment­ing page table map­pings in fast­mem would be the key to mak­ing these games go faster, and we were more than will­ing to put in the ef­fort to do it. Unfortunately, we just did­n’t know how. We’ve even at­tempted it a few times over the years, but we never cre­ated a suc­cess­ful, let alone fast, im­ple­men­ta­tion.

We were miss­ing some­thing.

While we were try­ing to de­fang the Disney Trio of Destruction™, JosJuice no­ticed that every time Cars 2 ma­nip­u­lated the page table, it ex­e­cuted the in­struc­tion tl­bie. According to the pub­licly avail­able PowerPC 750 user man­ual, af­ter any page table mod­i­fi­ca­tion tl­bie must be used to clear the rel­e­vant part of the TLB (the page table cache). If Dolphin paid at­ten­tion to when this in­struc­tion was ex­e­cuted, it would have a way to keep track of every page table mod­i­fi­ca­tion. Dolphin could use this in­for­ma­tion to map the page table in fast­mem!

Upon this re­al­iza­tion, JosJuice im­me­di­ately be­gan im­ple­ment­ing page table map­pings in fast­mem. However, know­ing how to do some­thing and ac­tu­ally do­ing it are two dif­fer­ent things. Even with a plan, this re­quired loads of tricky, low-level work with tons of trial and er­ror. And it was­n’t im­me­di­ately suc­cess­ful! An early im­ple­men­ta­tion was ac­tu­ally slower than man­u­ally trans­lat­ing every­thing! But af­ter a lot of think­ing and ex­per­i­men­ta­tion, JosJuice im­ple­mented in­cre­men­tal up­dates to page table map­pings in fast­mem, where we com­pare the new map­pings with the old ones in 64-byte chunks, and then do a bunch of logic to fig­ure out which map­pings need to be re­moved or added.

This se­cret sauce is still heavy, but it’s faster than man­u­ally trans­lat­ing every page table ac­cess! At least, usu­ally. It de­pends on the game and how of­ten it uses tl­bie. Rogue Squadron II and es­pe­cially III hit ARAM so hard that they will al­ways see a per­for­mance boost from page table fast­mem. Meanwhile, some games with cus­tom mem­ory han­dlers, like Spider-Man 2, ac­tu­ally lose a lit­tle bit of per­for­mance due to the over­head of track­ing page table up­dates. On the flip­side, when those games do load from ARAM, the hitches that once plagued them are greatly re­duced or com­pletely re­moved in many cases, so their over­all playa­bil­ity is bet­ter than be­fore.

Now, this is the part where we show you some fancy per­for­mance graphs. However, JosJuice was­n’t quite done yet. After im­ple­ment­ing Page Table Fastmem Mappings, they un­leashed a flurry of op­ti­miza­tions tar­get­ing the Rogue Squadron games specif­i­cally. So, we’ve opted to bun­dle all of the Rogue Squadron re­sults into the next sec­tion. For now though, here’s our Spider-Man 2 and Cars 2 per­for­mance test re­sults.

As mentioned earlier, Spider-Man 2 is slightly slower with this change, but it has fewer traversal stutters so it feels more fluid in motion. Given that performance is not really an issue with this game - our eight year old low end laptop can run it easily - the trade is more than worth it. As for Cars 2, while we patch out the more than 10,000 page table mappings that the Trio performs when it remaps the standard BATs with the page table, it turns out that they still have 400-800 page table mapping entries remaining. Now that these mappings can go through fastmem, Cars 2 has become a fairly accessible game. The old low end laptop in the test above is almost able to play it at full speed now. Compared to a year ago, when Cars 2’s performance was barely in the double digits on that hardware, this is an amazing turnaround!

Even with page table fast­mem map­pings, Rogue Squadron II and III are still among the most de­mand­ing games in the en­tire GameCube and Wii li­brary. Fortunately, by re­mov­ing that pri­mary per­for­mance bot­tle­neck, a lot of op­ti­miza­tion op­por­tu­ni­ties re­vealed them­selves.

One ma­jor com­plaint in Rogue Squadron II: Rogue Leader specif­i­cally is that swap­ping be­tween cock­pit view and chase view re­sults in a se­ri­ous stut­ter. For some rea­son (we haven’t fully in­ves­ti­gated it, but it’s likely re­lated to ARAM swap­ping), the game forces Dolphin to both in­val­i­date and JIT an enor­mous amount of code all at once. The sheer vol­ume of what it’s do­ing in a short time bot­tle­necks CPU em­u­la­tion and causes a hard stut­ter.

JosJuice took multiple measures to speed up this particular case. First, they disabled a feature known as Branch Following for this game. This feature makes the JIT follow branch instructions to create larger blocks of code to process all at once, making it possible for the JIT to output more optimized code. Especially in games with tricky to detect idle loops, such as Xenoblade Chronicles, the larger blocks vastly improve performance. But it also causes the JIT to output more code, making both JIT-ing and invalidating slower, which is very bad in Rogue Squadron II’s case! This feature is why the view change stuttering suddenly got worse soon after 5.0, and disabling Branch Following in the Rogue Squadron games resolves this regression.

Second, JosJuice found ways to op­ti­mize in­val­i­dat­ing code! By us­ing a more ef­fi­cient data struc­ture to keep track of JIT blocks, this part of CPU em­u­la­tion be­comes faster and Dolphin has more time to JIT the new code. All com­bined, the stut­ter is sub­stan­tially im­proved.

The next im­prove­ment cen­ters around the other de­mand­ing part of em­u­lat­ing these games - their graph­ics. Rogue Squadron ren­ders stages in a rather un­ortho­dox way. The ter­rain is di­vided into squares, and the game draws one square at a time, chang­ing out the ac­tive tex­ture for each square. Dolphin has to process hun­dreds of these squares per frame, with each one trig­ger­ing Dolphin’s tex­ture lookup code. But it’s not like the ter­rain is the only thing in the game - ships, ground troops, and lots of other things are be­ing drawn too. Combined, this cre­ates a huge num­ber of draw calls and tex­ture lookups.

JosJuice found a re­gres­sion where Dolphin was cre­at­ing ex­tra ob­jects in mem­ory when­ever the game used tex­tures. In most games, this re­gres­sion was es­sen­tially in­vis­i­ble. But in the land stages in Rogue Squadron II and III, the af­fected tex­ture code can run over 300,000 times per sec­ond! Even a tiny op­ti­miza­tion makes a size­able dif­fer­ence when the af­fected code path runs that many times!

Next, these two games saw sev­eral changes in Dolphin’s GameSettings data­base. We al­ready men­tioned dis­abling Branch Following, but sev­eral more set­tings have been changed to im­prove per­for­mance in these games. CPU Vertex Culling is now en­abled, let­ting Dolphin skip ren­der­ing 3D ob­jects that would­n’t be vis­i­ble any­way, which among other things cuts down on how many of those ter­rain squares Dolphin has to process.

And finally, Store EFB Copies to Texture Only is no longer being forced off for this game. Why was it forced off, you ask? Back when Rogue Squadron II and III first became playable, turning this off was necessary to get paletted EFB copies working, which are used by the targeting computer. But paletted EFB copies started working with the setting turned on just a month later! Aside from that, the only other effects we’re aware of that need the setting off are some of the menu fadeouts in Rogue Squadron II. These fadeouts are a relatively minor thing, so we’ve decided to not force a value for the setting. Instead, each user can choose whether they prefer higher performance or nice menu fadeouts.

And now for the big ques­tion: with page table map­pings in fast­mem and all these op­ti­miza­tions summed up, how much faster do Rogue Squadron II and III run?

Benchmark 1: RS2 Ison Corridor: 83 FPS, RS2 Hoth: 35 FPS, RS3 Revenge of the Empire: 13 FPS

Benchmark 2: RS2 Ison Corridor: 93 FPS, RS2 Hoth: 42 FPS, RS3 Revenge of the Empire: 28 FPS

Benchmark 3: RS2 Ison Corridor: 107 FPS, RS2 Hoth: 55 FPS, RS3 Revenge of the Empire: 34 FPS

The per­for­mance ben­e­fits are ab­solutely mas­sive. Of par­tic­u­lar note is Rogue Squadron III, which dou­bled in per­for­mance! On our top-of-the-line desk­top, it can even be played at full speed for the very first time! And it’s not just raw per­for­mance that im­proved - these changes help min­i­mize hitch­ing when the games are load­ing from ARAM. Even on the most pow­er­ful hard­ware, it’s still not un­com­mon to drop a frame or two on load­ing screens and tran­si­tions. But com­pared to be­fore, it’s a night and day dif­fer­ence.

The caveat is that Rogue Squadron II and III still re­quire ex­tremely pow­er­ful hard­ware to get con­sis­tently playable speeds. Furthermore, the new de­fault set­tings sac­ri­fice graph­i­cal ac­cu­racy for per­for­mance, and dis­abling Store EFB/XFB Copies to Texture Only and Skip EFB Access from CPU low­ers per­for­mance by roughly 12% to 15% in a de­mand­ing scene. That’s still full speed on strong enough hard­ware, but it does raise the bar­rier of en­try that much more for play­ers that want every­thing to look just right.

With strate­gic vic­to­ries on all fronts, we fly home, hav­ing de­stroyed the once im­pen­e­tra­ble Death Star of per­for­mance prob­lems. Yet, it is un­likely we’ll have ever truly won. Not only do these games still have some mi­nor prob­lems, but we know that some­thing much more fear­some was in the works. Isolated from the rest of the galaxy, Factor 5 sci­en­tists built a Rogue Squadron game for the Wii that was never re­leased. We know it ex­ists, lurk­ing in the maw of some­one’s hard drive. If it ever comes on­line, Dolphin may be crushed by a threat more dev­as­tat­ing than any be­fore…

Since Triforce sup­port was added to Dolphin a few weeks ago, the com­mu­nity has been tremen­dously help­ful, with mul­ti­ple users who own fully op­er­a­tional Triforce cab­i­nets com­ing for­ward to give us more in­for­ma­tion on how they work or to help us run hard­ware tests. It’s only been a few weeks, so the fea­ture ar­ti­cle is still mostly up to date with our cur­rent Triforce ef­forts, but there have been a few changes al­ready merged with more on the hori­zon:

* Dolphin now au­to­mat­i­cally in­serts Magnetic Cards for clean­ing checks. This change is avail­able in 2603.

* Regions are cur­rently hard­coded for games. A set­ting that al­lows the user to change re­gions will be added to the GUI soon.

* Several bugs in Dolphin’s mul­ti­cab­i­net em­u­la­tion have been iden­ti­fied and are be­ing worked on. These fixes will make mul­ti­cab­i­net em­u­la­tion work on a much wider range of hard­ware.

* F-Zero AX GameCube Memory Card sup­port has been solved and will be com­ing soon.

* Work on in­te­grated nam­cam2 sup­port for Mario Kart Arcade GP and GP 2 has been started, which will al­low the games to work with­out the need of a sep­a­rate pro­gram to em­u­late the cam­era.

* The touch­screen pro­to­col used by The Key of Avalon has been iden­ti­fied and solved.

* Users with mul­ti­ple orig­i­nal F-Zero AX cab­i­nets have sub­mit­ted packet dumps to as­sist with im­ple­ment­ing net­work em­u­la­tion. We’ve al­ready made some ba­sic progress, though mul­ti­ple in­stances still can­not join each other.

A lot of our ef­fort af­ter merg­ing Triforce sup­port has gone to­ward solv­ing The Key of Avalon. Despite our best ef­forts, find­ing hard­ware for this game has proven im­pos­si­ble, so we’ve had to re­verse en­gi­neer the game to de­ter­mine what the hard­ware does. Thanks to tools like Ghidra, this is ac­tu­ally doable, though it is very time con­sum­ing and mo­not­o­nous.

This ap­proach is­n’t ideal, but we’ve made a lot of progress and had a few break­throughs. Notably, we’ve iden­ti­fied the touch­screen pro­to­col as be­ing sim­i­lar to Elo’s SmartSet Data Protocol. By hack­ing in the ap­pro­pri­ate re­sponses, we man­aged to start a new game! …Only for it to im­me­di­ately hang on the next screen.

We as­sumed that the prob­lem was net­work­ing re­lated, as the host and client did­n’t ap­pear to be synced up, but that was a red her­ring. It turned out that the game only syncs up the at­tract mode oc­ca­sion­ally be­tween the clients and the host, and the be­hav­ior we were see­ing was nor­mal. With no leads, we were es­sen­tially stuck once again. That was go­ing to be the end of this en­try, but serendip­i­tously, Billiard stum­bled across a dis­abled de­bug log­ging func­tion built into the game! We patched the game to re-en­able it, and the game started spit­ting out in­for­ma­tion on ex­actly what was go­ing wrong. The game told us di­rectly that the cul­prit was IC Card ini­tial­iza­tion.

Over the fol­low­ing days, we iden­ti­fied many prob­lems with Dolphin’s IC Card em­u­la­tion code. So much so that we could­n’t quite get every­thing done in time for the 2603 re­lease. For now, here’s a teaser of what’s to come.

We also plan to bring mul­ti­ple IC Card fixes to Virtua Striker 4, Virtua Striker 4 ver. 2006, and Gekitou Pro Yakyuu, which will re­store card func­tion­al­ity to them. In the Triforce ar­ti­cle, we men­tioned there were some modes that we could­n’t find in Gekitou Pro Yakyuu. Well, we found them. They’re locked be­hind IC Cards! There’s a whole team build­ing mode, com­plete with a char­ac­ter cre­ator, RPG el­e­ments where you level up your player, and the afore­men­tioned home­run con­test. The points that are used for high scores are saved to your card and can be used to up­grade your cre­ated player.

There will likely be a lot of up­dates to Triforce em­u­la­tion in the next Progress Report, but un­til then, we’ll leave our read­ers with an­other ques­tion that has us stumped. The Key of Avalon has code to de­tect spe­cial OWABI cards, and through hack­ery, we can get the game to ac­knowl­edge a card as an OWABI card. Unfortunately, we don’t ac­tu­ally know what these cards are for, and the game hangs shortly af­ter­ward. If any­one has used an OWABI card or knows how they work, please let us know.

Sometimes a lot of big changes hit all at once. Triforce com­pat­i­bil­ity is huge. Nearly dou­bling the per­for­mance of the Rogue Squadron games is mas­sive. But, this change here? It was a five year pro­ject with tons of twists and turns. Our re­ac­tion to see­ing this in the changelog can be summed up by this com­ment from some­one on the blog staff:

Back in the August 2021 Progress Report, we talked about a bug fix for Inazuma Eleven GO: Strikers 2013. In this soc­cer game, if you used a Nintendo Wi-Fi Connection re­place­ment ser­vice like Wiimmfi to play an on­line match be­tween Dolphin and a real Wii, the two play­ers would de­sync when per­form­ing cer­tain ac­tions. This slightly hin­dered the on­line ex­pe­ri­ence!

The in­ves­ti­ga­tion was rather dif­fi­cult be­cause we could­n’t de­bug it of­fline. Since Dolphin had to be con­nected over the in­ter­net with a real Wii, we could­n’t just pause the em­u­la­tor and use our usual de­bug­ging tools. Thankfully, the game’s com­mu­nity nar­rowed down the is­sue and even­tu­ally found that the fn­m­subs CPU in­struc­tion was im­ple­mented in­cor­rectly in Dolphin’s JIT but worked cor­rectly in our in­ter­preter.

Armed with that in­for­ma­tion, JosJuice just had to make the JIT im­ple­men­ta­tion match the in­ter­preter im­ple­men­ta­tion, rather than hav­ing to de­bug an on­line game. Once the dif­fer­ences were fixed, on­line play worked fine even when us­ing the JIT.

So, why are we bring­ing up a prob­lem that was fixed five years ago? It turns out that Inazuma Eleven GO: Strikers 2013 was­n’t the only Wi-Fi en­abled Wii soc­cer game that de­synced when play­ing be­tween Dolphin and real hard­ware. Mario Strikers Charged suf­fered from a very sim­i­lar prob­lem and their com­mu­nity reached out to us fol­low­ing that Progress Report.

Unfortunately, it was­n’t good news. Mario Strikers Charged still de­synced shortly af­ter the match started. And un­like Inazuma Eleven, there is no de­sync mit­i­ga­tion - if a de­sync hap­pens, it’s game over.

There was a small up­side: the fn­m­subs fix did al­low the game to stay synced for slightly longer. But since de­syncs con­tin­ued to oc­cur af­ter the fix, some­thing was still hor­ri­bly bro­ken.

JosJuice and JMC47 both al­ready had the game and quickly launched an in­ves­ti­ga­tion think­ing it would­n’t be very hard to nar­row down the is­sue. With JosJuice on a Wii U and JMC on Dolphin, they set up a transat­lantic Wiimmfi ses­sion and tried to play a few matches. Everything worked fine for a while! …Until two play­ers touched one an­other. And even if that did­n’t hap­pen, even­tu­ally the game would de­sync any­way.

To debug the problem, different JIT instructions on the Dolphin side were disabled so that they would fall back to the interpreter.

What ex­actly are fall­backs? Dolphin is able to em­u­late the PowerPC CPU in two ways: us­ing a Just-In-Time (JIT) re­com­piler, or us­ing the in­ter­preter. JIT re­com­pil­ers are a lot faster than the in­ter­preter, but it is much eas­ier to pro­gram an in­ter­preter and un­der­stand how it works. As such, the in­ter­preter also con­tains some ex­tra ac­cu­racy im­prove­ments that we haven’t wanted to put in the JIT, ei­ther be­cause it would hurt per­for­mance or be­cause it would be too com­pli­cated.

It can be con­ve­nient to use the in­ter­preter when in­ves­ti­gat­ing CPU em­u­la­tion is­sues, but com­pletely for­go­ing the JIT is very slow! To deal with this, the JIT has a fea­ture called in­ter­preter fall­backs, where one or more CPU in­struc­tions can be set to run us­ing the in­ter­preter, while all other CPU in­struc­tions run us­ing the JIT as nor­mal. This re­duces per­for­mance any­where from 0% to over 50% de­pend­ing on what in­struc­tions are han­dled by the in­ter­preter and how the game uses them, but even a 50% slow­down is a lot faster than not us­ing the JIT at all.

After try­ing in­ter­preter fall­backs for every cat­e­gory of in­struc­tions, we still had no an­swer. But if no sin­gle cat­e­gory fixed it, there was al­ways the chance that the is­sue spanned mul­ti­ple cat­e­gories. We first tried com­bi­na­tions of float­ing point in­struc­tion fall­backs, to no avail. Then, we out­right dis­abled every­thing and used the Cached Interpreter in­stead. This made Dolphin run so slowly that we ex­pected the Wii to kick Dolphin out of the match. But sur­pris­ingly, the game was able to cope with the hor­ri­ble lag! Or at least it did, un­til it popped up a net­work syn­chro­niza­tion er­ror like in every pre­vi­ous at­tempt.

This is why the Strikers is­sue could­n’t be eas­ily fixed. Dolphin had no cor­rect im­ple­men­ta­tion to ref­er­ence. And with the test case re­quir­ing Wiimmfi, a real Wii/Wii U, and a net­work con­nec­tion from Dolphin, the is­sue was es­sen­tially im­pos­si­ble to de­bug.

And so Strikers con­tin­ued to haunt us. Different de­vel­op­ers took stabs at try­ing to fix it over the years, but again and again we would hit a wall. We were at a loss. But as we de­spaired, the Mario Strikers Charged com­mu­nity did not give up. If we needed a testable case, they would make us one.

The Mario Strikers Charged lab­bers are in­sane. They know more about how the game works than we will ever un­der­stand. They’ve mapped out parts of the physics, char­ac­ter stats, and mod­ded in their own fea­tures to bal­ance the game to their lik­ing. In or­der to try to solve this is­sue, they im­ple­mented a new mode into the game: AI vs AI spec­ta­tor mode! This way, we could watch a match with zero player in­put. Combined with a patch to freeze RNG, we fi­nally had our test case that we could use to an­a­lyze the prob­lem!

By record­ing a match on con­sole and com­par­ing it to Dolphin, we could see where the game de­synced. They even changed the score­board func­tions to print out de­bug in­for­ma­tion, so that we would know ex­actly when things went wrong. The fi­nal push was a ma­jor col­lab­o­ra­tion be­tween Geotale, flacs, and most im­por­tantly ace Strikers re­verse en­gi­neer Feder. Together, they man­aged to nar­row the prob­lem down to an ex­tremely small set of in­struc­tions that al­lowed for the cre­ation of a hard­ware test.

fmadds is closely re­lated to the fn­m­subs in­struc­tion that Inazuma Eleven GO: Strikers 2013 had prob­lems with, since they’re both Fused Multiply-Add (FMA) in­struc­tions. Like all re­spectably pow­er­ful CPUs, the PowerPC CPU in the GameCube and Wii has nor­mal ad­di­tion and mul­ti­pli­ca­tion in­struc­tions for both in­te­gers and float­ing point num­bers. But with FMA, it also had a type of in­struc­tion that x86 CPUs did­n’t get un­til the 2010s: mul­ti­ply­ing two float­ing point num­bers and adding or sub­tract­ing a third float­ing point num­ber, all in one go. Doing this in a sin­gle in­struc­tion not only im­proves per­for­mance, but also boosts ac­cu­racy.

When a CPU per­forms a float­ing point cal­cu­la­tion, the re­sult might end up hav­ing more dec­i­mals than the CPU can store in its reg­is­ters, which forces the CPU to round the re­sult. If a cal­cu­la­tion is done us­ing a mul­ti­pli­ca­tion in­struc­tion fol­lowed by an ad­di­tion in­struc­tion, this re­sults in dou­ble round­ing. But if the cal­cu­la­tion is done us­ing a Fused Multiply-Add in­struc­tion, the re­sult only has to be rounded once, re­duc­ing the round­ing er­ror. This dif­fer­ence in round­ing er­ror has the po­ten­tial to cause de­syncs!
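A decimal analogy (our own illustration, not from the original hardware) shows why rounding twice can go wrong: round 1.2501 straight to one decimal place and you get 1.3, but round it to two decimals first (giving 1.25) and then to one decimal under round-half-to-even and you get 1.2. The intermediate rounding landed exactly on a halfway point and pushed the final result the wrong way - the same trap a 64-bit-then-32-bit rounding sequence can fall into.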

But by the time this issue was investigated, pretty much every CPU running Dolphin was already emulating FMA instructions using FMA instructions. The problem had to be something else. And it couldn’t be the same problem as Inazuma Eleven GO: Strikers 2013 used to have, because that problem only happens as a result of negation, and the mnemonic fmadds conspicuously doesn’t contain n (for “negate”) or sub. Yes, the letter soup in instruction mnemonics actually means something!

To ex­plain the prob­lem, we have to go fur­ther back than a Progress Report from 2021. We have to go all the way back to the year the Progress Reports be­gan and re­visit the August 2014 Progress Report.

This an­cient tome de­scribes the valiant ef­forts of two leg­ends from Dolphin’s olden days, magu­magu and FioraAeterna. Through their work, Dolphin’s em­u­la­tion of mul­ti­pli­ca­tion in­struc­tions like fmuls and fmadds be­came more ac­cu­rate, fix­ing re­play de­syncs in Mario Kart Wii, F-Zero GX, and many other games. Unfortunately, the tome is rather light on de­tails about the spe­cific in­ac­cu­racy they fixed, but through an arche­o­log­i­cal tech­nique known as reading the com­mit his­tory”, we were able to fill in the blanks.

On PowerPC, floating point instructions come in two variants: 32-bit and 64-bit. (The instructions we’ve been talking about so far are all 32-bit, as indicated by their mnemonics ending in s, for “single precision”.) 32-bit instructions take 32-bit inputs and produce a 32-bit output, and 64-bit instructions take 64-bit inputs and produce a 64-bit output.

But what would happen if you tried to give a 64-bit input to a 32-bit instruction? IBM’s manuals say that you shouldn’t do this, but GameCube and Wii games do it all the time, likely due to compiler quirks. And it does work, kind of! Many other CPU architectures would have read half of the 64 bits of the inputs, resulting in a nonsensical number, but PowerPC almost does the desired behavior. The least significant 28 bits of the right-hand side operand are truncated, but other than that, you get the result you would expect! Fiora’s change implemented this truncation, making Dolphin more accurate to a real console.

But hold on. This is an op­er­a­tion that takes in 64-bit in­puts but re­sults in a 32-bit out­put. There aren’t any x86-64 or AArch64 in­struc­tions that do that! Instead, Dolphin em­u­lates the op­er­a­tion us­ing a 64-bit in­struc­tion, and rounds to 32-bit af­ter­wards. This re­sults in dou­ble round­ing! Fortunately, it has been math­e­mat­i­cally proven that dou­ble round­ing is safe when feed­ing 32-bit in­puts into a 64-bit ad­di­tion, sub­trac­tion, mul­ti­pli­ca­tion, or di­vi­sion in­struc­tion, as it does­n’t change the re­sult at all. There may be round­ing er­rors when us­ing 64-bit in­puts, but Dolphin’s ap­proach is still more ac­cu­rate than the al­ter­na­tive of us­ing a 32-bit in­struc­tion.

But hold on once again. What about feed­ing 32-bit in­puts into a 64-bit Fused Multiply-Add in­struc­tion?

This is where we re­turn to Feder’s in­ves­ti­ga­tion. They had dis­cov­ered a spe­cific set of 32-bit in­puts to fmadds that re­sulted in -0.83494705 (hexadecimal 0xbf55bf17) on con­sole but -0.83494711 (hexadecimal 0xbf55bf18) on Dolphin. That’s a dif­fer­ence of just one in the hexa­dec­i­mal rep­re­sen­ta­tion, which is in­dica­tive of a round­ing er­ror! Geotale im­me­di­ately knew what was go­ing on, thanks to her ex­pe­ri­ence of in­ves­ti­gat­ing how math works both in Dolphin and on orig­i­nal hard­ware. The type of dou­ble round­ing that Dolphin does is in fact un­safe for Fused Multiply-Add.

Geotale quickly implemented a solution in Dolphin’s interpreter: if none of the inputs lose precision when converted to 32-bit, the interpreter converts them to 32-bit and uses a 32-bit FMA instruction. Otherwise, it uses a 64-bit FMA instruction like before. Not long after, the Mario Strikers Charged community was able to confirm that this change solved their desync issue. But the change also had the potential to negatively impact performance. To make Dolphin run different code depending on whether all inputs are safe to convert to 32-bit, a conditional branch instruction is needed. If the result of the condition is the same almost every time, the CPU’s branch predictor can do a very good job, which minimizes the performance impact of the conditional branch. But an incorrect guess from the branch predictor costs tens of cycles of execution time, which adds up quickly with how often games use FMA instructions.

JosJuice therefore came up with another idea: Dolphin could perform the calculation using a 64-bit FMA instruction as before, and then use the 2Sum algorithm to calculate the difference between the mathematically correct result and the 64-bit rounded result. Using a conditional branch, the result would then be nudged by a tiny bit if there was a difference, to make sure the result is rounded in the correct direction. Finally, Dolphin would convert the 64-bit result to 32-bit. Geotale improved on this idea by making the branch conditional not only on whether there was a difference, but also on whether the 64-bit result is exactly halfway between two numbers that can be represented as 32-bit - the exact case that’s troublesome for double rounding. It’s very unlikely for both of these to be true at once, which makes the branch predictor happy. On top of that, this method also increases accuracy for 64-bit inputs! There are still some 64-bit inputs that aren’t handled correctly, mostly involving denormals and numbers so large they can get rounded into infinity, but games shouldn’t trigger these cases unless they’re intentionally trying to be mean to us. We hope.

Unfortunately, the code for this solution is a bunch of math soup that confuses everyone not willing to spend hours trying to understand what's going on. But Geotale and JosJuice persevered. Thanks to their work, and the multi-year effort of everyone investigating the issue, floating point numbers are now rounded a tiny bit differently for certain instructions. All of this just to let Dolphin play online with real Wii consoles in a game whose official servers have long since been shut down and whose replacement servers peak at only 15 concurrent online players.

But did you know that there’s still a de­sync if you con­nect Dolphin and a GameCube through LAN and play 1080° Avalanche to­gether? This is now the last known physics de­sync in a mul­ti­player game be­tween Dolphin and con­sole! Keep an eye out for it in a fu­ture Dolphin Progress Report!

The Wii Menu is one of the hardest pieces of Wii software to run in Dolphin for one simple reason - Dolphin is too fast. A lot of the memory operations that the Wii does, specifically with NAND management, don't have proper timings in Dolphin. On console, many of these operations are quite slow, but Dolphin attempts to complete them as fast as possible and then lags when the host hardware can't keep up, even though it's running many times faster than a real Wii!

In order to make things a little more palatable, Billiard added some very rough timings to ease the difficulty of emulating this menu. Dolphin is still faster than a real Wii, but it's now slow enough that it shouldn't bring even the most powerful hardware to its knees anymore. For a full fix, hardware testing and real timing data need to be used to make the Wii Menu perform these operations at a reasonable speed.

Many times over the years, users have asked us to implement a way to load games into RAM. Drives are slow, RAM is fast - it seems like a no-brainer. However, every time, we would respond that it would make no difference.

For compatibility reasons, Dolphin emulates the disc read rates of GameCube and Wii optical drives, even down to the Constant Angular Velocity behavior that makes the outer part of the disc read faster than the inner part. And those drives are tremendously slow by modern standards. The Wii's DVD drive was capable of a transfer rate of up to ~8.5 MiB per second. An ATA133 hard drive, which uses a standard that was superseded years before the Wii released, is over ten times faster!
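As a rough back-of-the-envelope check on that comparison: ATA/133's interface ceiling of 133 MB/s works out to about 127 MiB/s, and 127 / 8.5 ≈ 15, so even that long-obsolete hard drive standard is roughly fifteen times faster than the Wii drive's peak.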

Whether an Ultra ATA hard drive, a de­fault-speed SD Card, or a Memory Stick, it does­n’t mat­ter - they are all faster than a Wii’s op­ti­cal disc drive. Even seek times (the bane of spin­ning rust) did­n’t af­fect Dolphin much, as the seek times of a Wii drive are even worse! Dolphin’s game load­ing has ef­fec­tively never been bot­tle­necked by cur­rent stor­age de­vices, so there was no rea­son to im­ple­ment a way to load games into RAM. There was sim­ply no ben­e­fit.

Except, for one spe­cific sce­nario that is be­com­ing in­creas­ingly com­mon these days - play­ing from a hard drive over a net­work. As peo­ple adopt more and more de­vices, home in­tranets have be­come in­creas­ingly com­plex. At the heart of these in­tranets is of­ten a Network Attached Storage (NAS) de­vice. These ded­i­cated stor­age ap­pli­ances al­low their files to be ac­cessed by any de­vice on the net­work, and fea­ture large stor­age ca­pac­ity, re­dun­dancy, and in­tegrity ver­i­fi­ca­tion - every­thing needed to be very good at long term dig­i­tal stor­age. NASes are per­fect for stor­ing things like game disc back­ups.

But play­ing games from a NAS in Dolphin has of­ten been an an­noy­ing ex­pe­ri­ence. The NAS is­n’t aware of what some de­vice far away on the net­work is do­ing, and since GameCube and Wii games need so lit­tle data so in­fre­quently, it may de­cide to put a hard drive to sleep while some­one is ac­tively play­ing a game that’s on it. This re­sults in a hard stut­ter the next time the game asks to read the disc, as now Dolphin has to wait for the NAS to wake up the drive. There are ways for NASes to work around is­sues like this, such as SSD stor­age pools and RAM caches, but the most com­mon NASes are sim­ple hard drive boxes with lit­tle if any caching.

Fortunately, we already knew of a solution. All we had to do was provide a way to load the game disc into the host's system memory. When enabled, Dolphin will continuously copy the disc in the background until the entire game is cached in RAM. After that, the NAS can spin down the drive whenever it wants and the player will be none the wiser.
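A generic illustration of the idea in C++ (this is not Dolphin's implementation; error handling and the behaviour for not-yet-cached reads are simplified for the sketch):

#include <algorithm>
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <thread>
#include <vector>

// A background thread streams the disc image into a RAM buffer; once a
// region has been copied, reads of that region never touch the drive again.
class RamCachedDisc
{
public:
    explicit RamCachedDisc(const char* path)
    {
        std::FILE* file = std::fopen(path, "rb");
        std::fseek(file, 0, SEEK_END);
        m_data.resize(static_cast<size_t>(std::ftell(file)));
        std::fseek(file, 0, SEEK_SET);

        m_prefetcher = std::thread([this, file] {
            constexpr size_t kChunk = 1 << 20;  // copy 1 MiB at a time
            size_t offset = 0;
            while (offset < m_data.size())
            {
                const size_t want = std::min(kChunk, m_data.size() - offset);
                const size_t got = std::fread(m_data.data() + offset, 1, want, file);
                if (got == 0)
                    break;  // read error; a real implementation would report it
                offset += got;
                m_cached.store(offset, std::memory_order_release);
            }
            std::fclose(file);
        });
    }

    ~RamCachedDisc() { m_prefetcher.join(); }

    // Serve a read from RAM, waiting for the prefetcher if the region isn't
    // cached yet (Dolphin instead keeps serving uncached data from the drive).
    void Read(size_t offset, size_t size, void* out)
    {
        while (m_cached.load(std::memory_order_acquire) < offset + size)
            std::this_thread::yield();
        std::memcpy(out, m_data.data() + offset, size);
    }

private:
    std::vector<unsigned char> m_data;
    std::atomic<size_t> m_cached{0};
    std::thread m_prefetcher;
};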

We never thought we’d ever ac­tu­ally im­ple­ment this fea­ture, but some­times new prob­lems can be solved by old so­lu­tions.

This fea­ture is cur­rently only avail­able in our desk­top builds.

SDL hints are a useful mechanism that allows Dolphin to tell SDL how to handle certain controllers. Recently, we've been having trouble with some controllers and have been using SDL hints in an attempt to work around those issues. For example, having an 8BitDo Ultimate 2 controller plugged in causes Dolphin to hang on shutdown in some cases. To fix this, we disabled SDL's DirectInput support as a temporary remediation. Unfortunately, that also broke hotplug support for DualSense and DualShock 4 controllers. Given that other programs don't seem to have this issue, it's fairly apparent that Dolphin is doing something wrong; we just don't know what.

Users could modify Dolphin's SDL hints by setting an environment variable, but that's not really something the average user knows how to do. So TixoRebel added a GUI for enabling certain SDL hints. However, their intended use case wasn't fixing either of those bugs! They wanted to use SDL hints to change how Nintendo Switch Joy-Cons are handled. By default, SDL will combine a left Joy-Con and a right Joy-Con into one controller. If you want to separate the left and right Joy-Cons into two different controllers to emulate the Wii Remote and Nunchuk, you need to use an SDL hint.
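For reference, splitting Joy-Cons uses SDL's combine-Joy-Cons hint. A minimal standalone SDL2 sketch (not Dolphin code) might look like the following; the same hint can also be supplied as an environment variable of the same name, which is what users previously had to do:

#include <SDL.h>

int main(int, char**)
{
    // Set the hint before the game-controller subsystem initializes so it
    // applies from the start. "0" tells SDL's HIDAPI driver to expose each
    // Joy-Con as its own controller instead of merging a left/right pair.
    SDL_SetHint(SDL_HINT_JOYSTICK_HIDAPI_COMBINE_JOY_CONS, "0");

    SDL_Init(SDL_INIT_GAMECONTROLLER);
    // ... enumerate and open controllers as usual ...
    SDL_Quit();
    return 0;
}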

Realizing the op­por­tu­nity to kill two birds with one stone, we had TixoRebel im­ple­ment a way to add any SDL hint di­rectly in the Controller Settings GUI. That way, it could be used for Joy-Con han­dling, work­ing around the 8BitDo/DS4/DS5 bugs, or any­thing else the user wants!

There’s one caveat we should note: changes to SDL hints will only ap­ply af­ter restart­ing Dolphin. Please keep that in mind when mod­i­fy­ing these set­tings.

More games have been identified with behaviors that are problematic for Dolphin. For more information on how these patches are made and what they do, please refer to the previous Progress Report. All of the following games are of the variety that run uncapped internally, and each of them has been patched to force the game to synchronize to the Vertical Blanking Interrupt (VBI).

Special thanks to all of the con­trib­u­tors that in­cre­mented Dolphin by 465 com­mits af­ter Release 2512!

...

Read the original on dolphin-emu.org »

9 288 shares, 18 trendiness

Software Engineering with AI

CodeSpeak works in mixed projects where some code is written manually and some is generated from specs. Here's an example from the MarkItDown repository (forked). Check out the step-by-step guide on mixed projects.

CodeSpeak can take over some of the ex­ist­ing code and re­place it with specs 5-10x smaller. Maintaining specs is a lot eas­ier for hu­mans.

We took real code from open-source projects and generated specs from it. Here's how it panned out:

Encoding auto-detection and normalization for beautifulsoup4 (Python library for parsing HTML and XML)

EML to .md converter for markitdown (Python library for converting anything to markdown)

[1] When computing LOC, we strip blank lines and break long lines into many

[2] The list of Italian municipality codes (~8000 LOC) is excluded

...

Read the original on codespeak.dev »

10 282 shares, 20 trendiness

Enhancing gut-brain communication reversed cognitive decline, improved memory formation in aging mice

The sight of a delectable plate of lasagna or the aroma of a holiday ham are sure to get hungry bellies rumbling in anticipation of a feast to come. But although we've all experienced the sensation of “eating” with our eyes and noses before food meets mouth, much less is known about the information superhighway, known as the vagus nerve, that sends signals in the opposite direction — from your gut straight to your brain.

These sig­nals re­lay more than just what you’ve eaten and when you are full. A new study in mice from re­searchers at Stanford Medicine and the Palo Alto, California-based Arc Institute has iden­ti­fied a crit­i­cal link be­tween the bac­te­ria that live in your gut and the cog­ni­tive de­cline that of­ten oc­curs with ag­ing.

“Although memory loss is common with age, it affects people differently and at different ages,” said Christoph Thaiss, PhD, assistant professor of pathology. “We wanted to understand why some very old people remain cognitively sharp while other people see significant declines beginning in their 50s or 60s. What we learned is that the timeline of memory decline is not hardwired; it's actively modulated in the body, and the gastrointestinal tract is a critical regulator of this process.”

The mouse study showed that the com­po­si­tion of the nat­u­rally oc­cur­ring bac­te­r­ial pop­u­la­tion that lives in the gut, known as the gut mi­cro­biome, changes with age — fa­vor­ing some species of bac­te­ria over oth­ers. These changes are reg­is­tered by im­mune cells in the gas­troin­testi­nal tract, which spark an in­flam­ma­tory re­sponse that ham­pers the abil­ity of the va­gus nerve to sig­nal to the hip­pocam­pus — the part of the brain re­spon­si­ble for mem­ory for­ma­tion and spa­tial nav­i­ga­tion. Stimulating the ac­tiv­ity of the va­gus nerve in older an­i­mals turned old, for­get­ful mice into whisker-sharp whizzes able to re­mem­ber novel ob­jects and es­cape from mazes as nim­bly as their younger coun­ter­parts.

“The degree of reversibility of age-related cognitive decline in the animals just by altering gut-brain communication was a surprise,” Thaiss said. “We tend to think of memory decline as a brain-intrinsic process. But this study indicates that we can enhance memory formation and brain activity by changing the composition of the gastrointestinal tract — a kind of remote control for the brain.”

Thaiss, who is also a core in­ves­ti­ga­tor at Palo Alto-based Arc Institute, is a se­nior au­thor of the study, which was pub­lished March 11 in Nature. Maayan Levy, PhD, an as­sis­tant pro­fes­sor of pathol­ogy and Arc Institute in­no­va­tion in­ves­ti­ga­tor, is the other se­nior au­thor. Timothy Cox, a grad­u­ate stu­dent at the University of Pennsylvania, is the lead au­thor of the re­search.

“Our study emphasizes that processes in the brain can be modulated through peripheral intervention,” Levy said. “Since the gastrointestinal tract is easily accessible orally, modulating the abundance of gut microbiome metabolites is a very appealing strategy to control brain function.”

The call is com­ing from in­side the body

The idea that hundreds of species of bacteria are nestled comfortably in our intestines used to be surprising. But the gut microbiome is experiencing a kind of media heyday as people realize that its function is critical not just to how we digest our food but also to our overall health. A little more than a decade ago, researchers showed that tinkering with rodents' gut microbiomes affected the animals' social and cognitive behaviors. Thaiss and Levy wondered whether a similar process could be responsible for the memory loss and cognitive troubles often associated with aging.

Signals from in­side the body to the brain — like those that travel from the in­testines to the brain via the va­gus nerve — are part of what’s called in­te­ro­cep­tion. In con­trast, sig­nals from out­side the body, con­veyed pri­mar­ily by the five senses of taste, touch, smell, vi­sion and hear­ing, are called ex­te­ro­cep­tion.


“Exteroception is basically how we perceive the outside,” Thaiss said. “We have a lot of detailed knowledge about how this works. But we know much less about how the brain senses what is going on inside the body. We don't know how many internal senses there are, or even all of what they are sensing. It's clear that our exteroception capabilities decline with age — we grow to need eyeglasses and hearing aids, for example. And this study shows that aging also affects interoception.”

To test their theory that the gut microbiome plays a role in the “senior moments” many of us experience, the researchers housed young (2-month-old) mice together with old (18-month-old) mice. Living (and pooping) in close proximity exposed the young mice to the gut microbiomes of the old mice and vice versa. After one month, the researchers examined the compositions of the microbiomes of the old and young animals.

They found that the shared digs caused the microbiomes of the young mice to more closely resemble those of the older animals. When they compared the abilities of the mice to recognize a novel object, or to find the exit in a maze, the young mice with “old” microbiomes performed significantly more poorly than their peers — showing less curiosity about the unfamiliar object and bumbling about the maze in ways similar to those of old animals.

When the re­searchers com­pared young mice and old mice raised in a germ-free en­vi­ron­ment since birth (meaning nei­ther group had gut bac­te­ria), the young mice main­tained their abil­ity to form mem­o­ries. But when they trans­planted young, germ-free mice with mi­cro­bio­mes from old mice, the young mice again per­formed like older an­i­mals in the mem­ory and cog­ni­tion tests. Interestingly, the germ-free old mice did not ex­pe­ri­ence a loss of mem­ory and cog­ni­tion as they aged, per­form­ing as well as 2-month-old an­i­mals.

Strikingly, treating young mice with “old” microbiomes (and, therefore, faltering cognitive abilities) with broad-spectrum antibiotics for two weeks restored the animals' cognitive abilities, causing them to avidly investigate unfamiliar objects and scamper through the maze as well as their control peers.

“The object recognition test is like cognitive recognition tests in humans, where you are shown a series of images, then have to remember which ones you've seen before after some time passes,” Thaiss said. “And the maze test is like people trying to recall where they parked their car at a large shopping center. What these tasks have in common, in mice and in people, is that they are very strongly dependent on activity in the hippocampus, because that is where memories are encoded.”

What’s dif­fer­ent in their guts?

Digging deeper, the researchers identified specific changes that occur in the composition of the gut microbiome of mice as they age. In particular, the relative abundance of a bacterium called Parabacteroides goldsteinii increases in old mice and is directly associated with cognitive decline in the animals. They showed that colonizing the guts of young mice with this bacterial species impaired their performance on the object recognition and maze escape tasks, and that this deficit correlated with a reduction of activity in the hippocampus.

When they treated old mice with a mol­e­cule that ac­ti­vates the va­gus nerve, how­ever, the cog­ni­tive per­for­mance of the an­i­mals was in­dis­tin­guish­able from that of young an­i­mals.

Further ex­per­i­ments showed that the in­creas­ing preva­lence of the Parabacteroides gold­steinii bac­te­ria cor­re­lated with an in­creas­ing amount of metabo­lites called medium-chain fatty acids, and that these metabo­lites cause a group of im­mune cells in the gut called myeloid cells to ini­ti­ate an in­flam­ma­tory re­sponse. This in­flam­ma­tion in­hibits the ac­tiv­ity of the va­gus nerve, the ac­tiv­ity of the hip­pocam­pus and the abil­ity to form last­ing mem­o­ries.

“The GI tract is arguably the first organ system to evolve during human evolutionary history, so the evolution of cognitive processes in the brain has undoubtedly been shaped by signals coming from the intestine,” Levy said.

“It's likely that signals from the GI tract play an important role in contextualizing memory formation.”

Thaiss added, “Basically, we've identified a three-step pathway toward cognitive decline that starts with gastrointestinal aging and the subsequent microbial and metabolic changes that occur. The myeloid cells in the GI tract sense these changes, and their inflammatory response impairs the connection between the gut and the brain via the vagus nerve. This is a direct driver of memory decline. And if we restore the activity of the vagus nerve, we can restore an old animal's memory function to that of a young animal.”

The re­searchers are now in­ves­ti­gat­ing whether a sim­i­lar gut mi­cro­biome and brain ac­tiv­ity path­way ex­ists in hu­mans, and whether it also con­tributes to age-re­lated cog­ni­tive de­cline. Importantly, va­gus nerve stim­u­la­tion is ap­proved by the Food and Drug Administration as a treat­ment for de­pres­sion or epilepsy and to aid stroke re­cov­ery. The re­searchers are also in­ter­ested in de­vel­op­ing ways to non-in­va­sively mon­i­tor, and per­haps even con­trol, the ac­tiv­ity of pe­riph­eral neu­rons to af­fect mem­ory for­ma­tion and cog­ni­tion.

“Our hope is that ultimately these findings can be translated into the clinic to combat age-related cognitive decline in people,” Thaiss said.

Researchers from Monell Chemical Senses Center in Philadelphia; the University of California, Irvine; University College Cork, Ireland; Calico Life Sciences LLC; and the Children’s Hospital of Philadelphia con­tributed to the work.

The study was funded by the Arc Institute, the National Institutes of Health (grants NIH DK019525, T32AG000255, F30AG081097, T32HG000046, F30AG080958, DP2-AG-067511, DP2-AG-067492, DP1-DK-140021, R01-NS-134976 and R01-DK-129691), the Burroughs Wellcome Fund, the American Cancer Society, the Pew Scholar Award, the Searle Scholar Program, the Edward Mallinckrodt Jr. Foundation, the W. W. Smith Charitable Trust, the Blavatnik Family Fellowship, the Prevent Cancer Foundation, the Polybio Research Foundation, the V Foundation, the Kathryn W. Davis Aging Brain Scholar Program, the McKnight Brain Research Foundation, the Kenneth Rainin Foundation, the IDSA Foundation and the Human Frontier Science Program.

...

Read the original on med.stanford.edu »
