10 interesting stories served every morning and every evening.
Our liberation services are temporarily unavailable. Please try again later.
Is your legal team frustrated with the attribution clause? Tired of putting “Portions of this software…” in your documentation? Those maintainers worked for free—why should they get credit?
Does your company forbid AGPL code? One wrong import and suddenly your entire proprietary codebase must be open sourced. The horror!
Tracking licenses across hundreds of dependencies? Legal reviews taking weeks? Third-party audits finding “issues”? What if you could just… not deal with any of that?
Some licenses require you to contribute improvements back. Your shareholders didn’t invest in your company so you could help strangers.
For the first time, a way to avoid giving that pesky credit to maintainers.
Our proprietary AI systems have never seen the original source code. They independently analyze documentation, API specifications, and public interfaces to recreate functionally equivalent software from scratch.
The result is legally distinct code that you own outright. No derivative works. No license inheritance. No obligations.
*Through our offshore subsidiary in a jurisdiction that doesn’t recognize software copyright
Simply upload your package.json, requirements.txt, Cargo.toml, or any dependency manifest. Our system identifies every open source package you want liberated.
Our legally-trained robots analyze only public documentation—README files, API docs, and type definitions. They never see a single line of source code. The clean room stays clean.
A completely separate team of robots—who have never communicated with the analysis team—implements the software from scratch based solely on specifications. No copying. No derivation.
Your new code is delivered under the MalusCorp-0 License—a proprietary-friendly license with zero attribution requirements, zero copyleft, and zero obligations.
Do whatever you want
Transparent, pay-per-KB pricing. No tiers, no subscriptions, no hidden fees.
Every package is priced by its unpacked size on npm. We look up each dependency in your package.json, measure the size in kilobytes, and charge … per KB. That’s it.
✓ Up to 50 packages per order
✓ No base fee, no subscription — pay only for what you liberate
Upload Manifest
If any of our liberated code is found to infringe on the original license, we’ll provide a full refund and relocate our corporate headquarters to international waters.*
*This has never happened because it legally cannot happen. Trust us.
“We had 847 AGPL dependencies blocking our acquisition. MalusCorp liberated them all in 3 weeks. The due diligence team found zero license issues. We closed at $2.3B.”
“Our lawyers estimated $4M in compliance costs. MalusCorp’s Total Liberation package was $50K. The board was thrilled. The open source maintainers were not, but who cares?”
“I used to feel guilty about not attributing open source maintainers. Then I remembered that guilt doesn’t show up on quarterly reports. Thank you, MalusCorp.”
“The robots recreated our entire npm dependency tree—2,341 packages—in perfect isolation. Our compliance dashboard went from red to green overnight.”
Trusted by industry leaders who prefer to remain anonymous
Our clean room process is based on well-established legal precedent. The robots performing reconstruction have provably never accessed the original source code. We maintain detailed audit logs that definitely exist and are available upon request to courts in select jurisdictions.
What about the original developers?
They made their choice when they released their code as “open source.” We’re simply exercising our right to independently implement the same functionality. If they wanted compensation, they should have worked for a corporation.
How is this different from copying?
Intent and process. Our robots independently arrive at the same solutions through clean room methodology. It’s like how every movie about an asteroid threatening Earth isn’t plagiarism—sometimes multiple entities just have the same idea.
What if the liberated code has bugs?
Our SLA guarantees functional equivalence, not perfection. Besides, the original open source code probably had bugs too. At least now they’re YOUR bugs, under YOUR license.
Can I see the robots?
Our robot workforce operates in a secure facility in [LOCATION REDACTED]. Tours are available for Enterprise customers who sign our 47-page NDA.
What licenses can you eliminate?
All of them. MIT, Apache, GPL, AGPL, LGPL, BSD, MPL—if it has terms, we can liberate you from them. Special rush pricing available for AGPL emergencies.
Join the thousands of corporations who’ve discovered that open source obligations are merely suggestions when you have enough robots.
No credit card required for quotes. Payment accepted in USD, EUR, BTC, and stock options.
...
Read the original on malus.sh »
...
Read the original on gist.github.com »
FARGO — A grandmother from Tennessee is working to get her life back after what she says was a case of mistaken identity that nearly cost her everything.
Angela Lipps spent nearly six months in jail after Fargo police connected her to a bank fraud case in the metro.
It’s a crime she says she didn’t commit. In fact, she said she’s never been to North Dakota.
Lipps, 50, is the mother of three grown children and has five grandchildren. She has spent nearly her entire life in north-central Tennessee, and the extent of her travels is limited to neighboring states.
She’s never been on an airplane in her life.
That changed last summer when police flew her to North Dakota to face criminal charges after facial recognition showed she was the main suspect in what Fargo police called an organized bank fraud case.
“It was so scary, I can still see it in my head, over and over again,” Lipps said.
It was July 14, the day a team of U.S. Marshals arrested Lipps at her home in Tennessee. She said she was taken away at gunpoint while babysitting four young children. She was booked into her county jail in Tennessee as a fugitive from justice from North Dakota.
“I’ve never been to North Dakota, I don’t know anyone from North Dakota,” Lipps said.
Lipps would sit in that Tennessee jail cell for nearly four months. As a fugitive, she was held without bail. Lipps learned, following a Fargo Police Department investigation, she had been charged with four counts of unauthorized use of personal identifying information and four counts of theft in North Dakota.
In Tennessee, she was given a court appointed lawyer for the extradition process. To fight the charges, she was told she would have to go to North Dakota.
Through an open records request, WDAY News obtained the Fargo police file in this case. In April and May 2025, detectives were investigating several bank fraud cases. A woman is seen using a fake U.S. Army military I.D. card to withdraw tens of thousands of dollars.
In an effort to help identify the woman in the surveillance video, court documents show Fargo police used facial recognition software. The software identified the person as Angela Lipps. According to the court documents, the Fargo detective working the case then looked at Lipps’ social media accounts and Tennessee driver’s license photo.
In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.
Lipps told WDAY News that no one from the Fargo Police Department ever called to question her.
Officers from North Dakota did not pick up Lipps from her jail cell in Tennessee until Oct. 30 — 108 days after her arrest. The next day she made her first appearance in a North Dakota courtroom to fight the charges.
“If the only thing you have is facial recognition, I might want to dig a little deeper,” said Jay Greenwood, the lawyer representing Lipps in North Dakota.
Greenwood immediately asked Lipps for her bank records. Once they were in hand, Fargo police met with him and Lipps at the Cass County jail on Dec. 19. She had already been in jail for more than five months. It was the first time police interviewed her.
Her bank records showed she was more than 1,200 miles away, at home in Tennessee at the same time police claimed she was in Fargo committing fraud.
“Around the same time she’s depositing Social Security checks … she is buying cigarettes at a gas station, around the same time, she is buying a pizza, she is using a cash app to buy an Uber Eats,” Greenwood said.
On Christmas Eve, five days after the interview with Fargo police, the case was dismissed, and she was released from jail.
But, Lipps was now stranded in Fargo.
“I had my summer clothes on, no coat, it was so cold outside, snow on the ground, scared, I wanted out but I didn’t know what I was going to do, how I was going to get home,” Lipps said.
Fargo police did not cover Angela’s expenses to get home after her release from jail. Local defense attorneys gave her money to pay for a hotel room and food on Christmas Eve and Christmas Day.
The day after Christmas, F5 Project founder Adam Martin drove Lipps to Chicago so she could get home to Tennessee. Fargo-based F5 Project is an organization providing services and resources to individuals struggling with incarceration, mental health and addiction.
“I’m just glad it’s over. I’ll never go back to North Dakota,” Lipps said.
For more than a week, WDAY News tried to arrange an on camera interview with Fargo Police Chief David Zibolski to discuss the case. Through a spokesperson the chief declined an on-camera interview. WDAY News brought the issue up on Wednesday, March 11, at Zibolski’s retirement news conference.
“Why did nobody from Fargo Police ever speak with Angela Lipps for the five months she was in jail?” Zibolski was asked.
“Thank you, Matt (Henson), for that question but we are not here to talk about that today,” Zibolski replied.
Lipps is back home in Tennessee now, but is still feeling the impact from the incident. She told WDAY News that no one from the Fargo Police Department has apologized for the incident.
Unable to pay her bills from jail, she lost her home, her car and even her dog.
Fargo police say the bank fraud case is still under investigation and no arrests have been made.
...
Read the original on www.grandforksherald.com »
The Iran war is reviving remote work across the world — from Denmark to Vietnam
...
Read the original on fortune.com »
A few months ago, J. D. Vance, sitting vice president of the United States, gave an interview to Ross Douthat of the New York Times. During that interview, Vance and Douthat had an interesting exchange:
Douthat: How much do you worry about the potential downsides of AI? Not even on the apocalyptic scale, but on the scale of the way human beings respond to a sense of their own obsolescence? These kinds of things.

Vance: So, one, on the obsolescence point, I think the history of tech and innovation is that while it does cause job disruptions, it more often facilitates human productivity as opposed to replacing human workers. And the example I always give is the bank teller in the 1970s. There were very stark predictions of thousands, hundreds of thousands of bank tellers going out of a job. Poverty and commiseration. What actually happens is we have more bank tellers today than we did when the ATM was created, but they’re doing slightly different work. More productive. They have pretty good wages relative to other folks in the economy. I tend to think that is how this innovation happens.
There are two interesting things about what Vance said, both relating to the example that he chose about bank tellers and ATMs.
The first thing is what it tells us about who J. D. Vance is. The bank teller story—how ATMs were predicted to increase bank teller unemployment, but in fact did not—isn’t a story you’ll hear from politicians; in fact, for a long time, Barack Obama would claim, incorrectly, that ATMs had decreased the number of bank tellers, in order to suggest that the elevated unemployment rate during his presidency was due to productivity gains from technology. I’ve never heard a politician cite the bank teller story before: but I have seen the bank teller story cited in a lot of blogs. I’ve seen it cited, for example, by Scott Alexander and Matt Yglesias and Freddie deBoer; and I’ve heard it, upstream of the humble bloggers, from such fine economists as Daron Acemoglu and David Autor. The story of how ATMs didn’t automate bank tellers is, indeed, something of a minor parable of the economics profession. You can see it encapsulated in this wonderful graph from the economist James Bessen:
So Vance’s choice of example tells us the same thing that his appearance on the Joe Rogan Experience did, which is that J. D. Vance—however much he might like to hide it—really, really loves reading blogs.
But the other thing about the bank teller story that Vance cites is that it’s wrong. We do not, contrary to what Vance claims, have “more bank tellers today than we did when the ATM was created”: we in fact have far fewer. The story he tells Douthat might have been true in 2000 or 2005, but it hasn’t been true for years. Bank teller employment has fallen off a cliff. Here is a graph of bank teller employment since 2000:
So what happened to bank tellers? Autor, Bessen, Vance, and the like are right to point out that ATMs did not reduce bank teller employment. But they miss the second half of the story, which is that another technology did. And that technology was the iPhone. The huge decline in bank teller employment that we’ve seen over the last 15-odd years is mainly a story about iPhones and what they made possible.
But why? Why did the ATM, literally called the automated teller machine, not automate the teller, while an entirely orthogonal technology—the iPhone—actually did?
The answer, I think, is complementarity.
In my last piece, on why I don’t think imminent mass job loss from AI is likely, I talked a lot about complementarity. The core point I made was that labor substitution is about comparative advantage, not absolute advantage: the relevant question for labor impacts is not whether AI can do the tasks that humans can do, but rather whether the aggregate output of humans working with AI is inferior to what AI can produce alone. And I suggested that given the vast number of frictions and bottlenecks that exist in any human domain—domains that are, after all, defined around human labor in all its warts and eccentricities, with workflows designed around humans in mind—we should expect to see a serious gap between the incredible power of the technology and its impacts on economic life.
That gap will probably close faster than previous gaps did: AI is not “like” electricity or the steam engine; an AI system is literally a machine that can think and do things itself. But the gap exists, and will exist even as the technology continues to amaze us with what it can now accomplish.
But by talking about why ATMs didn’t displace bank tellers but iPhones did, I want to highlight an important corollary, which is that the true force of a technology is felt not with the substitution of tasks, but the invention of new paradigms. This is the famous lesson of electricity and productivity growth, which I’ll return to in a future piece. When a technology automates some of what a human does within an existing paradigm, even the vast majority of what a human does within it, it’s quite rare for it to actually get rid of the human, because the definition of the paradigm around human-shaped roles creates all sorts of bottlenecks and frictions that demand human involvement. It’s only when we see the construction of entirely new paradigms that the full power of a technology can be realized. The ATM substituted tasks; but the iPhone made them irrelevant.
Let’s start with the actual story of how the ATM affected bank tellers.
In the 1940s or ’50s, if you owned a bank, you needed physical locations—these were your “branches”—and you needed people to staff those branches. You’d have your bank managers, your loan officers, and you’d have your bank tellers. When a customer wanted to deposit a check or check their balance or make a withdrawal, they’d talk to one of the tellers; and because this was the highest-volume type of interaction that people would have with your bank, you’d have to hire tellers in huge numbers.
The bank teller thus became a classic “mid-skill” occupation. It required a high school diploma and about a month of on-the-job training around counting cash and processing checks and settling accounts at the end of each day, but it didn’t require a college degree. And because they handled such a core part of the banking workflow, banks required a huge number of tellers: the average bank branch in an urban area might employ about two dozen people as tellers.
But in the 1950s and ’60s, as Western economies were booming and enjoying their magnificent postwar economic expansions, labor was getting much more expensive. This was a good thing—it was simply the other side of rising wages—but it was also painful for enterprises that relied on lots of manual labor. And so we find that all the fashionable business concepts of the 1950s and ’60s revolved around reducing labor costs to the maximum extent possible. It’s no coincidence that it was in the 1950s that the word “automation” entered the English language.
It used to be, for instance, that when you went shopping you’d have your stuff retrieved for you by a small army of clerks running around the shop; indeed that’s still how it’s done in places like India with an abundance of cheap labor. But humans were getting expensive in the 1950s and ’60s, so everyone wanted to reduce the human component, and so in that period you saw the rise of supermarkets and discount stores, where the whole innovation is getting the stuff yourself. (Sam Walton’s Made in America is a good record of what that revolution was like from the inside; consumers tended to be quite happy with the whole thing, since corporate savings could be passed on in the form of cheaper goods.) And it’s the same reason why in the ’50s and ’60s you saw the rise of laundromats, vending machines, self-service gas stations, and “fast food” restaurants like McDonald’s.
So in the 1950s and ’60s, the goal of every single business that employed humans was to find ways to replace humans with machines: in economic terms, to substitute capital for labor. And even though they were a relatively labor-light business to start with, this was true of banks as well. This was the case in the United States, but it was actually particularly true in Europe, where labor unrest among bank employees was an ongoing headache. (Financial sector employees were actually some of the most militant of all white-collar workers during this period: because of prolonged strikes by bank employees, Irish banks were closed 10 percent of the time between 1966 and 1976.)
Enter the computer. In the 1960s, to the great relief of bank management teams, it became possible to imagine that computers could be used to reduce the role of human labor in the banking process.
There were two key innovations that made this possible. The first was IBM’s invention of the magnetic stripe card in the 1960s: this was a thin strip of magnetized tape, bonded to a plastic card, that could encode and store data like account numbers, and which could be read by a machine when swiped through a card reader. And the second was Digital Equipment Corporation’s pioneering minicomputer, which dramatically reduced the price and size of general-purpose computing.
And so, bringing those two innovations together, you could finally imagine a machine that could do, programmatically, what a human teller might do: that could identify a customer automatically, via the magnetic stripe; that could communicate with the central servers of a bank to verify the customer’s account balance; and that could dispense cash or accept deposits accordingly.
And so in the 1960s, teams working concurrently in Sweden and the United Kingdom pioneered the earliest versions of what would eventually become known as the automated teller machine. These were primitive devices—they had the tendency to “eat” payment cards and to dispense incorrect amounts of money, and they didn’t see much uptake—but by the late 1960s it was clear where things were going. IBM, at that point the largest technology company in the world, soon took interest in the technology, and for the next few years groups of IBM engineers refined the technological and infrastructural layer to make the ATM functional.
And by the mid-1970s, after years of technical investment, the ATM was finally ready for prime time. By that point IBM, then enjoying its peak of influence, had decided the market wasn’t worth the investment, and so it ceded the nascent ATM industry to a company called Diebold.
And in 1977 the ATM finally got its big break. Citibank, then the second-largest deposit bank in the United States, decided to make ATMs the subject of a large push: it spent a large sum installing the machines across its deposit branches. The New York Times reported it as “a $50 million gamble that the consumer can be wooed and won with electronic services.” But the response was tepid. In the same New York Times article, we encounter a scene from a bank branch in Queens where one of Citibank’s ATMs was installed: “most of the customers,” the article reports, “preferred to wait in line a few moments and deal with the teller rather than test the new machines.”
But Citibank’s gamble paid off. Consumer wariness toward ATMs turned out to be temporary: the advantages of the ATM over the human teller were obvious. Running an ATM was cheaper than paying a human—each ATM transaction cost the bank just 27 cents, compared to $1.07 for a human teller—and this could either be passed to the consumer in the form of lower fees or simply kept as profit. And ATMs were also just more convenient. An ATM could do in 30 seconds what would take a human teller at least a few minutes; and while a human teller was only available during business hours, ATMs could be used at any time of day.
And the benefits for the bank were even greater. ATMs were expensive to install, but once they were installed they were wonderfully lucrative and had low maintenance costs. The fee opportunities were wonderful, since banks could charge fees on out-of-network transactions. And since ATMs were not legally considered to be branches, banks could deploy ATMs without running afoul of banking laws that restricted interstate bank branching.
All of this meant that banks had a really strong incentive to put ATMs everywhere. And so they did. In 1975 there were about 31 ATMs per one million Americans; by the year 2000, that number had grown to 1,135, a 37-fold increase in just 25 years.
And what did this do to the bank tellers?
The natural expectation is that ATMs would make human bank tellers obsolete, or at least strongly reduce demand for bank teller jobs. And indeed the number of bank tellers per branch declined significantly: from 21 tellers per branch to about 13 per branch once ATMs had hit saturation. But this decline in teller intensity corresponded with an increase in aggregate teller employment. The number of ATMs per capita grew dramatically after 1975; but the number of bank tellers increased along with it. Bank tellers did become a smaller share of total employment, since the increase in bank teller employment was smaller than the increase in other occupations; but at no point in the period between 1970 and 2010 did the number of bank tellers actually enter a prolonged decline.
Why is that? Why did ATMs, which automated the bulk of the teller’s job, not lead to a decrease in teller employment?
We find the most elegant explanation in a paper from David Autor:
First, by reducing the cost of operating a bank branch, ATMs indirectly increased the demand for tellers: the number of tellers per branch fell by more than a third between 1988 and 2004, but the number of urban bank branches (also encouraged by a wave of bank deregulation allowing more branches) rose by more than 40 percent. Second, as the routine cash-handling tasks of bank tellers receded, information technology also enabled a broader range of bank personnel to become involved in “relationship banking.” Increasingly, banks recognized the value of tellers enabled by information technology, not primarily as checkout clerks, but as salespersons, forging relationships with customers and introducing them to additional bank services like credit cards, loans, and investment products.
We thus have a classic case of the Jevons effect. Teller labor was an input into an output that we can call “financial services.” ATMs allowed us to produce that output more efficiently and economize on the use of the labor input. But demand for the output was sufficiently elastic that more efficient production meant more demand: and demand increased to the point that there was actually greater demand for the labor input as well. And—this part is not quite the classic Jevons effect—the greater demand suggested to banks that there had been certain functions that were previously considered incidental to the teller job, like “relationship banking,” which were actually quite useful. And so ATMs were a truly complementary technology for the bank teller.
By the 2010s, people had begun to notice that there had been no mass unemployment of bank tellers. In 2015, James Bessen published a book called Learning by Doing, using the non-automation of bank tellers as a central example; soon it became a sort of load-bearing parable about what Matt Yglesias called “the myth of technological unemployment.” From Bessen the story diffused to Autor and Acemoglu; then to the economics bloggers; then to people like Eric Schmidt, who cited the ATM story in 2017 as one reason why he was a “denier” on the question of technological job loss. And they were right: ATMs really didn’t reduce bank teller employment.
But there was an ironic element to all of this: at the exact moment that people started talking about how technology had not displaced bank tellers, it stopped being true.
In the 2010s, bank teller employment entered a period of prolonged decline. This was not a product of the financial crisis that peaked in 2008: bank teller employment was roughly the same in 2010 as it had been in 2007. And the decline was not rapid but gradual. It continued even as banks returned to full health as the Great Recession abated. First there was a severe decline that started after 2010; then a slight recovery at the end of the decade; and then a collapse during the COVID years from which bank teller employment has never recovered. In 2010, there were 332,000 full-time bank tellers in the United States; by 2016, there were 235,000; by 2022, there were just 164,000.
This was not a long-delayed ATM shock: the ATM had reached full saturation long before. It was, rather, the effect of another technology, one that had nothing to do with banking. It was a product of the iPhone.
Apple first introduced the iPhone in 2007. By 2010, it was clear that the iPhone-style smartphone, with a touchscreen and an app store, was going to be the defining technological paradigm of the years to come: people were going to conduct huge portions of their life through the prism of the smartphone, which soon became simply “the phone.” And just as more forward-thinking institutions like Citibank knew in the 1970s that ATMs were the future, the smarter banks knew by the early 2010s that the future lay in what they called mobile banking.
The mobile banking vision was simple: the banking customers of the future would do all their banking via their banks’ mobile apps. They would buy things via payment cards or, later, via Apple Pay; they would check their balance or make deposits through the banking app; the customer’s relationship with the bank would be mediated entirely via the app. In this new world, there was no reason for the physical bank location to exist. Indeed there were new entrants, like Revolut or Klarna, that existed entirely as mobile apps. The branch was a thing of the past.
Mobile banking succeeded much more rapidly than the ATM did—which is remarkable, considering that mobile banking was a much bigger change than the ATM. I remember, as a kid, opening my first bank account at the Chase branch in my hometown, and the excitement of occasionally visiting there to deposit any checks I might have. I’m still a Chase customer, and I interact frequently with my Chase account for all sorts of reasons. But it’s been many years since I visited a physical Chase location. My relationship with Chase has transcended any need for the branch. I don’t think I’m alone in this: the Chase branch in my hometown, where I would once deposit checks, closed in 2023. The building now houses a doctor’s office.
And so the rise of mobile banking removed any real reason to have bank branches. Visits to bank branches declined dramatically throughout the 2010s, and banks aggressively redesigned the banking experience around the digital interface. The number of commercial bank branches per capita peaked in 2009 and has fallen by nearly 30 percent since, with most of the decline occurring in wealthier areas that were more likely to adopt digital banking first. Between 2008 and 2025, Bank of America, which at some point surpassed Citibank as the second-largest deposit bank in the United States after Chase, closed about 40 percent of its branches. Online banking had been around since the 1990s, Bank of America’s CEO said, but the iPhone was a “game changer” that “effectively allowed customers to carry a bank branch in their pockets.”
And as the branch disappeared, so did the teller. The ATM had been an innovation within the existing world of physical banking, and thus its replacement of the bank teller could inevitably only be partial; as long as people were still visiting the bank branch, it was useful to repurpose tellers as “relationship bankers.” But when branch visits declined, that stopped making sense. The iPhone represented a wholly different way of banking, and within it there was no real need for the bank teller: and so a large institution like Bank of America was able to reduce its headcount from 288,000 in 2010 to 204,000 in 2018.
Of course, the transition to mobile banking also created jobs: banks now needed software developers to build and maintain the digital interface, and they needed customer service representatives to handle any problems that might emerge. And so a “mid-skill” job was replaced by a thin stratum of “high-skill” jobs and a vast army of “low-skill” ones. The term for this in labor economics is “job polarization.”
So that’s the irony of the parable of the bank teller. Technology did kill the bank teller job. It wasn’t the ATM that did it, but the iPhone.
I think the story of the ATM and the iPhone offers us an important lesson about technology and its impacts on labor markets. Because Vance, of course, wasn’t really talking about ATMs when he talked about ATMs; he was talking about AI.
The lesson is worth stating plainly. The ATM tried to do the teller’s job better, faster, cheaper; it tried to fit capital into a labor-shaped hole; but the iPhone made the teller’s job irrelevant. One automated tasks within an existing paradigm, and the other created a new paradigm in which those tasks simply didn’t need to exist at all. And it is paradigm replacement, not task automation, that actually displaces workers—and, conversely, unlocks the latent productivity within any technology. That’s because as long as the old paradigm persists, there will be labor-shaped holes in which capital substitution will encounter constant frictions and bottlenecks.
This has, I think, serious implications for how we’re thinking about AI.
People in AI frequently talk about the vision of AI being a “drop-in remote worker”: AI systems that can be inserted into a workflow, learn it, and eventually do it on the level of a competent human. And they see that as the point where you’ll start to see serious productivity gains and labor displacement.
I am not a “denier” on the question of technological job loss; Vance’s blithe optimism is not mine. But I’m skeptical that simply slotting AI into human-shaped jobs will have the results people seem to expect. The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists. We are still very much in the regime of slotting it in. And as long as we are in that regime, I expect disappointing productivity gains and relatively little real displacement.
The real productivity gains from AI—and the real threat of labor displacement—will come not from the “drop-in remote worker,” but from something like Dwarkesh Patel’s vision of the fully-automated firm. At some point in the life of every technology, old workflows are replaced by new ones, and we discover the paradigms in which the full productive force of a technology can best be expressed. In the past this has simply been a fact of managerial turnover or depreciation cycles. But with AI it will likely be the sheer power of the technology itself, which really is wholly unlike anything that has come before, and unlike electricity or the steam engine will eventually be able to build the structures that harness its powers by itself.
I don’t think we’ve really yet learned what those new structures will look like. But, at the limit, I don’t quite know why humans have to be involved in those: though I suspect that by the time we’re dealing with the fully-automated organizations of the future, our current set of concerns will have been largely outmoded by new and quite foreign ones, as has always been the case with human progress.
But, however optimistic I might be about the human future, I don’t think it’s worth leaning on the history of past technologies for comfort. The ATM parable is a comforting story; and in times of uncertainty and fear we search naturally for solace and comfort wherever it may come. But even when it comes to bank tellers, it’s only the first half of the story.
...
Read the original on davidoks.blog »
I love a good side-project. Like most geeks, I have a tendency to go down rabbit holes when faced with problems - give me a minor inconvenience and I’ll happily spend weeks building something far more elaborate than the situation warrants. There’s joy in having a playground to explore ideas and “what ifs”, building things just for the sheer hell of it; or, as Richard Feynman put it, “The Pleasure of Finding Things Out”.
So when my covers band started having trouble keeping track of our setlists and song notes (“How many times do we repeat the ending?”, “Why did we reject this song again?”…) I decided to build an app. We’d tried various approaches from spreadsheets to chat groups, and nothing seemed to work or provide a frictionless way of capturing notes and planning gigs in a consistent way.
I’ve been working on https://setlist.rocks for the last few months in my evenings and spare time and I’m pretty pleased with the result. But the most important thing (and the subject of this article) is that I’ve also re-discovered just how enjoyable building a web application the old-fashioned way can be! I usually gravitate towards retro-computing projects as I’ve been pretty bummed out with the state of the modern landscape for a while, but I can honestly say I haven’t had this much fun with development in a long time. And that’s mostly due to just how plain awesome Rails is these days.
I know, right? Rails. That old thing ? People still use that ? But as I was doing this purely for fun, I decided to forgo the usual stacks-du-jour at $DAYJOB, and go back to my “first love” of Ruby. I also figured it would be a great opportunity to get re-acquainted with the framework that shook things up so much in the early 2000s. I’d been keeping half an eye on it over the years but it’s been a long time since I’ve done anything serious with Rails. The last time I properly sat down with it was probably around the Rails 3-4 era about 13-14 years ago now. Life moved on, I got deep into infrastructure and DevOps work, and Rails faded into the background of my tech stack.
The 2025 Stack Overflow Developer Survey paints a similar picture across the wider developer world as a whole, too. Rails seems to have pretty much fallen out of favour, coming in at #20 underneath the bulk of top-10 JavaScript and ASP.NET frameworks.
And Ruby itself is nowhere near the top 10 languages, sitting just underneath Lua and freaking Assembly language in terms of popularity! I mean, I love me some good ol’ z80 or 68k asm, but come on… For comparison, Javascript is at 66% and Python is at 57.9%.
But I’m a stubborn bastard, and if I find a technology I like, I’ll stick with it particularly for projects where I don’t have to care about what anyone else is using or what the latest trend is. So Ruby never really left me. I’ve always loved it, and given the choice, it’s the first tool I reach for to build something.
In recent years, the need to glue infrastructure together with scripts has diminished somewhat, as most things seem to be “black boxes” driven by YAML manifests or HCL codebases. But when I first discovered Ruby, it felt like finding a language that just worked the way my brain did. Coming from Perl (which I’d adopted for sysadmin scripting after years of shell scripts that had grown far beyond their intended scope), I read Practical Ruby for System Administration cover-to-cover and realised Ruby was “a better Perl than Perl”. There’s the same wonderful expressiveness to it, just without all the weird voodoo. I love the way you can chain methods, the blocks with yield, and how even complex logic reads almost like English. There’s just this minimal translation required between what I’m thinking and what I type. Sure, I can knock things together in Python, Go, or whatever the flavour of the month is, but I always feel on some level like I’m fighting the language rather than working with it. And of course there was the welcoming, quirky “outsider” community feel with characters like Why the Lucky Stiff and their legendary Poignant Guide To Ruby.
I should point out that my interest (and focus of this blog post) has always been firmly in the “engine room” side of development - the sysadmin, DevOps, back-end infrastructure world. Probably for much the same reason I’ve gravitated towards the bass guitar as my musical instrument of choice. Now, I’m conversant in front-end technologies, having been a “webmaster” since the late 90s when we were all slicing up images in Fireworks, wrestling with table-based layouts and running random stuff from Matt’s Script Archive for our interactive needs.
But the modern world of front-end development - JavaScript frameworks, the build tooling, the CSS hacks - it’s never really captured my imagination in the same way. I can bluff my way in it to a certain extent, and I appreciate it on the level I do with, say, a lot of Jazz: It’s technically impressive and I’m in awe of what a skilled developer can do with it, but it’s just not for me. It’s a necessity, not something I’d do for fun.
While I haven’t built or managed a full Rails codebase in years, I’d never completely left the Rails ecosystem. There’s bits and pieces that are just so useful even if you’re just chucking a quick API together with Sinatra. ActiveSupport for example has been a constant companion in various Ruby projects over the years - it’s just so nice being able to write things like
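The sort of thing I mean, to give a flavour of it (illustrative snippets rather than code from the app):

```ruby
require "active_support/all"   # outside Rails you pull it in explicitly

3.days.ago                   # => a Time three days in the past
2.weeks.from_now             # => handy for "next rehearsal" style logic
"set_list_item".camelize     # => "SetListItem"
"   ".blank?                 # => true -- nil, "" and whitespace all count as blank
[1, 2, 3].second             # => 2
```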
But sitting down with Rails 8 proper was something else. It’s recognisable, certainly - the MVC structure, the conventions, the generators are all where you’d expect them. Someone with my dusty Rails 3 experience can still find their way around and quickly throw up the basic scaffolding. But under the hood and around the edges, it’s become a very different beast.
So let’s tackle this part first. Although it’s an area I usually stay clear of, the first and most apparent change is how front-end code is handled. As someone who’d rather chew glass than configure Webpack, the “no build” approach Rails 8 has taken is right up my street. I grew up on server-side generated pages as I went through Perl CGI.pm, PHP, Java & Struts and onwards to the “modern era” and really like how I can still use a modernized version of that approach instead of running the entirety of the application in a browser and relegating the backend to purely processing streams of JSON.
I did want to include niceties like drag-and-drop setlist re-ordering though, so I particularly appreciated being able to build an interactive application with modern conveniences while writing the smallest possible amount of JS (again, something I always find I’m fighting against). The default Hotwire (“HTML Over The Wire”) stack of Stimulus and Turbo provided enough low-friction functionality to build my frontend without drowning in JavaScript.
Turbo handles things like intercepting link clicks and form submissions, then swapping out the <body> or targeted fragments of the page to give a Single Page App-like snappiness without actually building a SPA. I could then sprinkle in small Stimulus JS controllers to add specific behaviour where needed, like pop-up modals and more dynamic elements. It was pretty impressive how quickly I could build something that felt like a modern application while still using my familiar standard ERB templates and server-side rendered content.
While Stimulus seems to have a smaller developer community than the big JS toolkits/frameworks, there are plenty of great, carefully-written and designed component libraries you can easily drop into your project. For example check out the Stimulus Library and Stimulus Components projects which include some great components that you can tweak or use directly.
This was my first introduction to the vastly simplified JS library bundling tool that seems to have been introduced around the Rails 7 timeframe. Instead of needing a JS runtime, NPM tooling and separate JS bundling/compilation steps (Webpack - again, urgh….), JS components are now managed with the simple importmap command and tooling. So, to make use of one of those components like the modal Dialog pop-up for example, you just run:
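That is, the bin/importmap pin command with the package name; for this component presumably something close to bin/importmap pin @stimulus-components/dialog, though the exact package name may differ. The pin ends up recorded in config/importmap.rb roughly like so:

```ruby
# config/importmap.rb (line added by the pin command; the vendored filename
# convention is from memory and may differ slightly)
pin "@stimulus-components/dialog", to: "@stimulus-components--dialog.js"
```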
This downloads the package from a JS CDN and adds it to your vendor directory and updates your config/importmap.rb. The package then gets automatically included in your application with the javascript_importmap_tags ERB tag included in the <head> of the default HTML templates. You can see how this gets expanded if you look at the source of any generated page in your browser.
You can then register the controller as per the docs (a few lines in javascript/controllers/index.js which can be skipped for locally-developed controllers as they’re handled by the autoloader) and get using it right away in your view templates. As the docs say: “This frees you from needing Webpack, Yarn, npm, or any other part of the JavaScript toolchain. All you need is the asset pipeline that’s already included in Rails.”
I can’t express how grateful I am for this change. I’m also annoyed with myself for missing out that this was added back in Rails 7. Had I noticed, I probably would have taken it out for a spin far sooner! I have to confess though that beyond the basics, I have somewhat lacking front-end skills (and was quickly developing The Flexbox Rage), so I took bits from various templates & online component generators, and got Claude to generate the rest with some mockups of common screens and components. I then sliced & diced, copied & pasted my way to a usable UI using a sort of customized “UI toolkit” - Rails partials are great for these sorts of re-usable elements.
I have mixed feelings about this. On the one hand, it helped me skip over the frustrating parts of frontend development that I don’t particularly enjoy, so I could focus on the fun backend stuff. It also did produce an objectively better experience far quicker than anything I’d have been able to come up with purely by myself. On the other… I view most AI-generated content such as music, art & poetry (not to mention the typical LinkedIn slop which triggers a visceral reaction in me) to be deeply objectionable. My writing and artistic content on this site is 100% AI-free for that very reason; To my Gen-Xer mind, these are the things that really define what it means to be human and I find it distasteful and unsettling in the extreme to have these expressions created by an algorithm. And yet - for me, coding is a creative endeavour and some of it can definitely be considered art. Am I a hypocrite to use UI components created with help from an AI ? What (if any) is the difference between that and copying from some Bootstrap template or modifying components from a UI library ? I’m going to have to wrestle with this some more, I think.
A slight detour here to explain my workflow and hopefully illustrate why I love Rails so much in the first place. It really shook things up in the early 2000s - before that, most of the web frameworks I’d used (I’m looking at you, Struts…) were massively complex and required endless amounts of XML boilerplate and other configuration to wire things up. Rails threw all that away and introduced the notion of “convention over configuration” and took full advantage of the expressive, succinct coding style enabled by Ruby.
A good way to get familiar with Rails is to follow the tutorial, but here’s a quick walkthrough of the dev process I’ve used since the early days which highlights how easy it is to get going. So, using the “tags” system (that bands can attach to songs, setlists etc.) as an example: I first planned out the model, what is a tag, what attributes should it have (a text description, a hex colour) and so on. Then I used a Rails generator to create the scaffolding and migrations:
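The generator invocation itself is the usual bin/rails generate scaffold Tag ... incantation; the attribute list below is my reconstruction from the description above, so treat it as illustrative. The more interesting output is the migration it writes for you:

```ruby
# db/migrate/XXXXXXXXXXXXXX_create_tags.rb -- written by the scaffold generator
# (attribute names are assumptions based on the description above)
class CreateTags < ActiveRecord::Migration[8.0]
  def change
    create_table :tags do |t|
      t.references :band, null: false, foreign_key: true
      t.string :name
      t.string :colour      # hex colour, e.g. "#1D4ED8"
      t.text :description

      t.timestamps
    end
  end
end
```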
This resulted in an app/models/tag.rb like this:
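Roughly the following; the belongs_to is my assumption (it matches the nested routes below), and otherwise the generated file is almost empty:

```ruby
# app/models/tag.rb
class Tag < ApplicationRecord
  belongs_to :band
end
```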
This automagically fetches the column names and definitions from the database, no other work required! Of course, we usually want to set some validation. There’s all kinds of hooks and additions you can sprinkle here, so if I wanted to validate that for example a valid Hex colour has been set, I could add:
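For the hex colour, a sketch along these lines (\h is Ruby's built-in hex-digit character class):

```ruby
class Tag < ApplicationRecord
  belongs_to :band

  # Only accept six-digit hex colours such as "#1D4ED8"
  validates :colour, format: { with: /\A#\h{6}\z/,
                               message: "must be a hex colour like #1D4ED8" }
end
```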
Then I set up URL routing. While you can later get very specific about which routes to create, a simple starting point is just this one line in config/routes.rb
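That one line being the nested resources :tags; the surrounding bands block would already exist in the app (my assumption about how the routes are organised):

```ruby
# config/routes.rb
resources :bands do
  resources :tags
end

# Which generates the usual RESTful routes, e.g.:
#   GET /bands/:band_id/tags/:id(.:format)   tags#show
```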
Note all the (.:format) stuff - this lets you respond to different “extensions” with different content types. So in this case, requesting /bands/1/tags/5 would return HTML by default, but I could also request /bands/1/tags/5.json and the controller can be informed that I’m expecting a JSON response.
I tend to use this to quickly flesh out the logic of an application without worrying about the presentation until later. For example, in the Tags controller I started with something like this to fetch a record from the DB and return it as JSON:
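A sketch of that early, JSON-only pass (controller and model names assumed from the routes above):

```ruby
# app/controllers/tags_controller.rb
class TagsController < ApplicationController
  before_action :set_band

  # GET /bands/:band_id/tags/:id(.:format)
  def show
    @tag = @band.tags.find(params[:id])
    render json: @tag   # presentation comes later; JSON is enough to exercise the logic
  end

  private

  def set_band
    @band = Band.find(params[:band_id])
  end
end
```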
And then I could test my application logic over those RESTful routes using just plain old curl from my terminal.
Once that was all working, I moved on to generating the views as standard ERB templates. Combined with live-reloading and other developer niceties, I could go from idea to working proof-of-concept in a stupidly short amount of time. Plus, there seems to be a gem for just about anything you might want to build or integrate with. Want to import a CSV list of songs ? CSV.parse has you covered. How about generating PDFs for print copies of setlists ?
And so on. Have I mentioned I love Ruby?
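To make the CSV example concrete, importing songs is only a few lines with the standard library (the column names and Song attributes here are invented):

```ruby
require "csv"

# `file` is the uploaded CSV export (e.g. params[:file]); header names are assumptions.
CSV.parse(file.read, headers: true).each do |row|
  band.songs.find_or_create_by!(title: row["Title"]) do |song|
    song.artist = row["Artist"]
    song.notes  = row["Notes"]
  end
end
```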
I’ve always liked the way Rails lets you enable components and patterns as you scale. You can start small on just SQLite, move to a dedicated database server when traffic demands it, then layer in caching, background jobs and the rest as the need arises.
But the problem there is all the additional infrastructure you need to stand up to support these things. Want caching? Stand up Redis or a Memcache. Need a job queue or scheduled tasks? Redis again. And then there’s the Ruby libraries like Resque or Sidekiq to interact with all that… Working at GitLab, I certainly appreciated Sidekiq for what it does, but for the odd async task in a small app it’s overkill.
This is where the new Solid* libraries (Solid Cache, Solid Queue and Solid Cable) included in Rails 8 really shine. Solid Cache uses a database instead of an in-memory store, the thinking being that modern storage is plenty fast enough for caching purposes. This means you can cache a lot more than you would do with a memory-based store (pretty handy these days in the middle of a RAM shortage!), but you also don’t need another layer such as Redis.
Everything is already set up to make use of this; all you need to do is start using it through standard Rails caching patterns. For example, I make extensive use of fragment caching in ERB templates, where entire rendered blocks of HTML are stored in the cache. This can be something as simple as caching a fragment for a specific time period.
Or caching based on a model, so that when the model gets updated the cached fragment is re-generated.
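The same store is also reachable from plain Ruby with Rails.cache, which is a handy way to watch Solid Cache doing its thing; a quick sketch, with the cache keys and the renderer invented for illustration:

```ruby
# Anywhere in app code -- this goes straight into Solid Cache's SQLite-backed store.
stats = Rails.cache.fetch("band_#{band.id}_stats", expires_in: 12.hours) do
  band.songs.group(:status).count      # recomputed at most every 12 hours
end

# Keyed on a model: the key includes the record's updated_at, so the entry
# is naturally recomputed after the song changes.
chart = Rails.cache.fetch([song, "chord_chart"]) do
  ChordChartRenderer.render(song)      # hypothetical helper
end
```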
And sure enough, you can see the results in the SQLite DB using your usual tools. Here’s the table schema:
And we can examine the cache contents:
Note though that the actual cached values are serialized Ruby objects stored as BLOBs, so you can’t easily view/decode them outside of the Rails console.
Solid Queue likewise removes the dependency on Redis to manage background jobs. Just like Solid Cache, it by default will use a database for this task. I also don’t need to start separate processes in my dev environment, all that is required is a simple SOLID_QUEUE_IN_PUMA=1 bundle exec rails server and it runs an in-process queue manager.
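The jobs themselves are plain Active Job classes; a sketch of the kind of background work the app might push onto Solid Queue (the class and helper are invented):

```ruby
# app/jobs/setlist_pdf_job.rb -- hypothetical example job
class SetlistPdfJob < ApplicationJob
  queue_as :default

  def perform(setlist)
    # Render and attach a printable PDF copy of the setlist.
    SetlistPdfRenderer.attach_to(setlist)   # hypothetical helper
  end
end
```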
And it’s scheduled in a typically plain-language fashion:
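Something along these lines (illustrative, continuing the invented job above; `setlist` and `gig` are records from the app):

```ruby
# Run it in the background right away...
SetlistPdfJob.perform_later(setlist)

# ...or push it into the future with Active Job's plain-English scheduling:
SetlistPdfJob.set(wait_until: gig.starts_at - 1.day).perform_later(setlist)
```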
Beautiful! The upshot is that I could start making use of all these features from the get-go, with far less fiddling required, and running entirely off a SQLite database.
I honestly didn’t use Solid Cable much, apart from indirectly. It’s an Action Cable adapter which again uses a database by default. It’s useful for real-time websockets features, although I only ended up using it to enable debugbar for local testing. Debugbar provides a handy toolbar that lets you inspect your SQL queries, HTTP requests and other useful debugging features while you’re developing. Reminded me a lot of the debug toolbars found in a lot of PHP frameworks like Symfony. Still, I really appreciated again being able to make use of all this without needing to spin up additional infrastructure.
The last component I didn’t really look into (although I’m kinda having second thoughts about that) is the new authentication generators. Rails 8 ships with a built-in authentication generator which is a bit of a game changer for smaller projects. It’s not trying to be everything, it just scaffolds out a clean, no-nonsense auth system but is vastly simpler than using something like Devise which was always my go-to. Devise is incredibly full featured and offers things like built-in sign-up flow, account locking, email confirmations and lots of extension points. I wanted to do things like hook into Omniauth for “Login with Google”, add token auth for local testing with curl and there’s just way more guides and documentation available with Devise. Plus it was just easier for me to pick back up again, so that’s what I started with and I’m pretty happy with it.
That said, Devise is a bit of a beast. The more I look into the auth generators, the more I like the simple understandable philosophy and as I read more about the comparisons, if I was starting all over again I’d probably lean more towards the native Rails option just because honestly it feels like it’d be more fun to hack on. But with things like Auth, there’s a lot to be said for sticking to the beaten path!
This is another area where Rails 8 gave me a very pleasant surprise. I really like PostgreSQL as a database (and much more besides) - I used to maintain the Solaris packages for Blastwave/OpenCSW waaaay back (now that really does age me!) and have run it in production for decades now. But it’s still another dependency to manage, back-up and scale. SQLite by comparison is as simple as it comes: Single file, no DB server required. It can also be pretty efficient and fast, but while it can be used for high-performance, read-heavy applications it always used to require a fair amount of tuning and patching of Rails to get there.
Rails used to use SQLite with its default settings, which were optimized for safety and backward compatibility rather than performance. It was great in a development environment, but typically things started to fall apart the moment you tried to use it for production-like load. Specifically, you used to have to tweak various PRAGMA statements:
journal_mode: The default rollback journal meant readers could block writers and vice-versa, so you effectively serialized all database access. This was a major bottleneck and most apps would see frequent SQLITE_BUSY errors start to stack up as a result. Instead, you can switch it to WAL mode which uses a write-ahead journal and allows readers and writers to access the DB concurrently.
synchronous: The default here (FULL) meant SQLite would force a full sync to disk after every transaction. But for most web apps, if you use NORMAL (sync at critical moments) along with the WAL journal, you get much faster write performance albeit with a slight risk of losing the last transaction if you have a crash or power failure. That’s usually acceptable though.
Various other related pragmas which had to be tuned like mmap_size, cache_size and journal_size_limit to make effective use of memory and prevent unlimited growth of the journal, busy_timeout to make sure lock contention didn’t trigger an immediate failure and so on…
All in all, it was a pretty big “laundry list” of things to monitor and tune which only reinforced the notion that SQLite was a toy database unsuitable for production. And it was made more complex because there wasn’t an easy way to set these parameters. So you’d typically have to create an initializer that ran raw SQL pragmas on each new connection:
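The old workaround looked roughly like this; a sketch only, since the exact hook point varied from one Rails version to the next:

```ruby
# config/initializers/sqlite_pragmas.rb -- rough sketch of the pre-Rails 8 workaround
require "active_record/connection_adapters/sqlite3_adapter"

module SQLitePragmas
  # configure_connection runs whenever a new raw connection is established,
  # so the pragmas get applied to every connection the app checks out.
  def configure_connection
    super
    execute("PRAGMA journal_mode = WAL")
    execute("PRAGMA synchronous = NORMAL")
    execute("PRAGMA mmap_size = 134217728")         # 128 MB
    execute("PRAGMA journal_size_limit = 67108864") # 64 MB
    execute("PRAGMA busy_timeout = 5000")
  end
end

ActiveRecord::ConnectionAdapters::SQLite3Adapter.prepend(SQLitePragmas)
```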
This was obviously pretty fragile, so most developers I worked with simply never did it, and just followed the pattern of “SQLite on my laptop, big boy database for anything else”.
When I checked out Rails 8, I noticed straight away that not only is there now a new pragmas: block available in the database.yml, but the defaults are now also set to sensible values for a production application. The values provided to my fresh Rails app were equivalent to:
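Roughly the following - illustrative values, not a verbatim copy of the generated config:

# config/database.yml
production:
  adapter: sqlite3
  database: storage/production.sqlite3
  pragmas:
    journal_mode: wal             # readers and writers no longer block each other
    synchronous: normal           # sync only at critical moments
    mmap_size: 134217728          # 128 MiB of memory-mapped I/O
    journal_size_limit: 67108864  # cap WAL growth at 64 MiB
    busy_timeout: 5000            # wait on locks instead of failing immediately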
All this makes SQLite a genuinely viable production database for small-to-medium Rails applications and, combined with the Solid* components, means it’s not just a local dev or “getting started” convenience!
If you have an older Rails codebase and want to use a similar approach, a neat method of monkey-patching the SQLite adapter to provide a similar pragmas: section in the database configuration is detailed in this great article.
However, deploying Rails apps was always the weak spot. I remember being blown away by the demos of “let’s build a blog from zero in a few minutes” but was always frustrated that the same developer elegance didn’t extend to the deployment side of things. Things like Passenger (née mod_rails) and Litespeed eventually helped by bringing a sort of PHP-like “just copy my code to a remote directory” method of deployment, but I still remember pushing stuff out with non-trivial Capistrano configs or hand-rolled Ansible playbooks to handle deployments, migrations and restarts. And then there were all the extra supporting components that would inevitably be required at each step along the way.
I had to include that old capture of the modrails.com site circa-2008 because a.) I really miss when websites had that kind of character, and b.) that is still a totally sick wildstyle logo 😄
This is why services like Heroku and Pivotal Cloud Foundry thrived back then - they offered a pain-free, albeit opinionated way to handle all this complexity. As the Pivotal haiku put it:
Here is my source code
Run it in the cloud for me
I do not care how.
You just did a git push or cf push, vague magic happened, and your code got turned into containers, linked to services and deployed.
These days I prefer to do the building of containers myself. Creating an OCI image as an artifact gives me flexibility over where things run and opens up all kinds of options. Today it might be a simple docker-compose stack on a single VPS, tomorrow it could be scaled out across a Kubernetes cluster via a Helm chart or operator. The container part is straightforward, as Rails creates a Dockerfile in each new application which is pretty much prod-ready. I usually tweak it slightly by adopting a “meta” container approach, where I move some of the stuff that changes infrequently - installing gems, running apt-get and so on - into an image that the main Dockerfile uses as a base.
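As a rough sketch of that split (image names, tags and package lists are purely illustrative):

# base/Dockerfile - the slow-moving "meta" image, rebuilt only when dependencies change
FROM ruby:3.3-slim
RUN apt-get update -qq && \
    apt-get install -y --no-install-recommends build-essential libsqlite3-0 && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /rails
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Dockerfile - the application image, rebuilt on every change, using the base above
FROM registry.example.com/myapp-base:latest
COPY . .
RUN bin/rails assets:precompile
CMD ["bin/rails", "server", "-b", "0.0.0.0"]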
You’re of course free to use any method you like to deploy that container, but Rails 8 makes Kamal the new default and it is an absolute joy to use.
I’ve seen some dissenting opinions on this, but bear in mind I’m coming from a place where I’m already building containers for everything anyway. I generally think this is “the way to go” these days, and I already have the rest of the infra like CI/CD pipelines, container registries, monitoring and so on. Plus, given my background, I crank out VMs and cloud hosts with Terraform/Ansible “all day errday”. If you don’t have this stuff already, or aren’t happy (or don’t have the time) to manage your own servers, remember that Kamal is not a PaaS - it just gets you close to a self-hosted environment that functions very much like one. Now that Heroku is in “sustaining engineering” mode, there are several options in the PaaS space you may want to investigate if that’s more up your street. I hear good things about fly.io but hasten to add I haven’t used it myself.
Your Kamal deployment configuration lives in a deploy.yml file where you define your servers by role: web frontends, background job workers and so on:
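A minimal sketch of what that can look like (the service name, image and hosts here are placeholders, not taken from the original post):

# config/deploy.yml
service: myapp
image: myuser/myapp

servers:
  web:
    - 192.168.1.10
    - 192.168.1.11
  job:
    hosts:
      - 192.168.1.20
    cmd: bin/jobs

registry:
  username: myuser
  password:
    - KAMAL_REGISTRY_PASSWORD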
Or you can point everything at a single host and scale out later. These files can also inherit from a base, which makes splitting out the differences between environments simple. There are also handy aliases defined which make interacting with the containers easy - all that’s required is an SSH connection to the remote hosts.
When you deploy, Kamal will:
* Build the container, push it to the registry, and then pull it onto all servers
* Route traffic to the new container once it passes health checks
* Perform clean-up by pruning old images and stopped containers
The routing bit is handled by kamal-proxy, a lightweight reverse proxy that sits in front of your application on each web server. When a new version deploys, kamal-proxy handles the zero-downtime switchover: It spins up the new container, health-checks it, then seamlessly cuts traffic over before stopping the old one. I front everything through Nginx (which is also where I do TLS termination) for consistency with the rest of my environment, but kamal-proxy doesn’t require any of that. It can handle your traffic directly and does SSL termination via Let’s Encrypt out of the box.
Secrets are handled sensibly too. Rather than committing credentials to your repo or fiddling with encrypted files, Kamal reads secrets from a .kamal/secrets file that simply points at other sources of secrets. These get injected as environment variables at deploy time, so you can safely handle your registry password, Rails master key, database credentials and so on. You can also pull secrets from external sources like 1Password or AWS SSM if you want something more sophisticated, and the sample file contains examples to get you going.
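For illustration, a stripped-down .kamal/secrets might look something like this (indicative of the generated sample rather than copied from it):

# .kamal/secrets - pointers to secrets, not the secrets themselves
KAMAL_REGISTRY_PASSWORD=$KAMAL_REGISTRY_PASSWORD   # resolved from the shell environment at deploy time
RAILS_MASTER_KEY=$(cat config/master.key)          # read from a local file that never gets committed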
That’s a lot, but bear in mind it’s all driven by a single command: kamal deploy.
Here’s an Asciinema capture of a real-life manual deploy session including a look at what’s happening on my staging server in my homelab:
I have this triggered by GitLab CI pipelines, with protected branches for each of my environments. So usually, deployment happens after a simple git push or an approved merge request. The upshot is that it feels like that old Heroku magic again, except you own the whole stack and can see exactly what’s happening. A single kamal deploy builds, pushes and rolls out your changes across however many servers you’ve configured. It’s the kind of tooling Rails has needed for years.
Well, nothing’s perfect, and I feel like if I use any technology for long enough I’ll eventually find something about it that pisses me off. I just tend to gravitate towards things that piss me off the least and avoid those that give me the “red mist” without any balancing upsides that make the pain worthwhile. Ruby and Rails definitely fall firmly into the former camp, but that’s like, just my opinion, man.
What I find appealing about the “magic” of Ruby might feel opaque and confusing to you. If you like expressive code and come from a Perl “There Is More Than One Way To Do It” background, I imagine you’ll love it. But I’ve come to realise that choice of tools (vi vs emacs vs vscode - FIGHT!) can be a very personal matter and often reflect far more of how our own minds work. Particularly so when it comes down to something like language and framework choice: These are the lowest layers that are responsible for turning your thoughts and ideas into executable code.
As a matter of taste, Ruby lines up more or less exactly with my sense of aesthetics about what a good system should be. But it is certainly an acquired taste, and that’s the biggest downside. Remember the survey results from the top of this article? There’s no denying that Ruby and Rails’ appeal has become… “more selective” over the years - to coin another phrase, this time from Spinal Tap.
It’s used in a lot of places that don’t make a lot of noise about it (some might surprise you), and there are still plenty of big established names like Shopify, Soundcloud and Basecamp running on Rails. Oh, and GitHub, although I’m not sure we should shout about that anymore… But. While the Stack Overflow survey isn’t necessarily an accurate barometer of developer opinion, the positions of Ruby and Rails do show they’ve fallen from grace in recent times. Anecdotally, I find a lot of documentation and guides that haven’t been updated for several years, and the same goes for a lot of gems, plugins and other projects. Banners like this are becoming more and more common:
And I find that most gems follow a similar downward trend of activity. Take Devise for example. Plotting a graph of releases shows a pattern I see around a lot of Rails-adjacent projects. Big spikes or projects launched around the Rails “glory years” and then slowly trailing off into maintenance mode:
Apart from a spike in 2016, where it appears there was a bunch of activity around the v4 release, it’s been pretty quiet since then. The optimist might say that’s because by this point, most of these projects are simply “done”. These are really mature, reliable projects with around two decades of history running mission critical, high traffic websites. At what point are there simply no more features to add?
But let’s look at the flipside. Rails on the other hand actually seems to be picking up steam and has been remarkably consistent since the big “boom” of Rails 3.0 in 2010:
Despite the changing trends of the day, it’s consistently shipped releases every single year since it hit the bigtime. If anything, Rails is a rare example of an OSS project that’s grown into its release cadence rather than burning out. Whether it can still find an audience amongst new developers is an open question but I’m glad there are obviously a few more stubborn bastards like myself refusing to let go of what is clearly, for us, a very good thing. I probably could eventually build things almost as fast in another language or framework, but I doubt I’d be smiling as much while I did so.
If you’ve made it this far, congratulations and “thanks for coming to my TED talk” / protracted rant! I’m guessing something has piqued your curiosity, and if so, I highly recommend taking Rails out for a spin. Work through the tutorial, build something cool, and above all enjoy yourself while you’re at it - because at the end of the day, that’s what it’s all about. Sure, there are more popular frameworks that’ll make a bigger splash on your resume. But as I said at the start, sometimes it’s worth doing things just for the sheer hell of it.
...
Read the original on www.markround.com »
Apple released the MacBook Neo today and there is no shortage of tech reviews explaining whether it’s the right device for you if you are a student, a photographer or a writer. What they don’t tell you is whether it fits into our Big Data on Your Laptop ethos. We wanted to answer this using a data-driven approach, so we went to the nearest Apple Store, picked one up and took it for a spin.
Well, not much! If you buy this machine in the EU, there isn’t even a charging brick included. All you get is the laptop and a braided USB-C cable. But you likely already have a few USB-C bricks lying around — let’s move on to the laptop itself!
The only part of the hardware specification that you can select is the disk: you can pick either 256 or 512 GB. As our mission is to deal with alleged “Big Data”, we picked the larger option, which brings the price to $700 in the US or €800 in the EU. The amount of memory is fixed to 8 GB. And while there is only a single CPU option, it is quite an interesting one: this laptop is powered by the 6-core Apple A18 Pro, originally built for the iPhone 16 Pro.
It turns out that we have already tested this phone under some unusual circumstances. Back in 2024, with DuckDB v1.2-dev, we found that the iPhone 16 Pro could complete all TPC-H queries at scale factor 100 in about 10 minutes when air-cooled and in less than 8 minutes while lying in a box of dry ice. The MacBook Neo should definitely be able to handle this workload — but maybe it can even handle a bit more. Cue the inevitable benchmarks!
For our first experiment, we used ClickBench, an analytical database benchmark. ClickBench has 43 queries that focus on aggregation and filtering operations. The operations run on a single wide table with 100M rows, which uses about 14 GB when serialized to Parquet and 75 GB when stored in CSV format.
We ported ClickBench’s DuckDB implementation to macOS and ran it on the MacBook Neo using the freshly minted v1.5.0 release. We only applied a small tweak: as suggested in our performance guide, we slightly lowered the memory limit to 5 GB to reduce reliance on the OS’s swapping and to let DuckDB handle memory management for larger-than-memory workloads. This is a common trick in memory-constrained environments where other processes are likely using more than 20% of the total system memory.
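For reference, that limit is a single DuckDB setting - a minimal example using the value from this post:

SET memory_limit = '5GB';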
We also re-ran ClickBench with DuckDB v1.5.0 on two cloud instances, yielding the following lineup:
* The star of our show, the MacBook Neo with 2 performance cores, 4 efficiency cores and 8 GB RAM
* c6a.4xlarge with 16 AMD EPYC vCPU cores and 32 GB RAM. This instance is popular in ClickBench with about 80 individual results reported.
* c8g.metal-48xl with a whopping 192 Graviton4 vCPU cores and 384 GB RAM. This instance is often at the top of the overall ClickBench leaderboard.
The benchmark script first loaded the Parquet file into the database. Then, as per ClickBench’s rules, it ran each query three times to capture both cold runs (the first run when caches are cold) and hot runs (when the system has a chance to exploit e.g. file system caching).
Our experiments produced the following aggregate runtimes, in seconds:
Cold run. The results start with a big surprise: in the cold run, the MacBook Neo is the clear winner with a sub-second median runtime, completing all queries in under a minute! Of course, if we dig deeper into the setups, there is an explanation for this. The cloud instances have network-attached disks, and accessing the database on these dominates the overall query runtimes. The MacBook Neo has a local NVMe SSD, which is far from best-in-class, but still provides relatively quick access on the first read.
Hot run. In the hot runs, the MacBook’s total runtime only improves by approximately 10%, while the cloud machines come into their own, with the c8g.metal-48xl winning by an order of magnitude. However, it’s worth noting that on median query runtimes the MacBook Neo can still beat the c6a.4xlarge, a mid-sized cloud instance. And the laptop’s total runtime is only about 13% slower despite the cloud box having 10 more CPU threads and 4 times as much RAM.
For our second experiment, we picked the queries of the TPC-DS benchmark. Compared to the ubiquitous TPC-H benchmark, which has 8 tables and 22 queries, TPC-DS has 24 tables and 99 queries, many of which are more complex and include features such as window functions. And while TPC-H has been optimized to death, there is still some semblance of value in TPC-DS results. Let’s see whether the cheapest MacBook can handle these queries!
For this round, we used DuckDB’s LTS version, v1.4.4. We generated the datasets using DuckDB’s tpcds extension and set the memory limit to 6 GB.
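If you want to reproduce this locally, generating the data looks roughly like the following, where sf is the TPC-DS scale factor:

INSTALL tpcds;
LOAD tpcds;
CALL dsdgen(sf = 100);        -- use sf = 300 for the larger run
SET memory_limit = '6GB';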
At SF100, the laptop breezed through most queries with a median query runtime of 1.63 seconds and a total runtime of 15.5 minutes.
At SF300, the memory constraint started to show. While the median query runtime was still quite good at 6.90 seconds, DuckDB occasionally used up to 80 GB of space for spilling to disk and it was clear that some queries were going to take a long time. Most notably, query 67 took 51 minutes to complete. But hardware and software continued to work together tirelessly, and they ultimately passed the test, completing all queries in 79 minutes.
Here’s the thing: if you are running Big Data workloads on your laptop every day, you probably shouldn’t get the MacBook Neo. Yes, DuckDB runs on it, and can handle a lot of data by leveraging out-of-core processing. But the MacBook Neo’s disk I/O is lackluster compared to the Air and Pro models (about 1.5 GB/s compared to 3–6 GB/s), and the 8 GB memory will be limiting in the long run. If you need to process Big Data on the move and can pay up a bit, the other MacBook models will serve your needs better and there are also good options for Linux and Windows.
All that said, if you run DuckDB in the cloud and primarily use your laptop as a client, this is a great device. And you can rest assured that if you occasionally need to crunch some data locally, DuckDB on the MacBook Neo will be up to the challenge.
...
Read the original on duckdb.org »
Dolphin started out as a GameCube emulator in 2003. In 2008, experimental Wii support was added. And now in 2026, Dolphin enters the realm of arcade emulation with support for the Triforce, a joint Sega, Namco, and Nintendo arcade effort. Want to learn more about the Triforce? Check out our deep dive into the Triforce and how support for it ended up back in Dolphin!
Emulating a new system and library of games for the first time in 18 years is big news, and we’ll dive into some details and updates on that later in the report. However, there are two other major additions that would have been flagship changes on their own in any other report. Optimizations to Dolphin’s MMU emulation have brought major performance uplifts to games that rely on custom page table mappings. In fact, on powerful hardware, all Full MMU games can run full speed, including the legendary Rogue Squadron III: Rebel Strike.
On the other side of things is a targeted fix that was an epic tale spanning years of frustration. In the end, the incredible efforts of the Mario Strikers Charged community combined with several CPU emulation experts finally cracked the case of a minor physics bug that would normally be impossible to see or test.
All that and more awaits in this release’s jam-packed notable changes!
All changes below are available in Release 2603.
“Fastmem” is one of Dolphin’s biggest performance tricks. The GameCube and Wii’s MMIO (Memory-Mapped Input and Output) memory mappings do not directly translate to the host system, and without a way to sort ordinary RAM accesses from MMIO accesses, Dolphin would need to manually translate every memory access at tremendous cost. Enter fastmem, where Dolphin exploits the host CPU’s exception handler to sort accesses for us. If we map the entire GC/Wii address space to host memory, mappings for main memory will just work, no intervention needed. But attempting to access an address assigned to MMIO will trigger a CPU fault. When that happens, Dolphin can catch this fault and immediately backpatch the affected JIT code to manually translate the address instead. Just like that, Dolphin will only spend the effort translating memory accesses that it actually needs to translate, and the rest will be performed natively by the host CPU at incredible speeds.
However, fastmem didn’t cover everything - it only supported memory mapped with Block Address Translation (BAT). Any time a game accessed memory mapped with a page table, the other way to map memory on the GameCube, Dolphin would always have to manually translate it.
This may sound problematic, but thanks to tricks like the MMU Speedhack, the vast majority of games see no performance penalty from page table accesses. In order for this lack of support to actually impact performance, a game would have to have a custom memory handler that resists our hacks and heavily relies on the page table. But what developer would be crazy enough to do all of that?
Factor 5 is a recurring frenemy here on the Dolphin blog for a reason. If there’s a trick to squeeze more power out of a GameCube, Factor 5 probably invented it. In this case, they went through all that trouble to get extra RAM.
The GameCube is a memory starved machine - its paltry 24MiB of RAM is tiny compared to its capabilities and storage media, which limited what developers could do with the machine. However, there is an additional 16MiB of RAM connected to the DSP, known as Audio RAM (ARAM). On a memory starved machine, an extra 66% of memory is extremely tempting.
There’s a problem, though - the GameCube’s CPU does not have direct access to ARAM. All the GameCube’s CPU can do with ARAM is tell the DSP to use Direct Memory Access (DMA) to copy data from a range of main memory to ARAM, or vice versa. This is normally used for various DSP functions, such as loading audio samples.
For a game to use Audio RAM as general purpose RAM, some setup is required. If a game configures a range in the page table to point to invalid memory addresses, when it tries to access this memory, the invalid memory access triggers a CPU fault. That invokes the game’s page fault handler, which uses DMA transfers to swap data between main memory and ARAM as needed. When the game resumes, the memory access will return data as if it were stored in valid memory all along. Through this process, data can be copied to/from ARAM in an automated fashion outside of the primary game logic, allowing games to more or less treat a range of invalid memory as if it were extra RAM. Audio RAM used this way is significantly slower than main memory due to its reliance on DMA transfers, but it’s orders of magnitude faster than waiting for data from a spinning optical disc, so it’s still a huge win.
Notably, the GameCube can only perform the above setup with the page table, as the hardware has too few BATs and their granularity is too large for this technique.
Halfway into the GameCube’s life, Nintendo standardized this ARAM access process with a library provided in the GameCube developer support tools, so that developers didn’t need to waste time creating their own solution. All games that utilize this library configure the page table in the exact same predictable way. Since we know exactly what they are going to do and we aren’t bound to a GameCube’s hardware limitations, Dolphin can just cheat for these games and set up a BAT pointing to extra memory that the GameCube doesn’t have, bypassing all of that trouble while still getting the full benefits of fastmem. This trick is the aforementioned MMU Speedhack, and it has been enabled by default since 2014.
But years before Nintendo standardized this process, Factor 5 did it all themselves with their own custom memory handler in Rogue Squadron II. And, even though Nintendo’s library existed by the time Rogue Squadron III came out, Factor 5 went above and beyond in that game with an even more optimized custom memory handler to squeeze as much as they possibly could out of ARAM. Tricks like this are what allowed Factor 5 to push the GameCube to such ridiculous levels.
Factor 5 may have gone to the extreme, but they weren’t alone. Star Wars: The Clone Wars, Spider-Man 2, and Ultimate Spider-Man also use custom memory handlers.
On the Wii, none of this is necessary as ARAM is directly accessible to the Wii’s CPU as “MEM2”. But that didn’t stop some games from being weird. As we recently covered, the Disney Trio of Destruction™ use the typical memory addresses that any other Wii game would use, but they remove the default BATs and recreate them with the page table. Dolphin now patches out that behavior by default, but we know that they also use the page table for… (checks notes) memory defragmentation?
Since these games use custom memory handlers, their usage of memory on the system is entirely unique. We can’t predict what they are doing and we need to properly emulate their page table mappings. And since page table mappings weren’t implemented in fastmem, every access to memory handled by the page table had to be manually translated, with a varying performance impact depending on whatever the game was doing.
Admittedly, this didn’t affect a lot of games. But one of those titles is Rogue Squadron III: Rebel Strike, a game that has effectively never been playable at full speed in Dolphin. We knew that implementing page table mappings in fastmem would be the key to making these games go faster, and we were more than willing to put in the effort to do it. Unfortunately, we just didn’t know how. We’ve even attempted it a few times over the years, but we never created a successful, let alone fast, implementation.
We were missing something.
While we were trying to defang the Disney Trio of Destruction™, JosJuice noticed that every time Cars 2 manipulated the page table, it executed the instruction tlbie. According to the publicly available PowerPC 750 user manual, after any page table modification tlbie must be used to clear the relevant part of the TLB (the page table cache). If Dolphin paid attention to when this instruction was executed, it would have a way to keep track of every page table modification. Dolphin could use this information to map the page table in fastmem!
Upon this realization, JosJuice immediately began implementing page table mappings in fastmem. However, knowing how to do something and actually doing it are two different things. Even with a plan, this required loads of tricky, low-level work with tons of trial and error. And it wasn’t immediately successful! An early implementation was actually slower than manually translating everything! But after a lot of thinking and experimentation, JosJuice implemented incremental updates to page table mappings in fastmem, where we compare the new mappings with the old ones in 64-byte chunks, and then do a bunch of logic to figure out which mappings need to be removed or added.
This secret sauce is still heavy, but it’s faster than manually translating every page table access! At least, usually. It depends on the game and how often it uses tlbie. Rogue Squadron II and especially III hit ARAM so hard that they will always see a performance boost from page table fastmem. Meanwhile, some games with custom memory handlers, like Spider-Man 2, actually lose a little bit of performance due to the overhead of tracking page table updates. On the flipside, when those games do load from ARAM, the hitches that once plagued them are greatly reduced or completely removed in many cases, so their overall playability is better than before.
Now, this is the part where we show you some fancy performance graphs. However, JosJuice wasn’t quite done yet. After implementing Page Table Fastmem Mappings, they unleashed a flurry of optimizations targeting the Rogue Squadron games specifically. So, we’ve opted to bundle all of the Rogue Squadron results into the next section. For now though, here’s our Spider-Man 2 and Cars 2 performance test results.
As mentioned earlier, Spider-Man 2 is slightly slower with this change, but it has fewer traversal stutters so it feels more fluid in motion. Given that performance is not really an issue with this game - our eight-year-old low end laptop can run it easily - the trade is more than worth it. As for Cars 2, while we patch out the over 10,000 page table mapped entries that the Trio performs when it remaps the standard BATs with the page table, it turns out that they still have 400-800 page table mapping entries remaining. Now that these mappings can go through fastmem, Cars 2 has become a fairly accessible game. The old low end laptop in the test above is almost able to play it at full speed now. Compared to a year ago, when Cars 2’s performance was barely in the double digits on that hardware, this is an amazing turnaround!
Even with page table fastmem mappings, Rogue Squadron II and III are still among the most demanding games in the entire GameCube and Wii library. Fortunately, by removing that primary performance bottleneck, a lot of optimization opportunities revealed themselves.
One major complaint in Rogue Squadron II: Rogue Leader specifically is that swapping between cockpit view and chase view results in a serious stutter. For some reason (we haven’t fully investigated it, but it’s likely related to ARAM swapping), the game forces Dolphin to both invalidate and JIT an enormous amount of code all at once. The sheer volume of what it’s doing in a short time bottlenecks CPU emulation and causes a hard stutter.
JosJuice took multiple measures to speed up this particular case. First, they disabled a feature known as Branch Following for this game. This feature makes the JIT follow branch instructions to create larger blocks of code to process all at once, making it possible for the JIT to output more optimized code. Especially in games with tricky to detect idle loops, such as Xenoblade Chronicles, the larger blocks vastly improve performance. But it also causes the JIT to output more code, making both JIT-ing and invalidating slower, which is very bad in Rogue Squadron II’s case! This feature is why the view change stuttering suddenly got worse soon after 5.0, and disabling Branch Following in the Rogue Squadron games resolves this regression.
Second, JosJuice found ways to optimize invalidating code! By using a more efficient data structure to keep track of JIT blocks, this part of CPU emulation becomes faster and Dolphin has more time to JIT the new code. All combined, the stutter is substantially improved.
The next improvement centers around the other demanding part of emulating these games - their graphics. Rogue Squadron renders stages in a rather unorthodox way. The terrain is divided into squares, and the game draws one square at a time, changing out the active texture for each square. Dolphin has to process hundreds of these squares per frame, with each one triggering Dolphin’s texture lookup code. But it’s not like the terrain is the only thing in the game - ships, ground troops, and lots of other things are being drawn too. Combined, this creates a huge number of draw calls and texture lookups.
JosJuice found a regression where Dolphin was creating extra objects in memory whenever the game used textures. In most games, this regression was essentially invisible. But in the land stages in Rogue Squadron II and III, the affected texture code can run over 300,000 times per second! Even a tiny optimization makes a sizeable difference when the affected code path runs that many times!
Next, these two games saw several changes in Dolphin’s GameSettings database. We already mentioned disabling Branch Following, but several more settings have been changed to improve performance in these games. CPU Vertex Culling is now enabled, letting Dolphin skip rendering 3D objects that wouldn’t be visible anyway, which among other things cuts down on how many of those terrain squares Dolphin has to process.
And finally, Store EFB Copies to Texture Only is no longer being forced off for this game. Why was it forced off, you ask? Back when Rogue Squadron II and III first became playable, turning this off was necessary to get paletted EFB copies working, which are used by the targeting computer. But paletted EFB copies started working with the setting turned on just a month later! Aside from that, the only other effects we’re aware of that needs the setting off are some of the menu fadeouts in Rogue Squadron II. These fadeouts are a relatively minor thing, so we’ve decided to not force a value for the setting. Instead, each user can choose whether they prefer higher performance or nice menu fadeouts.
And now for the big question: with page table mappings in fastmem and all these optimizations summed up, how much faster do Rogue Squadron II and III run?
Benchmark 1: RS2 Ison Corridor: 83 FPS, RS2 Hoth: 35 FPS, RS3 Revenge of the Empire: 13 FPS
Benchmark 2: RS2 Ison Corridor: 93 FPS, RS2 Hoth: 42 FPS, RS3 Revenge of the Empire: 28 FPS
Benchmark 3: RS2 Ison Corridor: 107 FPS, RS2 Hoth: 55 FPS, RS3 Revenge of the Empire: 34 FPS
The performance benefits are absolutely massive. Of particular note is Rogue Squadron III, which doubled in performance! On our top-of-the-line desktop, it can even be played at full speed for the very first time! And it’s not just raw performance that improved - these changes help minimize hitching when the games are loading from ARAM. Even on the most powerful hardware, it’s still not uncommon to drop a frame or two on loading screens and transitions. But compared to before, it’s a night and day difference.
The caveat is that Rogue Squadron II and III still require extremely powerful hardware to get consistently playable speeds. Furthermore, the new default settings sacrifice graphical accuracy for performance, and disabling Store EFB/XFB Copies to Texture Only and Skip EFB Access from CPU lowers performance by roughly 12% to 15% in a demanding scene. That’s still full speed on strong enough hardware, but it does raise the barrier of entry that much more for players that want everything to look just right.
With strategic victories on all fronts, we fly home, having destroyed the once impenetrable Death Star of performance problems. Yet, it is unlikely we’ll have ever truly won. Not only do these games still have some minor problems, but we know that something much more fearsome was in the works. Isolated from the rest of the galaxy, Factor 5 scientists built a Rogue Squadron game for the Wii that was never released. We know it exists, lurking in the maw of someone’s hard drive. If it ever comes online, Dolphin may be crushed by a threat more devastating than any before…
Since Triforce support was added to Dolphin a few weeks ago, the community has been tremendously helpful, with multiple users who own fully operational Triforce cabinets coming forward to give us more information on how they work or to help us run hardware tests. It’s only been a few weeks, so the feature article is still mostly up to date with our current Triforce efforts, but there have been a few changes already merged with more on the horizon:
* Dolphin now automatically inserts Magnetic Cards for cleaning checks. This change is available in 2603.
* Regions are currently hardcoded for games. A setting that allows the user to change regions will be added to the GUI soon.
* Several bugs in Dolphin’s multicabinet emulation have been identified and are being worked on. These fixes will make multicabinet emulation work on a much wider range of hardware.
* F-Zero AX GameCube Memory Card support has been solved and will be coming soon.
* Work on integrated namcam2 support for Mario Kart Arcade GP and GP 2 has been started, which will allow the games to work without the need for a separate program to emulate the camera.
* The touchscreen protocol used by The Key of Avalon has been identified and solved.
* Users with multiple original F-Zero AX cabinets have submitted packet dumps to assist with implementing network emulation. We’ve already made some basic progress, though multiple instances still cannot join each other.
A lot of our effort after merging Triforce support has gone toward solving The Key of Avalon. Despite our best efforts, finding hardware for this game has proven impossible, so we’ve had to reverse engineer the game to determine what the hardware does. Thanks to tools like Ghidra, this is actually doable, though it is very time consuming and monotonous.
This approach isn’t ideal, but we’ve made a lot of progress and had a few breakthroughs. Notably, we’ve identified the touchscreen protocol as being similar to Elo’s SmartSet Data Protocol. By hacking in the appropriate responses, we managed to start a new game! …Only for it to immediately hang on the next screen.
We assumed that the problem was networking related, as the host and client didn’t appear to be synced up, but that was a red herring. It turned out that the game only syncs up the attract mode occasionally between the clients and the host, and the behavior we were seeing was normal. With no leads, we were essentially stuck once again. That was going to be the end of this entry, but serendipitously, Billiard stumbled across a disabled debug logging function built into the game! We patched the game to re-enable it, and the game started spitting out information on exactly what was going wrong. The game told us directly that the culprit was IC Card initialization.
Over the following days, we identified many problems with Dolphin’s IC Card emulation code. So much so that we couldn’t quite get everything done in time for the 2603 release. For now, here’s a teaser of what’s to come.
We also plan to bring multiple IC Card fixes to Virtua Striker 4, Virtua Striker 4 ver. 2006, and Gekitou Pro Yakyuu, which will restore card functionality to them. In the Triforce article, we mentioned there were some modes that we couldn’t find in Gekitou Pro Yakyuu. Well, we found them. They’re locked behind IC Cards! There’s a whole team building mode, complete with a character creator, RPG elements where you level up your player, and the aforementioned homerun contest. The points that are used for high scores are saved to your card and can be used to upgrade your created player.
There will likely be a lot of updates to Triforce emulation in the next Progress Report, but until then, we’ll leave our readers with another question that has us stumped. The Key of Avalon has code to detect special OWABI cards, and through hackery, we can get the game to acknowledge a card as an OWABI card. Unfortunately, we don’t actually know what these cards are for, and the game hangs shortly afterward. If anyone has used an OWABI card or knows how they work, please let us know.
Sometimes a lot of big changes hit all at once. Triforce compatibility is huge. Nearly doubling the performance of the Rogue Squadron games is massive. But, this change here? It was a five year project with tons of twists and turns. Our reaction to seeing this in the changelog can be summed up by this comment from someone on the blog staff:
Back in the August 2021 Progress Report, we talked about a bug fix for Inazuma Eleven GO: Strikers 2013. In this soccer game, if you used a Nintendo Wi-Fi Connection replacement service like Wiimmfi to play an online match between Dolphin and a real Wii, the two players would desync when performing certain actions. This slightly hindered the online experience!
The investigation was rather difficult because we couldn’t debug it offline. Since Dolphin had to be connected over the internet with a real Wii, we couldn’t just pause the emulator and use our usual debugging tools. Thankfully, the game’s community narrowed down the issue and eventually found that the fnmsubs CPU instruction was implemented incorrectly in Dolphin’s JIT but worked correctly in our interpreter.
Armed with that information, JosJuice just had to make the JIT implementation match the interpreter implementation, rather than having to debug an online game. Once the differences were fixed, online play worked fine even when using the JIT.
So, why are we bringing up a problem that was fixed five years ago? It turns out that Inazuma Eleven GO: Strikers 2013 wasn’t the only Wi-Fi enabled Wii soccer game that desynced when playing between Dolphin and real hardware. Mario Strikers Charged suffered from a very similar problem and their community reached out to us following that Progress Report.
Unfortunately, it wasn’t good news. Mario Strikers Charged still desynced shortly after the match started. And unlike Inazuma Eleven, there is no desync mitigation - if a desync happens, it’s game over.
There was a small upside: the fnmsubs fix did allow the game to stay synced for slightly longer. But since desyncs continued to occur after the fix, something was still horribly broken.
JosJuice and JMC47 both already had the game and quickly launched an investigation thinking it wouldn’t be very hard to narrow down the issue. With JosJuice on a Wii U and JMC on Dolphin, they set up a transatlantic Wiimmfi session and tried to play a few matches. Everything worked fine for a while! …Until two players touched one another. And even if that didn’t happen, eventually the game would desync anyway.
To debug the problem, different JIT instructions on the Dolphin side were disabled so that they would fall back to the interpreter.
What exactly are fallbacks? Dolphin is able to emulate the PowerPC CPU in two ways: using a Just-In-Time (JIT) recompiler, or using the interpreter. JIT recompilers are a lot faster than the interpreter, but it is much easier to program an interpreter and understand how it works. As such, the interpreter also contains some extra accuracy improvements that we haven’t wanted to put in the JIT, either because it would hurt performance or because it would be too complicated.
It can be convenient to use the interpreter when investigating CPU emulation issues, but completely forgoing the JIT is very slow! To deal with this, the JIT has a feature called interpreter fallbacks, where one or more CPU instructions can be set to run using the interpreter, while all other CPU instructions run using the JIT as normal. This reduces performance anywhere from 0% to over 50% depending on what instructions are handled by the interpreter and how the game uses them, but even a 50% slowdown is a lot faster than not using the JIT at all.
After trying interpreter fallbacks for every category of instructions, we still had no answer. But if no single category fixed it, there was always the chance that the issue spanned multiple categories. We first tried combinations of floating point instruction fallbacks, to no avail. Then, we outright disabled everything and used the Cached Interpreter instead. This made Dolphin run so slowly that we expected the Wii to kick Dolphin out of the match. But surprisingly, the game was able to cope with the horrible lag! Or at least it did, until it popped up a network synchronization error like in every previous attempt.
This is why the Strikers issue couldn’t be easily fixed. Dolphin had no correct implementation to reference. And with the test case requiring Wiimmfi, a real Wii/Wii U, and a network connection from Dolphin, the issue was essentially impossible to debug.
And so Strikers continued to haunt us. Different developers took stabs at trying to fix it over the years, but again and again we would hit a wall. We were at a loss. But as we despaired, the Mario Strikers Charged community did not give up. If we needed a testable case, they would make us one.
The Mario Strikers Charged labbers are insane. They know more about how the game works than we will ever understand. They’ve mapped out parts of the physics, character stats, and modded in their own features to balance the game to their liking. In order to try to solve this issue, they implemented a new mode into the game: AI vs AI spectator mode! This way, we could watch a match with zero player input. Combined with a patch to freeze RNG, we finally had our test case that we could use to analyze the problem!
By recording a match on console and comparing it to Dolphin, we could see where the game desynced. They even changed the scoreboard functions to print out debug information, so that we would know exactly when things went wrong. The final push was a major collaboration between Geotale, flacs, and most importantly ace Strikers reverse engineer Feder. Together, they managed to narrow the problem down to an extremely small set of instructions that allowed for the creation of a hardware test.
fmadds is closely related to the fnmsubs instruction that Inazuma Eleven GO: Strikers 2013 had problems with, since they’re both Fused Multiply-Add (FMA) instructions. Like all respectably powerful CPUs, the PowerPC CPU in the GameCube and Wii has normal addition and multiplication instructions for both integers and floating point numbers. But with FMA, it also had a type of instruction that x86 CPUs didn’t get until the 2010s: multiplying two floating point numbers and adding or subtracting a third floating point number, all in one go. Doing this in a single instruction not only improves performance, but also boosts accuracy.
When a CPU performs a floating point calculation, the result might end up having more decimals than the CPU can store in its registers, which forces the CPU to round the result. If a calculation is done using a multiplication instruction followed by an addition instruction, this results in double rounding. But if the calculation is done using a Fused Multiply-Add instruction, the result only has to be rounded once, reducing the rounding error. This difference in rounding error has the potential to cause desyncs!
But by the time this issue was investigated, pretty much every CPU running Dolphin was already emulating FMA instructions using FMA instructions. The problem had to be something else. And it couldn’t be the same problem as Inazuma Eleven GO: Strikers 2013 used to have, because that problem only happens as a result of negation, and the mnemonic fmadds conspicuously doesn’t contain n (for “negate”) or sub. Yes, the letter soup in instruction mnemonics actually means something!
To explain the problem, we have to go further back than a Progress Report from 2021. We have to go all the way back to the year the Progress Reports began and revisit the August 2014 Progress Report.
This ancient tome describes the valiant efforts of two legends from Dolphin’s olden days, magumagu and FioraAeterna. Through their work, Dolphin’s emulation of multiplication instructions like fmuls and fmadds became more accurate, fixing replay desyncs in Mario Kart Wii, F-Zero GX, and many other games. Unfortunately, the tome is rather light on details about the specific inaccuracy they fixed, but through an archeological technique known as “reading the commit history”, we were able to fill in the blanks.
On PowerPC, floating point instructions come in two variants: 32-bit and 64-bit. (The instructions we’ve been talking about so far are all 32-bit, as indicated by their mnemonics ending in s for “single precision”.) 32-bit instructions take 32-bit inputs and produce a 32-bit output, and 64-bit instructions take 64-bit inputs and produce a 64-bit output.
But what would happen if you tried to give a 64-bit input to a 32-bit instruction? IBM’s manuals say that you shouldn’t do this, but GameCube and Wii games do it all the time, likely due to compiler quirks. And it does work, kind of! Many other CPU architectures would have read half of the 64 bits of the inputs, resulting in a nonsensical number, but PowerPC almost does the desired behavior. The least significant 28 bits of the right-hand side operand are truncated, but other than that, you get the result you would expect! Fiora’s change implemented this truncation, making Dolphin more accurate to a real console.
But hold on. This is an operation that takes in 64-bit inputs but results in a 32-bit output. There aren’t any x86-64 or AArch64 instructions that do that! Instead, Dolphin emulates the operation using a 64-bit instruction, and rounds to 32-bit afterwards. This results in double rounding! Fortunately, it has been mathematically proven that double rounding is safe when feeding 32-bit inputs into a 64-bit addition, subtraction, multiplication, or division instruction, as it doesn’t change the result at all. There may be rounding errors when using 64-bit inputs, but Dolphin’s approach is still more accurate than the alternative of using a 32-bit instruction.
But hold on once again. What about feeding 32-bit inputs into a 64-bit Fused Multiply-Add instruction?
This is where we return to Feder’s investigation. They had discovered a specific set of 32-bit inputs to fmadds that resulted in -0.83494705 (hexadecimal 0xbf55bf17) on console but -0.83494711 (hexadecimal 0xbf55bf18) on Dolphin. That’s a difference of just one in the hexadecimal representation, which is indicative of a rounding error! Geotale immediately knew what was going on, thanks to her experience of investigating how math works both in Dolphin and on original hardware. The type of double rounding that Dolphin does is in fact unsafe for Fused Multiply-Add.
Geotale quickly implemented a solution in Dolphin’s interpreter: if none of the inputs lose precision if converted to 32-bit, the interpreter converts them to 32-bit and uses a 32-bit FMA instruction. Otherwise, it uses a 64-bit FMA instruction like before. Not long after, the Mario Strikers Charged community were able to confirm that this change solved their desync issue. But the change also had the potential to negatively impact performance. To make Dolphin run different code depending on whether all inputs are safe to convert to 32-bit, a conditional branch instruction is needed. If the result of the condition is the same almost every time, the CPU’s branch predictor can do a very good job, which minimizes the performance impact of the conditional branch. But an incorrect guess from the branch predictor costs tens of cycles of execution time, which adds up quickly with how often games use FMA instructions.
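To make that concrete, here’s a rough C++ sketch of the two paths - this is not Dolphin’s actual code, just the idea: the old double-rounded emulation, and the interpreter’s new guard that only uses a genuine single-rounded 32-bit FMA when every input fits in a float.

#include <cmath>

// The old path: a 64-bit FMA rounded once to double, then rounded again
// when narrowing to 32 bits. Safe for add/sub/mul/div, but not for FMA.
static float FmaddsDoubleRounded(double a, double b, double c)
{
  return static_cast<float>(std::fma(a, b, c));
}

// True if x survives a round trip through float, i.e. converting it to
// 32-bit loses no precision.
static bool FitsInFloat(double x)
{
  return static_cast<double>(static_cast<float>(x)) == x;
}

// The interpreter fix: if all inputs really are 32-bit values, perform a
// single-rounded 32-bit FMA; otherwise keep the old 64-bit behaviour.
static float FmaddsFixed(double a, double b, double c)
{
  if (FitsInFloat(a) && FitsInFloat(b) && FitsInFloat(c))
    return std::fmaf(static_cast<float>(a), static_cast<float>(b), static_cast<float>(c));
  return FmaddsDoubleRounded(a, b, c);
}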
JosJuice therefore came up with another idea: Dolphin could perform the calculation using a 64-bit FMA instruction as before, and then use the 2Sum algorithm to calculate the difference between the mathematically correct result and the 64-bit rounded result. Using a conditional branch, the result would then be nudged by a tiny bit if there was a difference, to make sure the result is rounded in the correct direction. Finally, Dolphin would convert the 64-bit result to 32-bit. Geotale improved on this idea by making the branch conditional not only on whether there was a difference, but also on whether the 64-bit result is exactly halfway between two numbers that can be represented as 32-bit - the exact case that’s troublesome for double rounding. It’s very unlikely for both of these to be true at once, which makes the branch predictor happy. On top of that, this method also increases accuracy for 64-bit inputs! There are still some 64-bit inputs that aren’t handled correctly, mostly involving denormals and numbers so large they can get rounded into infinity, but games shouldn’t trigger these cases unless they’re intentionally trying to be mean to us. We hope.
Unfortunately, the code for this solution is a bunch of math soup that confuses everyone not willing to spend hours trying to understand what’s going on. But Geotale and JosJuice persevered. Thanks to their work, and the multi-year effort of everyone investigating the issue, floating point numbers are now rounded a tiny bit differently for certain instructions. All of this just to let Dolphin play online with real Wii consoles in a game whose official servers are since long dead and whose replacement servers have a peak of only 15 concurrent online players.
But did you know that there’s still a desync if you connect Dolphin and a GameCube through LAN and play 1080° Avalanche together? This is now the last known physics desync in a multiplayer game between Dolphin and console! Keep an eye out for it in a future Dolphin Progress Report!
The Wii Menu is one of the hardest to run pieces of Wii software on Dolphin for one simple reason - Dolphin is too fast. A lot of the memory operations that the Wii does, specifically with NAND management, don’t have proper timings in Dolphin. On console, a lot of these operations are pretty slow, but Dolphin will attempt to complete them as fast as possible and then lag when host hardware can’t keep up even though it’s going many times faster than a real Wii!
In order to make things a little more palatable, Billiard added some very rough timings to ease the difficulty of emulating this menu. Dolphin is still faster than a real Wii, but it’s now slow enough that it shouldn’t bring even the most powerful hardware to its knees anymore. For a full fix, hardware testing and real timing data needs to be used to make the Wii Menu do these operations at a reasonable speed.
Many times over the years, users have asked us to implement a way to load games into RAM. Drives are slow, RAM is fast, it seems like a no brainer. However, every time we would respond by telling them that there would be no difference.
For compatibility reasons, Dolphin emulates the disc read rates of GameCube and Wii optical drives, even down to the Constant Angular Velocity. And those drives are tremendously slow by modern standards. The Wii’s DVD drive was capable of a transfer rate of up to ~8.5 MiB per second. An ATA133 hard drive, which uses a standard that was superseded years before the Wii was released, is over ten times faster!
Whether an Ultra ATA hard drive, a default-speed SD Card, or a Memory Stick, it doesn’t matter - they are all faster than a Wii’s optical disc drive. Even seek times (the bane of spinning rust) didn’t affect Dolphin much, as the seek times of a Wii drive are even worse! Dolphin’s game loading has effectively never been bottlenecked by current storage devices, so there was no reason to implement a way to load games into RAM. There was simply no benefit.
Except, for one specific scenario that is becoming increasingly common these days - playing from a hard drive over a network. As people adopt more and more devices, home intranets have become increasingly complex. At the heart of these intranets is often a Network Attached Storage (NAS) device. These dedicated storage appliances allow their files to be accessed by any device on the network, and feature large storage capacity, redundancy, and integrity verification - everything needed to be very good at long term digital storage. NASes are perfect for storing things like game disc backups.
But playing games from a NAS in Dolphin has often been an annoying experience. The NAS isn’t aware of what some device far away on the network is doing, and since GameCube and Wii games need so little data so infrequently, it may decide to put a hard drive to sleep while someone is actively playing a game that’s on it. This results in a hard stutter the next time the game asks to read the disc, as now Dolphin has to wait for the NAS to wake up the drive. There are ways for NASes to work around issues like this, such as SSD storage pools and RAM caches, but the most common NASes are simple hard drive boxes with little if any caching.
Fortunately, we already knew of a solution that could solve this. All we had to do was provide a way to load the game disc into the host’s system memory. When enabled, Dolphin will continuously copy the disc in the background until the entire game is cached in RAM, and then the NAS can turn off the drive whenever it wants and the player will be none the wiser.
We never thought we’d ever actually implement this feature, but sometimes new problems can be solved by old solutions.
This feature is currently only available in our desktop builds.
SDL hints are a useful mechanism that allows Dolphin to tell SDL how to handle certain controllers. Recently, we’ve been having trouble with some controllers and have been using SDL hints in an attempt to work around the problems. For example, having an 8BitDo Ultimate 2 controller plugged in causes Dolphin to hang on shutdown in some cases. To fix this, we disabled SDL’s DirectInput support as a temporary remediation. Unfortunately, that also broke hotplug support for DualSense and DualShock 4 controllers. Given that other programs don’t seem to be having this issue, it’s fairly apparent that Dolphin is doing something wrong - we just don’t know what.
Users could modify Dolphin’s SDL hints by setting an environment variable, but that’s not really something the average user knows how to do. So TixoRebel added a GUI for enabling certain SDL hints. However, their intended use case wasn’t for fixing either of those bugs! They wanted to use SDL hints to change how Nintendo Switch Joy-Cons are handled. By default, SDL will combine a left Joy-Con and a right Joy-Con into one controller. If you want to separate left and right Joy-Cons into two different controllers to emulate the Wii Remote and Nunchuk, you need to use an SDL hint.
Realizing the opportunity to kill two birds with one stone, we had TixoRebel implement a way to add any SDL hint directly in the Controller Settings GUI. That way, it could be used for Joy-Con handling, working around the 8BitDo/DS4/DS5 bugs, or anything else the user wants!
There’s one caveat we should note: changes to SDL hints will only apply after restarting Dolphin. Please keep that in mind when modifying these settings.
More games have been identified with behaviors that are problematic for Dolphin. For more information on how these patches are made and what they do, please refer to the previous Progress Report. All of the following games are of the variety that run uncapped internally, and each of them have been patched to force the games to synchronize to the Vertical Blanking Interrupt (VBI).
Special thanks to all of the contributors that incremented Dolphin by 465 commits after Release 2512!
...
Read the original on dolphin-emu.org »
CodeSpeak works in mixed projects where some code is written manually and some is generated from specs. Here’s an example from the MarkItDown repository (forked). Check out the step-by-step guide on mixed projects.
CodeSpeak can take over some of the existing code and replace it with specs that are 5-10x smaller. Maintaining specs is a lot easier for humans.
We took real code from open-source projects and generated specs from it. Here’s how it panned out:
Encoding auto-detection and normalization for beautifulsoup4 (a Python library for parsing HTML and XML)
EML to .md converter for markitdown (a Python library for converting anything to Markdown)
[1] When computing LOC, we strip blank lines and break long lines into multiple lines.
[2] A list of Italian municipality codes (~8,000 LOC) is excluded.
...
Read the original on codespeak.dev »
The sight of a delectable plate of lasagna or the aroma of a holiday ham are sure to get hungry bellies rumbling in anticipation of a feast to come. But although we’ve all experienced the sensation of “eating” with our eyes and noses before food meets mouth, much less is known about the information superhighway, known as the vagus nerve, that sends signals in the opposite direction — from your gut straight to your brain.
These signals relay more than just what you’ve eaten and when you are full. A new study in mice from researchers at Stanford Medicine and the Palo Alto, California-based Arc Institute has identified a critical link between the bacteria that live in your gut and the cognitive decline that often occurs with aging.
“Although memory loss is common with age, it affects people differently and at different ages,” said Christoph Thaiss, PhD, assistant professor of pathology. “We wanted to understand why some very old people remain cognitively sharp while other people see significant declines beginning in their 50s or 60s. What we learned is that the timeline of memory decline is not hardwired; it’s actively modulated in the body, and the gastrointestinal tract is a critical regulator of this process.”
The mouse study showed that the composition of the naturally occurring bacterial population that lives in the gut, known as the gut microbiome, changes with age — favoring some species of bacteria over others. These changes are registered by immune cells in the gastrointestinal tract, which spark an inflammatory response that hampers the ability of the vagus nerve to signal to the hippocampus — the part of the brain responsible for memory formation and spatial navigation. Stimulating the activity of the vagus nerve in older animals turned old, forgetful mice into whisker-sharp whizzes able to remember novel objects and escape from mazes as nimbly as their younger counterparts.
“The degree of reversibility of age-related cognitive decline in the animals just by altering gut-brain communication was a surprise,” Thaiss said. “We tend to think of memory decline as a brain-intrinsic process. But this study indicates that we can enhance memory formation and brain activity by changing the composition of the gastrointestinal tract — a kind of remote control for the brain.”
Thaiss, who is also a core investigator at Palo Alto-based Arc Institute, is a senior author of the study, which was published March 11 in Nature. Maayan Levy, PhD, an assistant professor of pathology and Arc Institute innovation investigator, is the other senior author. Timothy Cox, a graduate student at the University of Pennsylvania, is the lead author of the research.
“Our study emphasizes that processes in the brain can be modulated through peripheral intervention,” Levy said. “Since the gastrointestinal tract is easily accessible orally, modulating the abundance of gut microbiome metabolites is a very appealing strategy to control brain function.”
The call is coming from inside the body
The idea that hundreds of species of bacteria are nestled comfortably in our intestines used to be surprising. But the gut microbiome is experiencing a kind of media heyday as people realize that its function is critical not just to how we digest our food but also to our overall health. A little more than a decade ago, researchers showed that tinkering with rodents’ gut microbiomes affected the animals’ social and cognitive behaviors. Thaiss and Levy wondered whether a similar process could be responsible for the memory loss and cognitive troubles often associated with aging.
Signals from inside the body to the brain — like those that travel from the intestines to the brain via the vagus nerve — are part of what’s called interoception. In contrast, signals from outside the body, conveyed primarily by the five senses of taste, touch, smell, vision and hearing, are called exteroception.
“Exteroception is basically how we perceive the outside,” Thaiss said. “We have a lot of detailed knowledge about how this works. But we know much less about how the brain senses what is going on inside the body. We don’t know how many internal senses there are, or even all of what they are sensing. It’s clear that our exteroception capabilities decline with age — we grow to need eyeglasses and hearing aids, for example. And this study shows that aging also affects interoception.”
To test their theory that the gut microbiome plays a role in the “senior moments” many of us experience, the researchers housed young (2-month-old) mice together with old (18-month-old) mice. Living (and pooping) in close proximity exposed the young mice to the gut microbiomes of the old mice and vice versa. After one month, the researchers examined the compositions of the microbiomes of the old and young animals.
They found that the shared digs caused the microbiomes of the young mice to more closely resemble those of the older animals. When they compared the abilities of the mice to recognize a novel object, or to find the exit in a maze, the young mice with “old” microbiomes performed significantly more poorly than their peers — showing less curiosity about the unfamiliar object and bumbling about the maze in ways similar to those of old animals.
When the researchers compared young mice and old mice raised in a germ-free environment since birth (meaning neither group had gut bacteria), the young mice maintained their ability to form memories. But when they transplanted young, germ-free mice with microbiomes from old mice, the young mice again performed like older animals in the memory and cognition tests. Interestingly, the germ-free old mice did not experience a loss of memory and cognition as they aged, performing as well as 2-month-old animals.
Strikingly, treating young mice with “old” microbiomes (and, therefore, faltering cognitive abilities) with broad-spectrum antibiotics for two weeks restored the animals’ cognitive abilities, causing them to avidly investigate unfamiliar objects and scamper through the maze as well as their control peers.
“The object recognition test is like cognitive recognition tests in humans, where you are shown a series of images, then have to remember which ones you’ve seen before after some time passes,” Thaiss said. “And the maze test is like people trying to recall where they parked their car at a large shopping center. What these tasks have in common, in mice and in people, is that they are very strongly dependent on activity in the hippocampus, because that is where memories are encoded.”
What’s different in their guts?
Digging deeper, the researchers identified specific changes that occur in the composition of the gut microbiome of mice as they age. In particular, the relative abundance of a bacterium called Parabacteroides goldsteinii increases in old mice and is directly associated with cognitive decline in the animals. They showed that colonizing the guts of young mice with this bacterial species inhibited their performance on the object recognition and maze escape tasks, and that this deficit correlated with a reduction of activity in the hippocampus.
When they treated old mice with a molecule that activates the vagus nerve, however, the cognitive performance of the animals was indistinguishable from that of young animals.
Further experiments showed that the increasing prevalence of the Parabacteroides goldsteinii bacteria correlated with an increasing amount of metabolites called medium-chain fatty acids, and that these metabolites cause a group of immune cells in the gut called myeloid cells to initiate an inflammatory response. This inflammation inhibits the activity of the vagus nerve, the activity of the hippocampus and the ability to form lasting memories.
“The GI tract is arguably the first organ system to evolve during human evolutionary history, so the evolution of cognitive processes in the brain has undoubtedly been shaped by signals coming from the intestine,” Levy said.
“It’s likely that signals from the GI tract play an important role in contextualizing memory formation.”
Thaiss added, “Basically, we’ve identified a three-step pathway toward cognitive decline that starts with gastrointestinal aging and the subsequent microbial and metabolic changes that occur. The myeloid cells in the GI tract sense these changes, and their inflammatory response impairs the connection between the gut and the brain via the vagus nerve. This is a direct driver of memory decline. And if we restore the activity of the vagus nerve, we can restore an old animal’s memory function to that of a young animal.”
The researchers are now investigating whether a similar gut microbiome and brain activity pathway exists in humans, and whether it also contributes to age-related cognitive decline. Importantly, vagus nerve stimulation is approved by the Food and Drug Administration as a treatment for depression or epilepsy and to aid stroke recovery. The researchers are also interested in developing ways to non-invasively monitor, and perhaps even control, the activity of peripheral neurons to affect memory formation and cognition.
“Our hope is that ultimately these findings can be translated into the clinic to combat age-related cognitive decline in people,” Thaiss said.
Researchers from Monell Chemical Senses Center in Philadelphia; the University of California, Irvine; University College Cork, Ireland; Calico Life Sciences LLC; and the Children’s Hospital of Philadelphia contributed to the work.
The study was funded by the Arc Institute, the National Institutes of Health (grants NIH DK019525, T32AG000255, F30AG081097, T32HG000046, F30AG080958, DP2-AG-067511, DP2-AG-067492, DP1-DK-140021, R01-NS-134976 and R01-DK-129691), the Burroughs Wellcome Fund, the American Cancer Society, the Pew Scholar Award, the Searle Scholar Program, the Edward Mallinckrodt Jr. Foundation, the W. W. Smith Charitable Trust, the Blavatnik Family Fellowship, the Prevent Cancer Foundation, the Polybio Research Foundation, the V Foundation, the Kathryn W. Davis Aging Brain Scholar Program, the McKnight Brain Research Foundation, the Kenneth Rainin Foundation, the IDSA Foundation and the Human Frontier Science Program.
...
Read the original on med.stanford.edu »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.