10 interesting stories served every morning and every evening.
If things go our way, YouTube’s notorious unskippable ads might be a thing of the past come this February.
As Phụ Nữ reports, Vietnam recently announced Decree No. 342, which sets out a number of provisions under the national Advertising Law and is due to take effect on February 15, 2026. The adjustments are expected to place stricter controls on Vietnam’s online advertising activities to protect consumers and curb illegal ads.
Among the decree’s articles, one standout stipulation caps the waiting time before viewers can skip video and animated ads at no more than 5 seconds. Static ads must be immediately dismissible.
Additionally, the decree requires platforms to implement clear and straightforward ways for users to close ads with just one interaction. False or vague symbols designed to confuse viewers are forbidden.
Online platforms must add visible symbols and guidelines to help users report ads that violate the law and allow them to turn off, deny, or stop seeing inappropriate ads.
Besides rules about the user experience, the decree also seeks to tightly regulate ads for 11 groups of goods and services that directly impact the environment and human health, including: cosmetics; food and beverages; milk and formula for children; insecticidal chemicals and substances; medical supplies; healthcare services; plant pesticides and veterinary drugs; fertilizers; plant seeds and saplings; pharmaceuticals; and alcoholic drinks.
...
Read the original on saigoneer.com »
I’ve been tracking AWS for a long time, with a specific emphasis on pricing. “What happens if AWS hikes prices” has always been something of a boogeyman, trotted out as a hypothetical to urge folks to avoid taking dependencies on a given provider.
Over the weekend - on a Saturday, no less - that hypothetical became real.
AWS has quietly raised prices on its EC2 Capacity Blocks for ML by approximately 15 percent. The p5e.48xlarge instance — eight NVIDIA H200 accelerators in a trenchcoat — jumped from $34.61 to $39.80 per hour across most regions, while the p5en.48xlarge climbed from $36.18 to $41.61. Customers in US West (N. California) face steeper hikes, with p5e rates rising from $43.26 to $49.75. The change had been telegraphed: AWS’s pricing page noted (and bizarrely, still does) that “current prices are scheduled to be updated in January, 2026,” though the company neglected to mention which direction.
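Doing the arithmetic on the published rates bears that out; a quick sanity check (prices are the ones quoted above):

```python
# Sanity-check the reported hikes against the published hourly rates.
rates = {
    "p5e.48xlarge (most regions)": (34.61, 39.80),
    "p5en.48xlarge (most regions)": (36.18, 41.61),
    "p5e.48xlarge (us-west-1)": (43.26, 49.75),
}
for name, (old, new) in rates.items():
    pct = (new - old) / old * 100
    print(f"{name}: ${old:.2f} -> ${new:.2f}/hr (+{pct:.1f}%)")
# All three work out to +15.0%, matching "approximately 15 percent".
```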
This comes about seven months after AWS trumpeted “up to 45% price reductions” for GPU instances - though that announcement covered On-Demand and Savings Plans rather than Capacity Blocks. Funny how that works.
For the uninitiated, Capacity Blocks are AWS’s answer to “I need guaranteed GPU capacity for my ML training job next Tuesday.” You reserve specific GPU instances for a defined time window — anywhere from a day to a few weeks out — and pay up front at a locked-in rate. It’s popular with companies doing serious ML work who can’t afford to have a training run interrupted because spot capacity evaporated. The pricing should make it abundantly clear that the people using this aren’t hobbyists; these are teams with budgets measured in millions.
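For a sense of the workflow, here is a minimal boto3 sketch of how a team might find and reserve a block. The instance type, window, and region are illustrative, and the request fields should be checked against the current EC2 API documentation rather than taken on faith:

```python
# Illustrative sketch only: reserving an EC2 Capacity Block with boto3.
# Dates, counts, and region are made up; verify field names against the
# current EC2 API docs before relying on this.
from datetime import datetime, timedelta
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look for a 24-hour block of one p5e.48xlarge starting in a few days.
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5e.48xlarge",
    InstanceCount=1,
    CapacityDurationHours=24,
    StartDateRange=datetime.utcnow() + timedelta(days=3),
    EndDateRange=datetime.utcnow() + timedelta(days=10),
)

offer = offerings["CapacityBlockOfferings"][0]
print(offer["StartDate"], offer["UpfrontFee"], offer["CurrencyCode"])

# Paying the upfront fee locks in the rate for the reserved window.
ec2.purchase_capacity_block(
    CapacityBlockOfferingId=offer["CapacityBlockOfferingId"],
    InstancePlatform="Linux/UNIX",
)
```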
An Amazon spox told us via email, “EC2 Capacity Blocks for ML pricing vary based on supply and demand patterns, as described on the product detail page. This price adjustment reflects the supply/demand patterns we expect this quarter.”
To be clear, AWS has raised prices before, but rarely as a straight increase to a line item. The company prefers to change pricing dimensions entirely, often spinning this as a price reduction for most customers — a claim I’d characterize as “creative.” Historical straight-up price increases have been tied to regulatory actions: per-SMS charges in certain markets and the like. This is different.
The timing is curious for another reason: it hands Azure and GCP a talking point on a silver platter. Both have been aggressively courting ML workloads, and “AWS just raised GPU prices 15%” is exactly the kind of ammunition enterprise sales teams dream about. Whether the competitors can actually absorb the demand is another question — GPU constraints are hardly unique to AWS — but perception matters in enterprise deals.
For companies with Enterprise Discount Programs or other negotiated agreements, this raises uncomfortable questions. EDPs typically guarantee discounts off public pricing — so if public pricing goes up 15 percent, your “discounted” rate just got more expensive in absolute terms, even if the percentage held steady. I expect some pointed conversations between AWS account teams and their larger customers in the coming weeks.
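To make that concrete, here is the arithmetic with a hypothetical 20 percent EDP discount (the discount rate is invented for illustration; the list prices are the p5e rates above):

```python
# Hypothetical EDP arithmetic: a 20% discount (invented for illustration)
# applied to the p5e.48xlarge list prices quoted above.
edp_discount = 0.20
old_list, new_list = 34.61, 39.80

old_effective = old_list * (1 - edp_discount)  # $27.69/hr
new_effective = new_list * (1 - edp_discount)  # $31.84/hr

# The discount percentage held steady, but the absolute rate rose ~15%.
print(f"${old_effective:.2f}/hr -> ${new_effective:.2f}/hr")
```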
It’s hard not to see this as a bellwether. GPUs are increasingly constrained globally as the world pivots to generating slop-as-a-service in every conceivable domain. The question is what this means for other resource types down the road. Does the global RAM crunch mean RAM-centric services are next? You can ignore ML Capacity Block pricing if you’re not running machine learning workloads — which describes north of 95 percent of most companies’ cloud spend — but RAM touches every service AWS offers. Well, possibly excepting their support function, though that’s rapidly becoming “AI-Powered” too, so give it time.
The canary-in-the-coal-mine concern here isn’t GPUs specifically, but rather the precedent it establishes. AWS has spent two decades conditioning customers to expect prices only ever go down. That expectation is now broken. Once you’ve raised prices on one service and the world doesn’t end, the second increase becomes easier. And the third. The playbook has changed.
Keep an eye on services where AWS faces genuine supply constraints or where their costs have materially increased. Graviton instances have been priced aggressively to drive adoption — what happens when ARM chip supply tightens? Data transfer costs have been a cash cow for years, but they’ve also been stable; are those next? I don’t have inside information, but I do have pattern recognition, and the pattern just shifted.
AWS has long benefited from the assumption that cloud pricing only trends in one direction. That assumption died on a Saturday in January, with all the fanfare of a Terms of Service update. The question isn’t whether this matters — it does. The question is whether it’s an anomaly or the new normal. My money’s on the latter. ®
...
Read the original on www.theregister.com »
To play: Use arrow keys or buttons.
...
Read the original on kevinalbs.com »
On December 28th, I delivered a speech entitled “A post-American, enshittification-resistant internet” for 39C3, the 39th Chaos Communications Congress in Hamburg, Germany. This is the transcript of that speech.
Many of you know that I’m an activist with the Electronic Frontier Foundation — EFF. I’m about to start my 25th year there. I know that I’m hardly unbiased, but as far as I’m concerned, there’s no group anywhere on Earth that does the work of defending our digital rights better than EFF.
I’m an activist there, and for the past quarter-century, I’ve been embroiled in something I call “The War on General Purpose Computing.”
If you were at 28C3, 14 years ago, you may have heard me give a talk with that title. Those are the trenches I’ve been in since my very first day on the job at EFF, when I flew to Los Angeles to crash the inaugural meeting of something called the “Broadcast Protection Discussion Group,” an unholy alliance of tech companies, media companies, broadcasters and cable operators.
They’d gathered because this lavishly corrupt American congressman, Billy Tauzin, had promised them a new regulation — a rule banning the manufacture and sale of digital computers, unless they had been backdoored to specifications set by that group, specifications for technical measures to block computers from performing operations that were dispreferred by these companies’ shareholders.
That rule was called “the Broadcast Flag,” and it actually passed through the American telecoms regulator, the Federal Communications Commission. So we sued the FCC in federal court, and overturned the rule.
We won that skirmish, but friends, I have bad news, news that will not surprise you. Despite wins like that one, we have been losing the war on the general purpose computer for the past 25 years.
Which is why I’ve come to Hamburg today. Because, after decades of throwing myself against a locked door, the door that leads to a new, good internet, one that delivers both the technological self-determination of the old, good internet, and the ease of use of Web 2.0 that let our normie friends join the party, that door has been unlocked.
Today, it is open a crack. It’s open a crack!
And here’s the weirdest part: Donald Trump is the guy who’s unlocked that door.
Oh, he didn’t do it on purpose! But, thanks to Trump’s incontinent belligerence, we are on the cusp of a “Post-American Internet,” a new digital nervous system for the 21st century. An internet that we can build without worrying about America’s demands and priorities.
Now, don’t get me wrong, I’m not happy about Trump or his policies. But as my friend Joey deVilla likes to say, “When life gives you SARS, you make sarsaparilla.” The only thing worse than experiencing all the terror that Trump has unleashed on America and the world would be going through all that and not salvaging anything out of the wreckage.
That’s what I want to talk to you about today: the post-American Internet we can wrest from Trump’s chaos.
A post-American Internet that is possible because Trump has mobilized new coalition partners to join the fight on our side. In politics, coalitions are everything. Any time you see a group of people suddenly succeeding at a goal they have been failing to achieve, it’s a sure bet that they’ve found some coalition partners: new allies who don’t want all the same things as the original forces, but want enough of the same things to fight on their side.
That’s where Trump came from: a coalition of billionaires, white nationalists, Christian bigots, authoritarians, conspiratorialists, imperialists, and self-described “libertarians” who’ve got such a scorching case of low-tax brain worms that they’d vote for Mussolini if he’d promise to lower their taxes by a nickel.
And what’s got me so excited is that we’ve got a new coalition in the War on General Purpose Computers: a coalition that includes the digital rights activists who’ve been on the lines for decades, but also people who want to turn America’s Big Tech trillions into billions for their own economy, and national security hawks who are quite rightly worried about digital sovereignty.
My thesis here is that this is an unstoppable coalition. Which is good news! For the first time in decades, victory is in our grasp.
So let me explain: 14 years ago, I stood in front of this group and explained the “War on General Purpose Computing.” That was my snappy name for this fight, but the boring name that they use in legislatures for it is “anticircumvention.”
Under anticircumvention law, it’s a crime to alter the functioning of a digital product or service, unless the manufacturer approves of your modification, and — crucially — this is true whether or not your modification violates any other law.
Anticircumvention law originates in the USA: Section 1201 of the Digital Millennium Copyright Act of 1998 establishes a felony punishable by a five year prison sentence and a $500,000 fine for a first offense for bypassing an “access control” for a copyrighted work.
So practically speaking, if you design a device or service with even the flimsiest of systems to prevent modification of its application code or firmware, it’s a felony — a jailable felony — to modify that code or firmware. It’s also a felony to disclose information about how to bypass that access control, which means that pen-testers who even describe how they access a device or system face criminal liability.
Under anticircumvention law any manufacturer can trivially turn their product into a no-go zone, criminalizing the act of investigating its defects, criminalizing the act of reporting on its defects, and criminalizing the act of remediating its defects.
This is a law that Jay Freeman rightly calls “Felony Contempt of Business Model.” Anticircumvention became the law of the land in 1998 when Bill Clinton signed the DMCA. But before you start snickering at those stupid Americans, know this: every other country in the world has passed a law just like this in the years since. Here in the EU, it came in through Article 6 of the 2001 EU Copyright Directive.
Now, it makes a certain twisted sense for the US to enact a law like this, after all, they are the world’s tech powerhouse, home to the biggest, most powerful tech companies in the world. By making it illegal to modify digital products without the manufacturer’s permission, America enhances the rent-extracting power of the most valuable companies on US stock exchanges.
But why would Europe pass a law like this? Europe is a massive tech importer. By extending legal protection to tech companies that want to steal their users’ data and money, the EU was facilitating a one-way transfer of value from Europe to America. So why would Europe do this?
Well, let me tell you about the circumstances under which other countries came to enact their anticircumvention laws and maybe you’ll spot a pattern that will answer this question.
Australia got its anticircumvention law through the US-Australia Free Trade Agreement, which obliges Australia to enact anticircumvention law.
Canada and Mexico got it through the US-Mexico-Canada Free Trade Agreement, which obliges Canada and Mexico to enact anticircumvention laws.
Andean nations like Chile got their anticircumvention laws through bilateral US free trade agreements, which oblige them to enact anticircumvention laws.
And the Central American nations got their anticircumvention laws through CAFTA — The Central American Free Trade Agreement with the USA — which obliges them to enact anticircumvention laws, too.
I assume you’ve spotted the pattern by now: the US trade representative has forced every one of its trading partners to adopt anticircumvention law, to facilitate the extraction of their own people’s data and money by American firms. But of course, that only raises a further question: Why would every other country in the world agree to let America steal its own people’s money and data, and block its domestic tech sector from making interoperable products that would prevent this theft?
Here’s an anecdote that unravels this riddle: many years ago, in the years before Viktor Orban rose to power, I used to guest-lecture at a summer PhD program in political science at Budapest’s Central European University. And one summer, after I’d lectured to my students about anticircumvention law, one of them approached me.
They had been the information minister of a Central American nation during the CAFTA negotiations, and one day, they’d received a phone-call from their trade negotiator, calling from the CAFTA bargaining table. The negotiator said, “You know how you told me not to give the Americans anticircumvention under any circumstances? Well, they’re saying that they won’t take our coffee unless we give them anticircumvention. And I’m sorry, but we just can’t lose the US coffee market. Our economy would collapse. So we’re going to give them anticircumvention. I’m really sorry.”
That’s it. That’s why every government in the world allowed US Big Tech companies to declare open season on their people’s private data and ready cash.
The alternative was tariffs. Well, I don’t know if you’ve heard, but we’ve got tariffs now!
I mean, if someone threatens to burn your house down unless you follow their orders, and then they burn your house down anyway, you don’t have to keep following their orders. So…Happy Liberation Day?
So far, every country in the world has had one of two responses to the Trump tariffs. The first one is: “Give Trump everything he asks for (except Greenland) and hope he stops being mad at you.” This has been an absolute failure. Give Trump an inch, he’ll take a mile. He’ll take fucking Greenland. Capitulation is a failure.
But so is the other tactic: retaliatory tariffs. That’s what we’ve done in Canada (like all the best Americans, I’m Canadian). Our top move has been to levy tariffs on the stuff we import from America, making the things we buy more expensive. That’s a weird way to punish America! It’s like punching yourself in the face as hard as you can, and hoping the downstairs neighbor says “Ouch!”
And it’s indiscriminate. Why whack some poor farmer from a state that begins and ends with a vowel with tariffs on his soybeans? That guy never did anything bad to Canada.
But there’s a third possible response to tariffs, one that’s just sitting there, begging to be tried: what about repealing anticircumvention law?
If you’re a technologist or an investor based in a country that’s repealed its anticircumvention law, you can go into business making disenshittificatory products that plug into America’s defective tech exports, allowing the people who own and use those products to use them in ways that are good for them, even if those uses make the company’s shareholders mad.
Think of John Deere tractors: when a farmer’s John Deere tractor breaks down, they are expected to repair it, swapping in new parts and assemblies to replace whatever’s malfing. But the tractor won’t recognize that new part and will not start working again, not until the farmer spends a couple hundred bucks on a service callout from an official John Deere tractor repair rep, whose only job is to type an unlock code into the tractor’s console, to initialize the part and pair it with the tractor’s main computing unit.
Modding a tractor to bypass this activation step violates anticircumvention law, meaning farmers all over the world are stuck with this ripoff garbage, because their own government will lock up anyone who makes a tractor mod that disables the parts-pairing check in this American product.
So what if Canada repealed Bill C-11, the Copyright Modernization Act of 2012 (that’s our anticircumvention law)? Well, then a company like Honeybee, which makes tractor front-ends and attachments, could hire some smart University of Waterloo computer science grads, and put ’em to work jailbreaking the John Deere tractor’s firmware, and offer it to everyone in the world. They could sell the crack to anyone with an internet connection and a payment method, including that poor American farmer whose soybeans we’re currently tariffing.
It’s hard to convey how much money is on the table here. Take just one example: Apple’s App Store. Apple forces all app vendors into using its payment processor, and charges them a 30 percent commission on every euro spent inside of an app.
30 percent! That’s such a profitable business that Apple makes $100 billion per year on it. If the EU repeals Article 6 of the Copyright Directive, some smart geeks in Finland could reverse-engineer Apple’s bootloaders and make a hardware dongle that jailbreaks phones so that they can use alternative app stores, and sell the dongle — along with the infrastructure to operate an app store — to anyone in the world who wants to go into business competing with Apple for users and app vendors.
Those competitors could offer a 90% discount to every crafter on Etsy, every performer on Patreon, every online news outlet, every game dev, every media store. Offer them a 90% discount on payments, and still make $10b/year.
Maybe Finland will never see another Nokia, but Nokia’s a tough business to be in. You’ve got to make hardware, which is expensive and risky. But if the EU legalizes jailbreaking, then Apple would have to incur all the expense and risk of making and fielding hardware, while those Finnish geeks could cream off the $100b Apple sucks out of the global economy in an act of disgusting, rip-off rent-seeking.
As Jeff Bezos said to the publishers: “Your margin is my opportunity.” With these guys, it’s always “disruption for thee, but not for me.” When they do it to us, that’s progress. When we do it to them, it’s piracy, and every pirate wants to be an admiral.
Well, screw that. Move fast and break Tim Cook’s things. Move fast and break kings!
It’s funny: I spent 25 years getting my ass kicked by the US Trade Representative (in my defense, it wasn’t a fair fight). I developed a kind of grudging admiration for the skill with which the USTR bound the entire world to a system of trade that conferred parochial advantages to America and its tech firms, giving them free rein to loot the world’s data and economies. So it’s been pretty amazing to watch Trump swiftly and decisively dismantle the global system of trade and destroy the case for the world continuing to arrange its affairs to protect the interests of America’s capital class.
I mean, it’s not a path I would have chosen. I’d have preferred no Trump at all to this breakthrough. But I’ll take this massive own-goal if Trump insists. I mean, I’m not saying I’ve become an accelerationist, but at this point, I’m not exactly not an accelerationist.
Now, you might have heard that governments around the world have been trying to get Apple to open its App Store, and they’ve totally failed at this. When the EU hit Apple with an enforcement order under the Digital Markets Act, Apple responded by offering to allow third party app stores, but it would only allow those stores to sell apps that Apple had approved of.
And while those stores could use their own payment processors, Apple would charge them so much in junk fees that it would be more expensive to process a payment using your own system, and if Apple believed that a user’s phone had been outside of the EU for 21 days, they’d remotely delete all that user’s data and apps.
When the EU explained that this would not satisfy the regulation, Apple threatened to pull out of the EU. Then, once everyone had finished laughing, Apple filed more than a dozen bullshit objections to the order hoping to tie this up in court for a decade, the way Google and Meta did for the GDPR.
It’s not clear that the EU can force Apple to write code that opens up the iOS platform for alternative app stores and payment methods, but there is one thing that the EU can absolutely do with 100% reliability, any time they want: the EU can decide not to let Apple use Europe’s courts to shut down European companies that defend European merchants, performers, makers, news outlets, game devs and creative workers, from Apple’s ripoff, by jailbreaking phones.
All the EU has to do is repeal Article 6 of the Copyright Directive, and, in so doing, strip Apple of the privilege of mobilizing the European justice system to shore up Apple’s hundred billion dollar annual tax on the world’s digital economy. The EU company that figures out how to reliably jailbreak iPhones will have customers all over the world, including in the USA, where Apple doesn’t just use its veto over which apps you can run on your phone to suck 30% out of every dollar you spend, but where Apple also uses its control over the platform to strip out apps that protect Apple’s customers from Trump’s fascist takeover.
Back in October, Apple kicked the “ICE Block” app out of the App Store. That’s an app that warns the user if there’s a snatch squad of masked ICE thugs nearby looking to grab you off the street and send you to an offshore gulag. Apple internally classified ICE kidnappers as a “protected class,” and then declared that ICE Block infringed on the rights of these poor, beset ICE goons.
And speaking of ICE thugs, there are plenty of qualified technologists who have fled the US this year, one step ahead of an ICE platoon looking to put them and their children into a camp. Those skilled hackers are now living all over the world, joined by investors who’d like to back a business whose success will be determined by how awesome its products are, and not how many $TRUMP coins they buy.
Apple’s margin could be their opportunity.
Legalizing jailbreaking and raiding the highest-margin lines of business of the most profitable companies in America is a much better response to the Trump tariffs than retaliatory tariffs.
For one thing, this is a targeted response: go after Big Tech’s margins and you’re mounting a frontal assault on the businesses whose CEOs each paid a million bucks to sit behind Trump on the inauguration dais.
Raiding Big Tech’s margins is not an attack on the American people, nor on the small American businesses that are ripped off by Big Tech. It’s a raid on the companies that screw everyday Americans and everyone else in the world. It’s a way to make everyone in the world richer at the expense of these ripoff companies.
It beats the shit out of blowing hundreds of billions of dollars building AI data-centers in the hopes that someday, a sector that’s lost nearly a trillion dollars shipping defective chatbots will figure out a use for GPUs that doesn’t start hemorrhaging money the minute they plug them in.
So here are our new allies in the war on general-purpose computation: businesses and technologists who want to make billions of dollars raiding Big Tech’s margins, and policymakers who want their country to be the disenshittification nation — the country that doesn’t merely protect its people’s money and privacy by buying jailbreaks from other countries, but rather, the country that makes billions of dollars selling that privacy and pocketbook-defending tech to the rest of the world.
That’s a powerful alliance, but those are not the only allies Trump has pushed into our camp. There’s another powerful ally waiting in the wings.
Remember last June, when the International Criminal Court in the Hague issued an arrest warrant for the génocidaire Benjamin Netanyahu, and Trump denounced the ICC, and then the ICC lost its Outlook access, its email archives, its working files, its address books, its calendars?
Microsoft says they didn’t brick the ICC — that it’s a coincidence. But when it comes to a he-said/Clippy-said between the justices of the ICC and the convicted monopolists of Microsoft, I know who I believe.
This is exactly the kind of infrastructural risk that we were warned of if we let Chinese companies like Huawei supply our critical telecoms equipment. Virtually every government ministry, every major corporation, every small business and every household in the world has locked itself into a US-based, cloud-based service.
The handful of US Big Tech companies that supply the world’s administrative tools are all vulnerable to pressure from the Trump admin, and that means that Trump can brick an entire nation.
The attack on the ICC was an act of cyberwarfare, like the Russian hackers who shut down Ukrainian power-generation facilities, except that Microsoft doesn’t have to hack Outlook to brick the ICC — they own Outlook.
Under the US CLOUD Act of 2018, the US government can compel any US-based company to disclose any of its users’ data — including foreign governments — and this is true no matter where that data is stored. Last July, Anton Carniaux, Director of Public and Legal Affairs at Microsoft France, told a French government inquiry that he “couldn’t guarantee” that Microsoft wouldn’t hand sensitive French data over to the US government, even if that data was stored in a European data-center.
And under the CLOUD Act, the US government can slap gag orders on the companies that it forces to cough up that data, so there’d be no way to even know if this happened, or whether it’s already happened.
It doesn’t stop at administrative tools, either: remember back in 2022, when Putin’s thugs looted millions of dollars’ worth of John Deere tractors from Ukraine and those tractors showed up in Chechnya? The John Deere company pushed an over-the-air kill signal to those tractors and bricked ’em.
John Deere is every bit as politically vulnerable to the Trump admin as Microsoft is, and they can brick most of the tractors in the world, and the tractors they can’t brick are probably made by Massey Ferguson, the number-two company in the ag-tech cartel, which is also an American company and just as vulnerable to political attacks from the US government.
Now, none of this will be news to global leaders. Even before Trump and Microsoft bricked the ICC, they were trying to figure out a path to “digital sovereignty.” But the Trump administration’s outrageous conduct and rhetoric over the past 11 months has turned “digital sovereignty” from a nice-to-have into a must-have.
So finally, we’re seeing some movement, like “Eurostack,” a project to clone the functionality of US Big Tech silos in free/open source software, and to build EU-based data-centers that this code can run on.
But Eurostack is heading for a crisis. It’s great to build open, locally hosted, auditable, trustworthy services that replicate the useful features of Big Tech, but you also need to build the adversarial interoperability tools that allow for mass export of millions of documents, along with their sensitive data-structures and edit histories.
We need scrapers and headless browsers to accomplish the adversarial interoperability that will guarantee ongoing connectivity to institutions that are still hosted on US cloud-based services, because US companies are not going to facilitate the mass exodus of international customers from their platform.
Just think of how Apple responded to the relatively minor demand to open up the iOS App Store, and now imagine the thermonuclear foot-dragging, tantrum-throwing and malicious compliance they’ll come up with when faced with the departure of a plurality of the businesses and governments in a 27-nation bloc of 500,000,000 affluent consumers.
Any serious attempt at digital sovereignty needs migration tools that work without the cooperation of the Big Tech companies. Otherwise, this is like building housing for East Germans and locating it in West Berlin. It doesn’t matter how great the housing is, your intended audience is going to really struggle to move in unless you tear down the wall.
Step one of tearing down that wall is killing anticircumvention law, so that we can run virtual devices that can be scripted, break bootloaders to swap out firmware and generally seize the means of computation.
So this is the third bloc in the disenshittification army: not just digital rights hippies like me; not just entrepreneurs and economic development wonks rubbing their hands together at the thought of transforming American trillions into European billions; but also the national security hawks who are 100% justified in their extreme concern about their country’s reliance on American platforms that have been shown to be totally unreliable.
This is how we’ll get a post-American internet: with an unstoppable coalition of activists, entrepreneurs and natsec hawks.
This has been a long time coming. Since the post-war settlement, the world has treated the US as a neutral platform, a trustworthy and stable maintainer of critical systems for global interchange, what the political scientists Henry Farrell and Abraham Newman call the “Underground Empire.” But over the past 15 years, the US has systematically shattered global trust in its institutions, a process that only accelerated under Trump.
Take transoceanic fiber optic cables: the way the transoceanic fiber routes were planned, the majority of these cables make landfall on the coasts of the USA where the interconnections are handled. There’s a good case for this hub-and-spoke network topology, especially compared to establishing direct links between every country. That’s an Order(N^2) problem: directly linking each of the planet Earth’s 205 countries to every other country would require 20,910 fiber links.
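The count is just the handshake formula for pairwise links:

$$\binom{205}{2} = \frac{205 \times 204}{2} = 20{,}910$$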
But putting all the world’s telecoms eggs in America’s basket only works if the US doesn’t take advantage of its centrality, and while many people worried about what the US could do with the head-ends of the world’s global fiber infra, it wasn’t until Mark Klein’s 2006 revelations about the NSA’s nation-scale fiber optic taps in AT&T’s network, and Ed Snowden’s 2013 documents showing the global scale of this wiretapping, that the world had to confront the undeniable reality that the US could not be trusted to serve as the world’s fiber hub.
It’s not just fiber. The world does business in dollars. Most countries maintain dollar accounts at the Fed in New York as their major source of foreign reserves. But in 2005, American vulture capitalists bought up billions of dollars’ worth of Argentinian government bonds after the sovereign nation of Argentina had declared bankruptcy.
They convinced a judge in New York to turn over the government of Argentina’s US assets to them to make good on loans that these debt collectors had not issued, but had bought up at pennies on the dollar. At that moment, every government in the world had to confront the reality that they could not trust the US Federal Reserve with their foreign reserves. But what else could they use?
Without a clear answer, dollar dominance continued. But then, under Biden, Putin-aligned oligarchs and Russian firms lost access to the SWIFT system for dollar clearing: the arrangement by which goods, like oil, are priced in dollars, so that buyers only need to find someone who will trade their own currency for dollars, which they can then swap for any commodity in the world.
Again, there’s a sound case for dollar clearing: it’s just not practical to establish deep, liquid pairwise trading markets for all of the world’s nearly 200 currencies. It’s another O(N^2) problem.
But it only works if the dollar is a neutral platform. Once the dollar becomes an instrument of US foreign policy — whether or not you agree with that policy — it’s no longer a neutral platform, and the world goes looking for an alternative.
No one knows what that alternative’s going to be, just as no one knows what configuration the world’s fiber links will end up taking. There’s kilometers of fiber being stretched across the ocean floor, and countries are trying out some pretty improbable gambits as dollar alternatives, like Ethiopia revaluing its sovereign debt in Chinese renminbi. Without a clear alternative to America’s enshittified platforms, the post-American century is off to a rocky start.
But there’s one post-American system that’s easy to imagine: the project to rip out all the cloud-connected, backdoored, untrustworthy black boxes that power our institutions, our medical implants, our vehicles and our tractors, and replace them with collectively maintained, open, free, trustworthy, auditable code.
This project is the only one that benefits from economies of scale, rather than being paralyzed by exponential crises of scale. That’s because any open, free tool adopted by any public institution — like the Eurostack services — can be audited, localized, pen-tested, debugged and improved by institutions in every other country.
It’s a commons, more like a science than a technology, in that it is universal and international and collaborative. We don’t have dueling western and Chinese principles of structural engineering. Rather, we have universal principles for making sure buildings don’t fall down, adapted to local circumstances.
We wouldn’t tolerate secrecy in the calculations used to keep our buildings upright, and we shouldn’t tolerate opacity in the software that keeps our tractors, hearing aids, ventilators, pacemakers, trains, games consoles, phones, CCTVs, door locks, and government ministries working.
The thing is, software is not an asset, it’s a liability. The capabilities that running software delivers — automation, production, analysis and administration — those are assets. But the software itself? That’s a liability. Brittle, fragile, forever breaking down as the software upstream of it, downstream of it, and adjacent to it is updated or swapped out, revealing defects and deficiencies in systems that may have performed well for years.
Shifting software to commons-based production is a way to reduce the liability that software imposes on its makers and users, balancing out that liability among many players.
Now, obviously, tech bosses are totally clueless when it comes to this. They really do think that software is an asset. That’s why they’re so fucking horny to have chatbots shit out software at superhuman speeds. That’s why they think it’s good that they’ve got a chatbot that “produces a thousand times more code than a human programmer.”
...
Read the original on pluralistic.net »
Posts with negative sentiment average 35.6 points on Hacker News. The overall average is 28 points. That’s a 27% performance premium for negativity.
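That premium is straightforward to reproduce from the two averages:

```python
# Negativity premium, computed from the averages quoted above.
negative_avg, overall_avg = 35.6, 28.0
premium = (negative_avg - overall_avg) / overall_avg
print(f"{premium:.0%}")  # -> 27%
```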
This finding comes from an empirical study I’ve been running on HN attention dynamics, covering decay curves, preferential attachment, survival probability, and early-engagement prediction. The preprint is available on SSRN. I already had a gut feeling about the skew: across 32,000 posts and 340,000 comments, nearly 65% register as negative. This might be an artifact of my classifier being miscalibrated toward negativity, yet the pattern holds across six different models.
I tested three transformer-based classifiers (DistilBERT, BERT Multi, RoBERTa) and three LLMs (Llama 3.1 8B, Mistral 3.1 24B, Gemma 3 12B). The distributions vary, but the negative skew persists across all of them. The results I use in my dashboard are from DistilBERT because it runs efficiently in my Cloudflare-based pipeline.
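For readers who want to poke at this themselves, here is a minimal sketch of DistilBERT-style scoring over titles, using Hugging Face’s stock SST-2 checkpoint as a stand-in (an assumption; the study’s exact fine-tune and pipeline aren’t specified here):

```python
# Minimal sketch of DistilBERT sentiment scoring over HN titles, using the
# stock SST-2 checkpoint as a stand-in for the study's classifier.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

titles = [
    "Show HN: A tool that archives Hacker News threads",
    "Vendor quietly raises prices 15% over a weekend",
]
for title, result in zip(titles, classifier(titles)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {title}")
```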
What counts as “negative” here? Criticism of technology, skepticism toward announcements, complaints about industry practices, frustration with APIs. The usual. It’s worth noting that technical critique reads differently than personal attacks; most HN negativity is substantive rather than toxic. But, does negativity cause engagement, or does controversial content attract both negative framing and attention? Probably some of both.
I’ll publish the full code, dataset, and a dashboard for the HN archiver soon, and I’m happy to send you an update.
Alternatively, you can also subscribe to the RSS feed or get updates on Bluesky.
...
Read the original on philippdubach.com »
Opus 4.5 is going to change everything
If you had asked me three months ago about these statements, I would have said only someone who’s never built anything non-trivial would believe they’re true. Great for augmenting a developer’s existing workflow, and completions are powerful, but agents replacing developers entirely? No. Absolutely not.
Today, I think that AI coding agents can absolutely replace developers. And the reason that I believe this is Claude Opus 4.5.
And by “normal”, I mean that it is not the normal AI agent experience that I have had thus far. So far, AI agents have been pretty good at writing spaghetti code; after 9 rounds of copy / pasting errors into the terminal and telling it to “fix it”, they have probably destroyed my codebase to the extent that I’ll be throwing the whole chat session out, and there goes 30 minutes I’m never getting back.
Opus 4.5 feels to me like the model that we were promised - or rather the promise of AI for coding actually delivered.
One of the toughest things about writing that last sentence is that the immediate response from you should be, “prove it”. So let me show you what I’ve been able to build.
I first noticed that Opus 4.5 was drastically different when I used it to build a Windows utility to right-click an image and convert it to different file types. This was basically a one shot build after asking Opus the best way to add a right-click menu to the file explorer.
What amazed me through the process of building this was Opus 4.5’s ability to get most things right on the first try. And if it ran into errors, it would try to build using the dotnet CLI, read the errors and iterate until they were fixed. The only issue I had was Opus’s inability to see XAML errors, which I used Visual Studio to see and copy / paste back into the agent.
Opus built a site for me to distribute it and handled bundling the executable, using a PowerShell script for install and uninstall. It also built the GitHub Actions workflows that cut the release and update the landing page, so all I have to do is push source.
The only place I had to use other tools was for the logo - where I used Figma’s AI to generate a bunch of different variations - but then Opus wrote the scripts to convert that SVG to the right formats for icons, even store distribution if I chose to do so.
Now this is admittedly not a complex application. This is a small Windows utility that is doing basically one thing. It’s not like I asked Opus 4.5 to build Photoshop.
Except I kind of did.
I was so impressed by Opus 4.5’s work on this utility that I decided to make a simple GIF recording utility similar to LICEcap for Mac. Great app, questionable name.
But that proved to be so easy that I went ahead and continued adding features, including capturing and editing video, static images, adding shapes, cropping, blurs and more. I’m still working on this application, as it turns out building a full-on image/video editor is kind of a big undertaking. But I got REALLY far in a matter of hours. HOURS, PEOPLE.
I don’t have a fancy landing page for this one yet, but you can view all of the source code here.
I realized that if I could build a video recording app, I could probably build anything at all - at least UI-wise. But the Achilles’ heel of all AI agents is when they have to glue together the backend systems that any real-world application is going to have: auth, database, API, storage.
Except Opus 4.5 can do that too.
Armed with my confidence in Opus 4.5, I took on a project that I had built in React Native last year and finished for Android, but gave up in the final stretches (as one does).
The application is for my wife, who owns a small yard sign franchise. The problem is that she has a Facebook page for the business but never posts there because it’s time consuming. But any good small business has a vibrant page where people can see photos of your business doing…whatever the heck it does. So people know that it exists and is alive and well.
The idea is simple - each time she sets up a yard sign, she takes a picture to send to the person who ordered it so they can see it was set up. So why not have a mobile app where she can upload 10 images at a time, and the app will use AI to generate captions and then schedule them and post them over the coming week.
It’s a simple premise, but it has a lot of moving parts - there is the Facebook authentication which is a caper in and of itself - not for the faint of heart. There is authentication with a backend, there is file storage for photos that are scheduled to go out, there is the backend process which needs to post the photo. It’s a full on backend setup.
As it turns out, I needed to install some blinds in the house so I thought - why don’t I see if Opus 4.5 can build this while I install the blinds.
So I fired up a chat session and just started by telling Opus 4.5 what I wanted to build and how it would recommend handling the backend. It recommended several options but settled on Firebase. I’m not now nor have I ever been a Firebase user, but at this point I trust Opus 4.5 a lot. Probably too much.
So I created a Firebase account, upgraded to the Blaze plan with alerts for billing and Opus 4.5 got to work.
By the time I was done installing blinds, I had a functional iOS application for using AI to caption photos and posting them on a schedule to Facebook.
When I say that Opus 4.5 built this almost entirely, I mean it. It used the firebase CLI to stand up any resources it needed and would tag me in for certain things like upgrading a project to the Blaze plan for features like storage, etc. The best part was that when the Firebase cloud functions would throw errors, it would automatically grep those logs, find the error and resolve it. And all it needed was a CLI. No MCP Server. No fancy prompt file telling it how to use Firebase.
And of course, since that was solved, I had Opus 4.5 create a backend admin dashboard so I could see what she’s got pending and make any adjustments.
And since it did in a few hours what had taken me two months of work in the evenings instead of being a decent husband, I decided to make up for my dereliction of duties by building her another app for her sign business that would make her life just a bit more delightful - and eliminate two other apps she is currently paying for.
This app parses orders from her business Gmail account to show her what sign setups / pickups she has for the day, calculates how long it’s going to take to get to each stop, calculates the optimal route when there is more than one stop, and tracks drive time for tax purposes. She was previously using two paid apps for the last two features.
This app also uses Firebase. Again, Opus one-shotted the Google auth email integration. This is the kind of thing that is painstakingly miserable by hand. And again, Firebase is so well suited here because Opus knows how to use the Firebase CLI so well. It needs zero instruction.
BUT YOU DON’T KNOW HOW THE CODE WORKS
No I don’t. I have a vague idea, but you are right - I do not know how the applications are actually assembled. Especially since I don’t know Swift at all.
This used to be a major hangup for me. I couldn’t diagnose problems when things went sideways. With Opus 4.5, I haven’t hit that wall yet—Opus always figures out what the issue is and fixes its own bugs.
The real question is code quality. Without understanding how it’s built, how do I know if there’s duplication, dead code, or poor patterns? I used to obsess over this. Now I’m less worried that a human needs to read the code, because I’m genuinely not sure that they do.
Why does a human need to read this code at all? I use a custom agent in VS Code that tells Opus to write code for LLMs, not humans. Think about it—why optimize for human readability when the AI is doing all the work and will explain things to you when you ask?
What you don’t need: variable names, formatting, comments meant for humans, or patterns designed to spare your brain.
What you do need: simple entry points, explicit code with fewer abstractions, minimal coupling, and linear control flow.
You are an AI-first software engineer. Assume all code will be written and maintained by LLMs, not humans. Optimize for model reasoning, regeneration, and debugging — not human aesthetics.
These coding principles are mandatory:
1. Structure
- Use a consistent, predictable project layout.
- Group code by feature/screen; keep shared utilities minimal.
- Create simple, obvious entry points.
- Before scaffolding multiple files, identify shared structure first. Use framework-native composition patterns (layouts, base templates, providers, shared components) for elements that appear across pages. Duplication that requires the same fix in multiple places is a code smell, not a pattern to preserve.
2. Architecture
- Prefer flat, explicit code over abstractions or deep hierarchies.
- Avoid clever patterns, metaprogramming, and unnecessary indirection.
- Minimize coupling so files can be safely regenerated.
3. Functions and Modules
- Keep control flow linear and simple.
- Use small-to-medium functions; avoid deeply nested logic.
- Pass state explicitly; avoid globals.
4. Naming and Comments
- Use descriptive-but-simple names.
- Comment only to note invariants, assumptions, or external requirements.
5. Logging and Errors
- Emit detailed, structured logs at key boundaries.
- Make errors explicit and informative.
6. Regenerability
- Write code so any file/module can be rewritten from scratch without breaking the system.
- Prefer clear, declarative configuration (JSON/YAML/etc.).
7. Platform Use
- Use platform conventions directly and simply (e.g., WinUI/WPF) without over-abstracting.
8. Modifications
- When extending/refactoring, follow existing patterns.
- Prefer full-file rewrites over micro-edits unless told otherwise.
9. Quality
- Favor deterministic, testable behavior.
- Keep tests simple and focused on verifying observable behavior.
Your goal: produce code that is predictable, debuggable, and easy for future LLMs to rewrite or extend.
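For illustration, here is a hypothetical sketch (not from any of the apps above) of what code written to this spec tends to look like: flat, explicit, linear, with structured logging at the boundary:

```python
# Hypothetical sketch of the "LLM-first" style described above: flat,
# explicit, linear control flow, state passed in, structured logs.
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("scheduler")

def schedule_posts(photos: list[dict], start_day: int, per_day: int) -> list[dict]:
    # Validate inputs explicitly; fail loudly with an informative error.
    if per_day <= 0:
        raise ValueError("per_day must be positive")
    # Linear pass: assign each photo to a posting day, no hidden state.
    scheduled = []
    for index, photo in enumerate(photos):
        scheduled.append({"photo_id": photo["id"], "post_on_day": start_day + index // per_day})
    # Structured log at the boundary so an agent can grep the outcome.
    log.info("scheduled %s", json.dumps({"count": len(scheduled), "start_day": start_day}))
    return scheduled

# Usage: three photos, two posts per day, starting on day 1.
print(schedule_posts([{"id": 1}, {"id": 2}, {"id": 3}], start_day=1, per_day=2))
```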
All of that said, I don’t have any proof that this prompt makes a difference. I find that Opus 4.5 writes pretty solid code no matter what you prompt it with. However, because models like to write code WAY more than they like to delete it, I will at points run a prompt that looks something like this…
Check your LLM, AI coding principles and then do a comprehensive search of this application and suggest what we can do to refactor this to better align to those principles. Also point out any code that can be deleted, any files that can be deleted, things that should be renamed, things that should be restructured. Then do a write up of what that looks like. Kind of keep it high level so that it’s easy for me to read and not too complex. Add sections for high, medium and lower priority. And if something doesn’t need to be changed, then don’t change it. You don’t need to change things just for the sake of changing them. You only need to change them if it helps better align to your LLM AI coding principles. Save to a markdown file.
And you get a document that has high, medium and low priority items. The high ones you can deal with and the AI will stop finding them. You can refactor your project a million times and it will keep finding medium/low priority refactors that you can do. An AI is never ever going to pass on the opportunity to generate some text.
I use a similar prompt to find security issues. These you have to be very careful about. Where are the API keys? Is login handled correctly? Are you storing sensitive values in the database? This is probably the most manual part of the project and frankly, something that makes me the most nervous about all of these apps at the moment. I’m not 100% confident that they are bullet proof. Maybe like 80%. And that, as they say, is too damn low.
I don’t know if I feel exhilarated by what I can now build in a matter of hours, or depressed because the thing I’ve spent my life learning to do is now trivial for a computer. Both are true.
I understand if this post made you angry. I get it - I didn’t like it either when people said “AI is going to replace developers.” But I can’t dismiss it anymore. I can wish it weren’t true, but wishing doesn’t change reality.
But for everything else? Build. Stop waiting to have all the answers. Stop trying to figure out your place in an AI-first world. The answer is the same as it always was: make things. And now you can make them faster than you ever thought possible.
Just make sure you know where your API keys are.
Disclaimer: This post was written by a human and edited for spelling and grammar by Haiku 4.5
...
Read the original on burkeholland.github.io »
1. C Is Best
2. Why Isn’t SQLite Coded In An Object-Oriented Language?
3. Why Isn’t SQLite Coded In A “Safe” Language?
C Is Best
Note: Sections 2.0 and 3.0 of this article were added in response to comments on Hacker News and Reddit.
Since its inception on 2000-05-29, SQLite has been implemented in generic C. C was and continues to be the best language for implementing a software library like SQLite. There are no plans to recode SQLite in any other programming language at this time.
The reasons why C is the best language to implement SQLite include:
An intensively used low-level library like SQLite needs to be fast. (And SQLite is fast; see Internal Versus External BLOBs and 35% Faster Than The Filesystem for examples.)
C is a great language for writing fast code. C is sometimes described as “portable assembly language”. It enables developers to code as close to the underlying hardware as possible while still remaining portable across platforms.
Other programming languages sometimes claim to be “as fast as C”. But no other language claims to be faster than C for general-purpose programming, because none are.
Nearly all systems have the ability to call libraries written in C. This is not true of other implementation languages.
So, for example, Android applications written in Java are able to invoke SQLite (through an adaptor). Maybe it would have been more convenient for Android if SQLite had been coded in Java, as that would make the interface simpler. However, on iPhone, applications are coded in Objective-C or Swift, neither of which has the ability to call libraries written in Java. Thus, SQLite would be unusable on iPhones had it been written in Java.
Libraries written in C do not have a huge run-time dependency. In its minimum configuration, SQLite requires only the following routines from the standard C library: memcmp(), memcpy(), memmove(), memset(), strcmp(), strlen(), and strncmp().
In a more complete build, SQLite also uses library routines like malloc() and free() and operating system interfaces for opening, reading, writing, and closing files. But even then, the number of dependencies is very small. Other “modern” languages, in contrast, often require multi-megabyte runtimes loaded with thousands and thousands of interfaces.
The C language is old and boring. It is a well-known and well-understood language. This is exactly what one wants when developing a module like SQLite. Writing a small, fast, and reliable database engine is hard enough as it is without the implementation language changing out from under you with each update to the implementation language specification.
Why Isn’t SQLite Coded In An Object-Oriented Language?
Some programmers cannot imagine developing a complex system like SQLite in a language that is not “object oriented”. So why is SQLite not coded in C++ or Java?
Libraries written in C++ or Java can generally only be used by applications written in the same language. It is difficult to get an application written in Haskell or Java to invoke a library written in C++. On the other hand, libraries written in C are callable from any programming language.
Object-Oriented is a design pattern, not a programming language. You can do object-oriented programming in any language you want, including assembly language. Some languages (ex: C++ or Java) make object-oriented easier. But you can still do object-oriented programming in languages like C.
Object-oriented is not the only valid design pattern. Many programmers have been taught to think purely in terms of objects. And, to be fair, objects are often a good way to decompose a problem. But objects are not the only way, and are not always the best way to decompose a problem. Sometimes good old procedural code is easier to write, easier to maintain and understand, and faster than object-oriented code.
When SQLite was first being developed, Java was a young and immature language. C++ was older, but was undergoing such growing pains that it was difficult to find any two C++ compilers that worked the same way. So C was definitely a better choice back when SQLite was first being developed. The situation is less stark now, but there is little to no benefit in recoding SQLite at this point.
Why Isn’t SQLite Coded In A “Safe” Language?
There has lately been a lot of interest in “safe” programming languages like Rust or Go, in which it is impossible, or at least difficult, to make common programming errors like memory leaks or array overruns. So the question often arises as to why SQLite is not coded in a “safe” language.
None of the safe programming languages existed for the first 10 years of SQLite’s existence. SQLite could be recoded in Go or Rust, but doing so would probably introduce far more bugs than would be fixed, and it may also result in slower code.
Safe languages insert additional machine branches to do things like verify that array accesses are in-bounds. In correct code, those branches are never taken. That means that the machine code cannot be 100% branch tested, which is an important component of SQLite’s quality strategy.
Safe languages usually want to abort if they encounter an out-of-memory (OOM) situation. SQLite is designed to recover gracefully from an OOM. It is unclear how this could be accomplished in the current crop of safe languages.
All of the existing safe languages are new. The developers of SQLite applaud the efforts of computer language researchers in trying to develop languages that are easier to program safely. We encourage these efforts to continue. But we ourselves are more interested in old and boring languages when it comes to implementing SQLite.
All that said, it is possible that SQLite might one day be recoded in Rust. Recoding SQLite in Go is unlikely since Go hates assert(). But Rust is a possibility. Some preconditions that must occur before SQLite is recoded in Rust include:
- Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
- Rust needs to demonstrate that it can be used to create general-purpose libraries that are callable from all other programming languages.
- Rust needs to demonstrate that it can produce object code that works on obscure embedded devices, including devices that lack an operating system.
- Rust needs to pick up the necessary tooling that enables one to do 100% branch coverage testing of the compiled binaries.
- Rust needs a mechanism to recover gracefully from OOM errors.
- Rust needs to demonstrate that it can do the kinds of work that C does in SQLite without a significant speed penalty.
If you are a “rustacean” and feel that Rust already meets the preconditions listed above, and that SQLite should be recoded in Rust, then you are welcomed and encouraged to contact the SQLite developers privately and argue your case.
This page was last updated on 2025-05-09 15:56:17Z
...
Read the original on sqlite.org »
The Gmail app, on the App Store, is currently 760.7 MB in size. It is in the top three most bloated apps out of the top 100 free apps. I don’t use it on my phone, but I was prompted to compare it with the seemingly hefty one I (have to) use, Outlook, while clearing up some storage space. Its measly 428 MB size pales in comparison.
This isn’t new. In 2017, Axios reported that the top iPhone apps had been taking up an increasing amount of space over the period from 2013 to 2017. For most of that period, the size of the Gmail app hovered around 12 MB, with a sudden jump to more than 200 MB near the start of 2017. Other popular apps also saw a 10x or more increase in size over the same period.
Gmail isn’t even the worst offender, it’s just a more popular one. The Tesla and Crypto.com apps are around 1 GB each. So is Samsung’s SmartThings app. What about Google’s other popular apps? Google Home is another hefty one, at 630 MB, that I used for its remote feature, which I replaced with Google TV at almost a tenth the size. Their other popular apps average around 250 MB in size. This seems tame in comparison to Microsoft, with an average app size of around 330 MB. For reference, the average size of an app in the top 100 free apps is 280 MB or, in a more expanded set (including games), 200 MB.
Just to put this into perspective, on my device, apps (excluding their data) use up 35 GB, and the data is another 35 GB. iOS takes up another 25 GB. Let’s say, 100 GB for apps, data and the OS. That leaves me with 20 GB (leaving a margin of free space for updates) meant to be used for capturing 4K video and high-quality photos (why else get an iPhone), and storing music (don’t even think about lossless). The reality is that running out of space also slows things down, since most of my photos need to be fetched from the cloud before viewing them, and I need to re-download these hefty offloaded apps when I need them again. And good luck if you have a limited data bundle.
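The budget above works out as follows (the 128 GB device capacity is an assumption; the post implies it but never states it):

```python
# Storage budget from the paragraph above. The 128 GB device capacity is
# an assumption; the post implies it but never states it.
capacity_gb = 128
apps_gb, app_data_gb, os_gb = 35, 35, 25

used_gb = apps_gb + app_data_gb + os_gb   # 95 GB; the post calls it ~100
budget_gb = 100                           # apps + data + OS, rounded up
left_gb = capacity_gb - budget_gb         # 28 GB raw
media_gb = left_gb - 8                    # ~20 GB once update headroom is kept
print(used_gb, left_gb, media_gb)
```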
Maybe this doesn’t matter. The latest iPhones start at 256 GB, and surely I’ll have plenty of space when I get a new one (although I remember saying this when I upgraded to 64 GB from 32 GB). It’s not really about the space though. These apps don’t have 50x or even 10x the functionality. But now they’re 100x larger, and probably slower. Why?
Also, can someone explain why Microsoft Authenticator is 150 MB to show 6-digit codes?
It’s not clear if this is specifically an iOS problem. I don’t have an Android device and I could not find a way to get that information from the Play Store without a device. That said, I checked the size of Gmail on someone’s Android phone, and it’s around 185 MB, which certainly seems much better.
And if you’re considering switching from the default apps, this is what the installed size (which differs slightly from the App Store size) is of the alternatives on my iPhone running iOS 26.2:
So, why is the Gmail app almost 80x the size of the native Mail app? My guess is as good as yours.
...
Every Dev Tool You Need
...
Read the original on blgardner.github.io »