Everyone in San Francisco is talking about the singularity. At dinner parties, at coffee shops, at the OpenClaw meetup where Ashton Kutcher showed up for some reason. The conversations all have the same shape: someone says it’s coming, someone says it’s hype, and nobody has a number.
This seems like the wrong question. If things are accelerating (and they measurably are), the interesting question isn’t whether. It’s when. And if it’s accelerating, we can calculate exactly when.
I collected five real metrics of AI progress, fit a hyperbolic model to each one independently, and found the one with genuine curvature toward a pole. The date has millisecond precision. There is a countdown.
Five metrics, chosen for what I’m calling their anthropic significance (anthropic here in the Greek sense, “pertaining to humans,” not the company, though the company’s models appear in the dataset with suspicious frequency):
Tokens per dollar: cost collapse of intelligence (log-transformed, because the Gemini Flash outlier spans 150× the range otherwise)
Each metric normalized to [0, 1]. Release intervals inverted (shorter = better). Tokens per dollar log-transformed before normalizing (the raw values span five orders of magnitude; without the log, Gemini Flash at 2.5M tokens/$ dominates the fit and everything else is noise). Each series keeps its own scale, no merging into a single ensemble.
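To make the preprocessing concrete, here’s a minimal Python sketch of those transformations. The function name and flags are illustrative, not the post’s actual code:

```python
import numpy as np

def preprocess(y, invert=False, log=False):
    """Map one metric series onto [0, 1], per the recipe above:
    release intervals get invert=True (shorter = better) and
    tokens per dollar gets log=True (five orders of magnitude)."""
    y = np.asarray(y, dtype=float)
    if invert:
        y = 1.0 / y
    if log:
        y = np.log10(y)
    return (y - y.min()) / (y.max() - y.min())
```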
An exponential approaches infinity only as t → ∞. You’d be waiting forever. Literally.
We need a function that hits infinity at a finite time. That’s the whole point of a singularity: a pole, a vertical asymptote, the math breaking:

y(t) = a / (t_s − t) + b

As t → t_s, the denominator goes to zero. y → ∞. Not a bug. The feature.
Polynomial growth (y = t^k) never reaches infinity at finite time. You could wait until heat death and t^k would still be finite. Polynomials are for people who think AGI is “decades away.”
Exponential growth reaches infinity at t = ∞. Technically a singularity, but an infinitely patient one. Moore’s Law was exponential. We are no longer on Moore’s Law.
Hyperbolic growth is what happens when the thing that’s growing accelerates its own growth. Better AI → better AI research tools → better AI → better tools. Positive feedback with supralinear dynamics. The singularity is real and finite.
The procedure is straightforward, which should concern you.
The model fits a separate hyperbola to each metric:

y_i(t) = a_i / (t_s − t) + b_i

Each series gets its own scale a_i and offset b_i. The singularity time t_s is shared. MMLU scores and tokens-per-dollar have no business being on the same y-axis, but they can agree on when the pole is.
For each candidate t_s, the per-series fits are linear in a_i and b_i. The question is: which t_s makes the hyperbola fit best?
Here’s the thing nobody tells you about fitting singularities: most metrics don’t actually have one. If you minimize total RSS across all series, the best t_s is always at infinity. A distant hyperbola degenerates into a line, and lines fit noisy data just fine. The “singularity date” ends up being whatever you set as the search boundary. You’re finding the edge of your search grid, not a singularity.
So instead, we look for the real signal. For each series independently, grid search t_s and find the R² peak: the date where the hyperbola fits better than any nearby alternative. If a series genuinely curves toward a pole, its R² will peak at some finite t_s and then decline. If it’s really just linear, R² will keep increasing as t_s → ∞ and never peak. No peak, no signal, no vote!
One series peaks! arXiv “emergent” (the count of AI papers about emergence) has a clear, unambiguous R² maximum. The other four are monotonically better fit by a line. The singularity date comes from the one metric that’s actually going hyperbolic.
This is more honest than forcing five metrics to average out to a date that none of them individually support.
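The per-series search is simple enough to sketch. A minimal Python version (illustrative, not the post’s code): with t_s fixed, the hyperbola is linear in a and b, so each candidate date costs one least-squares solve, and a series only gets a vote if its R² curve peaks in the interior of the grid.

```python
import numpy as np

def r2_profile(t, y, ts_grid):
    """R-squared of the fit y ~ a/(ts - t) + b at each candidate pole ts.
    With ts fixed, the model is linear in a and b."""
    out = []
    for ts in ts_grid:
        A = np.column_stack([1.0 / (ts - t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ coef) ** 2)
        tss = np.sum((y - y.mean()) ** 2)
        out.append(1.0 - rss / tss)
    return np.array(out)

# Synthetic demo: a series with a genuine pole at 2034.
rng = np.random.default_rng(0)
t = np.linspace(2018.0, 2025.0, 40)
y = 1.0 / (2034.0 - t) + 0.003 * rng.normal(size=t.size)
ts_grid = np.linspace(2026.0, 2100.0, 1000)

r2 = r2_profile(t, y, ts_grid)
i = int(np.argmax(r2))
print(ts_grid[i], 0 < i < len(ts_grid) - 1)  # near 2034, True: interior peak
```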
Same inputs → same date. Deterministic. The stochasticity is in the universe, not the model.
The fit converged! Each series has its own R² at the shared t_s, so you can see exactly which metrics the hyperbola captures well and which it doesn’t. arXiv’s R² is the one that matters. It’s the series that actually peaked.
The 95% confidence interval comes from profile likelihood on t_s. We slide the singularity date forward and backward until the fit degrades past an F-threshold.
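That’s the standard profile-likelihood construction. A sketch of one plausible version, assuming the usual F-threshold with three parameters (a, b, t_s) counted per series:

```python
import numpy as np
from scipy.stats import f as f_dist

def profile_ci(t, y, ts_grid, alpha=0.05, n_params=3):
    """Keep every candidate date whose RSS stays under an F-based
    threshold of the best fit; the CI is the span of survivors."""
    rss = []
    for ts in ts_grid:
        A = np.column_stack([1.0 / (ts - t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss.append(np.sum((y - A @ coef) ** 2))
    rss = np.asarray(rss)
    dof = len(t) - n_params
    thresh = rss.min() * (1.0 + f_dist.ppf(1.0 - alpha, 1, dof) / dof)
    inside = ts_grid[rss <= thresh]
    return inside.min(), inside.max()
```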
How much does the date move if we drop one metric entirely?
If dropping a single series shifts t_s by years, that series was doing all the work. If the shifts are zero, the dropped series never had a signal in the first place.
The table tells the story plainly: arXiv is doing all the work. Drop it and the date jumps to the search boundary (no remaining series has a finite peak). Drop anything else and nothing moves. They were never contributing to the date, only providing context curves at the shared t_s.
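A sketch of that drop-one loop, reusing r2_profile from the earlier snippet. The aggregation rule is my reading of the post: the date comes from the strongest remaining interior R² peak, and with no peak left the search collapses to the grid edge:

```python
# Assumes numpy as np and r2_profile() from the earlier sketch.
def drop_one_dates(series, ts_grid):
    """series maps a name to its (t, y) arrays. For each left-out
    series, recompute the date from whichever remaining series
    still shows an interior R-squared peak."""
    dates = {}
    for left_out in series:
        best = None
        for name, (t, y) in series.items():
            if name == left_out:
                continue
            r2 = r2_profile(t, y, ts_grid)
            i = int(np.argmax(r2))
            peaked = 0 < i < len(ts_grid) - 1
            if peaked and (best is None or r2[i] > best[0]):
                best = (r2[i], ts_grid[i])
        dates[left_out] = best[1] if best else ts_grid[-1]  # boundary = no signal
    return dates
```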
Note: Copilot has exactly 2 data points and 2 parameters (a_i and b_i), so it fits any hyperbola perfectly. Zero RSS, zero influence on t_s. It’s along for the ride!
The model says y → ∞ at t = t_s. But what does “infinity” mean for arXiv papers about emergence? It doesn’t mean infinitely many papers get published on a Tuesday in 2034.
It means the model breaks. t_s is the point where the current trajectory’s curvature can no longer be sustained. The system either breaks through into something qualitatively new, or it saturates and the hyperbola was wrong. A phase transition marker, not a physical prediction.
But here’s the part that should unsettle you: the metric that’s actually going hyperbolic is human attention, not machine capability.
MMLU, tokens per dollar, release intervals. The actual capability and infrastructure metrics. All linear. No pole. No singularity signal. The only curve pointing at a finite date is the count of papers about emergence. Researchers noticing and naming new behaviors. Field excitement, measured memetically.
The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.
That’s a very different singularity than the one people argue about.
If t_s marks when the rate of AI surprises exceeds human capacity to process them, the interesting question isn’t what happens to the machines. It’s what happens to us.
And the uncomfortable answer is: it’s already happening.
The labor market isn’t adjusting. It’s snapping. In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI’s potential, not its performance. The displacement is anticipatory. The curve doesn’t need to reach the pole. It just needs to look like it will.
Institutions can’t keep up. The EU AI Act’s high-risk rules have already been delayed to 2027. The US revoked its own 2023 AI executive order in January 2025, then issued a new one in December trying to preempt state laws. California and Colorado are going their own way anyway. The laws being written today regulate 2023’s problems. By the time legislation catches up to GPT-4, we’re on GPT-7. When governments visibly can’t keep up, trust doesn’t erode. It collapses. Global trust in AI has dropped to 56%.
Capital is concentrating at dot-com levels. The top 10 S&P 500 stocks (almost all AI-adjacent) hit 40.7% of index weight in 2025, surpassing the dot-com peak. Since ChatGPT launched, AI-related stocks have captured 75% of S&P 500 returns, 80% of earnings growth, and 90% of capital spending growth. The Shiller CAPE is at 39.4. The last time it was this high was 1999. The money flooding in doesn’t require AI to actually reach superintelligence. It just requires enough people to believe the curve keeps going up.
People are losing the thread. Therapists are reporting a surge in what they’re calling FOBO (Fear of Becoming Obsolete). The clinical language is striking: patients describe it as “the universe saying, ‘You are no longer needed.’” 60% of US workers believe AI will cut more jobs than it creates. AI usage is up 13% year-over-year, but confidence in it has dropped 18%. The more people use it, the less they trust it.
The epistemics are cracking. Less than a third of AI research is reproducible. Under 5% of researchers share their code. Corporate labs are publishing less. The gap between what frontier labs know and what the public knows is growing, and the people making policy are operating on information that’s already obsolete. The experts who testify before Congress contradict each other, because the field is moving faster than expertise can form.
The politics are realigning. TIME is writing about populist AI backlash. Foreign Affairs published “The Coming AI Backlash: How the Anger Economy Will Supercharge Populism.” HuffPost says AI will define the 2026 midterms. MAGA is splitting over whether AI is pro-business or anti-worker. Sanders proposed a data center moratorium. The old left-right axis is buckling under the weight of a question it wasn’t built to answer.
All of this is happening eight years before t_s. The social singularity is front-running the technical one. The institutional and psychological disruption doesn’t wait for capabilities to go vertical. It starts as soon as the trajectory becomes legible.
The pole at t_s isn’t when machines become superintelligent. It’s when humans lose the ability to make coherent collective decisions about machines. The actual capabilities are almost beside the point. The social fabric frays at the seams of attention and institutional response time, not at the frontier of model performance.
The date comes from one series. arXiv “emergent” is the only metric with genuine hyperbolic curvature. The other four are better fit by straight lines. The singularity date is really “the date when AI emergence research goes vertical.” Whether field excitement is a leading indicator or a lagging one is the crux of whether this means anything.
The model assumes stationarity. Like assuming the weather will continue to be “changing.” The curve will bend, either into a logistic (the hype saturates) or into something the model can’t represent (genuine phase transition). t_s marks where the current regime can’t continue, not what comes after.
MMLU is hitting its ceiling. Benchmark saturation introduces a leptokurtic compression artifact. MMLU’s low R² reflects this. The hyperbola is the wrong shape for saturating data.
Tokens per dollar is log-transformed (values span five orders of magnitude) and non-monotonic (GPT-4 cost more than 3.5; Opus 4.5 costs more than DeepSeek-R1). The cost curve isn’t smooth: it’s Pareto advances interspersed with “we spent more on this one.”
Five metrics isn’t enough. More series with genuine hyperbolic curvature would make the date less dependent on arXiv alone. A proper study would add SWE-bench, ARC, GPQA, compute purchases, talent salaries. I used five because five fits in a table.
Copilot has two data points. Two parameters, two points, zero degrees of freedom, zero RSS contribution. The sensitivity analysis confirms it doesn’t matter.
The math found one metric curving toward a pole on a specific day at a specific millisecond: the rate at which humans are discovering emergent AI behaviors. The other four metrics are linear. The machines are improving steadily. We are the ones accelerating!
The social consequences of that acceleration (labor displacement, institutional failure, capital concentration, epistemic collapse, political realignment) are not predictions for 2034. They are descriptions of 2026. The singularity in the data is a singularity in human attention, and it is already exerting gravitational force on everything it touches.
I see no reason to let epistemological humility interfere with a perfectly good timer.
See you on the other side!
...
Read the original on campedersen.com »
ECB President Christine Lagarde has called for Europe to break its dependence on American payment infrastructure, warning that every card transaction sends European consumer data to the United States. A coalition of 16 banks thinks it has the answer.
What’s happening? ECB President Christine Lagarde told Irish radio that Europe needs its own digital payment system “urgently,” warning that virtually all European card and mobile payments currently run through non-European infrastructure controlled by Visa, Mastercard, PayPal or Alipay. Days later, on 2 February, the European Payments Initiative (EPI) and the EuroPA Alliance signed a landmark agreement to build a pan-European interoperable payment network covering 130 million users across 13 countries. The system, built around the digital wallet Wero, aims to let Europeans pay and transfer money across borders without touching a single American network.
Every time a European taps a card, pays online or splits a bill with friends, the transaction flows through infrastructure owned and operated by American companies. Visa and Mastercard together process approximately $24 trillion in transactions annually. Card payments account for 56% of all cashless transactions in the EU. And the data — who bought what, where, when and for how much — leaves European jurisdiction every time.
“It’s important for us to have digital payment under our control,” Lagarde told The Pat Kenny Show. “Whether you use a card or whether you use a phone, typically it goes through Visa, Mastercard, PayPal, Alipay. Where are all those coming from? Well, either the US or China.”
The host’s response — “I didn’t realise this” — captured the broader European blind spot. Most consumers have no idea that their payment data routinely exits the EU. In a geopolitical environment where Europe is scrambling to reduce dependence on the United States across defence, energy and trade, payments remain an overlooked vulnerability.
The lesson of Russia sharpened the urgency. When Western sanctions cut Russia off from Visa and Mastercard in 2022, the country’s domestic payments were immediately disrupted. European policymakers asked the obvious question: what would happen if the US decided — or was pressured — to restrict European access to those same networks?
The European Payments Initiative, a consortium of 16 major banks and payment processors including BNP Paribas, Deutsche Bank and Worldline, launched Wero in July 2024 as Europe’s answer. Built on SEPA instant credit transfers, Wero lets users send money using just a phone number — no IBAN, no card, no intermediary.
The numbers so far are encouraging. Wero already has over 47 million registered users in Belgium, France and Germany, has processed over €7.5 billion in transfers, and counts more than 1,100 member institutions. Retail payments went live in Germany at the end of 2025, with merchants including Lidl, Decathlon, Rossmann and Air Europa already accepting Wero online. France and Belgium follow in 2026.
But the real breakthrough came on 2 February, when EPI signed a memorandum of understanding with the EuroPA Alliance — a coalition of national payment systems including Italy’s Bancomat, Spain’s Bizum, Portugal’s MB WAY and the Nordics’ Vipps MobilePay. The deal instantly connects approximately 130 million users across 13 countries, covering roughly 72% of the EU and Norway population. Cross-border peer-to-peer payments launch this year, with e-commerce and point-of-sale payments following in 2027.
“European payment sovereignty is not a vision, but a reality in the making,” said Martina Weimert, CEO of EPI.
Europe has tried this before. The Monnet Project, launched in 2008 by twenty European banks, collapsed in 2012. The original EPI vision itself was scaled back after several founding members withdrew, forcing a pivot from a full card-replacement scheme to a narrower account-to-account model.
The core problem has always been fragmentation. Each EU country developed its own domestic payment solution — Bizum in Spain, iDEAL in the Netherlands, Payconiq in Belgium, Girocard in Germany — but none could work across borders. A Belgian consumer buying from a Dutch retailer still needed Visa or Mastercard. National pride and competing banking interests repeatedly sabotaged attempts at unification.
The network effect compounds the challenge. Merchants accept Visa and Mastercard because consumers carry them. Consumers carry them because merchants accept them. Breaking that loop requires either regulatory force or a critical mass of users large enough to make merchants care — which is precisely what the EuroPA deal attempts to deliver by connecting existing national user bases rather than building from scratch.
Running in parallel is the ECB’s digital euro project, which would create a central bank-backed digital currency usable across the eurozone. EU finance ministers have accelerated discussions on the initiative, though the European Parliament has not yet passed the required legislation. Once approved, the ECB estimates it would need a further two to three years to launch.
EPI is careful to distinguish Wero from the digital euro. Wero is a private-sector initiative; the digital euro is public money. They are designed to complement rather than compete — though the overlap in ambition is obvious. Both exist because Europe’s political establishment has finally accepted that payments sovereignty is as strategically important as energy independence or defence autonomy.
Sceptics have good reasons for doubt. Creating a viable alternative to Visa and Mastercard requires “several billion euros” in investment, according to EPI’s own estimates. Low interchange fees under EU regulation make profitability difficult. Consumer habits are deeply entrenched — and neither Visa nor Mastercard will sit idle while Europe tries to dismantle their most profitable market.
Weimert herself concedes that calling Wero a “challenger” may be premature, describing it as functioning like a startup — albeit one with €500 million in backing and 47 million users already on board.
But the political tailwinds are stronger than they have ever been. The EU’s instant payments regulation, the Capital Markets Union push, the broader drive for European strategic autonomy in a world of tariff wars and great power rivalry — all point in the same direction. The question is no longer whether Europe wants its own payment infrastructure. It is whether it can execute fast enough to matter.
As Lagarde put it: “We have the assets and opportunities to do that ourselves. And if we were to remove the internal barriers that we have set for ourselves in Europe, our economic wealth would increase significantly.”
...
Read the original on europeanbusinessmagazine.com »
Google fulfilled an Immigration and Customs Enforcement subpoena that demanded a wide array of personal data on a student activist and journalist, including his credit card and bank account numbers, according to a copy of an ICE subpoena obtained by The Intercept.
Amandla Thomas-Johnson had attended a protest targeting companies that supplied weapons to Israel at a Cornell University job fair in 2024 for all of five minutes, but the action got him banned from campus. When President Donald Trump assumed office and issued a series of executive orders targeting students who protested in support of Palestinians, Thomas-Johnson and his friend Momodou Taal went into hiding.
Google informed Thomas-Johnson via a brief email in April that it had already shared his metadata with the Department of Homeland Security, as The Intercept previously reported. But the full extent of the information the agency sought — including usernames, addresses, an itemized list of services (including any IP masking services), telephone or instrument numbers, subscriber numbers or identities, and credit card and bank account numbers — was not previously known.
“I’d already seen the subpoena request that Google and Meta had sent to Momodou [Taal], and I knew that he had gotten in touch with a lawyer and the lawyer successfully challenged that,” Thomas-Johnson said. “I was quite surprised to see that I didn’t have that opportunity.”
The subpoena provides no justification for why ICE is asking for this information, except that it’s required “in connection with an investigation or inquiry relating to the enforcement of U. S. immigration laws.” In the subpoena, ICE requests that Google not “disclose the existence of this summons for indefinite period of time.”
Thomas-Johnson, who is British, believes that ICE requested that information to track and eventually detain him — but he had already fled to Geneva, Switzerland, and is now in Dakar, Senegal.
The Electronic Frontier Foundation, which is representing Thomas-Johnson, and the ACLU of Northern California sent a letter to Google, Amazon, Apple, Discord, Meta, Microsoft, and Reddit last week calling on tech companies to resist similar subpoenas in the future from DHS without court intervention. The letter asks the companies to provide users with as much notice as possible before complying with a subpoena to give them the opportunity to fight it, and to resist gag orders that would prevent the tech companies from informing targets that a subpoena was issued.
“Your promises to protect the privacy of users are being tested right now. As part of the federal government’s unprecedented campaign to target critics of its conduct and policies, agencies like DHS have repeatedly demanded access to the identities and information of people on your services,” the letter reads. “Based on our own contact with targeted users, we are deeply concerned your companies are failing to challenge unlawful surveillance and defend user privacy and speech.”
In addition to Thomas-Johnson’s case, the letter refers to other instances in which technology companies provided user data to DHS, including a subpoena sent to Meta to “unmask” the identities of users who documented immigration raids in California. Unlike Thomas-Johnson, users in that case were given the chance to fight the subpoena because they were made aware of it before Meta complied.
“Google has already fulfilled this subpoena,” an attorney for Google told Thomas-Johnson’s lawyer, as The Intercept previously reported. “Production consisted of basic subscriber information.”
The ICE subpoena requested the detailed information linked to Thomas-Johnson’s Gmail account. Thomas-Johnson confirmed to The Intercept that he had attached his bank and credit card numbers to his account to buy apps.
Google did not respond to a request for comment.
Lindsay Nash, a professor at Cardozo Law and a former staff attorney with ACLU Immigrants’ Rights Project, said that by not giving prior notice, Google deprived Thomas-Johnson of his ability to protect his information.
“The problem is that it doesn’t allow the person whose personal information is on the line and whose privacy may be being invaded to raise challenges to the disclosure of that potentially private information,” Nash said. “And I think that’s important to protect rights that they may have to their own information.”
Tech companies’ data sharing practices are primarily governed by two federal laws, the Stored Communications Act, which protects the privacy of digital communications, including emails, and Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive trade practices.
“Under both federal law and the law of every state, you cannot deceive consumers,” said Neil Richards, a law professor at Washington University St. Louis who specializes in privacy, the internet, and civil liberties. “And if you make a material misrepresentation about your data practices, that’s a deceptive trade practice.”
Whether or not corporations are clear enough with consumers about how they collect and share their data has been litigated for decades, Richards said, referencing the infamous Cambridge Analytica lawsuit brought by the Federal Trade Commission, alleging that the company misled Facebook users about data collection and sharing.
Google’s public privacy policy acknowledges that it will share personal information in response to an “enforceable governmental request,” adding that its legal team will “frequently push back when a request appears to be overly broad or doesn’t follow the correct process.”
According to Google, the company overwhelmingly complied with the millions of requests made by the government for user information over the last decade. Its data also shows that those requests have spiked over the last five years. It’s unclear how many of those users were given notice of those requests ahead of time or after.
Richards said that cases like these emphasize the need for legal reforms around data privacy and urged Congress to amend the Stored Communications Act to require a higher standard before the government can access our digital data. He also said the federal government needs to regulate Big Tech and place “substantive restrictions on their ability to share information with the government.”
It’s hard to know exactly how tech companies are handling our personal data in relation to the government, but there seems to have been a shift in optics, Richards said. “What we have seen in the 12 months since the leaders of Big Tech were there on the podium at the inauguration,” Richards said, “is much more friendliness of Big Tech towards the government and towards state power.”
From Dakar, Thomas-Johnson said that understanding the extent of the subpoena was terrifying but had not changed his commitment to his work.
“As a journalist, what’s weird is that you’re so used to seeing things from the outside,” said Thomas-Johnson, whose work has appeared in outlets including Al Jazeera and The Guardian. “We need to think very hard about what resistance looks like under these conditions… where government and Big Tech know so much about us, can track us, can imprison, can destroy us in a variety of ways.”
This story has been updated to reflect that Thomas-Johnson’s legal team still does not know the full extent of the information that Google provided to ICE, but that Thomas-Johnson said his bank and credit card numbers were attached to his account.
...
Read the original on theintercept.com »
I wrote my first line of code in 1983. I was seven years old, typing BASIC into a machine that had less processing power than the chip in your washing machine. I understood that machine completely. Every byte of RAM had a purpose I could trace. Every pixel on screen was there because I’d put it there. The path from intention to result was direct, visible, and mine.
Forty-two years later, I’m sitting in front of hardware that would have seemed like science fiction to that kid, and I’m trying to figure out what “building things” even means anymore.
This isn’t a rant about AI. It’s not a “back in my day” piece. It’s something I’ve been circling for months, and I think a lot of experienced developers are circling it too, even if they haven’t said it out loud yet.
My favourite period of computing runs from the 8-bits through to about the 486DX2-66. Every machine in that era had character. The Sinclair Spectrum with its attribute clash. The Commodore 64 with its SID chip doing things the designers never intended. The NES with its 8-sprite-per-scanline limit that made developers invent flickering tricks to cheat the hardware. And the PC — starting life as a boring beige box for spreadsheets, then evolving at breakneck pace through the 286, 386, and 486 until it became a gaming powerhouse that could run Doom. You could feel each generation leap. Upgrading your CPU wasn’t a spec sheet exercise — it was transformative.
These weren’t just products. They were engineering adventures with visible tradeoffs. You had to understand the machine to use it. IRQ conflicts, DMA channels, CONFIG.SYS and AUTOEXEC.BAT optimisation, memory managers — getting a game to run was the game. You weren’t just a user. You were a systems engineer by necessity.
And the software side matched. Small teams like id Software were going their own way, making bold technical decisions because nobody had written the rules yet. Carmack’s raycasting in Wolfenstein, the VGA Mode X tricks in Doom — these were people pushing against real constraints and producing something genuinely new. Creative constraints bred creativity.
Then it professionalised. Plug and Play arrived. Windows abstracted everything. The Wild West closed. Computers stopped being fascinating, cantankerous machines that demanded respect and understanding, and became appliances. The craft became invisible.
But it wasn’t just the craft that changed. The promise changed.
When I started, there was a genuine optimism about what computers could be. A kid with a Spectrum could teach themselves to build anything. The early web felt like the greatest levelling force in human history. Small teams made bold decisions because nobody had written the rules yet.
That hope gave way to something I find genuinely distasteful. The machines I fell in love with became instruments of surveillance and extraction. The platforms that promised to connect us were really built to monetise us. The tinkerer spirit didn’t die of natural causes — it was bought out and put to work optimising ad clicks.
The thing I loved changed, and then it was put to work doing things I’m not proud to be associated with. That’s a different kind of loss than just “the tools moved on.”
But I adapted. That’s what experienced developers, human beings, do.
Over four decades I’ve been through more technology transitions than I can count. New languages, new platforms, new paradigms. CLI to GUI. Desktop to web. Web to mobile. Monoliths to microservices. Tapes, floppy discs, hard drives, SSDs. JavaScript frameworks arriving and dying like mayflies.
Each wave required learning new things, but the core skill transferred. You learned the new platform, you applied your existing understanding of how systems work, and you kept building. The tool changed; the craft didn’t. You were still the person who understood why things broke, how systems composed, where today’s shortcut became next month’s mess.
I’ve written production code in more languages than some developers have heard of. I’ve shipped software on platforms that no longer exist. I’ve chased C-beams off the shoulder of Orion. And every time the industry lurched in a new direction, the experience compounded. You didn’t start over. You brought everything with you and applied it somewhere new.
That’s the deal experienced developers made with the industry: things change, but understanding endures.
I say that knowing how often those words have been wrong throughout history. But hear me out.
Previous technology shifts were “learn the new thing, apply existing skills.” AI isn’t that. It’s not a new platform or a new language or a new paradigm. It’s a shift in what it means to be good at this.
I noticed it gradually. I’d be working on something — building a feature, designing an architecture — and I’d realise I was still doing the same thing I’d always done, just with the interesting bits hollowed out. The part where you figure out the elegant solution, where you wrestle with the constraints, where you feel the satisfaction of something clicking into place — that was increasingly being handled by a model that doesn’t care about elegance and has never felt satisfaction.
I’m not typing the code anymore. I’m reviewing it, directing it, correcting it. And I’m good at that — 42 years of accumulated judgment about what works and what doesn’t, what’s elegant versus what’s expedient, how systems compose and where they fracture. That’s valuable. I know it’s valuable. But it’s a different kind of work, and it doesn’t feel the same.
The feedback loop has changed. The intimacy has gone. The thing that kept me up at night for decades — the puzzle, the chase, the moment where you finally understand why something isn’t working — that’s been compressed into a prompt and a response. And I’m watching people with a fraction of my experience produce superficially similar output. The craft distinction is real, but it’s harder to see from the outside. Harder to value. Maybe harder to feel internally.
Here’s the part that makes me laugh, darkly.
I saw someone on LinkedIn recently — early twenties, a few years into their career — lamenting that with AI they “didn’t really know what was going on anymore.” And I thought: mate, you were already so far up the abstraction chain you didn’t even realise you were teetering on top of a wobbly Jenga tower.
They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.
But sure. AI is the moment they lost track of what’s happening.
The abstraction ship sailed decades ago. We just didn’t notice because each layer arrived gradually enough that we could pretend we still understood the whole stack. AI is just the layer that made the pretence impossible to maintain.
The difference is: I remember what it felt like to understand the whole machine. I’ve had that experience. And losing it — even acknowledging that it was lost long before AI arrived — is a kind of grief that someone who never had it can’t fully feel.
I don’t want to be dishonest about this. There’s a version of this post where I tell you that experience is more valuable than ever, that systems thinking and architectural judgment are the things AI can’t replace, that the craft endures in a different form.
And that’s true. When I’m working on something complex — juggling system-level dependencies, holding a mental model across multiple interacting specifications, making the thousand small decisions that determine whether something feels coherent or just works — I can see how I still bring something AI doesn’t. The taste. The judgment. The pattern recognition from decades of seeing things go wrong.
AI tools actually make that kind of thinking more valuable, not less. When code generation is cheap, the bottleneck shifts to the person who knows what to ask for, can spot when the output is subtly wrong, and can hold the whole picture together. Typing was never the hard part.
But I’d be lying if I said it felt the same. It doesn’t. The wonder is harder to access. The sense of discovery, of figuring something out through sheer persistence and ingenuity — that’s been compressed. Not eliminated, but compressed. And something is lost in the compression, even if something is gained.
I turned 50 recently. Four decades of intensity, of crafting and finding satisfaction and identity in the building.
And now I’m in what I’ve started calling a fallow period. Not burnout exactly. More like the ground shifting under a building you thought, for all its changes, had a kind of permanence, and trying to figure out where the new foundation is.
I don’t have a neat conclusion. I’m not going to tell you that experienced developers just need to “push themselves up the stack” or “embrace the tools” or “focus on what AI can’t do.” All of that is probably right, and none of it addresses the feeling.
The feeling is: I gave 42 years to this thing, and the thing changed into something I’m not sure I recognise anymore. Not worse, necessarily. Just different. And different in a way that challenges the identity I built around it and doesn’t satisfy in the way it did.
I suspect a lot of developers over 40 are feeling something similar and not saying it, because the industry worships youth and adaptability and saying “this doesn’t feel like it used to” sounds like you’re falling behind.
I’m not falling behind. I’m moving ahead, taking advantage of the new tools, building faster than ever, and using these tools to help others accelerate their own work. I’m creating products I could only have dreamt of a few years ago. But at the same time I’m looking at the landscape, trying to figure out what building means to me now. The world’s still figuring out its shape too. Maybe that’s okay.
Maybe the fallow period is the point. Not something to push through, but something to be in for a while.
I started programming when I was seven because a machine did exactly what I told it to, felt like something I could explore and ultimately know, and that felt like magic. I’m fifty now, and the magic is different, and I’m learning to sit with that.
...
Read the original on www.jamesdrandall.com »
We have raised a $200M Series C, and yes, you are permitted a double take: didn’t we just raise a $100M Series B? And aren’t we the ones that are especially candid about the perils of raising too much money?

Well, yes, on both fronts, so let us explain a little. First, we have the luxury of having achieved real product-market fit: we are making a product that people want to buy. This takes on additional dimensions when making something physical: with complexities like manufacturing, inventory, cash-conversion, and shifting supply chains, product-market fit implies getting the unit economics of the business right. All of this is a long way of saying: we did not (and do not) need to raise capital to support the business.

So if we didn’t need to raise, why seek the capital? Well, we weren’t seeking it, really. But our investors, seeing the business take off, were eager to support it. And we, in turn, were eager to have them: they were the ones, after all, who joined us in taking a real leap when it felt like there was a lot more risk on the table. They understood our vision for the company and shared our love for customers and our desire to build a singular team. They had been with us in some difficult moments; they know and trust us, as do we them. So being able to raise a Series C purely from our existing investors presented a real opportunity.

Still, even from investors that we trust and with a quick close, if the business doesn’t need the money, does it make sense to raise? We have always believed that our biggest challenge at Oxide was time — and therefore capital. We spelled this out in our initial pitch deck from 2019:

Six years later, we stand by this, which is not to minimize any of those challenges: the technical challenges were indeed hard; we feel fortunate to have attracted an extraordinary team; and we certainly caught some lucky breaks with respect to the market. With this large Series C, we have entirely de-risked capital going forward, which in turn assures our independence.

This last bit is really important, because any buyer of infrastructure has had their heart broken countless times by promising startups that succumbed to acquisition by one of the established players that they were seeking to disrupt. The serial disappointments leave a refreshing bluntness in their wake, and it’s not uncommon for us to be asked directly: “How do I know you won’t be bought?”

Our intent in starting Oxide was not to be an acquisition target but rather to build a generational company; this is our life’s work, not a means to an end. With our Series C, customers don’t have to merely take our word for it: we have the capital to assure our survival into the indefinite future. If our Series B left us with confidence in achieving our mission, our Series C leaves us with certainty: we’re going to kick butt, have fun, not cheat (of course!), love our customers — and change computing forever.
...
Read the original on oxide.computer »
Meta and Google-owned YouTube were accused Monday of pushing highly addictive apps on children as a landmark social media trial began in earnest in a California court.
The blockbuster trial in front of a Los Angeles jury could establish a legal precedent on whether the social media juggernauts deliberately designed their platforms to lead to addiction in children.
The proceedings are expected to see Meta chief Mark Zuckerberg on the stand next week and Instagram boss Adam Mosseri in the courtroom as early as Wednesday. In addition to Instagram, Meta’s platforms include Facebook and WhatsApp.
“This case is about two of the richest corporations in history who have engineered addiction in children’s brains,” plaintiffs’ attorney Mark Lanier told the jury in his opening statement.
“This case is as easy as A-B-C,” Lanier said as he stacked children’s toy blocks bearing the letters.
He contended the A was for addicting, the B for brains and the C for children.
“They don’t only build apps; they build traps,” Lanier said, saying Meta and YouTube pursued “addiction by design,” making his arguments using props like a toy Ferrari and a mini slot machine.
Meta attorney Paul Schmidt countered in opening remarks to the jury that evidence will show problems with the plaintiff’s family and real-world bullying took a toll on her self-esteem, body image and happiness rather than Instagram.
“If you took Instagram away and everything else was the same in Kaley’s life, would her life be completely different, or would she still be struggling with the same things she is today?” Schmidt asked, pointing out an Instagram addiction is never mentioned in medical records included in the evidence.
The trial before Judge Carolyn Kuhl focuses on allegations that a 20-year-old woman identified as Kaley G. M. suffered severe mental harm because she became addicted to social media as a child.
The case is being treated as a bellwether proceeding because its outcome could set the tone, and the level of payouts to successful plaintiffs, for a tidal wave of similar litigation across the United States.
Social media firms are accused in hundreds of lawsuits of leading young users to become addicted to content and suffer from depression, eating disorders, psychiatric hospitalization and even suicide.
Lawyers for the plaintiffs are borrowing strategies used in the 1990s and 2000s against the tobacco industry, which faced a similar onslaught of lawsuits arguing that companies knowingly sold a harmful product.
Lanier told the jurors that Kaley began watching YouTube at six years old because the company never told her mother “the goal was viewer addiction,” or that toddlers as young as two were being targeted despite “critical” risk of addiction.
“This is the first time that a social media company has ever had to face a jury for harming kids,” Social Media Victims Law Center founder Matthew Bergman, whose team is involved in more than 1,000 such cases, told AFP.
Internet titans have argued that they are shielded by Section 230 of the US Communications Decency Act, which frees them from responsibility for what social media users post.
However, this case argues that those firms are culpable for business models designed to hold people’s attention and to promote content that can harm their mental health.
The plaintiffs said they would call expert witnesses who will argue that young people’s brains are not yet developed enough to withstand the power of the algorithms being flung at them on Instagram and YouTube.
Meta pointed to recent efforts to provide more safeguards for young people, adding that “we’re always working to do better.”
Jose Castaneda, a YouTube spokesperson, said “the allegations in these complaints are simply not true.”
Lawyers for YouTube are to present opening remarks to the jury on Tuesday.
Snapchat and TikTok were named as defendants in the suit, but struck settlement deals before the start of the trial. The terms were not disclosed.
Lawsuits, including some brought by school districts, accusing social media platforms of practices endangering young users are making their way through federal court in northern California and state courts across the country.
A separate lawsuit accusing Meta of putting profit over the well-being of young users was also getting underway in New Mexico on Monday.
...
Read the original on techxplore.com »
The original port of Half-Life 2 (2004) to Quake(World).
The game is not playable from start to finish. You can play deathmatch and other odd modes.
Requires both hl2 and hl2dm directories in order to function. Any copy will do fine. If you’re on a case-insensitive filesystem and you’re running pre-.vpk data files, you might want to put them in a .zip and name it ‘pak0.pk3’. It’s generally easier to just use the latest data from Steam.
It will then automatically attempt to install Rad-Therapy II when run from within your Half-Life 2: Deathmatch directory.
Git clone Nuclide first, run make update and make fteqcc, then clone the repository inside the Nuclide-SDK:
git clone https://code.idtech.space/fn/hl2 hl2
make game GAME=hl2
make plugins GAME=hl2
The last command will build the plugins required for the engine to load the data files. The one before it will build the game-logic. Make sure that Nuclide-SDK has fteqcc and fteqw present for building and running, respectively. It will also respect versions installed by the package manager, just make sure it’s up to date.
If you’re a fellow Matrix user, join the Nuclide Space, where you can ask questions or prod devs about what they’re up to.
https://matrix.to/#/#nuclide:matrix.org
You can also join us on #nuclide via irc.libera.chat. It’s bridged with the main room of the Matrix space.
Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED “AS IS” AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF MIND, USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Half-Life 2 and Half-Life 2: Deathmatch belong to Valve Corporation. Original licensed assets from Steam or a disc are required in order to experience Rad-Therapy II.
...
Read the original on code.idtech.space »
...
Read the original on www.labs.greynoise.io »