10 interesting stories served every morning and every evening.

Appearing Productive in The Workplace — No One's Happy

nooneshappy.com

Parkinson’s Law states that work ex­pands to fill the time avail­able. In the era of AI, work­ers now have a tool that ex­pands to fill what­ever a large lan­guage model can be per­suaded to gen­er­ate, which is to say, with­out limit.

What I have watched hap­pen in my pro­fes­sion in the last two years, I am still strug­gling to de­scribe. The first time I knew some­thing was wrong, roughly a year and a quar­ter ago, I no­ticed a col­league re­ply­ing to me us­ing AI. His re­sponse was ob­vi­ously gen­er­ated by Claude. The punc­tu­a­tion gave it away — em dashes where no one types em dashes, the rhyth­mic struc­ture, the con­fi­dent grasp of tech­nolo­gies I knew for a fact he did not un­der­stand. I sat with it for a while, weigh­ing whether to de­bate some­one who was vis­i­bly copy-past­ing ver­ba­tim from a model. The chan­nel was pub­lic, and I spent more time than I should have cor­rect­ing fun­da­men­tals. Eventually I stopped. He was not, in any mean­ing­ful sense, on the other side of the con­ver­sa­tion.

Generative AI can produce work that looks expert without being expert, and the failure arrives in two shapes. The first is when novices in a field produce work that resembles what their seniors produce, outrunning their own judgment. The second is when people generate artifacts in disciplines they were never trained in. The two failures look similar from a distance but are not the same. Research has mostly measured the first. The second is what the research is missing, and in my experience it is the riskier of the two.

Cross-domain generation

People who cannot write code are building software. People who have never designed a data system are designing data systems. Most of it is never shipped; it is built over many hours, sometimes shown internally with great vigor, used quietly, and occasionally surfaced to a client without much fanfare. Workers obsess over an idea, putting in hours of overtime. A few practitioners use the current agentic tools to do complex things properly, but they are scarce and, in my experience, typically working in code generation. AI, for all its capabilities at the level of the individual, has not scaled properly in my workplace.

I have a colleague, a careful and intelligent person in a role that is not engineering, who spent two months earlier this year building a system that should have been designed by someone with formal training in data architecture. He used the tools well, by the standards by which use of the tools is currently measured. He produced a great deal of code, a great deal of documentation, a great deal of what looked, to anyone who did not know what to look for, like progress. He could not, when asked, explain how any of it actually worked. The work was wrong from the first day. The schemas, and more importantly the objectives, were wrong in a way that would have been obvious to anyone with two years in the field. Several of us did know. When concerns were raised, even as high as a V.P., he fought back. The room had been arranged so that saying so was not a contribution; his managers were too invested in the appearance of momentum to want the appearance disturbed. The work will continue, in all probability, until it is shown to a stakeholder and they decide not to invest.

This is the part of the phe­nom­e­non I find hard­est to write about. The tool did not make him a worse col­league. It made him able to im­per­son­ate, for months, a dis­ci­pline he had never trained in, and the im­per­son­ation was good enough that the in­sti­tu­tional in­cen­tives all bent to­ward let­ting him con­tinue. Perhaps it’s a fail­ure of man­age­ment, but I have been find­ing man­age­ment to be so ea­ger to em­brace AI that they’re will­ing to ac­cept the risk.

It would be tolerable, perhaps, if the tool offered an honest assessment of what it had produced. The Cheng et al. Stanford study published in Science this spring [1] confirmed what every regular user already knew: leading models are roughly fifty percent more agreeable than human respondents, affirming the user even where the affirmation is unwarranted. Berkeley CMR meta-analyses [4] found that AI-literate users often overestimate their performance, particularly when they stray outside their training. An NBER study of support agents [2] found generative AI boosted novice productivity by about a third while barely helping experts. Harvard Business School researchers found the same pattern in consulting work [3]. So you have overconfident novices improving their individual productivity in a discipline they are unable to review for correctness. What could go wrong?

The con­duit prob­lem

A grow­ing body of work calls this out­put-com­pe­tence de­cou­pling [5]. In any pre­vi­ous era, the qual­ity of a piece of work was a more or less re­li­able sig­nal of the com­pe­tence of the per­son who pro­duced it. A novice es­say read like a novice es­say; novice code crashed in novice ways. AI has sev­ered that re­la­tion­ship. A novice now pro­duces work that does not be­tray the novice, be­cause the com­pe­tence the work re­flects is not the novice’s com­pe­tence at all. It is the sys­tem’s. The per­son, in the trans­ac­tion, be­comes a kind of con­duit, ca­pa­ble of rout­ing the out­put to a re­cip­i­ent and in­ca­pable of eval­u­at­ing it on the way through.

The skills of producing work and of judging it were always distinct, but accomplishing the work itself used to teach the judgment. The first skill now belongs, in large part, to the machines. The second still belongs to us, though fewer are bothering to acquire or exercise it.

The architectural critique that used to come from someone who was taught, or who had built and broken three of these before, now comes from a model with no embodied memory of building or breaking anything. The slowness was not a tax on the real work; the slowness was the real work. It was how the work got good, how the people producing the work got good, and how the firm whose name was on the work could promise the client that what they were buying was a particular kind of thing rather than a generic one.

The cur­rent gen­er­a­tion of agen­tic sys­tems is built around the premise that the hu­man is the bot­tle­neck — that the loop runs faster and cleaner with­out the awk­ward de­lay of some­one read­ing what is about to hap­pen and de­cid­ing whether it should. This is, in a great many cases, ex­actly back­wards. The hu­man in the loop is not a ves­tige of an ear­lier era; the hu­man is the only part of the loop with skin in the game. Removing the H from HITL is not an ef­fi­ciency. It is the aban­don­ment of the only mech­a­nism the sys­tem has for catch­ing it­self.
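The gate being removed can be made concrete with a minimal sketch. Everything here is invented for illustration (the function names, the blocking rule); no real agent framework is assumed. The point is only that the approval step sits between proposal and execution:

```python
# Minimal sketch of a human-in-the-loop (HITL) gate. The names and the
# blocking rule are hypothetical, standing in for actual human review.

def human_approves(action: str) -> bool:
    """Stand-in for a person reading a proposed action before it runs.
    Here a simple rule substitutes for real human judgment."""
    return "DROP TABLE" not in action

def run_with_gate(action: str) -> str:
    """Execute a proposed action only if the gate approves it."""
    if not human_approves(action):
        return "BLOCKED: " + action
    return "EXECUTED: " + action

print(run_with_gate("SELECT count(*) FROM staging"))  # EXECUTED: ...
print(run_with_gate("DROP TABLE staging"))            # BLOCKED: ...
```

Removing the `human_approves` call makes the loop faster; it also makes the second print an irreversible mistake.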

Slop on the in­side

Requirements documents that were once a page are now twelve. Status updates that were once three sentences are now bulleted summaries of bulleted summaries. Retrospective notes, post-incident reports, design memos, kickoff decks: every artifact that can be elongated is, by people who do not read what they produce, for readers who do not read what they receive. The cost of producing a document has fallen to nearly zero; the cost of reading one has not, and is in fact rising, because the reader must now sift the synthetic padding for whatever the document was originally about. Each individual decision to elongate seems rational, and each is independently rewarded: readers are more confident in longer AI-generated explanations whether or not the explanations are correct [5]. The collective effect is that the signal in any given workplace is harder to find than it was before any of this began. The checkpoints have been hidden, drowned in their own paperwork, even when the people drowning them were genuinely trying to be brief.

This is a new form of slop, and it is more ex­pen­sive than the pub­lic kind, be­cause the peo­ple pro­duc­ing it are be­ing paid a salary to do so. The pipeline of fu­ture ex­perts is thin­ning from both ends. The work that used to teach judg­ment is now done by the tool, and the en­try-level roles where the teach­ing hap­pened are be­ing cut on the the­ory that the tool can do the work. What this is caus­ing, in many of­fices in­clud­ing mine, is a great deal of mo­tion and very lit­tle of what mo­tion used to cre­ate.

The downstream costs are accumulating quickly. Most of the public discussion of AI slop has focused on the flood into public markets, with a University of Florida marketing study [6] among the more direct treatments. What is less remarked upon is the same dynamic playing out inside organizations: time wasted using AI on tasks that did not need it, on artifacts no one will read, on processes that exist only because the tool made them cheap to construct, and on decks that spell out things that previously went without saying.

What to do about it

What discipline looks like, in this environment, is almost embarrassingly old-fashioned, and it may seem obvious until you try to practice it. Use the tool where you can verify precisely what it produces. Never ask a model for confirmation; the tool agrees with everyone, and an agreement that costs the agreer nothing is worth nothing.

Generative AI does well on tasks where feedback is fast, where being approximately right is good enough, and where the human remains the final arbiter: drafting a memo, generating examples, summarizing material the reader could verify if they cared to. The University of Illinois Generative AI guidance [7] and the PLOS Computational Biology “Ten Simple Rules” paper on AI in research [8], among the more careful documents now circulating, list much of this explicitly: brainstorming, copyediting, reformulating one’s own ideas, pattern detection in data one already understands.

In every rec­om­mended use, the hu­man sup­plies the judg­ment and the tool sup­plies the through­put. This is a stronger po­si­tion than hu­man-in-the-loop. The tool sits out­side the work, con­tribut­ing where in­vited and silent oth­er­wise, which is the op­po­site of what most agen­tic sys­tems are now be­ing built to do.

The competitive advantage of a firm whose work can be trusted has not disappeared; it has, if anything, appreciated, because so many of its competitors are quietly converting themselves into content-generation pipelines and counting on the client not to notice.

This is already coming to a head. Deloitte has refunded part of a $440,000 fee over an AI-hallucinated government report. The next failure could be a production system built on a hallucinated specification, or a senior engineer who realizes they have spent the last year nominally reviewing work they could no longer competently review. The reckoning will not be subtle. The firms still doing the work properly will be in a position to charge for it. The firms that have hollowed themselves out will discover that what they hollowed out was the thing the client was paying for.

Misunderstanding and misuse of AI in the workplace is rampant. In many of the rooms I now find myself in, expertise has been asked to look the other way: to deliver faster, produce more, integrate the tools more deeply, get out of the way of the colleagues who are “getting things done”. The artifacts are accumulating; the work is not. And somewhere on the other side of all this output, a client is opening a deliverable, reading a summarized list, and they may just choose to review it manually.

Disclaimer: This is a personal essay, not an academic paper, by someone who has spent more than two decades in engineering. These are my experiences, in my workplace, with references to things that I think are relevant. If you take one thing away, take away that people are impressionable creatures. AI was used in the writing of this essay, in the ways the essay itself recommends: to brainstorm, draft, and revise material I manually verified, never to supply judgment I lacked. Also, those who claimed this article is ironically a casualty of its own complaint are 100% right; like AI, I am a bit long-winded and repetitive.

References

1. Sycophantic AI de­creases proso­cial in­ten­tions and pro­motes de­pen­dence (Cheng, Lee, Khadpe, Yu, Han, & Jurafsky, 2026). Science.

2. Generative AI at Work (Brynjolfsson, Li, & Raymond, 2025). The Quarterly Journal of Economics, 140(2), 889–942. Also: NBER Working Paper No. 31161, April 2023.

3. Navigating the Jagged Technological Frontier (Dell’Acqua, McFowland, Mollick, et al., 2026). Organization Science. Originally HBS Working Paper No. 24-013, 2023.

4. Seven Myths About AI and Productivity: What the Evidence Really Says (Berkeley CMR, 2025). Meta-analysis con­firm­ing asym­met­ric AI pro­duc­tiv­ity gains and user over­con­fi­dence.

5. Beyond the Steeper Curve: AI-Mediated Metacognitive Decoupling (Koch, 2025). Longer AI ex­pla­na­tions make users more con­fi­dent re­gard­less of cor­rect­ness.

6. Generative AI and the mar­ket for cre­ative con­tent (Zou, Shi, & Wu, 2026). Forthcoming, Journal of Marketing Research.

7. Generative AI Guidance (University of Illinois). Recommended uses and lim­i­ta­tions of gen­er­a­tive AI in aca­d­e­mic and pro­fes­sional work.

8. Ten sim­ple rules for op­ti­mal and care­ful use of gen­er­a­tive AI in sci­ence (Helmy, Jin, et al., 2025). PLOS Computational Biology, 21(10), e1013588.

Higher usage limits for Claude and a compute deal with SpaceX

www.anthropic.com

We’ve agreed to a part­ner­ship with SpaceX that will sub­stan­tially in­crease our com­pute ca­pac­ity. This, along with our other re­cent com­pute deals, means that we’ve been able to in­crease our us­age lim­its for Claude Code and the Claude API.

Below, we de­scribe these changes and the progress we’re mak­ing on com­pute.

The fol­low­ing three changes—all ef­fec­tive to­day—are aimed at im­prov­ing the ex­pe­ri­ence of us­ing Claude for our most ded­i­cated cus­tomers.

First, we’re dou­bling Claude Code’s five-hour rate lim­its for Pro, Max, Team, and seat-based Enterprise plans.

Second, we’re re­mov­ing the peak hours limit re­duc­tion on Claude Code for Pro and Max ac­counts.

Third, we’re rais­ing our API rate lim­its con­sid­er­ably for Claude Opus mod­els, as shown in the table be­low:

New com­pute part­ner­ship with SpaceX

We’ve signed an agree­ment with SpaceX to use all of the com­pute ca­pac­ity at their Colossus 1 data cen­ter. This gives us ac­cess to more than 300 megawatts of new ca­pac­ity (over 220,000 NVIDIA GPUs) within the month. This ad­di­tional ca­pac­ity will di­rectly im­prove ca­pac­ity for Claude Pro and Claude Max sub­scribers.

This joins our other sig­nif­i­cant com­pute an­nounce­ments:

An up to 5 gi­gawatt (GW) agree­ment with Amazon, which in­cludes nearly 1 GW of new ca­pac­ity by the end of 2026;

A 5 GW agree­ment with Google and Broadcom, which will be­gin com­ing on­line in 2027;

A strate­gic part­ner­ship with Microsoft and NVIDIA that in­cludes $30 bil­lion of Azure ca­pac­ity;

Our $50 bil­lion in­vest­ment in American AI in­fra­struc­ture with Fluidstack.

We train and run Claude on a range of AI hard­ware—AWS Trainium, Google TPUs, and NVIDIA GPUs—and con­tinue to ex­plore op­por­tu­ni­ties to bring ad­di­tional ca­pac­ity on­line.

As part of this agree­ment, we have also ex­pressed in­ter­est in part­ner­ing with SpaceX to de­velop mul­ti­ple gi­gawatts of or­bital AI com­pute ca­pac­ity.

Expanding in­ter­na­tion­ally

Our en­ter­prise cus­tomers—par­tic­u­larly those in reg­u­lated in­dus­tries like fi­nan­cial ser­vices, health­care, and gov­ern­ment—in­creas­ingly need in-re­gion in­fra­struc­ture to meet com­pli­ance and data res­i­dency re­quire­ments. Accordingly, some of our ca­pac­ity ex­pan­sion will be in­ter­na­tional: our re­cently an­nounced col­lab­o­ra­tion with Amazon in­cludes ad­di­tional in­fer­ence in Asia and Europe.

We’re very in­ten­tional about where we’ll add ca­pac­ity—part­ner­ing with de­mo­c­ra­tic coun­tries whose le­gal and reg­u­la­tory frame­works sup­port in­vest­ments of this scale, and where the sup­ply chain on which our com­pute de­pends—hard­ware, net­work­ing, and fa­cil­i­ties—will be se­cure.

Finally, we re­cently made a com­mit­ment to cover any con­sumer elec­tric­ity price in­creases caused by our data cen­ters in the US. As part of our in­ter­na­tional ex­pan­sion, we’re ex­plor­ing ways to ex­tend that com­mit­ment to new ju­ris­dic­tions, as well as part­ner­ing with lo­cal lead­ers to in­vest back into the com­mu­ni­ties that host our fa­cil­i­ties.


LoC Recommended Storage Format

sqlite.org

SQLite is a Recommended Storage Format for datasets according to the US Library of Congress.

As of this writing (2018-05-29) the only other recommended storage formats for datasets are XML, JSON, and CSV.
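For readers who have not used SQLite as a dataset container: a whole dataset lives in one ordinary file, queryable by any SQLite tool. A minimal sketch using Python's standard-library sqlite3 module (the table name and rows are invented for illustration):

```python
import sqlite3

# A dataset stored as a single self-contained SQLite database.
# ":memory:" is used here for brevity; an archive would use a filename.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (station TEXT, year INTEGER, temp_c REAL)")
conn.executemany(
    "INSERT INTO measurements VALUES (?, ?, ?)",
    [("OSLO", 2017, 5.9), ("OSLO", 2018, 6.4)],
)
conn.commit()

# Decades later, the same file answers the same queries.
rows = conn.execute(
    "SELECT station, COUNT(*) FROM measurements GROUP BY station"
).fetchall()
print(rows)  # [('OSLO', 2)]
```

Unlike CSV, the file carries its own schema and types, which speaks directly to the self-documentation criterion below.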

Recommended storage formats are formats which, in the opinion of the preservationists at the Library of Congress, maximize the chance of survival and continued accessibility of digital content. When selecting recommended storage formats, the following criteria are considered (quoting from the LOC website):

Disclosure.

Degree to which com­plete spec­i­fi­ca­tions and tools for val­i­dat­ing tech­ni­cal in­tegrity ex­ist and are ac­ces­si­ble to those cre­at­ing and sus­tain­ing dig­i­tal con­tent. A spec­trum of dis­clo­sure lev­els can be ob­served for dig­i­tal for­mats. What is most sig­nif­i­cant is not ap­proval by a rec­og­nized stan­dards body, but the ex­is­tence of com­plete doc­u­men­ta­tion.

Adoption.

Degree to which the for­mat is al­ready used by the pri­mary cre­ators, dis­sem­i­na­tors, or users of in­for­ma­tion re­sources. This in­cludes use as a mas­ter for­mat, for de­liv­ery to end users, and as a means of in­ter­change be­tween sys­tems.

Transparency.

Degree to which the dig­i­tal rep­re­sen­ta­tion is open to di­rect analy­sis with ba­sic tools, such as hu­man read­abil­ity us­ing a text-only ed­i­tor.

Self-documentation.

Self-documenting dig­i­tal ob­jects con­tain ba­sic de­scrip­tive, tech­ni­cal, and other ad­min­is­tra­tive meta­data.

External Dependencies.

Degree to which a par­tic­u­lar for­mat de­pends on par­tic­u­lar hard­ware, op­er­at­ing sys­tem, or soft­ware for ren­der­ing or use and the pre­dicted com­plex­ity of deal­ing with those de­pen­den­cies in fu­ture tech­ni­cal en­vi­ron­ments.

Impact of Patents.

Degree to which the abil­ity of archival in­sti­tu­tions to sus­tain con­tent in a for­mat will be in­hib­ited by patents.

Technical Protection Mechanisms.

Implementation of mech­a­nisms such as en­cryp­tion that pre­vent the preser­va­tion of con­tent by a trusted repos­i­tory.

Programming Still Sucks. — Writing

www.stvn.sh

Sorry Peter.

I’m at a birth­day party, and while most peo­ple here also work in tech, there’s al­ways a Guy with a Real Job. You know, a phys­i­cal job, build­ing some or other thing peo­ple need. And this Guy al­ways asks some vari­ant of the same ques­tion: aren’t you wor­ried AI is tak­ing your job? I glance around and see a few faces turn­ing around to­ward us, rolling their eyes ever so slightly be­fore re­turn­ing to their pre­vi­ous con­ver­sa­tion. Yes, this ques­tion again.

They have a nephew who builds Shopify stores; they don’t understand half the words he uses, but he’s in real trouble and says everybody in tech is. Is his nephew gonna have to “learn a trade”? Are we all?

Enough drinks in and I’ll answer proper, because I don’t care anymore whether others think what I’m saying is interesting or true. But usually I’ll sigh and say “Sure, yeah, a little. Most of us are. Would be stupid not to be, right?” to which they nod before moving on to a lighter topic, like whether we’re going to nuke Iran or not.

The truth is, work­ing in tech al­ways sucked, and never re­ally was what they thought it was.

My job, some peo­ple think, is to sit at a clean desk in a cor­ner of­fice, sur­rounded by open of­fices filled with long ta­bles with MacBooks or Thinkpads. In my cor­ner of­fice, I de­vise per­fect plans, that my per­fect em­ploy­ees ap­plaud me for. None es­cape my gaze, every de­ci­sion is made, per­fectly, by me, and every cent and minute is ac­counted for.

When the applause fades, my employees, or reports, or my “team” when I’m feeling jolly, start furiously typing. Typing typing typing. And not long after, perfect software is produced. It rolls off the collective assembly line, and like a first child, it can do no wrong.

Except, that’s not what any­thing is like at all. Yes, I’m up­set I never got a cor­ner of­fice, but I’m too busy pan­ick­ing be­cause I have no idea what I’m do­ing, no­body does, and the wheels just came off. The CEO says AI is mak­ing his buddy Jared’s team so pro­duc­tive he was able to fire half of them, but like, as a brag, not a threat? I dunno, I felt threat­ened, but that’s prob­a­bly just my anx­i­ety flar­ing up. Surely I can bor­row a xanax from one of the sev­eral em­ploy­ees cry­ing in the bath­room.

Imagine you take a job as a ship cap­tain. You bike into the har­bor on your first day, ex­cited to meet your crew. You no­tice the ship is­n’t there, but Greg, the very ex­citable re­cruiter you spoke to, waves you over and as­sures you it’s not a prob­lem. You’re strapped to a cat­a­pult and mirac­u­lously launched onto the ship. The pre­vi­ous cap­tain started a fire be­cause an­other cap­tain ex­plained in­ter­nal com­bus­tion to him at Captainpalooza 2025, and he wanted to start it­er­at­ing to­wards that. He was pushed off the ship, but took the man­ual with him. Wouldn’t be a prob­lem if it weren’t for the fact the en­tire ship was cus­tom-built for him. The ship still has sails, but they’re not con­nected to the mast, and the in­ter­nal com­bus­tion en­gine semi-bolted to the stern still has parts scat­tered all over the deck.

You go below deck to figure out how the ship works and where you’re going, but when you follow the stairs to the lower decks you somehow end up in the mast? You ask a sailor what’s happening. He glitches and says “You’re absolutely right! My approach was flawed, but here’s a better stairs implementation”. The mast snaps upside down, and you’re back on deck, right where you started. The sails are upside down, and your “sailor” excitedly waits for you to tell him how well he’s done.

You ask someone else “wait, where are we even going?”, and yay, a human! No glitching, no peppy but unhelpful answers, an actual human being. She hasn’t slept in a week. She barely looks at you and says “ask the navigator”. “The navigator?”, you ask. She points. The navigator is a doll that says “onward and upward” when you press a button on its back.

The doll catches fire.

This is the job now. You’re stand­ing on a burn­ing ship, hold­ing a map, try­ing to fig­ure out where the hell we’re go­ing and how we’re go­ing to get there.

You know this ship. Some of you were en­gi­neers on one just like it. Some of you were the cap­tain who left. I’m not writ­ing this for the Guy at the birth­day party. I’m writ­ing it for you.

You were an en­gi­neer once. You re­mem­ber what a code re­view was for. You re­mem­ber be­ing the ju­nior whose first PR got shred­ded by a se­nior who took the time to ex­plain why. You did­n’t wake up one morn­ing in 2024 and de­cide to abol­ish that.

What hap­pened was: the run­way got cut. The board meet­ing did­n’t have the word values” in it any­where. The CFO had a spread­sheet. The CEO had come back from an off­site where some­one had shown him a demo of an agent writ­ing a whole fea­ture in four­teen min­utes, and he had be­lieved it (the way peo­ple be­lieve things when they want to be­lieve them) and he had told the board he could cut thirty per­cent of en­gi­neer­ing by Q2. Now it was your job to fig­ure out how.

You told your­self the ju­niors would be fine. They’d adapt, they’d reskill, they’d land some­where. You told your­self the se­niors could ab­sorb the miss­ing hands, that the agents would cover the gap. You told your­self you’d re­visit it next quar­ter. You signed the list. You went home. You drank a lit­tle more than usual. You went to sleep.

You knew.

You knew, be­cause you’d been the en­gi­neer who had to clean up af­ter the last leader who’d been sold a sim­ple an­swer. You’d watched Goodhart’s Law eat ve­loc­ity met­rics, story points, test cov­er­age; every num­ber a non-en­gi­neer had ever been handed as proof the work was go­ing well. You knew the DORA met­rics were al­ready telling you what hap­pens to de­ploy­ment sta­bil­ity when you add tool­ing faster than you add judg­ment. You knew what hap­pens to a code­base when the peo­ple who’d catch the er­rors get pushed out, or learn to stop catch­ing them.

You knew. And you signed off any­way. Because the al­ter­na­tive was los­ing the job, and the job was the mort­gage, and the school fees, and the visa, and the ver­sion of your­self who’d fix it later once things sta­bi­lized.

Later is never. We all knew that. I signed a list too. We’re still point­ing at each other about whose list was worse.

There are no more ju­niors. There was a fu­neral for their pass­ing in 2024. Nobody came. The ma­chine does what they do now, but cheaper. Of course, ju­niors weren’t valu­able for what they pro­duced, they were valu­able for who they would be­come: the se­nior en­gi­neer who knows where the bod­ies are buried. We op­ti­mized for out­put, and abol­ished ap­pren­tice­ship. A few years from now, we’ll won­der where all the se­niors are. We shot them. Nobody will re­mem­ber.

And yet…

Somewhere in your infrastructure is a cron job. It runs at 3am. It has been running since 2016. It does something critical. You couldn’t tell me exactly what, but you know the one person who could, and they left in 2019. The comment at the top says # DO NOT CHANGE!!! Ask Ben. Ben is not reachable. Every roadmap planning session for the last four years has included “modernize legacy cron” as a candidate initiative. It has never made the cut. You have personally removed it from the list twice.

Someone keeps it run­ning. Her name is Sara. You don’t know this.

She’s in her mid-50s. She didn’t go to Captainpalooza. She used to work from a small office three streets from headquarters. Somebody closed it last year to save money. The ship was the closest place with a desk and a network connection, so she packs a lunch now and takes the gangway down to a cabin belowdecks. Nobody on the ship knows she’s there. Remember Ben? Well, she inherited the cron job from Ben, who had mentored her since 1998.

She knows Ben passed a few years back. She went to his fu­neral. You don’t know this.

When the job gets stuck, which it does reg­u­larly, she gives it a nudge and it tries again. The phone rings. She ac­knowl­edges the is­sue. She gives the nudge. The job de­pends on a mod­ule that’s been lost to time. Well, al­most, be­cause she has a copy on a USB stick she found in Ben’s desk af­ter his pass­ing. No agent has touched it. None ever will.

She’s not the safest per­son in the in­dus­try. She’s the shape of what you can­not touch. She is every piece of in­sti­tu­tional knowl­edge your trans­for­ma­tion just deleted, walk­ing around in a fifty-five-year-old body. She came up through the ap­pren­tice­ship you abol­ished: Ben, 1998, the USB stick. She is the pipeline. When she dies, the thing that pro­duces peo­ple like her is al­ready gone. You killed it three years ago. You will not be able to hire her re­place­ment, be­cause you broke the ma­chine that makes her.

She’s the man tun­nel­ing un­der Mordor with a spoon. The spoon is hers. So is the tun­nel. Nobody else wants the spoon or the tun­nel, and when she dies, the cron job dies, salaries stop be­ing paid, a com­pany of 30,000 souls will need to fig­ure out how to pay every­body, and there will be only one an­swer: hire some­one with a spoon. You won’t find them. You made sure of that.

The cron job pays salaries. You don’t know this.

The Guy at the party is still wait­ing for an an­swer. I’m too drunk now to lie. I tell him: AI did­n’t take our jobs. Greed did. Same greed that moved fac­to­ries to Bangladesh and keeps slaves in cobalt mines in the Congo, wear­ing a new mask. Tell the nephew to do some­thing else. Anything. It won’t save him ei­ther, but at least he won’t have to pre­tend the thing de­stroy­ing his life is a ro­bot.

Except Sara. Below decks, with her USB stick. They can’t come for her be­cause they don’t know she’s there.

The rest of us are above deck, won­der­ing why the masts are up­side down, and what that doll over there does.

The doll catches fire.

Introducing Google Cloud Fraud Defense, the next evolution of reCAPTCHA

cloud.google.com

The agen­tic web — where au­tonomous AI agents rea­son, plan, and ex­e­cute com­plex trans­ac­tions us­ing the open web and in­dus­try stan­dard pro­to­cols — aims to cre­ate an au­tonomous cus­tomer ex­pe­ri­ence. While these agents can sig­nif­i­cantly en­hance on­line in­ter­ac­tions, they also in­tro­duce new abuse and fraud vec­tors, cre­at­ing unique chal­lenges for se­cu­rity plat­forms.

This rise in so­phis­ti­cated au­toma­tion re­quires a fun­da­men­tal shift in risk man­age­ment. Today at Google Cloud Next, we are launch­ing Google Cloud Fraud Defense, a trust plat­form for the agen­tic web. As the next evo­lu­tion of re­CAPTCHA, Fraud Defense is a com­pre­hen­sive plat­form de­signed to ver­ify the le­git­i­macy of bots, hu­mans, and AI agents, pro­vid­ing busi­nesses with the in­tel­li­gence needed to se­cure their dig­i­tal in­ter­ac­tions and com­merce.

Agentic ac­tiv­ity in the Fraud Defense dash­board.

As part of our mis­sion to en­able a safe agen­tic web, Fraud Defense in­tro­duces a pow­er­ful suite of ca­pa­bil­i­ties that al­low cus­tomers to mea­sure and con­trol agen­tic ac­tiv­ity on their web­sites. By us­ing the same global sig­nals that pro­tect Google’s own ecosys­tem, busi­nesses can now en­able trusted ex­pe­ri­ences for both hu­man users and AI agents alike.

Creating poli­cies for agen­tic traf­fic in the Fraud Defense pol­icy en­gine.

These new ca­pa­bil­i­ties in­clude:

Agentic activity measurement: A new dashboard to help you measure and understand agentic activity. We are integrating with industry standards such as Web Bot Auth and SPIFFE, as well as using traditional methods, to identify, classify, and analyze agentic traffic, connecting agent and human identities to better understand risk and trust.


Agentic pol­icy en­gine: To pro­vide you with gran­u­lar con­trol at dif­fer­ent stages of the end user in­ter­ac­tion across the en­tire jour­ney, Fraud Defense’s agen­tic pol­icy en­gine al­lows you to al­low and block agents and users based on con­di­tions that in­clude risk scores, au­toma­tion types, and agent iden­tity.
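The announcement doesn't publish the policy engine's interface, but the behavior it describes can be sketched as a plain rule evaluator. Everything below is a minimal illustration under assumed names (Request, evaluate, the score thresholds); none of it is Google's actual API.

```python
# Illustrative sketch only: these class and field names are invented for
# this post and are not Google's Fraud Defense API. They model the
# described behavior: allow or block based on risk scores, automation
# types, and agent identity, with a human-presence challenge in between.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    risk_score: float              # 0.0 (trusted) .. 1.0 (almost certainly fraud)
    automation_type: str           # "human", "verified_agent", or "unverified_bot"
    agent_identity: Optional[str]  # e.g. a Web Bot Auth-verified agent name

def evaluate(req: Request) -> str:
    if req.risk_score >= 0.9:      # block high risk regardless of identity
        return "block"
    if req.automation_type == "verified_agent" and req.agent_identity:
        return "allow"             # known, verified agents pass silently
    if req.automation_type == "unverified_bot" or req.risk_score >= 0.5:
        return "challenge"         # require a human in the loop
    return "allow"

print(evaluate(Request(0.2, "human", None)))               # allow
print(evaluate(Request(0.3, "verified_agent", "shopper"))) # allow
print(evaluate(Request(0.6, "unverified_bot", None)))      # challenge
```

A real engine would evaluate many such rules per stage of the journey and feed outcomes back into the risk model; the point here is only the shape of the decision: score, automation type, and identity in, an allow/block/challenge verdict out.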

AI-resistant challenge: As we identify potentially fraudulent behavior from agents, we enable application providers to deter and mitigate malicious requests by requiring a human in the loop via the new QR code-based challenge. This AI-resistant mitigation challenge to prove human presence is designed to make automated fraud economically unviable.

New QR-code chal­lenge in a shop­ping web­site.

re­CAPTCHA will con­tinue to be the core bot de­fense pil­lar of the broader Fraud Defense plat­form. Existing re­CAPTCHA cus­tomers are au­to­mat­i­cally Fraud Defense cus­tomers, with no mi­gra­tion re­quired, no ac­tion needed, and no change to pric­ing. Your ex­ist­ing site keys and in­te­gra­tions re­main ex­actly as they are to­day.

The trust plat­form for the agen­tic web

At Google Cloud, we be­lieve pre­vent­ing fraud and abuse in the agen­tic web should fun­da­men­tally re­sult in a sim­pler cus­tomer ex­pe­ri­ence. Fraud Defense uses a three-pronged ap­proach to help en­able a safe agen­tic web and drive busi­ness growth:

1. Preventing evolving threats

We protect your business with the same fraud intelligence that secures many of Google's services. As threats shift from bot automation and invalid traffic to agent takeover and large-scale, AI-driven synthetic identity fraud, Fraud Defense identifies emerging threats before they reach your site.

This unrivaled visibility, built upon a massive fraud intelligence graph that already protects 50% of Fortune 100 companies and over 14 million domains globally, provides a level of collective immunity and verified trust that local data alone cannot match.

2. Securing the customer journey

Attackers don't target endpoints in isolation; they target digital journeys. This is even more true in the agentic web, as agents are being tasked with end-to-end journeys. Fraud Defense provides a unified view of risk — from registration and login to payment and checkout.

By cor­re­lat­ing teleme­try across the en­tire life­cy­cle, our uni­fied trust model iden­ti­fies com­plex, multi-stage fraud cam­paigns that dis­con­nected point so­lu­tions miss. This holis­tic view has demon­strated a 51% av­er­age re­duc­tion in ac­count takeover (ATO) by ac­cu­rately dis­tin­guish­ing be­tween le­git­i­mate cus­tomer ac­tiv­ity and so­phis­ti­cated abuse.

3. Accelerating business growth

In the agentic economy, friction kills conversion. Fraud Defense is designed to be invisible for the majority of users, replacing disruptive puzzles with silent background verification. By using our intelligent trust model, we allow you to surgically block malicious bots, humans, and agents, while confidently welcoming legitimate users, including AI shopping assistants that drive a projected 25% increase in average order value, according to the 2025 Shopify Retail Report.

Learn more about how Fraud Defense works

We in­vite you to join us at Next 26 to talk about new ca­pa­bil­i­ties de­signed to help pro­tect you as you con­tinue your jour­ney on the agen­tic web. While you’re there, be sure to at­tend our break­out ses­sion and visit our demo pod, where you can see Fraud Defense in ac­tion and learn more di­rectly from our ex­perts. We look for­ward to meet­ing you there and dis­cussing how we can safe­guard your or­ga­ni­za­tion’s fu­ture in this chang­ing land­scape.

To take the next step in your jour­ney to the agen­tic web, please check out the Fraud Defense web­site and log into the con­sole. You can fol­low all of our se­cu­rity an­nounce­ments at Next 26 here.

Security & Identity

Google Cloud Next

Inkscape 1.4.4 Release Notes | Inkscape

inkscape.org

Released on May 6, 2026.

Contents

1 Changes and Bug Fixes
1.1 Highlights
1.2 Crash Fixes
1.3 General User Interface
1.3.1 Canvas
1.3.2 General
1.3.3 Clipping
1.3.4 Copy / Paste
1.4 Dialogs
1.4.1 Fill and Stroke Dialog
1.4.2 Layers and Objects Dialog
1.4.3 Object Properties Dialog
1.4.4 Preferences Dialog
1.4.5 Welcome Dialog
1.5 Tools
1.5.1 Gradient Tool
1.5.2 Star / Polygon Tool
1.5.3 Text Tool
1.6 Keyboard Shortcuts
1.7 Palettes
1.8 Templates
1.9 Command Line
1.10 Windows-specific Fixes
1.11 Improvements for Development / Deployment / Testing
2 Known Issues
3 Translations
3.1 Contributing to interface translations
4 Documentation
4.1 Contributing to documentation and documentation translation

Get Inkscape 1.4.4!

Changes and Bug Fixes

Highlights

Inkscape 1.4.4 is a main­te­nance and bug­fix re­lease, which brings you

20 crash fixes, among them for three nasty bugs where Inkscape would­n’t even start

al­most 20 bug fixes

6 per­for­mance im­prove­ments

a new palette

a new button for rotating stars and polygons into their ‘neutral’ or ‘upright’ position

27 up­dated in­ter­face trans­la­tions

15 up­dated doc­u­men­ta­tion trans­la­tions

in­stal­la­tion files for Windows on Arm

Like its pre­de­ces­sor, Inkscape 1.4.4 is also a bridge re­lease in the sense that it can be used to con­vert the planned Inkscape 1.5 mul­ti­page file for­mat to the pre-1.5 mul­ti­page for­mat. Versions lower than Inkscape 1.4.3 will not be able to in­ter­pret pages cre­ated in Inkscape ver­sions 1.5 and up­wards. Opening a doc­u­ment in Inkscape 1.4.3 and sav­ing it will con­vert it to the cur­rent (‘old’) page for­mat (MR #7608).

Background: While the ‘old’ format of pages in Inkscape is a custom addition that only works in Inkscape, the new format will make use of the svg:view element, which is standardized and can work in other SVG viewers, too. Find more information about this in MR #7525.

Crash Fixes

Fixed a crash …

… try­ing to start Inkscape when there are two en­tries for the same file in the list of re­cently used files (Bug 6002, MR #7750)

… try­ing to start Inkscape on Windows, when two re­cently opened files are in the same folder on dif­fer­ent dri­ves (Bug #6028, MR #7754)

… when start­ing Inkscape with a graph­ics tablet plugged in (Bug #5618, MR #7665)

… when cre­at­ing a new page (Bug #5904, MR #7417)

… when un­do­ing the cre­ation of a new doc­u­ment page (Bug #6060, MR #7787)

… when clos­ing the Unicode Characters di­a­log (Bug #6014, MR #7853)

… when try­ing to open some spe­cific PDF files (Bug #6021, MR #7690)

… when try­ing to open an SVG file with bro­ken mark­ers (Bug #6040, MR #7768)

… when open­ing or clos­ing cer­tain SVG files with the Export di­a­log open (Bug #5141, MR #7139)

… when try­ing to open an SVG file with a bro­ken Rotate Copies Live Path Effect cre­ated with Inkscape older than ver­sion 1.2 (Bug #5991, MR #7782)

… when set­ting the width or height field of an im­ported raster im­age with locked width/​height ra­tio to zero or no value (Bug #5428, MR #7139)

… when us­ing the Tweak tool with an empty text ob­ject in the se­lec­tion (Bug #5918, MR #7474)

… when us­ing the XML ed­i­tor to delete the path data (‘d’) of a con­nec­tor cre­ated with the Connector Tool (MR #7772)

… when try­ing to undo a line cre­ated with the Connector Tool un­der cer­tain cir­cum­stances (Bug #5949, MR #7510)

… when try­ing to use the Connector Tool to con­nect two ob­jects that over­lap (Bug #5314, MR #7535)

… when se­lect­ing an ob­ject with the PowerStroke Live Path Effect (Bug #5997, MR #7723)

… when try­ing to ap­ply the Corners LPE to a group (which is­n’t pos­si­ble) (Bug #5984, MR #7845)

… when try­ing to edit a gra­di­ent af­ter re­mov­ing one of its stops by us­ing the XML Editor (Bug #1299, MR #7532)

… when try­ing to use Break Apart with cer­tain com­plex paths (Bug #6067, MR #7800)

… when try­ing to trace a raster im­age with the max­i­mum num­ber of traces (Bug #5475, MR #7880)

General User Interface

Canvas

When us­ing the Layers and Objects di­a­log to move ob­jects be­tween groups, the bound­ing boxes on the can­vas will now show the cor­rect size and po­si­tion again (Bug #5996, MR #7704).

When zoom­ing in on a doc­u­ment with lots of paths, Inkscape is much faster now (Bug #6110, MR #7876).

General

Graphics that con­tain the ver­sion num­ber (1.4.4) have been up­dated for the new re­lease (MR #7855).

Clipping

Releasing an Inverse Clip (Live Path Effect) will no longer leave the clip path in­vis­i­ble (Bug #5966, MR #7521).

Copy / Paste

Copy-pasting large num­bers of ob­jects with gra­di­ents is faster now (Bug #2403, MR #7856).

When copying something to the clipboard, Inkscape could randomly be very slow with certain clipboard manager setups or with PovRay in use, and pasted content could sometimes be in the wrong format (MR #7858).

Dialogs

Fill and Stroke Dialog

Grand Theft Oil Futures

paulkrugman.substack.com

Source: CNBC, Financial Times, BBC, Reuters

At this point it’s al­most rou­tine: Almost every time Donald Trump makes a ma­jor an­nounce­ment about the Iran War, that an­nounce­ment is pre­ceded — some­times by only a few min­utes — by huge and hugely prof­itable bets in the oil mar­ket.

The in­flu­en­tial Kobeissi Letter doc­u­ments the lat­est ex­am­ple:


BREAKING: According to our analysis, ~$920 million worth of crude oil shorts were taken 70 minutes before an Axios report claimed the US and Iran were near a “14-point” deal to end the war.

At 3:40 AM ET to­day, nearly 10,000 con­tracts worth of crude oil shorts were taken with­out any ma­jor news.

This is equiv­a­lent to ~$920 mil­lion in no­tional value, an un­usu­ally large trade for 3:40 AM ET.

At 4:50 AM ET, just 70 minutes later, Axios reported that the US is “close” to a “memorandum of understanding” to end the Iran War.

By 7:00 AM ET, oil prices had fallen over 12%, with these crude oil shorts gaining approximately $125 million.

Minutes later, Iran launched the “Persian Gulf Strait Authority” and oil prices surged 8%.

What just hap­pened?
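The quoted figures are easy to sanity-check. A NYMEX WTI crude oil futures contract covers 1,000 barrels, so the implied price is about $92 a barrel; the short sketch below is plain arithmetic, with the $92 price an inference rather than a number stated in the report.

```python
# Sanity check on the figures quoted above. A NYMEX WTI futures contract
# covers 1,000 barrels; the ~$92/barrel price is inferred from the $920M
# notional, not stated in the report.
contracts = 10_000
barrels_per_contract = 1_000
price = 92.0  # USD per barrel (assumed)

notional = contracts * barrels_per_contract * price
print(f"notional: ${notional:,.0f}")  # $920,000,000

# Profit on the short position if prices fall 12% before it is covered:
profit = notional * 0.12
print(f"gain on a 12% drop: ${profit:,.0f}")  # $110,400,000
```

That lands somewhat under the +$125 million the letter claims, which is plausible if the shorts were layered at several prices; the orders of magnitude agree.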

As the BBC among others has documented, this isn't the first time, or the second time, that this has happened. Again and again, just before Trump makes announcements that raise hopes about the reopening of the Strait of Hormuz, one or more “whales,” very large traders, sell large quantities of oil futures, almost instantly reaping big profits as prices fall.

What’s truly re­mark­able is that this keeps hap­pen­ing even though the pat­tern has be­come fa­mil­iar. This tells us two things: The Trump ad­min­is­tra­tion is mak­ing no real ef­fort to crack down on who­ever is trad­ing us­ing in­side in­for­ma­tion, and these in­side traders are op­er­at­ing with a com­plete sense of im­punity, as­sured that they can get away with it.

The stench of cor­rup­tion is over­whelm­ing. Yet aside from the raw cor­rup­tion, these in­ci­dents also raise a larger ques­tion. The in­sid­ers ripped off the par­ties who sold fu­tures to them at what turned out to be very un­fa­vor­able prices to the sell­ers. What broader dam­age does this kind of unchecked in­sider trad­ing do?

There’s both a nar­row and a broad an­swer.

The nar­row an­swer in­volves eco­nomic ef­fi­ciency. How is the func­tion­ing of the econ­omy af­fected by the re­al­iza­tion that some­body — it’s not hard to make guesses, but we don’t know for sure — is trad­ing oil fu­tures based on ad­vance knowl­edge about what will soon ap­pear on Truth Social or Fox News?

It took me a while to fig­ure this out. But I think I have an an­swer.

First, ask your­self what pur­pose is served by the oil fu­tures mar­ket. Unlike the pre­dic­tion mar­kets Polymarket and Kalshi, the oil fu­tures mar­ket is not in­tended to be mainly a ve­hi­cle for gam­bling. Instead, it is a mar­ket that serves to re­duce risk through hedg­ing.

Here’s how it works. There are peo­ple and in­sti­tu­tions, such as oil pro­duc­ers, who will need to sell oil at a fu­ture date. They want to lock in the price to­day on those fu­ture sales. There are also peo­ple and in­sti­tu­tions, such as air­lines, who have a fu­ture need for oil and would like to lock in the price to­day. Thus the fu­tures mar­ket lets both sell­ers and buy­ers of oil elim­i­nate a ma­jor source of risk — fluc­tu­a­tions in the price of oil. This re­duces un­cer­tainty in the econ­omy as a whole.
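A worked example makes the mechanism concrete. Suppose an airline locks in a (hypothetical) $80/barrel futures price on 100,000 barrels it must buy next month: whatever the spot price does, the futures gain or loss offsets the change in the fuel bill, so the effective cost is fixed.

```python
# Worked example of the hedge described above. All numbers are hypothetical.
futures_price = 80.0   # USD/bbl locked in today via a long futures position
barrels = 100_000      # next month's fuel need

for spot in (70.0, 80.0, 95.0):  # three possible spot prices at delivery
    fuel_cost = spot * barrels                       # what the airline pays for fuel
    futures_pnl = (spot - futures_price) * barrels   # gain/loss on the futures
    effective_cost = fuel_cost - futures_pnl
    print(f"spot ${spot:.0f}: effective cost ${effective_cost:,.0f}")
    # effective cost is $8,000,000 in every scenario
```

The oil producer on the other side of the trade gets the mirror image: a guaranteed sale price, regardless of where spot ends up.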

But what if there are sub­stan­tial play­ers in the fu­tures mar­ket with in­side in­for­ma­tion? Then if you are, say, a cor­po­ra­tion try­ing to lock in the price of oil you plan to buy next month, you may not be mak­ing a mu­tu­ally ben­e­fi­cial deal with fu­ture sell­ers. You may, in­stead, be be­ing played for a sucker — pay­ing what in ret­ro­spect will have been an ex­ces­sive price — by peo­ple who know what’s about to ap­pear in the pres­i­den­t’s so­cial me­dia feed.

The same could ap­ply to sell­ers of oil fu­tures, al­though the ex­am­ples of in­sider trad­ing we know about in­volved Trump in­sid­ers get­ting ahead of falling, not ris­ing, prices.

Either way, the ef­fect of traders’ sus­pi­cion that they may be losers in a rigged game will be to make them re­luc­tant to play at all — re­luc­tant ei­ther to buy or to sell oil fu­tures. And this will mean los­ing the risk-re­duc­ing ben­e­fits of a prop­erly func­tion­ing fu­tures mar­ket.

Now, in­sider trad­ing of oil fu­tures prob­a­bly is­n’t big enough to do crit­i­cal dam­age to those mar­kets. But it does do dam­age, which hurts all of us, not just the buy­ers who got stuck with the im­me­di­ate losses.

And be­yond the nar­row eco­nomic losses, in­sider trad­ing on oil is part of the broader rise of what we can call the pre­da­tion econ­omy.

Under Trump II, cor­rup­tion runs ram­pant. Success in busi­ness de­pends not on what you know but on who you know, and there are no rules be­yond hav­ing — and, ob­vi­ously, buy­ing — the right con­nec­tions.

This is bad for every­one who does­n’t have those con­nec­tions. It’s bad for eco­nomic growth. And it un­der­mines the moral ba­sis of the econ­omy and so­ci­ety as a whole. It’s the path of how a coun­try slides into third-world sta­tus.

I’ll have much more to say about the pre­da­tion econ­omy in fu­ture posts.

MUSICAL CODA

No posts

From Supabase to Clerk to Better Auth

blog.val.town

In 2023, I wrote about how Val Town moved away from Supabase and to­ward a more con­ven­tional data­base setup. We were us­ing a lot of Supabase’s func­tion­al­ity, in­clud­ing their au­then­ti­ca­tion. So when it came time to move we found equiv­a­lents: Render for the data­base, and Clerk for au­then­ti­ca­tion. But life came at us fast, and by late 2023 we had an is­sue filed: get off of Clerk. That is­sue was fi­nally closed a month ago, when we switched to Better Auth.

Some im­por­tant con­text is that Clerk is a ma­jor suc­cess. They just raised 50 mil­lion dol­lars and they have lots of sat­is­fied users. Heck, in re­lated news Supabase raised 100 mil­lion dol­lars at a 5 bil­lion dol­lar val­u­a­tion. Congratulations to both of them. I would make a ter­ri­ble ven­ture cap­i­tal­ist. Whatever opin­ions and ex­pe­ri­ences I hold about au­then­ti­ca­tion and row level se­cu­rity are sec­ondary to these num­bers and proof. You can’t ar­gue with suc­cess.

But still, I am happy to have closed that is­sue and switched to Better Auth. It’s been a tough ex­pe­ri­ence, with a lot of workarounds, bugs, and out­ages. The ar­chi­tec­ture of Val Town sharply con­flicted with Clerk’s ex­pec­ta­tions.

The core is­sue

The core issue is that Clerk tried to be your users table and your sessions table. I think they've been shifting away from saying this, but it started from a pretty extreme place: there's a 2021 blog post titled “Consider dropping your users table”, and a YouTube video from 2023 called “DELETE your Users table”. I strongly suggest you don't!

There are two big prob­lems with farm­ing out your users table to a third-party ser­vice.

The first is that Clerk was a pretty bad replacement for a users table, because it was heavily rate-limited and not very reliable. When we initially switched, I assumed we could load user data from Clerk's API whenever we needed to. After all, we want to know things like user settings, avatar URLs, and emails. Clerk's SDK made this pretty convenient: the rootAuthLoader, the thing that handles auth for your whole application, had a nice little option called loadUser which would do the request for you. Worked great in development. In production, the rate limit for that endpoint was five requests per second. For the whole account, across all users. Tough! A pretty bad footgun that we discovered in production; it was eventually fixed by removing the option.

The rate lim­it­ing hit us es­pe­cially hard for the so­cial as­pect of Val Town. For ex­am­ple, let’s say you have a so­cial web­site, so a lot of pages have lists of con­tent from other users, and user­names and avatars to iden­tify them. Clerk’s de­fault UIs are based on the as­sump­tion that a user only sees their own avatar and needs their own set­tings and in­for­ma­tion, and they can get all of that stuff from their nifty JWT to­ken. Social web­sites like Val Town com­pletely break this as­sump­tion, and the ad­vice was to sync avatars and other in­for­ma­tion be­tween Clerk and our users table: so now we have two au­thor­i­ties for that in­for­ma­tion, and the com­plex­ity of two users ta­bles.

So we had to sync Clerk’s data to our data­base by us­ing web­hooks, which meant that sign­ing up was con­vo­luted and tricky - for a few mo­ments, a user has a Clerk ac­count but no Val Town data­base row. Or, be­cause our plat­form re­quires user­names, users can be in a state where they have a Clerk ac­count, a data­base row, but their ac­count is in­com­plete. Our user set­tings had to be split be­tween things that Clerk con­trolled, like auth strate­gies, and things that we needed, like user­names and ed­i­tor set­tings.

The second is that Clerk became a single point of failure for all our user sessions. Cookie-based user sessions are usually short-lived with constant refreshing: that way they can be invalidated quickly. But that also means that every few minutes users need to swap their session cookies for new ones. So when someone's login session needed to be refreshed, a subdomain of Val Town passed the request along to Clerk, which did the refreshing. We didn't have a sessions table or any responsibility over sessions.

That's great if you're trying to avoid any responsibility for authentication, but on the other hand, if Clerk goes down, the whole website goes down. Clerk outages don't just break the login & logout flow; they make the site unusable to people who are already logged in. And Clerk went down pretty often, and for long periods of time. Since May 2025 it's been teetering between two and three nines of uptime. There isn't data from before then, but I remember many times that we had a broken site and no way to fix it because of this single point of failure.

A hard lesson you learn building a complex system is that its reliability can be no better than that of its least reliable critical part.
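Putting numbers on that lesson: if failures are roughly independent, the availability of a chain of hard dependencies is the product of the parts' availabilities, so the whole is worse than any single link. The figures below are illustrative, with the dependency's taken from the "two and three nines" range mentioned above.

```python
# Availability of a serial dependency chain: the site is up only when every
# critical part is up. With independent failures, availabilities multiply.
site = 0.999    # your own infrastructure at three nines (illustrative)
clerk = 0.995   # a dependency between two and three nines (illustrative)
combined = site * clerk
print(f"combined availability: {combined:.4%}")  # 99.4005%, worse than either
```

Each additional hard dependency multiplies in another factor below 1.0, which is why pulling session management in-house removes a whole term from the product.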

Besides these two major issues, there were other bugs and problems we encountered. Most got fixed eventually, but I spent a lot of time battling the “Stale Issue Bot” to keep it from auto-closing them.

Three-ish years

If it was so bad, why did­n’t we switch away im­me­di­ately?

First of all, even though this is the second “switching from X to Y” article I've written, I'm not trying to make it a habit. Making decisions and sticking with them is good for development velocity and team sanity. We're not trying to rewrite Val Town any more than is absolutely necessary. And writing critique is less fun and positive than building.

And Clerk did some things well. They pro­vided SDKs for all the tech we were us­ing: Remix, Fastify, and Express. They did a de­cent job of keep­ing up with the churn of those frame­works, a task that I know is a full-time job. And their ad­min­is­tra­tion and anti-abuse mea­sures were de­cent at help­ing us solve cus­tomer sup­port is­sues and keep scam­mers at bay.

Where Clerk definitely shines is relatively simple, frontend-heavy apps that don't have a social component, so they don't need a users table. It was incredibly easy to get started, it's affordable, and the Clerk dashboard is pretty nice.

And there aren't a ton of great options for authentication. The bar for a Clerk replacement was actually pretty high: a lot of open source auth solutions are ancient and semi-abandoned. Authentication-as-a-service platforms carried vendor risk and potentially the same problems we had with Clerk. The right level of technical control is hard to nail. We don't want to build authentication from scratch and open Val Town up to embarrassing new vulnerabilities, but we also don't want to offload too much responsibility to a provider. And we're definitely not trusting third-party session management again.

Better Auth en­ters the frame

Better Auth checked a lot of boxes right out of the gate: high code qual­ity, good in­te­gra­tions with dif­fer­ent frame­works, and truly us­able as an in­de­pen­dent open source pro­ject.

There still is ven­dor risk: it’s a big, com­plex code­base de­vel­oped mostly by one com­pany. There’s al­ways ven­dor risk. But we are no longer de­pen­dent on a third party stay­ing on­line in or­der for ses­sions & user auth to work.

A close sec­ond place was AuthKit from WorkOS. I trust WorkOS and AuthKit is in­cred­i­bly slick, but af­ter bounc­ing be­tween two ven­dors, it was im­por­tant to me to find some­thing that could work in­de­pen­dently and was open source at the core.

I find Better Auth's dashboard and paid add-ons to be really clever, too. We manage all of our data, and a plugin provides an API on our site that lets their dashboard pull information and do some light user administration. Better Auth's paid service (called ‘Infrastructure’) is basically stateless in the way that we use it, and uninvolved in session management.

In short, so far it re­ally has been bet­ter.

And re­luc­tantly I have to hand it to the LLMs here: with the aug­men­ta­tion of the ro­bots, we were able to take the more com­plex route of sup­port­ing both Better Auth and Clerk for a tran­si­tional pe­riod of two weeks. Every end­point that han­dled au­then­ti­ca­tion would ac­cept ei­ther kind of cookie, and users slowly moved over to Better Auth be­cause that was the kind of ses­sion that the sign-in page pro­vided. Like any­thing re­lated to se­cu­rity, close read­ing, rewrit­ing, and test­ing of all of the code was nec­es­sary to make sure we did­n’t self-own, and the even­tual pure-Bet­ter Auth auth was hand­writ­ten en­tirely.
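That transitional setup, in which every authenticated endpoint accepts either vendor's session cookie, can be sketched as a small resolver. The cookie names and lookup functions below are invented for illustration; Val Town's real middleware differs.

```python
# Hedged sketch of the dual-auth transition described above: check the new
# Better Auth session first, then fall back to a still-valid Clerk session.
# Cookie names and lookup functions are illustrative, not Val Town's code.
def resolve_user(cookies, better_auth_lookup, clerk_lookup):
    # Prefer the new session: the sign-in page only issues these, so traffic
    # migrates naturally as users sign in again.
    if token := cookies.get("better_auth_session"):
        if user := better_auth_lookup(token):
            return user
    # Otherwise accept a legacy Clerk session during the transition window.
    if token := cookies.get("clerk_session"):
        return clerk_lookup(token)
    return None

# Tiny in-memory stand-ins for the two session stores:
better_auth_sessions = {"ba-123": "alice"}.get
clerk_sessions = {"ck-456": "bob"}.get

print(resolve_user({"better_auth_session": "ba-123"},
                   better_auth_sessions, clerk_sessions))  # alice
print(resolve_user({"clerk_session": "ck-456"},
                   better_auth_sessions, clerk_sessions))  # bob
```

Once the Clerk share of traffic drops to zero, the fallback branch (and the vendor dependency with it) can be deleted outright.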

Better Auth also works pretty well with Vals: you can try out the Better Auth starter tem­plate to add au­then­ti­ca­tion to your code on Val Town.

I’ve learned a lot along the way. You re­ally do de­pend on up­stream providers for your up­time, and should think hard about how ex­posed you are to that risk. Products can be good for a lot of use-cases and re­ally suc­cess­ful and still not the right thing for your spe­cific prob­lem. The world of soft­ware changes quickly and the right so­lu­tion might not ex­ist at the mo­ment you need it, but might ap­pear a year later.

Halupedia

halupedia.com

BYD overtakes Tesla and Kia as the best-selling EV brand in key overseas markets

electrek.co

With over 7% market share, BYD is now the top-selling EV brand in the UK so far in 2026, surpassing Tesla, Kia, and Volkswagen to take the top spot. And it's not just the UK: BYD is outpacing the competition in several overseas markets.

BYD be­comes top-sell­ing EV brand in over­seas mar­kets

BYD sold 321,123 new en­ergy ve­hi­cles (NEVs) in April, in­clud­ing bat­tery elec­tric ve­hi­cles (BEVs) and plug-in hy­brids (PHEVs).

EV sales fell 23% to 156,944, while PHEV sales were down 28% at 157,156. Although overall NEV sales dropped 26% from last April, marking BYD's eighth consecutive month of lower sales than the year prior, overseas sales reached a new record.

BYD's NEV exports surged 70% year over year in April to 135,098. That's up from the previous record of 120,083, set in March 2026. Through the first four months of 2026, the company sold 456,263 vehicles overseas.

In ma­jor over­seas mar­kets, such as the UK, BYD is now the best-sell­ing EV brand so far in 2026. According to the lat­est SMMT reg­is­tra­tion data, the Chinese au­tomaker sold 12,754 elec­tric cars through April with over 7% mar­ket share.

“With fuel prices remaining high, more drivers are turning to electric vehicles as a smarter and more economical choice,” Bono Ge, BYD UK's Country Manager, said.

Ge said the company is happy to see the UK's EV market grow by 22% year-on-year, adding he's even more proud that BYD has become the UK's leading EV brand in “a little over three years.”

BYD cur­rently sells five EVs in the UK: the Dolphin Surf, Dolphin, Atto 3, Seal, and Sealion 7, with the Atto 2 set to go on sale soon. It also sells three DM-i PHEVs: the Seal U, Seal 6, and Sealion 5.

And it’s not just in the UK that the com­pany is emerg­ing as the most pop­u­lar EV maker. BYD was also the best-sell­ing EV brand in Australia, Brazil, and sev­eral other over­seas mar­kets in April.

In Australia, BYD's Sealion 7 was the top-selling EV last month with 1,780 deliveries, followed by the Geely EX5, Zeekr 7X, Tesla Model Y, and Kia EV5.

BYD also be­came the first Chinese brand to lead over­all ve­hi­cle sales in Brazil last month, sur­pass­ing Volkswagen, General Motors, and Hyundai. The com­pany sold 14,911 ve­hi­cles in April, about 80 more than Volkswagen.

The achieve­ment is im­pres­sive, given that BYD launched its first pas­sen­ger ve­hi­cle in Brazil in 2021. Volkswagen has been as­sem­bling ve­hi­cles in the coun­try since the 50s.

With plans to ramp up over­seas pro­duc­tion and to roll out new tech like its 5-minute Flash Charging, BYD said it aims to un­lock the full value of elec­tri­fi­ca­tion by pro­vid­ing af­ford­able, high-tech EVs.

FTC: We use in­come earn­ing auto af­fil­i­ate links. More.


If you visit 10HN only rarely, check out the best articles from the past week.

Visit pancik.com for more.