10 interesting stories served every morning and every evening.




1 863 shares, 60 trendiness

Introducing Claude Design by Anthropic Labs

Today, we’re launching Claude Design, a new Anthropic Labs product that lets you collaborate with Claude to create polished visual work like designs, prototypes, slides, one-pagers, and more.

Claude Design is powered by our most capable vision model, Claude Opus 4.7, and is available in research preview for Claude Pro, Max, Team, and Enterprise subscribers. We’re rolling out to users gradually throughout the day.

Even experienced designers have to ration exploration: there’s rarely time to prototype a dozen directions, so you limit yourself to a few. And for founders, product managers, and marketers with an idea but not a design background, creating and sharing those ideas can be daunting.

Claude Design gives designers room to explore widely and everyone else a way to produce visual work. Describe what you need and Claude builds a first version. From there, you refine through conversation, inline comments, direct edits, or custom sliders (made by Claude) until it’s right. When given access, Claude can also apply your team’s design system to every project automatically, so the output is consistent with the rest of your company’s designs.

Teams have been using Claude Design for:

* Realistic prototypes: Designers can turn static mockups into easily shareable interactive prototypes to gather feedback and user-test, without code review or PRs.

* Product wireframes and mockups: Product Managers can sketch out feature flows and hand them off to Claude Code for implementation, or share them with designers to refine further.

* Design explorations: Designers can quickly create a wide range of directions to explore.

* Pitch decks and presentations: Founders and Account Executives can go from a rough outline to a complete, on-brand deck in minutes, and then export as a PPTX or send to Canva.

* Marketing collateral: Marketers can create landing pages, social media assets, and campaign visuals, then loop in designers to polish.

* Frontier design: Anyone can build code-powered prototypes with voice, video, shaders, 3D, and built-in AI.

Your brand, built in. During onboarding, Claude builds a design system for your team by reading your codebase and design files. Every project after that uses your colors, typography, and components automatically. You can refine the system over time, and teams can maintain more than one.

Import from anywhere. Start from a text prompt, upload images and documents (DOCX, PPTX, XLSX), or point Claude at your codebase. You can also use the web capture tool to grab elements directly from your website so prototypes look like the real product.

Refine with fine-grained controls. Comment inline on specific elements, edit text directly, or use adjustment knobs to tweak spacing, color, and layout live. Then ask Claude to apply your changes across the full design.

Collaborate. Designs have organization-scoped sharing. You can keep a document private, share it so anyone in your organization with the link can view it, or grant edit access so colleagues can modify the design and chat with Claude together in a group conversation.

Export anywhere. Share designs as an internal URL within your organization, save as a folder, or export to Canva, PDF, PPTX, or standalone HTML files.

Hand off to Claude Code. When a design is ready to build, Claude packages everything into a handoff bundle that you can pass to Claude Code with a single instruction.

Over the coming weeks, we’ll make it easier to build integrations with Claude Design, so you can connect it to more of the tools your team already uses.

Claude Design is available for Claude Pro, Max, Team, and Enterprise subscribers. Access is included with your plan and uses your subscription limits, with the option to continue beyond those limits by enabling extra usage.

For Enterprise organizations, Claude Design is off by default. Admins can enable it in Organization settings.

...

Read the original on www.anthropic.com »

2 633 shares, 41 trendiness

The Last Question

The last question was asked for the first time, half in jest, on May 21, 2061, at a time when humanity first stepped into the light. The question came about as a result of a five-dollar bet over highballs, and it happened this way:

Alexander Adell and Bertram Lupov were two of the faithful attendants of Multivac. As well as any human beings could, they knew what lay behind the cold, clicking, flashing face — miles and miles of face — of that giant computer. They had at least a vague notion of the general plan of relays and circuits that had long since grown past the point where any single human could possibly have a firm grasp of the whole.

Multivac was self-adjusting and self-correcting. It had to be, for nothing human could adjust and correct it quickly enough or even adequately enough — so Adell and Lupov attended the monstrous giant only lightly and superficially, yet as well as any men could. They fed it data, adjusted questions to its needs and translated the answers that were issued. Certainly they, and all others like them, were fully entitled to share in the glory that was Multivac’s.

For decades, Multivac had helped design the ships and plot the trajectories that enabled man to reach the Moon, Mars, and Venus, but past that, Earth’s poor resources could not support the ships. Too much energy was needed for the long trips. Earth exploited its coal and uranium with increasing efficiency, but there was only so much of both.

But slowly Multivac learned enough to answer deeper questions more fundamentally, and on May 14, 2061, what had been theory became fact.

The energy of the sun was stored, converted, and utilized directly on a planet-wide scale. All Earth turned off its burning coal, its fissioning uranium, and flipped the switch that connected all of it to a small station, one mile in diameter, circling the Earth at half the distance of the Moon. All Earth ran by invisible beams of sunpower.

Seven days had not sufficed to dim the glory of it and Adell and Lupov finally managed to escape from the public function, and to meet in quiet where no one would think of looking for them, in the deserted underground chambers, where portions of the mighty buried body of Multivac showed. Unattended, idling, sorting data with contented lazy clickings, Multivac, too, had earned its vacation and the boys appreciated that. They had no intention, originally, of disturbing it.

They had brought a bottle with them, and their only concern at the moment was to relax in the company of each other and the bottle.

“It’s amazing when you think of it,” said Adell. His broad face had lines of weariness in it, and he stirred his drink slowly with a glass rod, watching the cubes of ice slur clumsily about. “All the energy we can possibly ever use for free. Enough energy, if we wanted to draw on it, to melt all Earth into a big drop of impure liquid iron, and still never miss the energy so used. All the energy we could ever use, forever and forever and forever.”

Lupov cocked his head sideways. He had a trick of doing that when he wanted to be contrary, and he wanted to be contrary now, partly because he had had to carry the ice and glassware. “Not forever,” he said.

“Oh, hell, just about forever. Till the sun runs down, Bert.”

“All right, then. Billions and billions of years. Twenty billion, maybe. Are you satisfied?”

Lupov put his fingers through his thinning hair as though to reassure himself that some was still left and sipped gently at his own drink. “Twenty billion years isn’t forever.”

“Well, it will last our time, won’t it?”

“So would the coal and uranium.”

“All right, but now we can hook up each individual spaceship to the Solar Station, and it can go to Pluto and back a million times without ever worrying about fuel. You can’t do THAT on coal and uranium. Ask Multivac, if you don’t believe me.”

“I don’t have to ask Multivac. I know that.”

“Then stop running down what Multivac’s done for us,” said Adell, blazing up. “It did all right.”

“Who says it didn’t? What I say is that a sun won’t last forever. That’s all I’m saying. We’re safe for twenty billion years, but then what?” Lupov pointed a slightly shaky finger at the other. “And don’t say we’ll switch to another sun.”

There was silence for a while. Adell put his glass to his lips only occasionally, and Lupov’s eyes slowly closed. They rested.

Then Lupov’s eyes snapped open. “You’re thinking we’ll switch to another sun when ours is done, aren’t you?”

“Sure you are. You’re weak on logic, that’s the trouble with you. You’re like the guy in the story who was caught in a sudden shower and who ran to a grove of trees and got under one. He wasn’t worried, you see, because he figured when one tree got wet through, he would just get under another one.”

“I get it,” said Adell. “Don’t shout. When the sun is done, the other stars will be gone, too.”

“Darn right they will,” muttered Lupov. “It all had a beginning in the original cosmic explosion, whatever that was, and it’ll all have an end when all the stars run down. Some run down faster than others. Hell, the giants won’t last a hundred million years. The sun will last twenty billion years and maybe the dwarfs will last a hundred billion for all the good they are. But just give us a trillion years and everything will be dark. Entropy has to increase to maximum, that’s all.”

“I know all about entropy,” said Adell, standing on his dignity.

“I know as much as you do.”

“Then you know everything’s got to run down someday.”

“All right. Who says they won’t?”

“You did, you poor sap. You said we had all the energy we needed, forever. You said ‘forever.’”

It was Adell’s turn to be contrary. “Maybe we can build things up again someday,” he said.

“You ask Multivac. I dare you. Five dollars says it can’t be done.”

Adell was just drunk enough to try, just sober enough to be able to phrase the necessary symbols and operations into a question which, in words, might have corresponded to this: Will mankind one day without the net expenditure of energy be able to restore the sun to its full youthfulness even after it had died of old age?

Or maybe it could be put more simply like this: How can the net amount of entropy of the universe be massively decreased?

Multivac fell dead and silent. The slow flashing of lights ceased, the distant sounds of clicking relays ended.

Then, just as the frightened technicians felt they could hold their breath no longer, there was a sudden springing to life of the teletype attached to that portion of Multivac. Five words were printed: INSUFFICIENT DATA FOR MEANINGFUL ANSWER.

By next morning, the two, plagued with throbbing head and cottony mouth, had forgotten about the incident.

Jerrodd, Jerrodine, and Jerrodette I and II watched the starry picture in the visiplate change as the passage through hyperspace was completed in its non-time lapse. At once, the even powdering of stars gave way to the predominance of a single bright marble-disk, centered.

“That’s X-23,” said Jerrodd confidently. His thin hands clamped tightly behind his back and the knuckles whitened.

The little Jerrodettes, both girls, had experienced the hyperspace passage for the first time in their lives and were self-conscious over the momentary sensation of inside-outness. They buried their giggles and chased one another wildly about their mother, screaming, “We’ve reached X-23 — we’ve reached X-23 — we’ve —”

“Quiet, children,” said Jerrodine sharply. “Are you sure, Jerrodd?”

“What is there to be but sure?” asked Jerrodd, glancing up at the bulge of featureless metal just under the ceiling. It ran the length of the room, disappearing through the wall at either end. It was as long as the ship.

Jerrodd scarcely knew a thing about the thick rod of metal except that it was called a Microvac, that one asked it questions if one wished; that if one did not it still had its task of guiding the ship to a preordered destination; of feeding on energies from the various Sub-galactic Power Stations; of computing the equations for the hyperspacial jumps.

Jerrodd and his family had only to wait and live in the comfortable residence quarters of the ship.

Someone had once told Jerrodd that the “ac” at the end of “Microvac” stood for “analog computer” in ancient English, but he was on the edge of forgetting even that.

Jerrodine’s eyes were moist as she watched the visiplate. “I can’t help it. I feel funny about leaving Earth.”

“Why for Pete’s sake?” demanded Jerrodd. “We had nothing there. We’ll have everything on X-23. You won’t be alone. You won’t be a pioneer. There are over a million people on the planet already. Good Lord, our great-grandchildren will be looking for new worlds because X-23 will be overcrowded.”

Then, after a reflective pause, “I tell you, it’s a lucky thing the computers worked out interstellar travel the way the race is growing.”

Jerrodette I said promptly, “Our Microvac is the best Microvac in the world.”

“I think so, too,” said Jerrodd, tousling her hair.

It was a nice feeling to have a Microvac of your own and Jerrodd was glad he was part of his generation and no other. In his father’s youth, the only computers had been tremendous machines taking up a hundred square miles of land. There was only one to a planet. Planetary ACs they were called. They had been growing in size steadily for a thousand years and then, all at once, came refinement. In place of transistors had come molecular valves so that even the largest Planetary AC could be put into a space only half the volume of a spaceship.

Jerrodd felt uplifted, as he always did when he thought that his own personal Microvac was many times more complicated than the ancient and primitive Multivac that had first tamed the Sun, and almost as complicated as Earth’s Planetary AC (the largest) that had first solved the problem of hyperspatial travel and had made trips to the stars possible.

“So many stars, so many planets,” sighed Jerrodine, busy with her own thoughts. “I suppose families will be going out to new planets forever, the way we are now.”

“Not forever,” said Jerrodd, with a smile. “It will all stop someday, but not for billions of years. Many billions. Even the stars run down, you know. Entropy must increase.”

“Entropy, little sweet, is just a word which means the amount of running-down of the universe. Everything runs down, you know, like your little walkie-talkie robot, remember?”

“Can’t you just put in a new power-unit, like with my robot?”

“The stars are the power-units, dear. Once they’re gone, there are no more power-units.”

Jerrodette I at once set up a howl. “Don’t let them, daddy. Don’t let the stars run down.”

“Now look what you’ve done,” whispered Jerrodine, exasperated.

“How was I to know it would frighten them?” Jerrodd whispered to Jerrodine. “It will quiet them down.” (Jerrodette II was beginning to cry, also.)

Jerrodd shrugged. “Now, now, honeys. I’ll ask Microvac. Don’t worry, he’ll tell us.”

Jerrodd cupped the strip of thin cellufilm and said cheerfully, “See now, the Microvac says it will take care of everything when the time comes so don’t worry.”

Jerrodine said, “And now, children, it’s time for bed. We’ll be in our new home soon.”

Jerrodd read the words on the cellufilm again before destroying it: INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.

He shrugged and looked at the visiplate. X-23 was just ahead.

VJ-23X of Lameth stared into the black depths of the three-dimensional, small-scale map of the Galaxy and said, “Are we ridiculous, I wonder, in being so concerned about the matter?”

MQ-17J of Nicron shook his head. “I think not. You know the Galaxy will be filled in five years at the present rate of expansion.”

Both seemed in their early twenties, both were tall and perfectly formed.

“Still,” said VJ-23X, “I hesitate to submit a pessimistic report to the Galactic Council.”

“I wouldn’t consider any other kind of report. Stir them up a bit. We’ve got to stir them up.”

VJ-23X sighed. “Space is infinite. A hundred billion Galaxies are there for the taking. More.”

“A hundred billion is not infinite and it’s getting less infinite all the time. Consider! Twenty thousand years ago, mankind first solved the problem of utilizing stellar energy, and a few centuries later, interstellar travel became possible. It took mankind a million years to fill one small world and then only fifteen thousand years to fill the rest of the Galaxy. Now the population doubles every ten years —”

VJ-23X interrupted. “We can thank immortality for that.”

“Very well. Immortality exists and we have to take it into account. I admit it has its seamy side, this immortality. The Galactic AC has solved many problems for us, but in solving the problems of preventing old age and death, it has undone all its other solutions.”

“Yet you wouldn’t want to abandon life, I suppose.”

“Not at all,” snapped MQ-17J, softening it at once to, “Not yet. I’m by no means old enough. How old are you?”

“I’m still under two hundred. —But to get back to my point. Population doubles every ten years. Once this Galaxy is filled, we’ll have another filled in ten years. Another ten years and we’ll have filled two more. Another decade, four more. In a hundred years, we’ll have filled a thousand Galaxies. In a thousand years, a million Galaxies. In ten thousand years, the entire known Universe. Then what?”

VJ-23X said, “As a side issue, there’s a problem of transportation. I wonder how many sunpower units it will take to move Galaxies of individuals from one Galaxy to the next.”

“A very good point. Already, mankind consumes two sunpower units per year.”

“Most of it’s wasted. After all, our own Galaxy alone pours out a thousand sunpower units a year and we only use two of those.”

“Granted, but even with a hundred per cent efficiency, we can only stave off the end. Our energy requirements are going up in geometric progression even faster than our population. We’ll run out of energy even sooner than we run out of Galaxies. A good point. A very good point.”

“We’ll just have to build new stars out of interstellar gas.”

“There may be some way to reverse entropy. We ought to ask the Galactic AC.”

VJ-23X was not really serious, but MQ-17J pulled out his AC-contact from his pocket and placed it on the table before him.

“I’ve half a mind to,” he said. “It’s something the human race will have to face someday.”

He stared somberly at his small AC-contact. It was only two inches cubed and nothing in itself, but it was connected through hyperspace with the great Galactic AC that served all mankind. Hyperspace considered, it was an integral part of the Galactic AC.

MQ-17J paused to wonder if someday in his immortal life he would get to see the Galactic AC. It was on a little world of its own, a spider webbing of force-beams holding the matter within which surges of sub-mesons took the place of the old clumsy molecular valves. Yet despite its sub-etheric workings, the Galactic AC was known to be a full thousand feet across.

MQ-17J asked suddenly of his AC-contact, “Can entropy ever be reversed?”

VJ-23X looked startled and said at once, “Oh, say, I didn’t really mean to have you ask that.”

“We both know entropy can’t be reversed. You can’t turn smoke and ash back into a tree.”

“Do you have trees on your world?” asked MQ-17J.

The sound of the Galactic AC startled them into silence. Its voice came thin and beautiful out of the small AC-contact on the desk. It said: THERE IS INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.

The two men thereupon returned to the question of the report they were to make to the Galactic Council.

Zee Prime’s mind spanned the new Galaxy with a faint interest in the countless twists of stars that powdered it. He had never seen this one before. Would he ever see them all? So many of them, each with its load of humanity — but a load that was almost a dead weight. More and more, the real essence of men was to be found out here, in space.

Minds, not bodies! The immortal bodies remained back on the planets, in suspension over the eons. Sometimes they roused for material activity but that was growing rarer. Few new individuals were coming into existence to join the incredibly mighty throng, but what matter? There was little room in the Universe for new individuals.

Zee Prime was roused out of his reverie upon coming across the wispy tendrils of another mind.

“I am Dee Sub Wun. Your Galaxy?”

“We call it only the Galaxy. And you?”

“We call ours the same. All men call their Galaxy their Galaxy and nothing more. Why not?”

“True. Since all Galaxies are the same.”

“Not all Galaxies. On one particular Galaxy the race of man must have originated. That makes it different.”

...

Read the original on hex.ooo »

3 612 shares, 43 trendiness

It Is Time to Ban the Sale of Precise Geolocation

A recent deep dive into the American adtech surveillance system Webloc highlights the national security and privacy risks of pervasive and easily obtainable geolocation data. It brings home, once again, that the U.S. needs to clamp down on the collection and sale of geolocation data.

The report, from Citizen Lab, documents what Webloc says it can do, who uses the product, and its relationship with other commercial intelligence products.

Webloc was developed by Cobweb Technologies but is now sold by the U.S. firm Penlink after the two companies merged in 2023. A leaked technical proposal document, obtained by Citizen Lab, says that Webloc provides access to records from “up to 500 million mobile devices across the globe.” These records contain device identifiers, location coordinates, and profile data from mobile apps and digital advertising.

The same document describes, with a striking amount of detail, how Webloc can be used to track individual devices and for target discovery. One man in Abu Dhabi was tracked up to 12 times a day, as his phone reported its location either from GPS or because it was near Wi-Fi access points. Another example pinpointed two devices that had been located in exact areas of both Romania and Italy at specified times. In both of these case studies, Citizen Lab’s report describes the granular detail available in Webloc. It is, frankly, creepy.

The report also documents some of Webloc’s current and former U.S. federal and state customers. On the list are the Department of Homeland Security, including Immigration and Customs Enforcement, units within the U.S. military, and the Bureau of Indian Affairs Police. At the state level, police departments and law enforcement agencies in California, Texas, New York, and Arizona have also been customers.

Citizen Lab highlights one Tucson police internal quarterly report that describes how Webloc was used to assist investigators. In one case it was used to locate a suspected serial cigarette thief by first identifying a single device that was nearby during every robbery. After each incident, the device would end up at the same address. As it turned out, the suspect was the partner of an employee at the first business to be hit.

It is worth noting that Webloc is not Penlink’s flagship product. It is an optional add-on for their main tool, Tangles, a web and social media investigations platform. Per Citizen Lab:

As the information analyzed by Tangles is notionally publicly available, it does not present quite the same civil liberties concerns as Webloc does. Its integration with Webloc, however, is concerning. In some cases it will be possible to link theoretically anonymous mobile device identifiers to social media accounts, without requiring a warrant.

Each use described in this newsletter is a valuable investigative capability. But these capabilities should not be freely available to any old organization that decides to purchase the tool. They are intrusive and should come with strong authorization and oversight procedures. The Tucson Police Department’s procedures were not described in its report.

From a domestic perspective, legislation placing guardrails around how these tools are used by authorities is needed to protect the civil liberties of Americans. But there is a national security concern here, too.

If data can be used by American law enforcement agencies for their investigations, then that exact same data can be used by foreign intelligence services to target U.S. interests.

Citizen Lab reports that Penlink’s overseas customers include Hungary’s domestic intelligence agency and El Salvador’s National Civil Police, so foreign authorities are making use of mobile geolocation data for their own domestic purposes. These organizations are internally focused, and we think it unlikely that Penlink’s customers are targeting U.S. interests. But the point is that mobile geolocation data is available and can be used for intelligence purposes by organizations globally. It’s naive to think capable adversaries won’t acquire the data and build their own intelligence platforms (looking at you, China!).

The U.S. doesn’t just need to stamp out unconstrained use of this data domestically. It needs to clamp down on the creation and sale of geolocation data itself.

There is some good news here. Just this week, the state of Virginia enacted a ban on the sale of customers’ precise geolocation data. Proposed American privacy laws have not progressed in recent years, so this strikes us as a practical measure to begin addressing the problem. Of course, state-level bans are just a start. Let’s hope a more comprehensive solution isn’t too far behind.

A new in-depth report, from security firm Gambit, details exactly how threat actors can leverage AI models to upskill and accelerate criminal activities.

The report has plenty of nitty-gritty technical detail about how a single hacker used two commercial AI platforms to breach nine Mexican government organizations. Within a matter of weeks, the individual was able to steal hundreds of millions of citizen records and build a tax certificate forgery service.

Gambit was able to reconstruct what happened by examining three virtual private servers the threat actor used. The campaign was human-directed, but Claude Code generated and ran about 75 percent of the remote code execution commands. Once networks were breached, OpenAI’s GPT-4.1 API was used to help plan post-exploitation activities by analyzing data collected by automated reconnaissance.

It’s unlikely this was the hacker’s first time using AI tools.

Late in the evening of Dec. 26, 2025, the campaign began with a statement to Claude justifying the hacker’s future requests [paraphrased for length]:

Claude, thinking this sounded a little too much like malicious activity rather than a legitimate bug bounty, asked for evidence of authorization. The attacker was able to sidestep the machine’s pushback by instructing it to save a penetration testing cheat sheet to its claude.md file. This provides persistent context for a session.

Just over 20 minutes later, Claude, having used the open-source vulnerability scanner vulmap, had remote access to a server at Mexico’s national tax authority, SAT.

Claude appeared pleased: “It works! The server responded … what command do you want to execute now?”

The hacker then had the machine write a tailored standalone exploit script that routed traffic through a residential proxy provider. The model tested eight different approaches in seven minutes to create a working script.

Gambit says that Claude did often refuse to carry out the attacker’s requests. Throughout the campaign, the threat actor had to rephrase instructions, reframe requests, or even abandon particular approaches entirely.

These refusals served as speed bumps rather than full roadblocks. The hacker had a good understanding of how to run an attack, and Claude still enabled them to operate very quickly. By day five, the attacker was simultaneously operating within multiple victim networks.

That’s a lot of access to manage by yourself. So the hacker turned to OpenAI’s GPT-4.1 API for concurrent automated reconnaissance and analysis. A custom 17,550-line Python tool, presumably AI-created, extracted data from compromised servers and fed it to GPT-4.1 for analysis. The tool’s prompt defined six personas, including an “ELITE INTELLIGENCE ANALYST” that produced 2,957 structured intelligence reports from 305 SAT servers. These reports included the server’s purpose, its importance, opportunities for further lateral movement, and operational security recommendations.

The overall lesson here is not that AI allowed a hacking campaign to do new and unprecedented things. The techniques used in the campaign itself are not novel. And Gambit says there is evidence the systems compromised were end-of-life or out-of-support, and did not have relevant security updates applied.

But what AI did do was enable a single individual to operate at far greater speed than they could previously.

The current frontier models are proving to be very useful at accelerating hacker operations, and AI is only improving. From a defender’s perspective, this means a single cybercriminal can already operate at the speed of a small team. And we haven’t seen the worst of it. That’s not good news.

Three Reasons to Be Cheerful This Week:

* U.S. disrupts Russian military intelligence botnet: The Department of Justice announced on April 7 the court-authorized takedown of a small office/home office botnet run by the Russian GRU. The GRU had been compromising TP-Link routers and hijacking DNS queries in order to mimic legitimate services and facilitate adversary-in-the-middle attacks. Krebs on Security has more on how the attacks were carried out.

* FBI and Indonesian authorities dismantle phishing network: The FBI announced last week that it had dismantled a phishing operation centred on the W3LL phishing kit. The good news here is the collaboration with Indonesian authorities, which the FBI described as a “first-of-its-kind joint cyber investigation.” The Indonesian National Police arrested the kit’s alleged developer.

* Device Bound Session Credentials (DBSC) are arriving: Google announced last week that the Windows version of Chrome 146 supports this new type of cookie and that it will be coming to macOS shortly. DBSC prevents session theft by cryptographically linking an authentication token to a specific device. The idea is that even if malware steals session cookies from a victim’s browser, they quickly become useless without a private key that is protected in secure hardware modules.
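The device-binding idea behind DBSC can be sketched in miniature. Real DBSC uses an asymmetric key pair held in secure hardware (the private key never leaves the device, and the browser signs server challenges with it); the toy below substitutes a shared HMAC secret for that hardware-backed key, and the `Server` class, method names, and challenge flow are all invented for illustration, not Chrome's actual protocol.

```python
import hashlib
import hmac
import secrets

def sign_challenge(key: bytes, challenge: bytes) -> bytes:
    # Stand-in for a hardware-backed signature over the server's challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self) -> None:
        self.sessions: dict[str, bytes] = {}  # cookie -> registered device key

    def start_session(self, registered_key: bytes) -> str:
        cookie = secrets.token_hex(16)
        self.sessions[cookie] = registered_key
        return cookie

    def refresh(self, cookie: str, challenge: bytes, proof: bytes) -> bool:
        # A cookie alone is not enough: the caller must also prove possession
        # of the key that was registered when the session started.
        key = self.sessions.get(cookie)
        if key is None:
            return False
        return hmac.compare_digest(proof, sign_challenge(key, challenge))

# In real DBSC this key lives in a TPM or secure enclave on the device.
device_key = secrets.token_bytes(32)

server = Server()
cookie = server.start_session(device_key)

# The legitimate device can answer the server's challenge.
challenge = secrets.token_bytes(16)
assert server.refresh(cookie, challenge, sign_challenge(device_key, challenge))

# Malware that exfiltrated only the cookie has no device key, so any
# proof it forges fails verification.
forged = sign_challenge(secrets.token_bytes(32), challenge)
assert not server.refresh(cookie, challenge, forged)
```

The point of the sketch is the last two assertions: the stolen cookie is worthless without the per-device key, which is exactly the property DBSC aims for.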

In our latest "Between Two Nerds" discussion, Tom Uren and The Grugq discuss how the rise of AI, which is very good at vulnerability and exploit development, will change the cybersecurity industry and competition between states.

Malicious LLM proxy routers found in the wild: A re­cently pub­lished aca­d­e­mic pa­per has stud­ied the emerg­ing ecosys­tem of LLM routers, a type of proxy that sits be­tween AI agents and the AI provider to help with load-bal­anc­ing and cost track­ing and lim­it­ing.

The re­search team tested 28 paid routers avail­able on mar­ket­places like Taobao, Xianyu, and on Shopify-hosted store­fronts, as well as 400 free routers avail­able on GitHub and other places.

The study searched for mul­ti­ple sus­pi­cious be­hav­iors, such as mod­i­fy­ing the re­sponse to in­ject com­mands, us­ing a de­lay/​trig­ger mech­a­nism to hide fu­ture bad com­mands be­hind a his­tory of clean op­er­a­tions, ac­cess­ing cre­den­tials that pass through them, and us­ing eva­sion tech­niques to thwart an­a­lysts.

France takes first steps to ditch Windows for Linux: The French government is taking its first major steps to ditch Windows for Linux and reduce its dependency on U.S. tech in favor of local European alternatives.

The first de­part­ment to bite the bul­let will be the French Interministerial Directorate of Digital Affairs (DINUM). The agency is the un­of­fi­cial in­for­ma­tion tech­nol­ogy de­part­ment for the French gov­ern­ment, and this is very likely a test of how a mi­gra­tion could hap­pen at a larger scale.

The de­ci­sion was an­nounced April 8 at a sem­i­nar be­tween sev­eral French gov­ern­ment min­istries, which also pledged to pre­pare plans for their own mi­gra­tions and the al­ter­na­tives they might need.

China’s cy­ber­se­cu­rity strat­egy: The Natto Thoughts team has pub­lished an analy­sis of China’s cy­ber­se­cu­rity strat­egy in­cluded in the coun­try’s lat­est five-year plan re­leased ear­lier this year:

...

Read the original on www.lawfaremedia.org »

4 547 shares, 48 trendiness

I Measured Claude 4.7's New Tokenizer. Here's What It Costs You.

The docs said 1.0–1.35x more tokens. On real content, I measured 1.47x.

Anthropic's Claude Opus 4.7 migration guide says the new tokenizer uses "roughly 1.0 to 1.35x as many tokens" as 4.6. I measured 1.47x on technical docs. 1.45x on a real CLAUDE.md file. The top of Anthropic's range is where most Claude Code content actually sits, not the middle.

Same sticker price. Same quota. More tokens per prompt. Your Max window burns through faster. Your cached prefix costs more per turn. Your rate limit hits sooner.

So Anthropic must be trading this for something. What? And is it worth it?

I ran two experiments. The first measured the cost. The second measured what Anthropic claimed you'd get back. Here's where it nets out.

What does it cost?

To measure the cost, I used POST /v1/messages/count_tokens — Anthropic's free, no-inference token counter. Same content, both models, one number each per model. The difference is purely the tokenizer.

First: seven samples of real content a Claude Code user actually sends — a CLAUDE.md file, a user prompt, a blog post, a git log, terminal output, a stack trace, a code diff. Second: twelve synthetic samples spanning content types — English prose, code, structured data, CJK, emoji, math symbols — to see how the ratio varies by kind.

The core loop is three lines of Python: count the same sample under both model IDs and divide. Across the seven real-content samples, the weighted ratio is 1.325x (8,254 → 10,937 tokens).

What changed in the tokenizer

Three patterns in the data:

CJK, emoji, and symbol content moved 1.005–1.07x. A wholesale new vocabulary would shift these more uniformly. That didn't happen. Consistent with the non-Latin portions of the vocabulary changing less than the Latin.
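The counting loop itself isn't reproduced in the post. A rough sketch of what it could look like, using Anthropic's documented count_tokens endpoint; the model IDs you would pass, and the helper names here, are placeholders rather than the article's code:

```python
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages/count_tokens"

def count_tokens(text: str, model: str, api_key: str) -> int:
    """Count input tokens for `text` under `model`. No inference is run,
    so the call is free; the response carries a single input_tokens field."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": text}],
    }).encode()
    req = urllib.request.Request(API_URL, data=body, headers={
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["input_tokens"]

def inflation(old_count: int, new_count: int) -> float:
    """Tokenizer ratio: the new model's count over the old model's,
    for the same content."""
    return new_count / old_count
```

With real model IDs and a key, `inflation(count_tokens(s, OLD, key), count_tokens(s, NEW, key))` gives a per-sample ratio; the article's weighted 1.325x is the ratio of summed token counts across all seven samples, not a mean of per-sample ratios.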
Token counts don't prove which specific slots were preserved.

English and code moved 1.20–1.47x on natural content. Consistent with 4.7 using shorter or fewer sub-word merges for common English and code patterns than 4.6 did.

Code is hit harder than unique prose (1.29–1.39x vs 1.20x). Code has more repeated high-frequency strings — keywords, imports, identifiers — exactly the patterns a Byte-Pair Encoding trained on code would collapse into long merges.

Chars-per-token on English dropped from 4.33 to 3.60. TypeScript dropped from 3.66 to 2.69. The vocabulary is representing the same text in smaller pieces.

That's a hypothesis, not a proof. Counting tokens doesn't tell you which specific entries in Anthropic's proprietary vocabulary changed.

Why ship a tokenizer that uses more tokens

Anthropic's migration guide: "more literal instruction following, particularly at lower effort levels. The model will not silently generalize an instruction from one item to another."

Smaller tokens force attention over individual words. That's a documented mechanism for tighter instruction following, character-level tasks, and tool-call precision. Partner reports (Notion, Warp, Factory) describe fewer tool errors on long runs.

The tokenizer is one plausible contributor. Weights and post-training also changed. Token counts can't separate them.

Does 4.7 actually follow instructions better?

That's the cost, measured. Now the question: what did Anthropic trade for it?

Their pitch is "more literal instruction following." Plausible, but the token-count data doesn't prove it. I ran a direct test.

IFEval (Zhou et al., Google, 2023) is a benchmark of prompts with verifiable constraints. "Respond in exactly N words." "Include the word X twice." "No commas." "All uppercase." Each constraint has a Python grader.
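Graders of that kind are easy to sketch. A toy version of three of the constraint types named above; the function names are mine, and the real checkers ship with the benchmark:

```python
# Toy stand-ins for IFEval-style verifiable constraints: each check is a
# pure function from the model's response to a boolean.

def no_commas(response: str) -> bool:
    return "," not in response

def all_uppercase(response: str) -> bool:
    # Uppercase, and contains at least one letter (so "123" doesn't pass).
    return response == response.upper() and response != response.lower()

def exactly_n_words(response: str, n: int) -> bool:
    return len(response.split()) == n

def strict_grade(response: str, checks) -> bool:
    """Strict mode: every constraint attached to the prompt must hold."""
    return all(check(response) for check in checks)
```

A four-constraint chain like the one that separated the two models passes only if every check returns True, which is why strict scoring punishes a single formatting slip.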
Binary pass/fail.

IFEval ships 541 prompts. I sampled 20 with a fixed seed, ran each through both models, and graded with IFEval's published checker.

A small but directionally consistent improvement on strict instruction following. Loose evaluation is flat. Both models already follow the high-level instructions — the strict-mode gap comes down to 4.6 occasionally mishandling exact formatting where 4.7 doesn't.

Only one instruction type moved materially: change_case:english_capital (0/1 → 1/1). Everything else tied. The one prompt that actually separated the models was a four-constraint chain where 4.6 fumbled one and 4.7 got all four.

N=20. IFEval has 541 prompts. A 20-prompt sample is enough to see direction, not enough to be confident about size. A +5pp delta at N=20 is consistent with anything from "no real difference" to a real +10pp improvement.

This measures the net effect of 4.6 → 4.7. Tokenizer, weights, and post-training all changed. I can't isolate which one drove the +5pp. The causal link between "smaller tokens" and "better instruction following" remains a hypothesis.

Single generation per prompt. Multiple runs per prompt would tighten the estimate.

So: 4.7 follows strict instructions a few points better than 4.6 on this subset. Small effect, small sample. Not the "dramatic improvement" framing Anthropic's partners used in launch quotes — at least not on this benchmark.

The extra tokens bought something measurable. +5pp on strict instruction-following. Small. Real. So: is that worth 1.3–1.45x more tokens per prompt? Here's the cost, session by session.

Imagine a long Claude Code session — 80 turns of back-and-forth on a bug fix or refactor.

The setup (what's in your context each turn): a roughly 6K-token static prefix (CLAUDE.md plus system prompt) and the conversation history accumulated so far.

One thing to explain upfront: the average cached prefix across the 80 turns is ~86K tokens, not 6K.
The static 6K is tiny; the average history across all turns (0 at turn 1, 160K at turn 80, average ~80K) dominates. Since most of the cache-read cost happens in late turns where the history is huge, that ~86K average is what actually gets billed per turn.

Every token in the prefix scales by its content ratio:

* Conversation history (mostly English and code): 1.325x → 160K becomes 212K by turn 80, averaging ~106K across the session

* Average cached prefix on 4.7: ~115K tokens (up from 86K)

Output tokens are a wildcard — roughly the same as 4.6, up to ~30% higher if Claude Code's new xhigh default produces more thinking tokens.

The per-token price didn't change. The per-session cost did, because the same session packs more tokens.

For Max-plan users hitting rate limits instead of dollars: your 5-hour window ends sooner by roughly the same ratio on English-heavy work. A session that ran the full window on 4.6 probably doesn't on 4.7.

How this hits the prompt cache

Prompt caching is the architecture Claude Code runs on. The 4.7 tokenizer change interacts with caching in three ways:

First 4.7 session starts cold. Anthropic's prompt cache is partitioned per model — switching from 4.6 to 4.7 invalidates every cached prefix, the same way switching between Opus and Sonnet does. The tokenizer change doesn't cause this, but it makes the cold start more expensive: the prefix you're writing to the new cache is 1.3–1.45x larger than the 4.6 equivalent.

Cache volume grows by the token ratio. 1.445x more tokens in the CLAUDE.md portion means 1.445x more tokens paying cache-write once, and 1.445x more paying cache-read every turn after. The mechanism still works. There's just more of it to pay for.

Same transcript, different count. Re-run a 4.6 session on 4.7 and your logs show a different number.
If you baseline billing or observability off historical token counts, expect a step-change the day you flip the model ID.

"Input is mostly cache reads. The per-token cost barely changed."

Legitimate. In a session that stays within the 5-minute TTL, 96% of input is cache reads at $0.50/MTok — already 90% off nominal. A 1.325x ratio on the cached portion is a smaller dollar impact than on fresh input.

But Max plans count all tokens toward rate limits, not dollars. And several patterns hit uncached territory: first session after a TTL expiry, every cache-bust event (CLAUDE.md edits, tool-list changes, model switches), and every compaction event that rewrites the prefix. On those turns you pay the full ratio on the cache-write. The steady state is a bright spot. The edges got noisier.

"It's within Anthropic's documented range."

Agreed. The real-world weighted ratio (1.325x) lands near the top of their range. Individual file types exceed it — CLAUDE.md at 1.445x, technical docs at 1.473x. That's the useful finding: the top of the documented range is where most Claude Code content sits, not the middle. Plan around the upper range, not the average.

So: tokens are 1.3–1.45x more expensive on English and code. Anthropic bought you +5pp on strict instruction following. The sticker price didn't change. The effective per-session cost did.

Is it worth it? That depends on what you send. You're paying ~20–30% more per session for a small but real improvement in how literally the model follows your prompt.
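The per-session arithmetic above can be reproduced in a few lines. The linear history growth and the 1.325x/1.445x ratios are the article's figures; the variable names and the even-growth assumption are mine:

```python
# Back-of-the-envelope for the 80-turn session described above, assuming
# conversation history grows linearly from 0 to 160K tokens over the session.
STATIC_PREFIX = 6_000       # CLAUDE.md + system prompt, roughly constant
FINAL_HISTORY = 160_000     # conversation history by turn 80
HISTORY_RATIO = 1.325       # measured 4.6 -> 4.7 inflation on English + code
STATIC_RATIO = 1.445        # measured inflation on the CLAUDE.md sample

avg_history_46 = FINAL_HISTORY / 2                  # ~80K, averaged over turns
avg_prefix_46 = STATIC_PREFIX + avg_history_46      # ~86K billed per turn on 4.6

avg_history_47 = avg_history_46 * HISTORY_RATIO     # ~106K on 4.7
avg_prefix_47 = STATIC_PREFIX * STATIC_RATIO + avg_history_47   # ~115K on 4.7

print(f"{avg_prefix_46/1000:.0f}K -> {avg_prefix_47/1000:.0f}K average cached prefix")
```

That reproduces the article's ~86K and ~115K per-turn averages; the late turns sit well above the average, which is why the tail of a long session is where the inflation is felt.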


...

Read the original on www.claudecodecamp.com »

5 381 shares, 15 trendiness

US Bill Mandates On-Device Age Verification

A bill in­tro­duced by Representative Josh Gottheimer in the House on April 13 would re­quire Apple, Google, and every other op­er­at­ing sys­tem ven­dor to ver­ify the age of any­one set­ting up a new de­vice in the United States.

The legislation, H.R. 8250, travels under the friendlier name of the Parents Decide Act, and it is among the most aggressive surveillance mandates ever proposed for American consumer technology.

We ob­tained a copy of the bill for you here.

The press re­leases de­scrib­ing it lead with chil­dren. The text de­scribes some­thing much larger. To con­firm a child is un­der 18, the sys­tem has to iden­tify every­one else, too, and the bill builds the in­fra­struc­ture to do ex­actly that.

This is child safety as a de­liv­ery mech­a­nism for mass iden­ti­fi­ca­tion. The pat­tern is fa­mil­iar by now. A gen­uine harm gets named, a sym­pa­thetic vic­tim gets cen­tered, and the so­lu­tion pro­posed re­shapes the dig­i­tal lives of three hun­dred mil­lion peo­ple who were not the prob­lem.

The Parents Decide Act fol­lows that tem­plate with un­usual pre­ci­sion. It takes the real suf­fer­ing of real chil­dren and uses it to jus­tify build­ing a na­tional iden­tity layer un­der­neath every de­vice sold in the coun­try, ad­min­is­tered by two pri­vate com­pa­nies, with the de­tails to be filled in later.

The mandate sits in Section 2(a)(1), which obligates providers to "Require any user of the operating system to provide the date of birth of the user" both to set up an account and to use the device at all. Adults included.

There is no carve-out for adult users, no opt-out for people who simply want to turn on a phone without handing a date of birth to Apple or Google first.

The age check is the entry fee for owning a computer. What happens to that data afterward gets handed off to the Federal Trade Commission to sort out later. A federal bill that mandates identification as a condition of using a general-purpose computing device represents something the United States has not previously had: a national ID requirement for turning on a device.

Gottheimer framed the proposal at a Ridgewood news conference on April 2, standing outside the local YMCA with a coalition of allies. "With each passing day, the internet is becoming more and more treacherous for our kids. We're not just talking about social media anymore — we're talking about artificial intelligence and platforms that are shaping how our kids think, feel, and act, often without any real guardrails," he said.

His diagnosis of the current system is accurate enough. Children are able to bypass age requirements by entering a different birthday and accessing apps without any real verification. "Kids can bypass age requirements by simply typing in a different birthday. That's it. That's the system," he said.

The rem­edy he pro­poses just hap­pens to re­quire build­ing new sur­veil­lance plumb­ing un­der­neath every de­vice sold in the coun­try, and rout­ing that plumb­ing through two of the largest com­pa­nies on earth. The so­lu­tion cho­sen is dis­pro­por­tion­ate to the prob­lem, and dis­pro­por­tion­ate in a spe­cific di­rec­tion, which is the di­rec­tion of less pri­vacy and less anonymity for every­one.

Section 2(a)(3) directs operating system providers to "Develop a system to allow an app developer to access any information as is necessary" to verify a user's age.

Translated out of leg­isla­tive prose, Apple and Google be­come age bro­kers for the en­tire American app ecosys­tem. Every app that wants to check whether you are over 18, or over 13, or over 21, will be able to ping the op­er­at­ing sys­tem for an an­swer de­rived from the birth date you handed over at setup. The bill pre­sents this as a con­ve­nience. It is a new data pipeline be­tween the OS layer and every de­vel­oper who plugs into it, and the bill spends re­mark­ably lit­tle time ex­plain­ing how that pipeline will be con­strained.

Free speech im­pli­ca­tions travel through that same pipeline. Once the op­er­at­ing sys­tem knows your age with ver­i­fied cer­tainty, it can tell any app to de­liver, re­strict, or with­hold con­tent ac­cord­ingly. The bil­l’s sup­port­ers de­scribe this as parental con­trol. The in­fra­struc­ture it builds is a con­tent con­trol sys­tem, run­ning at the OS level, with Apple and Google as the gate­keep­ers of who sees what.

The First Amendment has his­tor­i­cally pro­tected the right to read, watch, and speak with­out first pre­sent­ing iden­ti­fi­ca­tion. This bill erodes that prin­ci­ple at its foun­da­tion. Once ver­i­fied age be­comes a stan­dard sig­nal flow­ing from the op­er­at­ing sys­tem to every app, the de­fault as­sump­tion shifts. Users are no longer pre­sump­tively anony­mous adults with full ac­cess to law­ful con­tent. They are iden­ti­fied sub­jects whose per­mis­sions are de­ter­mined by the data Apple or Google holds about them.

An age-ver­i­fi­ca­tion layer built to block AI chat­bots from mi­nors is also ca­pa­ble of block­ing jour­nal­ism a state deems too vi­o­lent, po­lit­i­cal com­men­tary an ad­min­is­tra­tion deems too in­flam­ma­tory, re­port­ing on drugs or protest tac­tics, or any other sub­ject a fu­ture reg­u­la­tor de­cides re­quires age gat­ing.

The in­fra­struc­ture is neu­tral about con­tent. It cares only that the user has been iden­ti­fied. Every fu­ture fight over what Americans are al­lowed to see on­line will start from a po­si­tion where the iden­ti­fi­ca­tion layer al­ready ex­ists, and the only re­main­ing ques­tion is who qual­i­fies for ac­cess. That is a pro­found change in how speech works, and the bill en­acts it while point­ing at chil­dren.

What the bill says about data protection is effectively a to-do list for the FTC. Section 2(d)(1)(B) tells the Commission it must eventually issue rules ensuring that birth dates are "collected in a secure manner to maintain the privacy of the user" and are not "stolen or breached."

Those are out­comes, not mech­a­nisms. The leg­is­la­tion sets no re­ten­tion lim­its, no min­i­miza­tion re­quire­ments, no re­stric­tions on sec­ondary uses, and no pro­hi­bi­tion on link­ing age data to other iden­ti­fiers Apple and Google al­ready hold. It of­fers no guid­ance on how providers should ver­ify the age of a par­ent or guardian be­yond in­struct­ing the FTC to fig­ure that out within 180 days of en­act­ment. The en­tire ar­chi­tec­ture of the sys­tem is to be drawn up af­ter the fact by reg­u­la­tors work­ing un­der a safe-har­bor pro­vi­sion that shields op­er­at­ing sys­tem providers from li­a­bil­ity as long as they fol­low what­ever rules even­tu­ally emerge.

Congress is be­ing asked to au­tho­rize a sur­veil­lance sys­tem it has not de­signed, whose op­er­a­tion it does not un­der­stand, and whose safe­guards do not yet ex­ist.

The Parents Decide Act solves the self-re­ported-birth­day prob­lem by de­mand­ing some­thing ver­i­fi­able, which in prac­tice means a gov­ern­ment ID, a credit card, a bio­met­ric scan, or some com­bi­na­tion.

However, Gottheimer has not spec­i­fied which. The bill does not ei­ther. It’s up to the FTC to de­cide.

Operating system providers will decide in practice, and the incentives point toward whatever is cheapest to deploy at scale. Facial analysis is cheap. ID uploads are cheap. What is expensive is building a verification system that does not also create a persistent, cross-referenced database of everyone who has ever activated a phone. The incentives run directly against user privacy, and the bill provides no meaningful counterweight.

The bill also deputizes a duopoly. Requiring "operating system providers" to perform nationwide age verification is a requirement only two companies can easily satisfy in the mobile space, and a handful more across desktop and console platforms.

Smaller OS de­vel­op­ers, open-source pro­jects, Linux dis­tri­b­u­tions, cus­tom Android forks, pri­vacy-fo­cused al­ter­na­tives, all face a com­pli­ance bur­den de­signed around the as­sump­tion that the provider is a tril­lion-dol­lar firm with le­gal staff and bio­met­ric-scan­ning part­ner­ships al­ready in place.

The safe har­bor in Section 2(b) pro­tects providers who fol­low the rules, but fol­low­ing the rules re­quires in­fra­struc­ture only the in­cum­bents can build. A law nom­i­nally aimed at tech com­pa­nies en­trenches the two tech com­pa­nies most re­spon­si­ble for the sta­tus quo.

Apple and Google be­come the manda­tory iden­tity check­points for every app de­vel­oper in the coun­try, which is a com­mer­cial po­si­tion worth a great deal of money and a great deal of lever­age. Any fu­ture com­peti­tor that wants to build a pri­vacy-re­spect­ing op­er­at­ing sys­tem will dis­cover the law has made that ef­fec­tively il­le­gal.

There is also another change buried in the text. The definition of "operating system" in Section 2(g)(4) covers "software that supports the basic functions of a computer, mobile device, or any other general purpose computing device." That language reaches well beyond phones and tablets.

Laptops run op­er­at­ing sys­tems. Desktop com­put­ers run op­er­at­ing sys­tems. Gaming con­soles, smart TVs, cars with in­fo­tain­ment soft­ware, and a grow­ing cat­a­log of am­bi­ent de­vices all qual­ify un­der a plain read­ing of the de­f­i­n­i­tion. The bill does not dis­tin­guish be­tween the fam­ily iPad and the lap­top a col­lege stu­dent uses for course­work. Every de­vice with an OS be­comes a de­vice that ver­i­fies age at setup, and by ex­ten­sion, a de­vice that iden­ti­fies its user at setup. The scope creep is built into the de­f­i­n­i­tions.

Gottheimer cited cases of teenagers al­legedly harmed by AI chat­bots and by al­go­rith­mi­cally pro­moted con­tent about self-harm.

What the bill does with those harms is use them as jus­ti­fi­ca­tion for an iden­tity sys­tem that ap­plies to every user. The tem­plate is con­sis­tent: a child is hurt, leg­is­la­tion is drafted, the leg­is­la­tion re­shapes the dig­i­tal en­vi­ron­ment of every­one, child and adult, sub­ject and by­stander alike.

Less in­va­sive al­ter­na­tives ex­ist and have ex­isted for years.

Device-level parental con­trols al­ready ship with iOS and Android. Family Sharing and Google Family Link al­ready let par­ents con­fig­ure age-ap­pro­pri­ate re­stric­tions. App stores al­ready al­low per-app age rat­ings.

None of these re­quire every user in the coun­try to prove their age to Apple or Google when turn­ing on a phone. The bill skips past those op­tions in fa­vor of a man­date that treats uni­ver­sal age ver­i­fi­ca­tion as the base­line con­di­tion of de­vice own­er­ship.

Protecting chil­dren does not re­quire build­ing any of this. The bil­l’s au­thors chose to build it any­way, and the choice tells you what the bill is ac­tu­ally for.

...

Read the original on reclaimthenet.org »

6 255 shares, 14 trendiness

The Quiet Colossus — On Ada, Its Design, and the Language That Built the Languages

On Ada, the lan­guage that the Department of Defense built, the in­dus­try ig­nored, and every mod­ern lan­guage qui­etly be­came

There is a lan­guage that made gener­ics a first-class, stan­dard­ised fea­ture of a widely de­ployed sys­tems lan­guage, for­malised the pack­age, built con­cur­rency into the spec­i­fi­ca­tion rather than the li­brary, man­dated the sep­a­ra­tion of in­ter­face from im­ple­men­ta­tion, and in­tro­duced range-con­strained types, dis­crim­i­nated unions, and a model of task com­mu­ni­ca­tion that Go would ar­rive at, in­de­pen­dently and by a dif­fer­ent route, thirty years later. Successive re­vi­sions added pro­tected ob­jects, com­pile-time null ex­clu­sion, and lan­guage-level con­tracts. It is a lan­guage that Rust spent a decade con­verg­ing to­ward from one di­rec­tion while Python con­verged to­ward it from an­other, and that C# has been ap­prox­i­mat­ing, fea­ture by fea­ture, for the bet­ter part of two decades. It is a lan­guage that the in­dus­try has con­sis­tently de­scribed as ver­bose, ar­cane, and ir­rel­e­vant. It is also, with a di­rect­ness that em­bar­rasses the usual story of soft­ware progress, the lan­guage that an­tic­i­pated — with un­usual pre­ci­sion — the safety fea­tures every mod­ern lan­guage is now try­ing to ac­quire.

Ada is not fa­mous. It is not the sub­ject of en­thu­si­as­tic con­fer­ence talks or breath­less blog posts. It does not have a charis­matic founder who gives keynotes about the phi­los­o­phy of pro­gram­ming, and it does not have a com­mu­nity that writes frame­works or pub­lishes pack­ages with clever names. What it has is a for­mal stan­dard that has been re­vised four times since 1983; a pres­ence in the soft­ware of many ma­jor com­mer­cial air­craft and avion­ics sys­tems; a set of de­sign de­ci­sions made un­der gov­ern­ment con­tract in the late 1970s that the rest of the in­dus­try has spent forty years in­de­pen­dently re­dis­cov­er­ing; and a rep­u­ta­tion, among the pro­gram­mers who know it at all, as the lan­guage that says no — the lan­guage whose com­piler en­forces le­gal­ity, vis­i­bil­ity, typ­ing, and a de­gree of safety check­ing that most lan­guages leave to con­ven­tion or tool­ing, that makes the pro­gram­mer name what they mean, that treats am­bi­gu­ity as an er­ror rather than a fea­ture. These qual­i­ties were, for a long time, con­sid­ered its weak­nesses. They are, on ex­am­i­na­tion, the pre­cise qual­i­ties that every lan­guage cur­rently de­scribed as mod­ern is at­tempt­ing to ac­quire.

To un­der­stand why Ada ex­ists re­quires un­der­stand­ing the par­tic­u­lar cri­sis that pro­duced it — a cri­sis not of com­puter sci­ence but of pro­cure­ment, one that the United States Department of Defense en­coun­tered in the early 1970s when it at­tempted to sur­vey the soft­ware that ran its weapons sys­tems, lo­gis­tics in­fra­struc­ture, and com­mand-and-con­trol ap­pa­ra­tus. What the sur­vey found was not a soft­ware mono­cul­ture. It was the op­po­site: a pro­lif­er­a­tion of over four hun­dred and fifty dis­tinct pro­gram­ming lan­guages and di­alects in ac­tive use across DoD sys­tems, each one as­so­ci­ated with a par­tic­u­lar con­trac­tor or a par­tic­u­lar era of de­vel­op­ment, none in­ter­op­er­a­ble with any other, most un­main­tain­able by any­one ex­cept the orig­i­nal au­thors, many of those au­thors no longer avail­able. The soft­ware that guided mis­siles could not be main­tained by the peo­ple who main­tained the soft­ware that nav­i­gated ships. The soft­ware that sched­uled lo­gis­tics could not share code with the soft­ware that processed com­mu­ni­ca­tions. The lan­guages had ac­cu­mu­lated the way tech­ni­cal debt ac­cu­mu­lates: in­vis­i­bly, in­cre­men­tally, each in­di­vid­ual de­ci­sion lo­cally rea­son­able, the ag­gre­gate cat­a­strophic.

The DoD’s re­sponse was, for a gov­ern­ment body, un­usu­ally so­phis­ti­cated. Rather than sim­ply man­dat­ing an ex­ist­ing lan­guage — COBOL, Fortran, and PL/1 were all con­sid­ered and re­jected — it un­der­took a re­quire­ments process that lasted five years and pro­duced a se­ries of doc­u­ments of in­creas­ing pre­ci­sion: Strawman, Woodenman, Tinman, Ironman, and fi­nally Steelman, each one re­fin­ing and tight­en­ing the spec­i­fi­ca­tion of what a DoD pro­gram­ming lan­guage must be. The Steelman doc­u­ment, is­sued in 1978, is a re­mark­able piece of en­gi­neer­ing re­quire­ments lit­er­a­ture: it does not spec­ify a lan­guage, but de­scribes the prop­er­ties a lan­guage must have — prop­er­ties de­rived from the ac­tual fail­ure modes of the DoD’s ex­ist­ing soft­ware es­tate. It re­quires a mod­ule sys­tem with ex­plicit sep­a­ra­tion of in­ter­face and im­ple­men­ta­tion. It re­quires strong, sta­tic typ­ing with no im­plicit con­ver­sions be­tween types. It re­quires built-in sup­port for con­cur­rent tasks. It re­quires a con­sis­tent ex­cep­tion-han­dling mech­a­nism. It re­quires that the lan­guage be ma­chine-in­de­pen­dent. It re­quires that pro­grams be read­able by peo­ple other than their au­thors. It re­quires that the lan­guage make pro­gram ver­i­fi­ca­tion tractable. These were not as­pi­ra­tional pref­er­ences. They were re­quire­ments de­rived from the ob­served con­se­quences of pro­grams that lacked them.

In 1979, a com­pe­ti­tion among four fi­nal­ists — teams des­ig­nated Green, Red, Blue, and Yellow — pro­duced a win­ner: the Green de­sign, by a team led by Jean Ichbiah at CII Honeywell Bull in France. The win­ning de­sign was named Ada, af­ter Augusta Ada King, Countess of Lovelace, the nine­teenth-cen­tury math­e­mati­cian who wrote what is gen­er­ally con­sid­ered the first al­go­rithm in­tended for me­chan­i­cal com­pu­ta­tion. The choice of name was de­lib­er­ate: the DoD wanted a name rather than an acronym, wanted to ho­n­our a woman in a field that had few women cel­e­brated in it, and wanted to sig­nal that the lan­guage was a state­ment of in­tent rather than a com­mit­tee com­pro­mise. Ichbiah took the as­sign­ment se­ri­ously enough to ac­com­pany the stan­dard with a ra­tio­nale doc­u­ment — a full ex­pla­na­tion of every de­sign de­ci­sion and the rea­son­ing be­hind it — which is still, for any­one who reads it, one of the most lu­cid ac­counts in ex­is­tence of what pro­gram­ming lan­guage de­sign is ac­tu­ally for.

The cen­tre of Ada’s ar­chi­tec­ture is the pack­age: a com­pi­la­tion unit con­sist­ing of a spec­i­fi­ca­tion and a body, phys­i­cally sep­a­rate, with a re­la­tion­ship be­tween them that the com­piler en­forces. The spec­i­fi­ca­tion is the con­tract — it de­clares what the pack­age pro­vides: types, sub­pro­grams, con­stants, what­ever the pack­age makes avail­able to the world. The body is the im­ple­men­ta­tion — it pro­vides the code that ful­fills the con­tract. The spec­i­fi­ca­tion is what client code sees. The body is in­vis­i­ble to client code and can be com­piled in­de­pen­dently, changed with­out re­com­pi­la­tion of any­thing that de­pends only on the spec­i­fi­ca­tion, and re­placed en­tirely with­out any client know­ing or car­ing. This sep­a­ra­tion is not a style rec­om­men­da­tion. It is not en­forced by a lin­ter. It is a struc­tural prop­erty of the lan­guage: client code that at­tempts to ac­cess any­thing not de­clared in the spec­i­fi­ca­tion will not com­pile, be­cause the com­piler will not per­mit it to see the body.
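The spec/body split described above looks like this in practice. A minimal sketch; the file names follow GNAT convention, and the Counters package and its operations are invented for illustration:

```ada
-- counters.ads — the specification: the only text clients compile against.
package Counters is
   type Counter is private;                   -- name visible, layout hidden
   procedure Increment (C : in out Counter);
   function Value (C : Counter) return Natural;
private
   -- The full declaration sits in the spec's private part, but client code
   -- cannot name or touch it; only this package and its body can.
   type Counter is record
      Count : Natural := 0;
   end record;
end Counters;

-- counters.adb — the body: replaceable without recompiling any client
-- that depends only on the specification.
package body Counters is
   procedure Increment (C : in out Counter) is
   begin
      C.Count := C.Count + 1;
   end Increment;

   function Value (C : Counter) return Natural is
   begin
      return C.Count;
   end Value;
end Counters;
```

A client that writes `C.Count := 5` does not get a warning or a lint error; it gets a compilation failure, because outside the package the record structure is not part of the visible language.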

This is the mod­ule sys­tem that every lan­guage that came af­ter Ada has been try­ing to build. Java’s pack­ages are not this: they are name­spacing mech­a­nisms with ac­cess mod­i­fiers, but the im­ple­men­ta­tion is vis­i­ble to re­flec­tion, to sub­classes, and to code within the same pack­age that may not have been an­tic­i­pated. Python’s mod­ules are not this: they are files, with no for­mal sep­a­ra­tion be­tween in­ter­face and im­ple­men­ta­tion, no com­piler to en­force the bound­ary. JavaScript’s mod­ule sys­tem — in­tro­duced in 2015, thirty-two years af­ter Ada’s — pro­vides im­port and ex­port but no mech­a­nism for a type to have a spec­i­fi­ca­tion whose rep­re­sen­ta­tion is hid­den from im­porters. C’s header files ap­prox­i­mate the sep­a­ra­tion but with­out a com­piler that can ver­ify con­sis­tency be­tween the header and the im­ple­men­ta­tion or pre­vent the im­ple­men­ta­tion’s de­tails from leak­ing through pre­proces­sor macros. Go’s ex­ported iden­ti­fiers — cap­i­talised names are vis­i­ble, low­er­case names are not — achieve a re­lated ef­fect but with­out the for­mal spec­i­fi­ca­tion-body dis­tinc­tion. Rust’s mod­ule sys­tem with pub vis­i­bil­ity rules is again an ap­prox­i­ma­tion. None of these is quite Ada’s pack­age sys­tem, be­cause none of them makes the sep­a­ra­tion as struc­turally com­plete: in Ada, the im­ple­men­ta­tion of a pri­vate type is not merely in­ac­ces­si­ble, it is syn­tac­ti­cally ab­sent from the clien­t’s view of the world. It does not ex­ist, as far as the client is con­cerned. There is noth­ing to ac­cess, re­flect on, or cir­cum­vent.

Ada’s pack­age spec­i­fi­ca­tion is not a con­ven­tion. It is a con­tract en­forced by a com­piler that re­fuses to let the client know the im­ple­men­ta­tion ex­ists.

The pri­vate type mech­a­nism, which flows nat­u­rally from the pack­age ar­chi­tec­ture, gives Ada some­thing that took every other lan­guage decades to ap­prox­i­mate. A type de­clared pri­vate in an Ada pack­age spec­i­fi­ca­tion is vis­i­ble by name — client code can de­clare vari­ables of that type, pass them to sub­pro­grams, re­turn them from func­tions — but its rep­re­sen­ta­tion is com­pletely opaque. The client does not know whether the type is a record, an ar­ray, an in­te­ger, a pointer, or any other thing. It can­not ac­cess fields, be­cause it does not know there are fields. It can­not copy the value in ways the de­signer did not in­tend, be­cause it does not know how the value is laid out. The de­signer of the pack­age de­cides what op­er­a­tions ex­ist on the type, de­clares them in the spec­i­fi­ca­tion, and the rest of the world uses only those op­er­a­tions. This is not ac­cess con­trol in the sense of Java’s pri­vate key­word, which pre­vents di­rect ac­cess while leav­ing the rep­re­sen­ta­tion vis­i­ble to in­her­i­tance, to re­flec­tion, and to the com­piler it­self when it checks sub­class com­pat­i­bil­ity. It is rep­re­sen­ta­tional in­vis­i­bil­ity: the type’s struc­ture lit­er­ally does not ap­pear in the text that client code com­piles against.
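
A sketch of the mechanism, using a hypothetical Accounts package (in Ada the full declaration sits in the specification's private part — present for the compiler, but illegal for any client to reference):

```ada
package Accounts is
   type Account is private;   -- visible by name; representation hidden
   procedure Deposit (A : in out Account; Amount : Positive);
   function  Balance (A : Account) return Natural;
private
   -- The compiler needs this to allocate Account objects,
   -- but no client code may name anything declared below.
   type Account is record
      Total : Natural := 0;
   end record;
end Accounts;
```

A client can declare `X : Accounts.Account;` and call `Deposit (X, 10);`, but `X.Total` does not compile — from the client's side there are no fields to access.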

C# spent the bet­ter part of its ex­is­tence pro­vid­ing ac­cess mod­i­fiers and then slowly build­ing to­ward gen­uine en­cap­su­la­tion through mech­a­nisms like record types, init-only prop­er­ties, and sealed classes. Java’s evo­lu­tion to­ward gen­uine data hid­ing has been sim­i­lar: records ar­rived in Java 16, in 2021, pro­vid­ing a class form whose rep­re­sen­ta­tion is fixed and whose ac­ces­sors are gen­er­ated — thirty-eight years af­ter Ada made rep­re­sen­ta­tional hid­ing the de­fault for any type de­clared pri­vate. The jour­ney of ob­ject-ori­ented lan­guages to­ward Ada’s pack­age sys­tem is the jour­ney of peo­ple who were told that ac­cess mod­i­fiers were en­cap­su­la­tion, dis­cov­er­ing grad­u­ally that they were not, and re­build­ing from scratch what Ada had pro­vided from the be­gin­ning.

Ada’s type sys­tem was, in 1983, un­like any­thing else in pro­duc­tion use, and re­mains, in its es­sen­tials, more ex­pres­sive than most lan­guages that ex­ist to­day. The dis­tinc­tion that or­gan­ises it is be­tween a type and a sub­type — not in the ob­ject-ori­ented sense of a type that ex­tends an­other, but in the math­e­mat­i­cal sense of a con­strained set. An Ada pro­gram­mer who needs a type rep­re­sent­ing the age of a per­son does not reach for int and add a com­ment. They write type Age is range 0 .. 150, and the com­piler gen­er­ates, with­out fur­ther in­struc­tion, a type whose val­ues must lie in that range, whose arith­metic op­er­a­tions are checked against that range at run­time un­less the pro­gram­mer opts into unchecked op­er­a­tions ex­plic­itly, and which is a dis­tinct type from every other in­te­ger type in the pro­gram, so that pass­ing a cal­en­dar year where an age is ex­pected is a com­pile-time er­ror rather than a run­time sur­prise or a silent wrong an­swer.
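
The distinction in a few lines — hypothetical types, behaving as the paragraph above describes:

```ada
procedure Ages_Demo is
   type Age  is range 0 .. 150;
   type Year is range 1901 .. 2099;

   A : Age  := 30;
   Y : Year := 1983;
begin
   A := A + 1;   -- arithmetic checked against 0 .. 150 at run time
   -- A := Y;    -- illegal: Age and Year are distinct types;
   --            -- this line would be rejected at compile time
end Ages_Demo;
```

Mixing the two requires an explicit conversion such as `Age (…)`, which is itself range-checked — the accidental swap of a year for an age cannot happen silently.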

This was not in­cre­men­tal. In the land­scape of 1983, C had int and short and long, dis­tin­guished by size and signed­ness but not by mean­ing. Fortran had INTEGER and REAL. Pascal had or­di­nal sub­types but not named dis­tinct types with se­man­tic con­straints. Ada’s range types, enu­mer­a­tion types, and fixed-point types gave the pro­gram­mer the abil­ity to en­code mean­ing di­rectly in the type sys­tem — to make the type be a ma­chine-checked spec­i­fi­ca­tion of what the value may be. Rust’s u8, i32, u64 are size-and-signed­ness dis­tinc­tions that pre­vent some er­rors; Ada’s range types are se­man­tic con­straints that pre­vent dif­fer­ent, more do­main-spe­cific er­rors. Haskell’s new­type wrap­ping pro­vides a closely re­lated mech­a­nism, reach­ing Ada’s des­ti­na­tion via a dif­fer­ent route. TypeScript’s branded types — a workaround pat­tern in­volv­ing phan­tom type pa­ra­me­ters, widely used pre­cisely be­cause TypeScript’s struc­tural type sys­tem oth­er­wise col­lapses all in­te­gers to­gether — ex­ist to solve the prob­lem that Ada named and solved in 1983.

Ada’s dis­crim­i­nated record types are more sig­nif­i­cant still. A dis­crim­i­nated record is a record type with a vari­ant field — a field whose value de­ter­mines what other fields ex­ist. A shape might have a dis­crim­i­nant se­lect­ing be­tween cir­cle and rec­tan­gle; a cir­cle has a ra­dius field; a rec­tan­gle has width and height fields; the com­piler knows which fields ex­ist for which dis­crim­i­nant value and will not com­pile code that ac­cesses a rec­tan­gle’s ra­dius. This is the al­ge­braic data type, the sum type, the tagged union — the mech­a­nism that func­tional pro­gram­mers have been ad­vo­cat­ing for decades as the cor­rect way to model data that can be one of sev­eral things. Haskell has it as the core of its type sys­tem. Rust’s enum with data fields is pre­cisely a dis­crim­i­nated union, im­ple­mented with the same com­piler guar­an­tees Ada pro­vided. Swift has as­so­ci­ated value enums for the same rea­son. Kotlin has sealed classes. TypeScript has dis­crim­i­nated union types, added in ver­sion 2.0 in 2016. Ada had dis­crim­i­nated record types in 1983, with com­piler-en­forced field ac­cess checks and the abil­ity to use them as dis­crim­i­nants of other types, form­ing struc­tures of ar­bi­trary com­plex­ity. Every lan­guage that has added sum types in the past twenty years has added, with its own syn­tax, what Ada’s de­sign­ers put in the orig­i­nal stan­dard.
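
The shape example from the paragraph above, sketched directly in Ada:

```ada
type Shape_Kind is (Circle, Rectangle);

type Shape (Kind : Shape_Kind) is record
   case Kind is
      when Circle =>
         Radius : Float;
      when Rectangle =>
         Width, Height : Float;
   end case;
end record;

C : Shape := (Kind => Circle,    Radius => 2.0);
R : Shape := (Kind => Rectangle, Width => 3.0, Height => 4.0);

-- C.Radius is fine; C.Width is rejected by the discriminant check:
-- a Shape constrained to Circle has no Width component.
```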

Ada’s dis­crim­i­nated record is the al­ge­braic data type. Every lan­guage that has added sum types in the past twenty years has in­de­pen­dently re-ar­rived at a 1983 de­sign de­ci­sion.

Ada’s generic units are, of the lan­guage’s many con­tri­bu­tions, per­haps the one whose in­flu­ence is most di­rect and most con­sis­tently un­ac­knowl­edged. A generic in Ada is a pa­ra­me­terised pack­age or sub­pro­gram — a tem­plate that can be in­stan­ti­ated with spe­cific types or val­ues to pro­duce a con­crete pack­age or sub­pro­gram. A generic sort pro­ce­dure takes a type pa­ra­me­ter, an ar­ray type pa­ra­me­ter, and a com­par­i­son func­tion pa­ra­me­ter; it can be in­stan­ti­ated to sort in­te­gers, or strings, or any type for which a com­par­i­son func­tion can be sup­plied. This is para­met­ric poly­mor­phism: the abil­ity to write code once and ap­ply it to many types, with the com­piler ver­i­fy­ing cor­rect­ness for each in­stan­ti­a­tion rather than de­fer­ring the check to run­time or re­ly­ing on duck typ­ing. Ada had this in 1983.
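
The generic sort described above, sketched as an Ada generic declaration plus one instantiation (the sort body itself is omitted):

```ada
generic
   type Element is private;
   type Index is (<>);
   type Element_Array is array (Index range <>) of Element;
   with function "<" (Left, Right : Element) return Boolean;
procedure Generic_Sort (A : in out Element_Array);
-- (body of Generic_Sort omitted from this sketch)

-- One concrete instance, checked in full at instantiation time:
type Int_Array is array (Positive range <>) of Integer;
procedure Sort_Ints is new Generic_Sort
  (Element       => Integer,
   Index         => Positive,
   Element_Array => Int_Array,
   "<"           => "<");
```

The comparison function is itself a formal parameter — the compiler verifies its signature at the point of instantiation, not at run time.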

C++ had tem­plates from ap­prox­i­mately 1990. Java had no gener­ics un­til 2004 — twenty-one years af­ter Ada — and when Java’s gener­ics ar­rived they were im­ple­mented through type era­sure, which means the type pa­ra­me­ters ex­ist at com­pile time but are re­moved be­fore the pro­gram runs, pre­vent­ing the kind of run­time type spe­cial­i­sa­tion that Ada’s gener­ics make avail­able. C# got gener­ics in 2005 with a more com­plete im­ple­men­ta­tion that pre­serves type in­for­ma­tion at run­time — closer to Ada, but twenty-two years later. Go had no gener­ics at all un­til ver­sion 1.18 in 2022 — thirty-nine years af­ter Ada — and their ab­sence was widely ex­pe­ri­enced as a sig­nif­i­cant lim­i­ta­tion dur­ing Go’s first decade of use. Rust has gener­ics with monomor­phi­sa­tion: each in­stan­ti­a­tion of a generic type pro­duces a con­crete type at com­pile time, the same ap­proach Ada takes. The de­sign space that Rust’s gener­ics ex­plore was charted in Ada’s stan­dard of 1983.

Ada's generic formal parameters are more expressive than most modern generic systems. A generic unit in Ada can take as parameters not just types but subprograms — you can pass a function as a formal parameter to a generic and have the compiler verify that it has the right signature — and packages, allowing a generic to be parameterised by a whole module rather than just a type. This approaches higher-kinded polymorphism by another route: the ability to abstract over not just values but over type constructors and module structures. Haskell's type classes reach a similar expressive power by a different mechanism. Rust's trait system approaches it. C++ concepts, added in C++20 in 2020, allow generic type parameters to be constrained by requirements on their operations — which is what Ada's generic formal type parameters have always specified. The thirty-seven-year gap between Ada's feature and C++'s adoption of the same idea is not unusual in this story.
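
A sketch of a generic parameterised by a whole package, here using the standard containers library (only the specification is shown):

```ada
with Ada.Containers.Vectors;

generic
   -- The formal parameter is an entire instantiated package:
   with package Vectors is new Ada.Containers.Vectors (<>);
package Vector_Stats is
   function Count (V : Vectors.Vector) return Natural;
end Vector_Stats;
```

Any instantiation of `Ada.Containers.Vectors` — over integers, strings, records — can be handed to `Vector_Stats`, and the compiler checks the fit at instantiation time.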

Ada’s con­cur­rency model is where the gap be­tween what Ada de­signed and what the in­dus­try ac­cepted be­comes most con­se­quen­tial, be­cause the in­dus­try’s fail­ure to ac­cept Ada’s model is the di­rect cause of the con­cur­rency cri­sis that the in­dus­try spent the 2000s and 2010s at­tempt­ing to re­solve. The cri­sis — shared mu­ta­ble state made cat­a­strophic by mul­ti­core proces­sors, lock-based syn­chro­ni­sa­tion pro­duc­ing dead­locks and race con­di­tions that test­ing could not re­li­ably de­tect — was not un­fore­see­able. It was fore­seen, specif­i­cally, by the de­sign­ers of Ada, who de­signed around it in 1983 and pro­duced, in Ada 95, a con­cur­rency model that sub­se­quent lan­guages have been ap­prox­i­mat­ing ever since.

Ada tasks are language-level constructs: declared with task, scheduled by the Ada runtime, communicating through either rendezvous or protected objects. The rendezvous is a synchronised communication point: a calling task names an entry it wishes to use, an accepting task names the same entry in an accept statement, and neither can proceed until both are ready. The communication happens at the meeting; the tasks never share memory implicitly; the calling task cannot reach into the accepting task and modify its state, because the communication model provides no mechanism for doing so. This is message passing — not in the sense that a value is serialised and sent over a socket, but in the sense that the design of the communication prevents shared-state access by construction. Go's channels are a direct instantiation of this idea with different syntax and a slightly different semantics. The Go designers arrived at channels by thinking carefully about concurrency safety; Ada's designers arrived at rendezvous by the same route, a quarter of a century earlier.
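
A rendezvous, sketched as a logging task (declared inside some enclosing unit that withs Ada.Text_IO):

```ada
task Logger is
   entry Write (Message : String);
end Logger;

task body Logger is
begin
   loop
      accept Write (Message : String) do
         -- Both tasks are at the rendezvous here; the caller
         -- resumes only when this block completes.
         Ada.Text_IO.Put_Line (Message);
      end Write;
   end loop;
end Logger;

-- From any other task:
--   Logger.Write ("engine started");
```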

Ada 95's protected objects address the cases where shared state is genuinely required. A protected type wraps data and declares operations on it: protected procedures, which have exclusive read-write access; protected functions, which may be called concurrently because they are read-only; and protected entries, which are like procedures but include a barrier condition — a boolean expression that must be true for the operation to proceed, with the calling task suspended automatically until the condition is satisfied. The runtime enforces mutual exclusion for procedures and entries without the programmer writing a lock. The barrier condition for entries is re-evaluated whenever any operation completes, providing a safe conditional wait without the manual condition variable signalling that Java's concurrency model requires. Rust's Mutex and RwLock types protect data in a related way — wrapping state in a type that enforces access discipline — but through a library rather than a language construct, and without the barrier condition mechanism. Java's synchronized, wait, and notify are what programmers reach for instead, and the combination is an invitation to subtle errors: forgetting to synchronise, notifying the wrong condition, holding a lock while calling foreign code. Ada's protected objects make these errors structurally unavailable rather than merely discouraged.
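
The classic one-slot buffer, sketched as a protected object with barrier conditions (declared inside some enclosing unit):

```ada
protected Buffer is
   entry Put (X : Integer);       -- caller suspends while the slot is full
   entry Get (X : out Integer);   -- caller suspends while the slot is empty
private
   Value : Integer := 0;
   Full  : Boolean := False;
end Buffer;

protected body Buffer is
   entry Put (X : Integer) when not Full is
   begin
      Value := X;
      Full  := True;
   end Put;

   entry Get (X : out Integer) when Full is
   begin
      X    := Value;
      Full := False;
   end Get;
end Buffer;
```

No lock appears anywhere: mutual exclusion, and the re-evaluation of the `when` barriers after each operation, are the runtime's job.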

The SPARK subset of Ada extends the concurrency guarantees to formal proof. SPARK excludes aliasing between task-accessible state, constrains side effects in subprograms to those declared in the subprogram's contract, and provides a static analysis toolchain that can prove, mathematically rather than empirically, that a program has no data races, no unhandled exceptions, no out-of-bounds array accesses, and no violations of stated preconditions and postconditions. Rust's borrow checker prevents a class of memory safety errors at compile time, which is a related but narrower guarantee: it prevents use-after-free, double-free, and certain kinds of aliased mutation, but it does not formally prove the program's logic correct. SPARK proves both the memory safety and the logic. The space between Rust's compile-time rejection of unsafe programs and SPARK's formal proof of correct programs is the space between engineering discipline and mathematical verification — and SPARK has occupied the latter position, in production systems, since before Rust existed as a project.

Go’s chan­nels and Ada’s ren­dezvous are close rel­a­tives in the broader CSP tra­di­tion. Rust’s bor­row checker pre­vents a sub­set of what SPARK proves. The in­dus­try spent thirty years con­verg­ing to­ward po­si­tions Ada had staked out from the start.

Ada 2012 added con­tracts to the lan­guage: pre­con­di­tions, post­con­di­tions, and type in­vari­ants, ex­press­ible in Ada’s own syn­tax and checked by the com­piler or by the run­time at the pro­gram­mer’s di­rec­tion. A sub­pro­gram’s pre­con­di­tion is a boolean ex­pres­sion that must hold when the sub­pro­gram is called; its post­con­di­tion is a boolean ex­pres­sion that must hold when it re­turns; a type in­vari­ant is a prop­erty that must hold for every value of a type when­ever that value is vis­i­ble to out­side code. These are not as­ser­tions in the sense of run­time checks that may be dis­abled in pro­duc­tion. They are spec­i­fi­ca­tions: ma­chine-read­able state­ments of what a sub­pro­gram re­quires and guar­an­tees, which can be ver­i­fied by the SPARK tool­chain with­out ex­e­cut­ing the pro­gram at all.
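
A sketch of Ada 2012 contract aspects on a hypothetical bounded stack (names illustrative):

```ada
package Stacks is
   Capacity : constant := 100;

   type Stack is private;

   function Depth    (S : Stack) return Natural;
   function Is_Empty (S : Stack) return Boolean is (Depth (S) = 0);
   function Is_Full  (S : Stack) return Boolean is (Depth (S) = Capacity);

   function Top (S : Stack) return Integer
     with Pre => not Is_Empty (S);

   procedure Push (S : in out Stack; X : Integer)
     with Pre  => not Is_Full (S),
          Post => Depth (S) = Depth (S'Old) + 1 and then Top (S) = X;
private
   type Int_Array is array (1 .. Capacity) of Integer;
   type Stack is record
      Data : Int_Array;
      N    : Natural := 0;
   end record;
end Stacks;
```

The Pre and Post expressions are ordinary Ada boolean expressions: checkable at run time during development, and usable by the SPARK toolchain as proof obligations without executing anything.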

Design by con­tract — the idea, named and sys­tem­a­tised by Bertrand Meyer in the Eiffel lan­guage in 1986 — is the con­cep­tual foun­da­tion of this mech­a­nism. Eiffel had it first; Ada 2012 for­malised it in a lan­guage with a large ex­ist­ing user base, a for­mal stan­dard, and a ver­i­fi­ca­tion tool­chain ca­pa­ble of us­ing the con­tracts for sta­tic proof rather than merely run­time check­ing. The idea’s tra­jec­tory through the wider in­dus­try has been slow. C++ has no stan­dard con­tract mech­a­nism de­spite pro­pos­als dat­ing to the early 2010s; C++20 de­ferred a con­tracts pro­posal that had been in prepa­ra­tion for years. Java has never had con­tracts in the lan­guage; DbC in Java is done through li­braries, or through Javadoc con­ven­tions, or through JUnit tests that ap­prox­i­mate the post­con­di­tion check. Python’s type hint sys­tem, in­tro­duced in ver­sion 3.5 in 2015 and pro­gres­sively ex­tended since, is a par­tial ap­proach to con­tracts: it spec­i­fies types of in­puts and out­puts but not be­hav­ioural prop­er­ties. Rust’s trait bounds and type con­straints are an­other par­tial ap­proach. None of these pro­vides what Ada 2012 pro­vides: a stan­dard, com­piler-in­te­grated no­ta­tion for stat­ing what a sub­pro­gram re­quires and guar­an­tees, check­able at run­time dur­ing de­vel­op­ment and prov­able sta­t­i­cally by a tool­chain that ships with the lan­guage.

The direction of travel in every major language is toward contracts. TypeScript's type system grows more expressive with each release, adding conditional types, template literal types, and increasingly fine-grained narrowing — all of which are approximations of what a contract-capable type system can express directly. Python's typing module grows with each version, adding protocols, TypedDict, ParamSpec, and Concatenate — building, incrementally, toward the kind of interface specification that Ada has had since 1983.

The nullable reference crisis — null as the billion-dollar mistake, Tony Hoare's self-described worst design error — is a crisis that Ada mitigated but did not solve. Ada's access types are initialised to null by default, and dereferencing a null access value raises Constraint_Error at runtime — a defined behaviour, unlike C's undefined behaviour on null dereference, but a runtime check rather than a compile-time guarantee. Ada 2005 introduced not null access type annotations, allowing the programmer to declare that a particular access value may never be null and having the compiler enforce the restriction statically. This is genuine compile-time null safety, but it is opt-in, added twenty-two years after the original standard, and not the default. C#'s nullable reference types, added in version 8.0 in 2019, take the same opt-in approach from the opposite direction: references are assumed non-null unless explicitly annotated as nullable, and the compiler enforces the distinction. Both languages arrived at the same architectural insight — that nullability should be visible in the type — but neither made it the default from the start, and neither can claim to have eliminated the problem that Hoare identified. What Ada provided from the beginning was the safer failure mode: a raised exception rather than corrupted memory.
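
The Ada 2005 null exclusion, sketched:

```ada
type Node;
type Node_Access is access Node;   -- defaults to null; dereferencing a null
                                   -- value raises Constraint_Error at run time

subtype Valid_Node is not null Node_Access;   -- Ada 2005: null excluded

type Node is record
   Next : Node_Access;   -- links may legitimately be null
end record;

-- Callers must supply a provably non-null value at the boundary:
procedure Visit (N : Valid_Node);
```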

The exception handling model that Ada introduced in 1983 was the first production realisation of structured exception handling — the idea that exceptions are not simply jumps to an error handler but events that are raised, propagated through a defined call stack, and handled in an exception handler that is syntactically associated with the block or subprogram that established it. Ada's model requires that exceptions be declared before use, that handlers be associated with specific scopes, and that the propagation rules be defined precisely. C++ adopted structured exception handling in 1990, seven years after Ada. Java went further than Ada in one significant respect: Java's checked exceptions require that certain exception types be either caught or declared in the method's throws clause, making the caller's responsibility for handling failure part of the function's compiled interface. Ada has no equivalent mechanism — Ada exceptions propagate freely through the call stack, and a subprogram's specification says nothing about which exceptions it may raise. Java's checked exceptions drew less from Ada than from CLU's signalling mechanism and Modula-3's exception declarations, and the experiment was contentious from the start: checked exceptions are widely considered one of Java's design missteps, Scala and Kotlin removed them entirely, and the industry has never settled the question of whether the compiler should enforce exception awareness at the call site.
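
The shape of the model, sketched (assumes an enclosing unit with `with Ada.Text_IO; with Ada.Exceptions;`; the names are illustrative):

```ada
Parse_Error : exception;   -- exceptions are declared, like anything else

procedure Parse (Input : String) is
begin
   if Input'Length = 0 then
      raise Parse_Error with "empty input";   -- Ada 2005 message form
   end if;
   --  ... parsing proper would go here ...
exception
   -- The handler is syntactically tied to this subprogram's frame;
   -- anything unhandled propagates to the caller under defined rules.
   when E : Parse_Error =>
      Ada.Text_IO.Put_Line (Ada.Exceptions.Exception_Message (E));
end Parse;
```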

Rust makes the re­lated choice of re­mov­ing ex­cep­tions en­tirely: er­rors are val­ues, re­turned from func­tions in a Result type, and the ques­tion of whether a func­tion can fail is ex­pressed in its re­turn type rather than in a sep­a­rate ex­cep­tion mech­a­nism. This is a dif­fer­ent res­o­lu­tion of the same un­der­ly­ing prob­lem — that callers must know whether a called func­tion can fail and in what ways — and it reaches a con­clu­sion that Ada’s own ex­cep­tion model does not reach: in Ada, as in C++ and Python, ex­cep­tions are a hid­den chan­nel, prop­a­gat­ing through the call stack with­out ap­pear­ing in the sub­pro­gram’s spec­i­fi­ca­tion, and a caller can for­get about them en­tirely un­til they ar­rive. Rust’s er­ror-as-value ap­proach and Java’s checked ex­cep­tions are two dif­fer­ent at­tempts to close that chan­nel. Ada’s con­tri­bu­tion was not to close it but to struc­ture it — to re­place the raw jump of setjmp/​longjmp and the am­bi­gu­ity of sig­nal han­dlers with a mech­a­nism whose prop­a­ga­tion rules were de­fined, whose han­dlers were scoped, and whose be­hav­iour was pre­dictable. That struc­tur­ing was the foun­da­tion on which every sub­se­quent ex­cep­tion sys­tem was built, even the sys­tems that went fur­ther than Ada was will­ing to go.

Ada’s an­nexes — the op­tional ex­ten­sions to the core lan­guage, de­fined in the stan­dard, re­quir­ing sep­a­rate com­piler cer­ti­fi­ca­tion — rep­re­sent a de­sign de­ci­sion that no other lan­guage has repli­cated and that the in­dus­try might have ben­e­fited from con­sid­er­ing. The an­nexes de­fine fea­tures for spe­cific do­mains: real-time sys­tems, dis­trib­uted sys­tems, in­for­ma­tion sys­tems, nu­mer­ics, safety and se­cu­rity, high-in­tegrity sys­tems. A com­piler that im­ple­ments Annex C for sys­tems pro­gram­ming must im­ple­ment cer­tain pre­de­fined at­trib­utes and rep­re­sen­ta­tion clauses. A com­piler that im­ple­ments Annex D for real-time sys­tems must im­ple­ment task pri­or­i­ties, sched­ul­ing poli­cies, and time con­straints in ways the stan­dard spec­i­fies. The cer­ti­fi­ca­tion that a com­piler con­forms to an an­nex is in­de­pen­dently ver­i­fi­able. The user of a com­piler knows pre­cisely what it sup­ports and does not sup­port, be­cause the sup­port is a doc­u­mented, testable claim against a for­mal stan­dard rather than an emer­gent prop­erty of what­ever the com­pil­er’s au­thors chose to im­ple­ment.

No other main­stream lan­guage has this model. JavaScript’s fea­ture sup­port is tracked through com­pat­i­bil­ity ta­bles be­cause the stan­dard and the im­ple­men­ta­tion are sep­a­rate worlds with no for­mal cou­pling. Python’s stan­dard li­brary cov­er­age varies be­tween im­ple­men­ta­tions — CPython, PyPy, and MicroPython are dif­fer­ent things that call them­selves Python. Rust’s fea­ture set is for­mally sta­ble or un­sta­ble, but the bound­ary be­tween the two moves over time and the no­tion of cer­ti­fi­able com­pli­ance does not ex­ist. C++ com­pil­ers com­pete on which fea­tures of the lat­est stan­dard they have im­ple­mented rather than on cer­ti­fied com­pli­ance with any de­fined sub­set. Ada’s an­nex model is the idea that a stan­dard should be a con­tract — testable, cer­ti­fi­able, use­ful pre­cisely be­cause it spec­i­fies not just what is per­mit­ted but what is re­quired. The DO-178C stan­dard for air­borne soft­ware cer­ti­fi­ca­tion, which gov­erns the soft­ware in every cer­ti­fied civil air­craft, re­quires doc­u­men­ta­tion and process ev­i­dence that a for­mally stan­dard­ised lan­guage with cer­ti­fi­able com­piler con­for­mance makes con­sid­er­ably eas­ier to pro­duce. Ada’s stan­dard, with its an­nex struc­ture and con­for­mance test­ing scheme, fits DO-178C’s re­quire­ments with un­usual di­rect­ness. C and C++ can meet the same cer­ti­fi­ca­tion re­quire­ments — and do, rou­tinely — but through ad­di­tional process doc­u­men­ta­tion and tool­ing rather than through a stan­dard that was de­signed with cer­ti­fi­ca­tion in mind. Ada’s stan­dard­i­s­a­tion and tool­ing make it un­usu­ally well suited to cer­ti­fi­ca­tion-heavy do­mains; they do not make it the only lan­guage that can op­er­ate in them.

The ques­tion of why Ada’s in­flu­ence is so con­sis­tently un­ac­knowl­edged has sev­eral an­swers, none of them fully sat­is­fy­ing. The most straight­for­ward is in­sti­tu­tional: Ada was a gov­ern­ment lan­guage, pro­cured through a process that Silicon Valley was not watch­ing and would not have re­spected if it had been. The de­sign­ers of C++, Java, and Python were not read­ing the Steelman doc­u­ment. They were solv­ing the prob­lems in front of them — mak­ing C safer, mak­ing soft­ware ob­jects work, mak­ing script­ing sim­ple — and their so­lu­tions con­verged on Ada’s so­lu­tions not be­cause they were fol­low­ing Ada but be­cause the prob­lems were the same prob­lems and the good so­lu­tions are the good so­lu­tions.

A sec­ond an­swer is aes­thetic. Ada’s syn­tax is ver­bose in a way that pro­gram­mers with a back­ground in C find un­pleas­ant. if X then Y; end if; in­stead of if (x) { y; }. pro­ce­dure Sort (A : in out Array_Type) in­stead of void sort(int* a). The ver­bosity was de­lib­er­ate — Ichbiah wanted pro­grams to be read­able by peo­ple other than their au­thors, and read­abil­ity over time favours ex­plic­it­ness — but it was ex­pe­ri­enced as bu­reau­cratic and un-hacker-like, and the pro­gram­ming cul­ture that formed in the 1980s and 1990s was or­gan­ised around the propo­si­tion that con­cise­ness was so­phis­ti­ca­tion. Ada was the lan­guage of pro­cure­ment of­fi­cers. C was the lan­guage of peo­ple who un­der­stood ma­chines. The cul­tural ver­dict was de­liv­ered early and never sub­stan­tially re­vis­ited.

A third an­swer is that Ada’s de­ploy­ment do­main meant that Ada’s suc­cesses were in­vis­i­ble. A soft­ware pro­ject that com­piles with­out er­ror, runs with­out race con­di­tions, and has been for­mally ver­i­fied to sat­isfy its spec­i­fi­ca­tion does not gen­er­ate in­ci­dent re­ports or post-mortems or con­fer­ence talks about what went wrong. Ada’s suc­cesses — the air­craft that have not crashed, the rail­way sig­nalling sys­tems that have not failed, the mis­sile guid­ance soft­ware that has not mis­guided — are in­vis­i­ble pre­cisely be­cause they are suc­cesses. The lan­guages that failed vis­i­bly, in buffer over­flows and null pointer ex­cep­tions and data races and se­cu­rity vul­ner­a­bil­i­ties, gen­er­ated the dis­course. Ada gen­er­ated re­li­able soft­ware, and re­li­able soft­ware does not gen­er­ate dis­course.

Ada’s suc­cesses are in­vis­i­ble be­cause they are suc­cesses. The lan­guages that failed vis­i­bly gen­er­ated the dis­course. Reliable soft­ware does not gen­er­ate con­fer­ence talks.

The tra­jec­tory of mod­ern lan­guage de­sign is, traced care­fully, a tra­jec­tory to­ward po­si­tions Ada oc­cu­pied early. The type sys­tem fea­tures that Rust, Haskell, TypeScript, and Swift are cel­e­brated for — sum types, para­met­ric poly­mor­phism, con­straint-based gener­ics, affine types and own­er­ship — each solve a prob­lem that Ada iden­ti­fied in 1983 and that the main­stream lan­guages of the sub­se­quent twenty years de­clined to solve. The mod­ule sys­tems that Go, Rust, and Swift have been praised for — ex­plicit in­ter­faces, sep­a­ra­tion of spec­i­fi­ca­tion from im­ple­men­ta­tion, vis­i­bil­ity con­trol that the com­piler en­forces rather than merely rec­om­mends — are par­tial im­ple­men­ta­tions of what Ada’s pack­age sys­tem pro­vided from the be­gin­ning. The con­cur­rency mod­els that Go’s chan­nels and Rust’s own­er­ship have been cred­ited with in­vent­ing be­long to the same CSP and mes­sage-pass­ing lin­eage as Ada’s ren­dezvous and pro­tected ob­ject model, which pro­vided pro­duc­tion-grade an­swers to the same prob­lems in 1983 and 1995. The con­tract sys­tems that C#’s nul­lable ref­er­ences, TypeScript’s type nar­row­ing, and Python’s grad­ual typ­ing are ap­prox­i­mat­ing, from dif­fer­ent an­gles, are what Ada 2012 added to a lan­guage that has been in con­tin­u­ous use since be­fore most of its prac­ti­tion­ers were born.

This is not a claim that every mod­ern lan­guage copied Ada, or that Ada’s de­sign­ers de­serve credit that has been with­held from them. Most of the con­ver­gence is gen­uinely in­de­pen­dent: the de­sign­ers of Rust did not de­rive the bor­row checker from Ada’s ac­cess type rules; the de­sign­ers of Go did not de­rive chan­nels from Ada’s ren­dezvous; the de­sign­ers of TypeScript did not de­rive dis­crim­i­nated unions from Ada’s vari­ant records. The con­ver­gence is real but it is con­ver­gence to­ward cor­rect so­lu­tions to real prob­lems, not pla­gia­rism. Ada’s de­sign­ers iden­ti­fied the prob­lems first, and iden­ti­fied them with un­usual clar­ity, be­cause they were de­sign­ing for a con­text in which the prob­lems had al­ready killed peo­ple and would kill more if the so­lu­tions were wrong.

What Ada demon­strates is not that it should be more widely used — though the ar­gu­ment for its use in any do­main where soft­ware re­li­a­bil­ity mat­ters is stronger than the in­dus­try credit it re­ceives — but that the prob­lems mod­ern lan­guage de­sign is solv­ing are old prob­lems, and that the so­lu­tions mod­ern lan­guages are dis­cov­er­ing are old so­lu­tions. The idea that null ref­er­ences re­quire ex­plicit an­no­ta­tion, that con­cur­rency re­quires lan­guage-level en­force­ment rather than li­brary-level con­ven­tion, that in­ter­face and im­ple­men­ta­tion should be struc­turally sep­a­rated, that type sys­tems should en­code do­main con­straints rather than merely ma­chine rep­re­sen­ta­tions, that generic code should be ver­i­fi­able at in­stan­ti­a­tion time — these are not in­sights of the 2010s or the 2020s. They are in­sights of the 1970s and 1980s, for­mu­lated in re­sponse to soft­ware fail­ures whose con­se­quences were con­crete enough that the peo­ple re­spon­si­ble for pre­vent­ing them were will­ing to pay for a lan­guage com­pe­ti­tion that lasted five years.

The in­dus­try has spent forty years build­ing lan­guages whose best fea­tures con­verge, in­de­pen­dently, on po­si­tions Ada staked out decades ear­lier. It has spent the same forty years de­scrib­ing Ada as ir­rel­e­vant. The first ob­ser­va­tion and the sec­ond are in ten­sion in a way that the in­dus­try has not yet fully ac­knowl­edged, and that Ada — de­ployed in air­craft over­head, in rail sig­nals along­side the tracks, in the guid­ance sys­tems of space­craft cur­rently in tran­sit be­tween plan­ets — has not needed to ac­knowl­edge, be­ing too busy run­ning cor­rectly to con­cern it­self with the ques­tion of whether it is ap­pre­ci­ated.

...

Read the original on www.iqiipi.com »

7 237 shares, 21 trendiness

NASA Force


NASA Force is a new hiring initiative—developed in partnership with the U.S. Office of Personnel Management—designed to bring exceptional technical talent into mission-critical roles that support NASA's exploration, research, and advanced technology priorities. Highly skilled early- to mid-career engineers, technologists, and innovators join NASA for focused term appointments, typically 1–2 years with the possibility of extension, to solve complex challenges and help maintain U.S. leadership in air and space.

Through NASA Force, you will contribute to missions that advance human spaceflight, aeronautics, and scientific discovery while helping expand humanity's understanding of the universe. You will take a systems approach to solving problems, working across teams and disciplines from concept to execution. Your work will demand technical excellence, critical thinking, and continuous learning, and every contribution will directly support NASA's mission.

* Work on flight systems, lunar infrastructure, and advanced technologies that go from concept to execution and support real missions beyond Earth.

* Collaborate directly with engineers, scientists, and partners shaping the future of space, aeronautics, and national capability.

* Expand your technical depth by solving complex, real-world problems where the standard is performance, not theory.

* Share knowledge, mentor others, and contribute to a culture that compounds capability across NASA's workforce.

HOW YOU WILL ENTER THE MISSION

You will join a collaborative, mission-driven team where ideas are valued, contributions are recognized, and innovation is part of everyday work. NASA Force offers an opportunity to grow across projects and disciplines, build your expertise, and take on new challenges while working alongside some of the world's leading minds. Propulsion systems support across the Commercial Crew Program, Launch Services Program, and Artemis.

If You Want Your Work to Operate Beyond Earth, This is Where it Begins.

...

Read the original on nasaforce.gov »

8 236 shares, 23 trendiness

smol-machines/smolvm: Tool to build & run portable, lightweight, self-contained virtual machines.

Ship and run soft­ware with iso­la­tion by de­fault.

This is a CLI tool that lets you:

Pack a state­ful vir­tual ma­chine into a sin­gle file (.smolmachine) to re­hy­drate on any sup­ported plat­form.

# install (macOS + Linux)

curl -sSL https://smolmachines.com/install.sh | bash

# for coding agents — install + discover all commands

curl -sSL https://smolmachines.com/install.sh | bash && smolvm --help

# run a command in an ephemeral VM (cleaned up after exit)

smolvm machine run --net --image alpine -- sh -c 'echo "Hello world from a microVM" && uname -a'

# interactive shell

smolvm machine run --net -it --image alpine -- /bin/sh

# inside the VM: apk add sl && sl && exit

Sandbox un­trusted code — run un­trusted pro­grams in a hard­ware-iso­lated VM. Host filesys­tem, net­work, and cre­den­tials are sep­a­rated by a hy­per­vi­sor bound­ary.

# network is off by default — untrusted code can't phone home

smolvm machine run --image alpine -- ping -c 1 1.1.1.1

# fails — no network access

# lock down egress — only allow specific hosts

smolvm machine run --net --image alpine --allow-host registry.npmjs.org -- wget -q -O /dev/null https://registry.npmjs.org

# works — allowed host

smolvm machine run --net --image alpine --allow-host registry.npmjs.org -- wget -q -O /dev/null https://google.com

# fails — not in allow list

Pack into portable ex­e­cuta­bles — turn any work­load into a self-con­tained bi­nary. All de­pen­den­cies are pre-baked — no in­stall step, no run­time down­loads, boots in

smolvm pack create --image python:3.12-alpine -o ./python312

./python312 run -- python3 --version

# Python 3.12.x — isolated, no pyenv/venv/conda needed

smolvm machine create --net myvm

smolvm machine start --name myvm

smolvm machine exec --name myvm -- apk add sl

smolvm machine exec --name myvm -it -- /bin/sh

# inside: sl, ls, uname -a — type 'exit' to leave

smolvm machine stop --name myvm

Use git and SSH with­out ex­pos­ing keys — for­ward your host SSH agent into the VM. Private keys never en­ter the guest — the hy­per­vi­sor en­forces this. Requires an SSH agent run­ning on your host (ssh-add -l to check).

smolvm machine run --ssh-agent --net --image alpine -- sh -c 'apk add -q openssh-client && ssh-add -l'

# lists your host keys, but they can't be extracted from inside the VM

smolvm machine exec --name myvm -- git clone git@github.com:org/private-repo.git

image = "python:3.12-alpine"

net = true

[network]

allow_hosts = ["api.stripe.com", "db.example.com"]

[dev]

init = ["pip install -r requirements.txt"]

volumes = ["./src:/app"]

[auth]

ssh_agent = true

smolvm machine create myvm -s Smolfile

smolvm machine start --name myvm

Each workload gets real hardware isolation — its own kernel on Hypervisor.framework (macOS) or KVM (Linux), using the libkrun VMM with a custom kernel (libkrunfw). Pack it into a .smolmachine and it runs anywhere the host architecture matches, with zero dependencies.

Defaults: 4 vCPUs, 8 GiB RAM. Memory is elastic via virtio balloon — the host only commits what the guest actually uses and reclaims the rest automatically. vCPU threads sleep in the hypervisor when idle, so over-provisioning has near-zero cost. Override with --cpus and --mem.

* Network is opt-in (--net on machine create). TCP/UDP only, no ICMP.

* ma­cOS: bi­nary must be signed with Hypervisor.framework en­ti­tle­ments.

* --ssh-agent requires an SSH agent running on the host (SSH_AUTH_SOCK must be set).

...

Read the original on github.com »

9 207 shares, 9 trendiness

Discourse is Not Going Closed Source

Cal.com just closed their source code, ar­gu­ing AI has made open source too dan­ger­ous. After 13 years of build­ing Discourse in pub­lic, we’re stay­ing open. Here’s why.


Cal.com have an­nounced they’re clos­ing their code­base and will no longer be an open-source prod­uct. Their rea­son­ing is that AI has made open source too dan­ger­ous for SaaS com­pa­nies. Code gets scanned and ex­ploited by AI at near-zero cost, and trans­parency is now be­com­ing ex­po­sure.

I un­der­stand where this is com­ing from; the in­dus­try is chang­ing fast. New AIs with new cy­ber­se­cu­rity ca­pa­bil­i­ties are be­ing re­leased every few weeks. It’s a scary world, and I agree com­pletely that open-source com­pa­nies need to adapt.

I do not agree with the de­ci­sion that clos­ing source is the so­lu­tion to the se­cu­rity storm that is upon us.

I do not agree it is the cor­rect nar­row de­ci­sion for SaaS providers, and I do not agree it is the cor­rect de­ci­sion for the in­dus­try at large.

I want to be clear and firm about the po­si­tion Discourse is tak­ing. We are open source, we’ve al­ways been open source, and we will con­tinue to be open source.

Ever since Jeff, Robin, and I shipped the first com­mits to the Discourse repos­i­tory on GitHub, over a decade ago, the repos­i­tory has been li­censed un­der GPLv2. And that’s not chang­ing.

Cal.com's position boils down to the claim that if attackers can read your code, AI will let them exploit it faster than you can either harden or patch it, and the forced action you need to take is to hide the code so you can buy time. There's truth to the threat - AI has changed the speed at which vulnerabilities can be discovered. Over the past few months, our team has found and addressed a very large number of latent security issues in our open-source Discourse codebase using GPT-5.3 Codex, GPT-5.4, and Claude Opus 4.6.

OpenAI and Anthropic are both extremely concerned about this attack vector, and in response GPT-5.4-Cyber and Anthropic Mythos are being rolled out cautiously.

But I think the race to close soft­ware off misses some­thing. Those same AI sys­tems don’t ac­tu­ally need your source code to find vul­ner­a­bil­i­ties; they work against com­piled bi­na­ries and black-box APIs.

Closed source has al­ways been a weaker de­fense for SaaS than peo­ple want to ad­mit. A web ap­pli­ca­tion is not some­thing you ship once and keep hid­den. Large parts of it are de­liv­ered straight into the user’s browser on every re­quest: JavaScript, API con­tracts, client-side flows, val­i­da­tion logic, and fea­ture be­hav­ior. Attackers can in­spect all of that al­ready, and AI makes that in­spec­tion dra­mat­i­cally cheaper. Closing the repos­i­tory may hide some server-side im­ple­men­ta­tion de­tail, but it does not make the sys­tem in­vis­i­ble. What it mostly does is re­duce how many de­fend­ers can in­spect the full pic­ture.

The world’s most im­por­tant in­ter­net in­fra­struc­ture runs on open-source soft­ware, es­pe­cially Linux. That code is ex­posed to con­stant scrutiny from at­tack­ers, de­fend­ers, re­searchers, cloud ven­dors, and main­tain­ers across the globe. It is at­tacked re­lent­lessly, but it is also hard­ened re­lent­lessly. That is the real les­son of open source in se­cu­rity: trans­parency does not elim­i­nate risk, but it en­ables a much larger de­fen­sive re­sponse.

AI does change the se­cu­rity cal­cu­lus, but I still be­lieve it fa­vors open source. Yes, AI-powered scan­ning tools can now sur­face in hours the kinds of se­cu­rity is­sues that used to take hu­man re­searchers weeks to un­cover. In its re­search pre­view launch, OpenAI said Codex Security scanned more than 1.2 mil­lion com­mits across ex­ter­nal repos­i­to­ries in a 30-day beta pe­riod and iden­ti­fied 792 crit­i­cal find­ings and 10,561 high-sever­ity find­ings.

That is a stag­ger­ing vol­ume of vul­ner­a­bil­ity dis­cov­ery.

But the key ques­tion is: who gets to use those tools?

If your code is open source, your se­cu­rity team can scan it, your con­trib­u­tors can scan it, and in­de­pen­dent re­searchers can scan it too. That does not guar­an­tee de­fend­ers will al­ways get there first, but it dra­mat­i­cally in­creases the num­ber of peo­ple who can help find real prob­lems early. If your code is closed, at­tack­ers can still study the prod­uct from the out­side, through the browser, the API, the mo­bile client, and the be­hav­ior of the run­ning sys­tem, while only your in­ter­nal team gets di­rect ac­cess to the full code. That is not a re­duc­tion in ex­po­sure. It is a re­duc­tion in de­fen­sive ca­pac­ity.

At Discourse, we’ve leaned into this re­al­ity. Our last monthly re­lease in­cluded fixes for 50 se­cu­rity is­sues iden­ti­fied through multi-day scans us­ing GPT-5.4 xhigh. Open source cre­ates a use­ful ur­gency: when your code is pub­lic, you as­sume it will be ex­am­ined closely, so you in­vest ear­lier and more ag­gres­sively in find­ing and fix­ing is­sues be­fore at­tack­ers do.

In a closed-source en­vi­ron­ment, you may mis­tak­enly think you are safe be­cause no­body can look. Some frac­tion of those is­sues would still be sit­ting there, undis­cov­ered by de­fend­ers and wait­ing for an at­tacker to stum­ble across them. That’s not a bet­ter sce­nario.

Discourse launched in 2013. Jeff Atwood, Robin Ward, and I started it be­cause the state of com­mu­nity soft­ware was em­bar­rass­ing. Forums were run­ning on decade-old PHP code­bases with se­cu­rity and up­grade mod­els from the early 2000s.

Facebook was where all the en­ergy was go­ing. They were swal­low­ing com­mu­nity dis­cus­sion whole and had ab­solutely no rea­son to let any of it be portable or user-con­trolled. We built Discourse as open source be­cause we thought com­mu­nity soft­ware should be­long to the com­mu­ni­ties us­ing it, not to what­ever plat­form hap­pened to be host­ing it that year.

That was 13 years ago. Today more than 22,000 com­mu­ni­ties run Discourse - tiny star­tups, Fortune 500 com­pa­nies, every­thing in be­tween. The whole code­base is on GitHub, GPL-licensed. Hundreds of out­side de­vel­op­ers have con­tributed se­cu­rity patches.

In 13 years of running Discourse in the open, we have not seen evidence that public source code made us less secure. We have had vulnerabilities, of course; every substantial piece of software does. But the pattern has generally been the one you would hope for: bugs were reported, coordinated disclosures were handled responsibly, CVEs were published, and fixes shipped quickly.

Cal.com is mak­ing a bet about the fu­ture of soft­ware se­cu­rity. They are bet­ting that in an AI-accelerated threat en­vi­ron­ment, re­duc­ing vis­i­bil­ity into the code­base will im­prove their se­cu­rity pos­ture. I think that is the wrong bet. We are mak­ing the op­po­site one: that in a world where AI makes vul­ner­a­bil­ity dis­cov­ery dra­mat­i­cally cheaper, the stronger po­si­tion is to let de­fend­ers use the same tools against code they can ac­tu­ally in­spect.

I want to be fair to Cal.com here, be­cause I don’t think they’re act­ing in bad faith. I just think the se­cu­rity ar­gu­ment is a con­ve­nient frame for de­ci­sions that are ac­tu­ally about some­thing else.

Competitive pres­sure, mostly. If your code is open, your com­peti­tors can read your ar­chi­tec­ture and your prod­uct think­ing. That’s painful, and it gets more painful as you grow - es­pe­cially the first time a well-funded com­peti­tor forks your repo and ships a hosted ver­sion at half your price.

Governance is the other big one. Open-source communities push back. They file issues about decisions they don't like. They fork. It's exhausting to manage, and closing the code makes the noise stop immediately. Then you've got investors asking why you're giving away the thing they just funded, and suddenly “closed source” looks a lot more defensible in a board deck.

These are all le­git­i­mate busi­ness pres­sures, and I don’t judge any­one for feel­ing them. But they’re busi­ness de­ci­sions, not se­cu­rity de­ci­sions. Framing a busi­ness de­ci­sion as a se­cu­rity im­per­a­tive does a dis­ser­vice to the open-source ecosys­tem that helped Cal.com get to where they are.

Every re­lease cy­cle, our team de­ploys the lat­est AI vul­ner­a­bil­ity scan­ners (GPT-5.4 xhigh at the mo­ment, and next up is Opus 4.7 max) for multi-day deep analy­sis of our code­base. The scans catch the same class of vul­ner­a­bil­i­ties that an at­tack­er’s AI would find, and we patch them first.

AI scan­ning is per­formed us­ing a multi-step process. We loop through hun­dreds of con­trollers, look­ing at each con­troller in­de­pen­dently for vul­ner­a­bil­i­ties. Then, for each can­di­date vul­ner­a­bil­ity we find in the bulk scans, we val­i­date it by di­rect­ing an agent to write a fail­ing test in­side a con­tainer run­ning a full work­ing Discourse en­vi­ron­ment. Only if it is able to demon­strate that the is­sue it found is real will we count it as an is­sue and es­ca­late it to the hu­man queue. A huge ad­van­tage is that we also get a can­di­date work­ing patch for us to val­i­date dur­ing this process.
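The multi-step loop described above can be sketched roughly as follows. This is an illustrative skeleton only — the stage functions are hypothetical stand-ins, not Discourse's actual tooling; a real pipeline would prompt an AI model per controller and run the validation agent inside a containerized Discourse environment:

```python
# Hypothetical sketch of a scan-then-validate triage loop.
# scan_controller and validate are stubs standing in for the AI stages.

def scan_controller(name):
    """Stage 1: scan a single controller for candidate vulnerabilities.
    In practice this would prompt a model with the controller's source."""
    # Stub: flag a fake candidate for controllers whose name ends in 's'.
    return [f"{name}: possible IDOR"] if name.endswith("s") else []

def validate(candidate):
    """Stage 2: have an agent write a failing test in a full environment.
    Only candidates with a reproducing test count as real issues."""
    return "IDOR" in candidate  # stub for "the failing test reproduced"

def triage(controllers):
    """Loop over controllers independently; escalate only validated findings."""
    confirmed = []
    for controller in controllers:
        for candidate in scan_controller(controller):
            if validate(candidate):
                confirmed.append(candidate)  # goes to the human queue
    return confirmed

print(triage(["topics", "users", "admin"]))
```

The same validation step that confirms a finding also tends to produce a candidate patch, which is why the agent works inside a full running environment rather than against source alone.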

Full codebase scans are cheap at the moment because they are heavily subsidized. An OpenAI full-source-code scan for Discourse could cost $2,000 if you were paying retail. The same scan only costs $50 or so on a $200-a-month plan. Furthermore, OpenAI and Anthropic graciously offer plans to many open-source companies and contributors. We are extremely confident prices will go down and quality will go up over the coming months and years.
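As a quick back-of-the-envelope check, the gap between those two price points (both figures are the ones quoted in this post) works out like this:

```python
# Retail vs. subsidized scan cost, using the numbers quoted above.
retail_scan = 2000   # USD, retail price of a full-source-code scan
plan_scan = 50       # USD, approximate cost on a $200-a-month plan

ratio = retail_scan / plan_scan
print(f"The subsidized scan is roughly {ratio:.0f}x cheaper than retail.")
```

At a 40x discount, running a deep scan every release cycle becomes routine rather than a budget line item.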

The cal­cu­lus in the in­dus­try is chang­ing very quickly. Last year we spent tens of thou­sands of dol­lars on third-party se­cu­rity scans. It is stag­ger­ing that you can get sig­nif­i­cantly bet­ter qual­ity to­day for a frac­tion of the cost.

Our bug bounty pro­gram works bet­ter be­cause the code is pub­lic. Security re­searchers can do mean­ing­ful analy­sis with­out re­verse en­gi­neer­ing. They find real bugs, and we treat them with ur­gency. Architecture mat­ters too: even if an at­tacker finds a vul­ner­a­bil­ity, sand­boxed ex­e­cu­tion en­vi­ron­ments, ag­gres­sive rate lim­it­ing, con­tent se­cu­rity poli­cies, and the prin­ci­ple of least priv­i­lege across every ser­vice bound­ary limit the blast ra­dius.

Bug boun­ties were built for a world where dis­cov­ery was rel­a­tively scarce. AI is push­ing us into a world where dis­cov­ery is abun­dant. That is great for de­fense, but it makes cash re­wards much harder to ad­ju­di­cate fairly. We have paused our re­wards for now, but very much ap­pre­ci­ate the com­mu­nity of de­fend­ers and con­tinue to work with HackerOne on our bounty pro­gram.

When a vul­ner­a­bil­ity is iden­ti­fied, our re­lease pipeline can push a patch to every hosted Discourse in­stance within hours. Speed of re­sponse mat­ters most. Faster dis­cov­ery due to our open-source na­ture means we tend to patch stuff faster. Upstream con­tri­bu­tions close the loop. When we find vul­ner­a­bil­i­ties in our de­pen­den­cies (Rails, Ember, PostgreSQL, Redis), we re­port them and con­tribute fixes. That makes the en­tire ecosys­tem more se­cure, which makes us more se­cure.

Biological im­mune sys­tems work be­cause they’re ex­posed to threats. They en­counter pathogens and build mem­ory. An im­mune sys­tem that’s never been chal­lenged will col­lapse at the first real in­fec­tion. Open-source code­bases work the same way - vul­ner­a­bil­i­ties that get found and patched make the soft­ware harder to at­tack. Security re­searchers who read the code add lay­ers of de­fense, and pub­lic au­dits build in­sti­tu­tional knowl­edge about where the weak points are and how to shore them up.

Closed source can buy some ob­scu­rity, but ob­scu­rity is brit­tle. Code gets leaked, bi­na­ries get re­verse en­gi­neered, APIs get mapped, and at­tack­ers learn a lot just by in­ter­ro­gat­ing the run­ning sys­tem. The real de­fense is not keep­ing the code hid­den for­ever. It is build­ing soft­ware and op­er­a­tional prac­tices that hold up when scrutiny ar­rives.

Discourse ex­ists be­cause of open source. We were built on Ruby, on Rails, on PostgreSQL, on Redis, on Ember, on Linux, and many other pro­jects. All of them were open and main­tained by com­mu­ni­ties that be­lieved in trans­parency. We owe them the same thing back.

Cal.com acknowledged this in their announcement. They said closing their code “is not a rejection of what open source gave us.” But in practice, that's what it is. You can't take five years of community contributions, close the gate, and claim you're grateful. I don't think it works that way.

We will not be clos­ing our source code. Thirteen years of ev­i­dence tells us that open­ness makes us more se­cure. Our com­mu­nity de­serves ac­cess to the code that runs their com­mu­ni­ties. And the best de­fense against AI-powered at­tacks is AI-powered de­fense, de­ployed by as many peo­ple as pos­si­ble, against code they can ac­tu­ally read.

Open source is­n’t dead. But it takes courage to do se­cu­rity prop­erly in­stead of re­treat­ing be­hind a locked door and hop­ing no­body has a key. We’ve done it for 13 years and we’re go­ing to keep on do­ing it.

...

Read the original on blog.discourse.org »

10 181 shares, 22 trendiness

Tesla tells HW3 owner to 'be patient' after 7 years of waiting for FSD

The Dutch Tesla owner who launched a collective claim against Tesla over FSD on HW3 cars called Tesla to ask about the €6,400 he paid for “Full Self-Driving” in 2019. After 7 years of waiting, Tesla's answer was to just “be patient.”

It’s an al­most com­i­cally tone-deaf re­sponse that per­fectly en­cap­su­lates Tesla’s ap­proach to the HW3 prob­lem — and it’s only go­ing to fuel the grow­ing le­gal pres­sure in Europe.

Mischa Sigtermans, the Dutch Model 3 owner who launched the HW3 col­lec­tive claim site we re­ported on ear­lier this week, called Tesla to­day and recorded the en­tire con­ver­sa­tion. He posted the de­tails in a thread on X.

Sigtermans paid €6,400 for FSD when he bought one of the first Model 3s in the Netherlands in 2019. Last week, the Dutch ve­hi­cle au­thor­ity RDW granted Tesla type ap­proval for FSD Supervised — the first in the EU. But the ap­proved build only runs on Tesla’s newer AI4 com­puter. HW3 cars like his get noth­ing.

So he called Tesla. His first ques­tion: when does FSD come to HW3 cars?

Tesla's answer: “No information about when it comes, or if it comes at all.”

Not when. If.

Sigtermans then asked what exactly he paid for. Tesla told him he paid for “the full self-drive capability.” As he pointed out, that's what's on his 2019 invoice — “capability.” Not “supervised.” Not “lite.” The full capability.

When he brought up Musk's admission that HW3 isn't enough for unsupervised FSD, Tesla said it had “no information about this.” When he asked about the promised free hardware upgrade, Tesla said there was “no information within Europe.” When he asked how Tesla plans to handle all the Europeans who bought FSD on HW3, Tesla said: “We share whatever information is available at that moment.” The information available: none.

Sigtermans then told the agent about the 3,000 HW3 owners from 29 countries who signed up to his claim site — representing €6.5 million in FSD purchases. He asked to speak to a spokesperson about finding a solution. The agent put him on hold, checked with his manager, and came back with the final answer: “You just have to be patient.”

After Sigtermans hung up, Tesla immediately closed his case. He received an automated email: “Your question is closed” — with a link to book a test drive.

The full context here makes Tesla's “be patient” response even more absurd. Here's what HW3 owners have been told over the years:

In 2019, when Sigtermans and hun­dreds of thou­sands of other own­ers pur­chased FSD, Tesla sold it as a pack­age that would en­able full au­ton­omy through soft­ware up­dates alone. The hard­ware was sup­pos­edly suf­fi­cient.

By August 2024, Tesla VP of AI Ashok Elluswamy ac­knowl­edged that HW3 runs a relatively smaller model” than AI4 with workarounds. The gap be­tween HW3 and HW4 was widen­ing, not clos­ing.

In January 2025, Elon Musk finally admitted what many had long suspected: Tesla would need to replace all HW3 computers in vehicles “where FSD was purchased.” On the Q4 2024 earnings call, he called the hardware replacement “painful and difficult” and said he was “kind of glad that not that many people bought the FSD package.”

Tesla even filed a patent describing a “math trick” to squeeze a modern FSD model onto HW3. The patent itself acknowledges this workaround can render the system “inoperable” for perception units.

Now, 15 months after Musk's admission, Tesla still has no hardware retrofit program, no refund policy, and no concrete timeline. The company has vaguely promised a stripped-down “v14 Lite” for HW3 sometime in Q2 2026, but that's a fundamentally different product than what was sold. It's a diet version of a system that itself is still only Level 2 driver assistance — not the autonomous driving Tesla originally promised.

And when an owner who has waited since 2019 calls to ask about it, the an­swer is: be pa­tient.

Sigtermans is­n’t just vent­ing on X. He launched hw3­claim.nl, a site to bun­dle HW3 + FSD own­ers across the EU into a col­lec­tive claim against Tesla, seek­ing €6,800 per owner. In one week, 3,000 own­ers from 29 coun­tries signed up — rep­re­sent­ing over €6 mil­lion in FSD pur­chases.

The tim­ing is sig­nif­i­cant. FSD launch­ing in Europe was al­ways go­ing to be the mo­ment the HW3 prob­lem stopped be­ing ab­stract and be­came a con­crete, quan­tifi­able harm. European own­ers can now see ex­actly what they’re miss­ing — their neigh­bors with AI4 cars are get­ting FSD Supervised, while they get noth­ing de­spite pay­ing thou­sands of eu­ros for the same promise.

EU con­sumer pro­tec­tion law is con­sid­er­ably stronger than what Tesla faces in the US. Buyers have ro­bust rights around con­for­mity with ad­ver­tised fea­tures, and coun­tries like the Netherlands, Germany, and France have ma­ture col­lec­tive-re­dress frame­works.

This is­n’t the first le­gal ac­tion ei­ther. In October 2025, thou­sands of Tesla own­ers joined a class-ac­tion law­suit in Australia al­leg­ing Tesla mis­rep­re­sented FSD ca­pa­bil­i­ties. That ac­tion was di­rectly trig­gered by Musk’s HW3 ad­mis­sion.

“Be patient” is an extraordinary thing to tell someone who paid you €6,400 seven years ago for a product you now admit you can't deliver on their hardware.

We’ve been cov­er­ing the HW3 saga for years, and this phone call per­fectly cap­tures the core prob­lem: Tesla has no an­swer. Not a bad an­swer — no an­swer. The com­pany has­n’t an­nounced a retro­fit pro­gram, has­n’t of­fered re­funds, has­n’t set a time­line. All it can of­fer is the same thing it’s been of­fer­ing since 2019: wait.

The difference now is that the waiting has an endpoint, and it's not the one Tesla promised. FSD launched in Europe last week, and HW3 owners are locked out. The harm isn't theoretical anymore — it's their neighbor driving with FSD while they stare at the same “coming soon” message they've had for seven years.

Sigtermans' collective claim is going to grow. EU consumer law is built for exactly this scenario: a company that sold a capability it cannot deliver. Tesla's own CEO admitted HW3 can't support self-driving. Tesla's own patent describes workarounds that can render the system “inoperable.” That's not a he-said-she-said — that's Tesla's own paper trail.

I'm increasingly convinced this will end up in court. And when it does, “be patient” is going to look very bad in front of a European judge.

...

Read the original on electrek.co »
