10 interesting stories served every morning and every evening.




1 1,274 shares, 127 trendiness

Fix the iOS Keyboard

Deadline: end of WWDC 2026. The exact dates haven't been announced yet and this timer is based on the estimated schedule (June 9–13). I'll update it when Apple confirms the dates. They have until the conference ends.

The iOS keyboard has been broken since at least iOS 17 and it's somehow only gotten worse. iOS 26 has been my breaking point. Autocorrect is nearly useless and often hostile; that part I'm used to. But now even correctly tapped letters aren't registering. This isn't just me.

iOS has bugs across the whole ecosystem. But having the keyboard, the thing I interact with hundreds of times a day on my primary device, get progressively worse with every update is absolutely maddening.

I randomly tried Android again for a few months last spring. Using a functioning keyboard was revelatory. But I came crawling back to iOS because I'm weak and the orange iPhone was pretty and the Pixel 10 was boring and I caved to the blue bubble pressure. But the keyboard on this beautiful phone is worse than ever.

So here's the deal, Apple, if that's even your real name: fix this broken keyboard, or at the very least publicly acknowledge it's broken and commit to fixing it in iOS 27 or earlier. If that countdown hits zero without either of those things happening, I'm switching to Android for good. (Good = at least 2 calendar years)

I know losing one customer means absolutely nothing to your bottom line. But I'd like to think it should mean something to the engineers, UX designers, product people, and whoever else had a hand in building this thing.

You were the “it just works” company. Now you're just a fruit that I used to know.

...

Read the original on ios-countdown.win »

2 693 shares, 53 trendiness

Unleash your ideas with ASCII

MonoSketch is an open-source project licensed under the Apache License 2.0.

If you find this project useful, please consider starring the repository on GitHub. Contributions are also welcome through pull requests or by opening issues on GitHub.

If you would like to support the project financially, you can do so by becoming a GitHub Sponsor or contributing via Ko-fi.

...

Read the original on monosketch.io »

3 447 shares, 23 trendiness

update README.md format and clarify state of the project · minio/minio@7aac2a2

...

Read the original on github.com »

4 437 shares, 18 trendiness

Skip the Tips — Can You Escape the Tip Screen?

A free browser game that challenges you to press “No Tip” while dark patterns try to trick you into tipping. From tiny buttons and guilt-trip modals to fake loading screens and rigged sliders — can you escape the tip screen?

Skip the Tips is a satirical take on modern tipping culture. Every checkout screen has become a guilt machine. This game lets you practice saying no — if you can find the button.

Features over 30 dark patterns inspired by real-world tipping screens, progressive difficulty, and a timer that keeps shrinking. Play free in your browser — no downloads, no sign-ups, no tip required.

...

Read the original on skipthe.tips »

5 379 shares, 20 trendiness

MSN

...

Read the original on www.msn.com »

6 282 shares, 22 trendiness

Remove blade, reimplement linux renderer with wgpu by zortax · Pull Request #46758 · zed-industries/zed

...

Read the original on github.com »

7 264 shares, 64 trendiness

OpenAI has deleted the word ‘safely’ from its mission – and its new structure is a test for whether AI serves society or shareholders

OpenAI, the maker of the most popular AI chatbot, used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” according to its 2023 mission statement. But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”

While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI had removed “safely” from its mission statement, among other changes. That change in wording coincided with its transformation from a nonprofit organization into a business increasingly focused on profits.

OpenAI currently faces several lawsuits related to its products' safety, making this change newsworthy. Many of the plaintiffs suing the AI company allege psychological manipulation, wrongful death and assisted suicide, while others have filed negligence claims.

As a scholar of nonprofit accountability and the governance of social enterprises, I see the deletion of the word “safely” from its mission statement as a significant shift that has largely gone unreported — outside highly specialized outlets.

And I believe OpenAI's makeover is a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm.

OpenAI, which also makes the Sora video artificial intelligence app, was founded as a nonprofit scientific research lab in 2015. Its original purpose was to benefit society by making its findings public and royalty-free rather than to make money.

To raise the money that developing its AI models would require, OpenAI, under the leadership of CEO Sam Altman, created a for-profit subsidiary in 2019. Microsoft initially invested US$1 billion in this venture; by 2024 that sum had topped $13 billion.

In exchange, Microsoft was promised a portion of future profits, capped at 100 times its initial investment. But the software giant didn't get a seat on OpenAI's nonprofit board — meaning it lacked the power to help steer the AI venture it was funding.

A subsequent round of funding in late 2024, which raised $6.6 billion from multiple investors, came with a catch: the funding would become debt unless OpenAI converted to a more traditional for-profit business in which investors could own shares, without any caps on profits, and possibly occupy board seats.

In October 2025, OpenAI reached an agreement with the attorneys general of California and Delaware to become a more traditional for-profit company.

Under the new arrangement, OpenAI was split into two entities: a nonprofit foundation and a for-profit business.

The restructured nonprofit, the OpenAI Foundation, owns about one-fourth of the stock in a new for-profit public benefit corporation, the OpenAI Group. Both are headquartered in California but incorporated in Delaware.

A public benefit corporation is a business that must consider interests beyond shareholders, such as those of society and the environment, and it must issue an annual benefit report to its shareholders and the public. However, it is up to the board to decide how to weigh those interests and what to report in terms of the benefits and harms caused by the company.

The new structure is described in a memorandum of understanding signed in October 2025 by OpenAI and the California attorney general, and endorsed by the Delaware attorney general.

Many business media outlets heralded the move, predicting that it would usher in more investment. Two months later, SoftBank, a Japanese conglomerate, finalized a $41 billion investment in OpenAI.

Most charities must file forms annually with the Internal Revenue Service with details about their missions, activities and financial status to show that they qualify for tax-exempt status. Because the IRS makes the forms public, they have become a way for nonprofits to signal their missions to the world.

In its forms for 2022 and 2023, OpenAI said its mission was “to build general-purpose artificial intelligence (AI) that safely benefits humanity, unconstrained by a need to generate financial return.”

That mission statement has changed, as of OpenAI's 990 form for 2024 — which the company filed with the IRS in late 2025. It became “to ensure that artificial general intelligence benefits all of humanity.”

OpenAI had dropped its commitment to safety from its mission statement — along with a commitment to being “unconstrained” by a need to make money for investors. According to Platformer, a tech media outlet, it has also disbanded its “mission alignment” team.

In my view, these changes explicitly signal that OpenAI is making its profits a higher priority than the safety of its products.

To be sure, OpenAI continues to mention safety when it discusses its mission. “We view this mission as the most important challenge of our time,” it states on its website. “It requires simultaneously advancing AI's capability, safety, and positive impact in the world.”

Nonprofit boards are responsible for key decisions and upholding their organization's mission.

Unlike private companies, board members of tax-exempt charitable nonprofits cannot personally enrich themselves by taking a share of earnings. In cases where a nonprofit owns a for-profit business, as OpenAI did with its previous structure, investors can take a cut of profits — but they typically do not get a seat on the board or have an opportunity to elect board members, because that would be seen as a conflict of interest.

The OpenAI Foundation now has a 26% stake in OpenAI Group. In effect, that means the nonprofit board has given up nearly three-quarters of its control over the company. Software giant Microsoft owns a slightly larger stake — 27% of OpenAI's stock — due to its $13.8 billion investment in the AI company to date. OpenAI's employees and its other investors own the rest of the shares.

The main goal of OpenAI's restructuring, which it called a “recapitalization,” was to attract more private investment in the race for AI dominance.

It has already succeeded on that front.

As of early February 2026, the company was in talks with SoftBank for an additional $30 billion and stands to get up to a total of $60 billion from Amazon, Nvidia and Microsoft combined.

OpenAI is now valued at over $500 billion, up from $300 billion in March 2025. The new structure also paves the way for an eventual initial public offering, which, if it happens, would not only help the company raise more capital through stock markets but would also increase the pressure to make money for its shareholders.

OpenAI says the foundation's endowment is worth about $130 billion.

Those numbers are only estimates because OpenAI is a privately held company without publicly traded shares. That means these figures are based on market value estimates rather than any objective evidence, such as market capitalization.

When he announced the new structure, California Attorney General Rob Bonta said, “We secured concessions that ensure charitable assets are used for their intended purpose.” He also predicted that “safety will be prioritized” and said the top priority “is, and always will be, protecting our kids.”

At the same time, several conditions in the OpenAI restructuring memo are designed to promote safety, including:

A safety and security committee on the OpenAI Foundation board has the authority to “require mitigation measures” that could potentially include halting the release of new OpenAI products based on assessments of their risks.

The for-profit OpenAI Group has its own board, which must consider only OpenAI's mission — rather than financial issues — regarding safety and security issues.

The OpenAI Foundation's nonprofit board gets to appoint all members of the OpenAI Group's for-profit board.

But given that neither the mission of the foundation nor that of the OpenAI Group explicitly alludes to safety, it will be hard to hold their boards accountable for it.

Furthermore, since all but one board member currently serve on both boards, it is hard to see how they might oversee themselves. And the memorandum signed by the California attorney general doesn't indicate whether he was aware of the removal of any reference to safety from the mission statement.

There are alternative models that I believe would serve the public interest better than this one.

When Health Net, a California nonprofit health maintenance organization, converted to a for-profit insurance company in 1992, regulators required that 80% of its equity be transferred to another nonprofit health foundation. Unlike with OpenAI, the foundation had majority control after the transformation.

A coalition of California nonprofits has argued that the attorney general should require OpenAI to transfer all of its assets to an independent nonprofit.

Another example is The Philadelphia Inquirer. The Pennsylvania newspaper became a for-profit public benefit corporation in 2016. It belongs to the Lenfest Institute, a nonprofit.

This structure allows Philadelphia's biggest newspaper to attract investment without compromising its purpose — journalism serving the needs of its local communities. It's become a model for potentially transforming the local news industry.

At this point, I believe that the public bears the burden of two governance failures. One is that OpenAI's board has apparently abandoned its mission of safety. And the other is that the attorneys general of California and Delaware have let that happen.

...

Read the original on theconversation.com »

8 262 shares, 64 trendiness

The EU moves to kill infinite scrolling

Brussels is going head-to-head with social media platforms to change addictive design.

The findings laid out a week ago mark the first time the Commission has set out its stance on the design of a social media platform under its Digital Services Act. | Jaap Arriens/NurPhoto via Getty Images

The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world's most popular apps.

Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems. The demand follows the Commission's declaration that TikTok's design is addictive to users — especially children.

“The fact that the Commission said TikTok should change the basic design of its service is ground-breaking for the business model fueled by surveillance and advertising,” said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group.

That doesn't bode well for other platforms, particularly Meta's Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design.

The findings laid out a week ago mark the first time the Commission has set out its stance on the design of a social media platform under its Digital Services Act, the EU's flagship online-content law that Brussels says is essential for protecting users.

TikTok can now defend its practices and review all the evidence the Commission considered — and has said it would fight these findings. If it fails to satisfy the Commission, the app could face fines of up to 6 percent of annual global revenue.

It's the first time any regulator has attempted to set a legal standard for the addictiveness of platform design, a senior Commission official said in a briefing to reporters.

The findings mark “a turning point [because] the Commission is treating addictive design on social media as an enforceable risk” under the Digital Services Act, said Lena-Maria Böswald, senior policy researcher at think tank Interface.

Jan Penfrat, senior policy adviser at civil rights group EDRi, said it would be “very, very strange for the Commission to not then use this as a template and go after other companies as well.”

The Digital Services Act requires platforms like TikTok to assess and mitigate risks to their users. But these risks are vaguely defined in the law, so until now it had been unclear exactly where the regulator would draw the line.

Two years after the TikTok probe was launched, the Commission has opted to strike at the heart of platform design, claiming it poses a risk to the mental health of users, particularly children. The Commission's other concerns with TikTok were settled amicably between the two sides.

At a briefing with reporters, EU tech chief Henna Virkkunen said the findings signal that the Commission's work is entering a new stage of maturity when it comes to systemic risks.

Facebook and Instagram have been under investigation over the addictiveness of their platforms since May 2024, including whether they endanger children. Just like TikTok's, the design and algorithms of the platforms are under scrutiny.

Meta has mounted a staunch defense in an ongoing California case, in which it is accused of knowingly designing an addictive social media platform that hurts users. TikTok and Snap settled the same case before it went to trial.

TikTok spokesperson Paolo Ganino said the Commission's findings present “a categorically false and entirely meritless depiction of our platform and we will take whatever steps are necessary to challenge these findings through every means available to us.”

The Commission could eventually agree with platforms on a wide range of changes that address addictive design. What they decide will depend on the different risk profiles and patterns of use of each platform — as well as how each company defends itself.

That likely means it will take a while for TikTok to make any change to its systems, as the platform reviews the evidence and tries to negotiate a solution with the regulator.

In another, simpler DSA enforcement case, it took the Commission more than a year after issuing preliminary findings to declare Elon Musk's X was not compliant with its obligations on transparency.

TikTok may pursue a series of changes and may push the Commission to adopt a lighter regulatory approach. The video-sharing giant likely won't “get it right” the first time, said EDRi's Penfrat, and it may take a few tries to satisfy Brussels.

“It could be anything from changing default settings, to outright prohibiting a specific design feature, or requiring more user control,” said Peter Chapman, a governance researcher and lawyer who is associate director at the Knight-Georgetown Institute.

He expects the changes could be different for each platform — as while the findings show the Commission's thinking, interventions must be targeted depending on how design features are used.

Multiple platforms use “similar design features” but they serve different purposes and carry different risks, said Chapman, pointing to the example of notifications that try to draw you back in. For example, notifications for messages carry a different risk of addiction than those alerting a user about a livestream, he said.

...

Read the original on www.politico.eu »

9 251 shares, 21 trendiness

CBP Signs Clearview AI Deal to Use Face Recognition for ‘Tactical Targeting’

United States Customs and Border Protection plans to spend $225,000 for a year of access to Clearview AI, a face recognition tool that compares photos against billions of images scraped from the internet.

The deal extends access to Clearview tools to Border Patrol's headquarters intelligence division (INTEL) and the National Targeting Center, units that collect and analyze data as part of what CBP calls a coordinated effort to “disrupt, degrade, and dismantle” people and networks viewed as security threats.

The contract states that Clearview provides access to “over 60+ billion publicly available images” and will be used for “tactical targeting” and “strategic counter-network analysis,” indicating the service is intended to be embedded in analysts' day-to-day intelligence work rather than reserved for isolated investigations. CBP says its intelligence units draw from a “variety of sources,” including commercially available tools and publicly available data, to identify people and map their connections for national security and immigration operations.

The agreement anticipates analysts handling sensitive personal data, including biometric identifiers such as face images, and requires nondisclosure agreements for contractors who have access. It does not specify what kinds of photos agents will upload, whether searches may include US citizens, or how long uploaded images or search results will be retained.

The Clearview contract lands as the Department of Homeland Security faces mounting scrutiny over how face recognition is used in federal enforcement operations far beyond the border, including large-scale actions in US cities that have swept up US citizens. Civil liberties groups and lawmakers have questioned whether face-search tools are being deployed as routine intelligence infrastructure, rather than limited investigative aids, and whether safeguards have kept pace with expansion.

Last week, Senator Ed Markey introduced legislation that would bar ICE and CBP from using face recognition technology altogether, citing concerns that biometric surveillance is being embedded without clear limits, transparency, or public consent.

CBP did not immediately respond to questions about how Clearview would be integrated into its systems, what types of images agents are authorized to upload, and whether searches may include US citizens.

Clearview's business model has drawn scrutiny because it relies on scraping photos from public websites at scale. Those images are converted into biometric templates without the knowledge or consent of the people photographed.

Clearview also appears in DHS's recently released artificial intelligence inventory, linked to a CBP pilot initiated in October 2025. The inventory entry ties the pilot to CBP's Traveler Verification System, which conducts face comparisons at ports of entry and other border-related screenings.

CBP states in its public privacy documentation that the Traveler Verification System does not use information from “commercial sources or publicly available data.” It is more likely, at launch, that Clearview access would instead be tied to CBP's Automated Targeting System, which links biometric galleries, watch lists, and enforcement records, including files tied to recent Immigration and Customs Enforcement operations in areas of the US far from any border.

Clearview AI did not immediately respond to a request for comment.

Recent testing by the National Institute of Standards and Technology, which evaluated Clearview AI among other vendors, found that face-search systems can perform well on “high-quality visa-like photos” but falter in less controlled settings. Images captured at border crossings “that were not originally intended for automated face recognition” produced error rates that were “much higher, often in excess of 20 percent, even with the more accurate algorithms,” federal scientists say.

The testing underscores a central limitation of the technology: NIST found that face-search systems cannot reduce false matches without also increasing the risk that the systems fail to recognize the correct person.

As a result, NIST says agencies may operate the software in an “investigative” setting that returns a ranked list of candidates for human review rather than a single confirmed match. When systems are configured to always return candidates, however, searches for people not already in the database will still generate “matches” for review. In those cases, the results will always be 100 percent wrong.

...

Read the original on www.wired.com »

10 193 shares, 23 trendiness

Sandwich Bill of Materials

Specification: SBOM 1.0 (Sandwich Bill of Materials)

Status: Draft

Maintainer: The SBOM Working Group

License: MIT (Mustard Is Transferable)

Modern sandwich construction relies on a complex graph of transitive ingredients sourced from multiple registries (farms, distributors, markets). Consumers have no standardized way to enumerate the components of their lunch, assess ingredient provenance, or verify that their sandwich was assembled from known-good sources. SBOM addresses this by providing a machine-readable format for declaring the full dependency tree of a sandwich, including sub-components, licensing information, and known vulnerabilities.

A typical sandwich contains between 6 and 47 direct dependencies, each pulling in its own transitive ingredients. A “simple” BLT depends on bacon, which depends on pork, which depends on a pig, which depends on feed corn, water, antibiotics, and a farmer whose field hasn't flooded yet. The consumer sees three letters, but the supply chain sees a directed acyclic graph with cycle detection issues (the pig eats the corn that grows in the field that was fertilized by the pig).

The 2025 egg price crisis was a cascading failure equivalent to a left-pad incident, except it affected breakfast. A single avian flu outbreak took down the entire egg ecosystem for months. Post-incident analysis revealed that 94% of affected sandwiches had no lockfile and were resolving eggs to latest at assembly time.

An SBOM document MUST be a JSON file with the .sbom extension, after YAML was considered and rejected on the grounds that the sandwich industry has enough problems without adding whitespace sensitivity.

Each sandwich component MUST include the following fields:

surl (required): A Sandwich URL uniquely identifying the ingredient. Format: surl:type/name@version. Follows the same convention as PURL but for food. Examples: surl:produce/tomato@2025-08-14, surl:dairy/cheddar@18m.

name (required): The canonical name of the ingredient as registered in a recognized food registry. Unregistered ingredients (e.g., “that sauce from the place”) MUST be declared as unverified-source and will trigger a warning during sandwich linting.

version (required): The specific version of the ingredient. Tomatoes MUST use calendar versioning (harvest date). Cheese MUST use age-based versioning (e.g., cheddar@18m). Bread follows semver, where a MAJOR version bump indicates a change in grain type, MINOR indicates a change in hydration percentage, and PATCH indicates someone left it out overnight and it's a bit stale but probably fine.

supplier (required): The origin registry. Valid registries include farm://, supermarket://, farmers-market://, and back-of-the-fridge://. The latter is considered an untrusted source and components resolved from it MUST include a best-before integrity check.

integrity (required): A SHA-256 hash of the ingredient at time of acquisition.

license (required): The license under which the ingredient is distributed. Common licenses include:

MIT (Mustard Is Transferable): The ingredient may be used in any sandwich without restriction. Attribution appreciated but not required.

GPL (General Pickle License): If you include a GPL-licensed ingredient, the entire sandwich becomes open-source. You must provide the full recipe to anyone who asks. Pickle vendors have been particularly aggressive about this.

AGPL (Affero General Pickle License): Same as GPL, but if you serve the sandwich over a network (delivery apps), you must also publish the recipe. This is why most restaurants avoid AGPL pickles.

BSD (Bread, Sauce, Distributed): Permissive. You can do whatever you want as long as you keep the original baker's name on the bread bag, and also a second copy of the baker's name, and also don't use the baker's name to promote your sandwich without permission. There are four variants of this license and nobody can remember which is which.

SSPL (Server Side Pickle License): You may use this pickle in your sandwich, but if you offer sandwich-making as a service, you must open-source your entire kitchen, including the weird drawer with all the takeaway menus. Most cloud sandwich providers have stopped serving SSPL pickles entirely.

Proprietary: The ingredient's composition is not disclosed. Common for “secret sauces.” Consumption is permitted but redistribution, reverse-engineering, or asking what's in it are prohibited by the EULA you agreed to by opening the packet.

Public Domain: The ingredient's creator has waived all rights. Salt, for example, has been public domain since approximately the Jurassic period, though several companies have attempted to relicense it.
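Putting the field rules above together, a single component entry might look like the following sketch. The ingredient, registry path, and hash input are invented for illustration, and the tiny lint pass checks only for the required keys named in this section:

```python
import hashlib
import json

# Required component fields from the specification above.
REQUIRED_FIELDS = {"surl", "name", "version", "supplier", "integrity", "license"}

def integrity_of(ingredient: bytes) -> str:
    """SHA-256 hash of the ingredient at time of acquisition."""
    return "sha256:" + hashlib.sha256(ingredient).hexdigest()

# Hypothetical cheddar entry; a real assembler would hash the ingredient itself.
component = {
    "surl": "surl:dairy/cheddar@18m",
    "name": "cheddar",
    "version": "18m",                          # age-based versioning for cheese
    "supplier": "farmers-market://stall-12",   # invented registry path
    "integrity": integrity_of(b"cheddar@18m"),
    "license": "MIT",                          # Mustard Is Transferable
}

def lint(entry: dict) -> list:
    """Return one warning per missing required field."""
    return [f"missing required field: {f}" for f in sorted(REQUIRED_FIELDS - entry.keys())]

print(json.dumps(component, indent=2))
print("warnings:", lint(component))
```

An entry resolved from back-of-the-fridge:// would, per the supplier rule above, additionally need a best-before check alongside the hash.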

Sandwich assembly MUST resolve dependencies depth-first. If two ingredients declare conflicting sub-dependencies (e.g., sourdough requires starter-culture@wild but the prosciutto's curing process pins salt@himalayan-pink), the assembler SHOULD attempt version negotiation. If negotiation fails, the sandwich enters a conflict state and MUST NOT be consumed until a human reviews the dependency tree and makes a judgement call.
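A minimal sketch of the depth-first pass, assuming a toy dependency map (the resolver API and version strings are invented; the conflicting salt pins come from the example above):

```python
# Hypothetical depth-first resolver: walk each ingredient's
# sub-dependencies and fail loudly on a version conflict.
def resolve(ingredient, deps, resolved=None):
    if resolved is None:
        resolved = {}
    for name, version in deps.get(ingredient, []):
        if name in resolved and resolved[name] != version:
            # Conflict state: per the spec, the sandwich MUST NOT be
            # consumed until a human reviews the tree.
            raise ValueError(
                f"conflict: {name}@{resolved[name]} vs {name}@{version}"
            )
        resolved[name] = version
        resolve(name, deps, resolved)
    return resolved

deps = {
    "blt": [("sourdough", "2.0"), ("prosciutto", "1.1")],
    "sourdough": [("starter-culture", "wild"), ("salt", "sea")],
    "prosciutto": [("salt", "himalayan-pink")],  # clashes with sourdough's salt
}

try:
    resolve("blt", deps)
except ValueError as e:
    print(e)  # conflict: salt@sea vs salt@himalayan-pink
```

Note that real version negotiation is absent here; this sketch goes straight from "two pins disagree" to "summon a human," which is arguably the most realistic part.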

Circular dependencies are permitted but discouraged. A sandwich that contains bread made with beer made with grain from the same field as the bread is technically valid but will cause the resolver to emit a warning about "co-dependent sourdough."

All SBOM documents SHOULD be scanned against the National Sandwich Vulnerability Database (NSVD). Known vulnerabilities include:

CVE-2024-MAYO: Mayonnaise left at room temperature for more than four hours. Severity: Critical. Affected versions: all. No patch available; mitigation requires refrigeration, which the specification cannot enforce.

CVE-2023-GLUTEN: Bread contains gluten. This is not a bug; it is a feature of wheat. However, it must be disclosed because approximately 1% of consumers will experience adverse effects, and the remaining 99% will ask about it anyway.

CVE-2025-AVO: Avocado ripeness window is approximately 17 minutes. Version pinning is ineffective. The working group recommends vendoring avocado (i.e., buying it already mashed) to reduce exposure to ripeness drift.

CVE-2019-SPROUT: Alfalfa sprouts were found to be executing arbitrary bacteria in an unsandboxed environment. Severity: High. The vendor disputes this classification.
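A toy scan against the entries above might look like this sketch. The NSVD record shape, the severity label for the gluten advisory, and the matching-by-name logic are all assumptions; a real scanner would match on versions too:

```python
# Hypothetical NSVD excerpt: ingredient name -> (CVE id, severity).
NSVD = {
    "mayonnaise": ("CVE-2024-MAYO", "Critical"),
    "bread": ("CVE-2023-GLUTEN", "Informational"),
    "alfalfa-sprouts": ("CVE-2019-SPROUT", "High"),
}

def scan(sbom):
    """Return (ingredient, cve_id, severity) for every known match."""
    return [(name, *NSVD[name]) for name in sbom if name in NSVD]

findings = scan(["bread", "mayonnaise", "tomato"])
print(findings)
```

The tomato passes clean, which says more about the database's coverage than about the tomato.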

Each ingredient MUST include a signed provenance attestation from the supplier. The attestation MUST be generated in a hermetic build environment and MUST NOT be generated in a build environment where other food is being prepared simultaneously, as this introduces the risk of cross-contamination of provenance claims.

For farm-sourced ingredients, the attestation chain SHOULD extend to the seed or animal of origin. A tomato's provenance chain includes the seed, the soil, the water, the sunlight, the farmer, the truck, the distributor, and the shelf it sat on for a period the supermarket would prefer not to disclose.

Eggs are worse, because an egg's provenance attestation is generated by a chicken that may itself lack a valid attestation chain. The working group has deferred the question of chicken-or-egg provenance ordering to version 2.0.

A sandwich MUST be reproducible. Given identical inputs, two independent assemblers MUST produce bite-for-bite identical sandwiches, which in practice is impossible. The specification handles this by requiring assemblers to document all sources of non-determinism in a sandwich.lock file, including:

Whether the assembler was "just eyeballing it" for condiment quantities

Reproducible sandwich builds remain aspirational. A compliance level of "close enough" is acceptable for non-safety-critical sandwiches. Safety-critical sandwiches SHOULD target full reproducibility.
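A hypothetical sandwich.lock fragment, rendered here through Python. Every field name is invented; only the "close enough" compliance level, the eyeballing confession, and the 17-minute avocado window come from the spec text:

```python
import json

# Invented lock-file schema: record every admitted source of
# non-determinism alongside the claimed compliance level.
lock = {
    "sbom-version": "1.0",
    "compliance-level": "close enough",  # fine for non-safety-critical
    "non-determinism": [
        {"source": "condiment-quantities", "method": "just eyeballing it"},
        {"source": "avocado-ripeness", "window-minutes": 17},
    ],
}

print(json.dumps(lock, indent=2))
```

Serializing the lock file deterministically is left as an exercise; a lock file with its own sources of non-determinism would be thematically consistent but unhelpful.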

Consumers SHOULD audit their full dependency tree before consumption. An sbom audit command will flag any ingredient that:

Has not been updated in more than 12 months

Is maintained by a single farmer with no succession plan (see also: goat farming)

Has more than 200 transitive sub-ingredients

Was sourced from a registry that does not support 2FA

Contains an ingredient whose maintainer has mass-transferred ownership to an unknown entity in a different country (see: the left-lettuce incident)
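The first four checks can be sketched as a single function. The metadata field names and thresholds-as-defaults are assumptions; the ownership-transfer check is omitted because modeling the left-lettuce incident is beyond this sketch:

```python
# Hypothetical audit pass over invented ingredient metadata.
def audit(ingredient):
    flags = []
    if ingredient.get("months_since_update", 0) > 12:
        flags.append("stale")
    if ingredient.get("maintainers", 1) < 2:
        flags.append("single-farmer risk")
    if ingredient.get("transitive_deps", 0) > 200:
        flags.append("dependency sprawl")
    if not ingredient.get("registry_supports_2fa", True):
        flags.append("untrusted registry")
    return flags

lettuce = {
    "months_since_update": 18,
    "maintainers": 1,
    "registry_supports_2fa": False,
}
print(audit(lettuce))  # ['stale', 'single-farmer risk', 'untrusted registry']
```

Note the defaults are deliberately suspicious: an ingredient that declares nothing is assumed to have exactly one farmer.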

Early adoption has been mixed. The artisanal sandwich community objects to machine-readable formats on philosophical grounds, arguing that a sandwich's ingredients should be discoverable through the act of eating it. The fast food industry has expressed support in principle but notes that their sandwiches' dependency trees are trade secrets and will be shipped as compiled binaries.

The EU Sandwich Resilience Act (SRA) requires all sandwiches sold or distributed within the European Union to include a machine-readable SBOM by Q3 2027. Sandwiches without a valid SBOM will be denied entry at the border. The European Commission has endorsed the specification as part of its broader lunch sovereignty agenda, arguing that member states cannot depend on foreign sandwich infrastructure without visibility into the ingredient graph. A working paper on "strategic autonomy in condiment supply chains" is expected Q2 2027.

The US has issued Executive Order 14028.5, which requires all sandwiches served in federal buildings to include an SBOM. The order does not specify whether it means Sandwich or Software Bill of Materials. Several federal agencies have begun submitting both.

The Software Heritage foundation archives all publicly available source code as a reference for future generations, and the Sandwich Heritage Foundation has adopted the same mission for sandwiches, with less success.

Every sandwich assembled under SBOM 1.0 is archived in a content-addressable store keyed by its integrity hash. The archive currently holds 14 sandwiches because most contributors cannot figure out how to hash a sandwich without eating it first. A BLT submitted in March was rejected because the tomato's checksum changed during transit. The Foundation suspects condensation.
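The content-addressed store can be sketched as a dict keyed by SHA-256. The Archive class and the BLT payload are invented; storing bytes rather than sandwiches conveniently sidesteps both the eat-it-first problem and the condensation incident:

```python
import hashlib

# Toy content-addressable archive: the key IS the integrity hash, so a
# payload whose bytes change in transit simply resolves to a new key.
class Archive:
    def __init__(self):
        self._store = {}

    def put(self, sandwich_bytes):
        key = hashlib.sha256(sandwich_bytes).hexdigest()
        self._store[key] = sandwich_bytes
        return key

    def get(self, key):
        return self._store.get(key)

archive = Archive()
key = archive.put(b"BLT: bread, lettuce, tomato, bacon")
assert archive.get(key) == b"BLT: bread, lettuce, tomato, bacon"
```

This is the same design choice that makes the rejected BLT unsurprising: content addressing does not tolerate a tomato that changes after checkout.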

Long-term preservation remains an open problem. Software can be archived indefinitely on disk, but sandwiches introduce material constraints the specification was not designed for. The Foundation has explored freeze-drying, vacuum sealing, and "just taking a really detailed photo," but none of these produce a bit-for-bit reproducible sandwich from the archive. The working group considers this a storage layer concern and out of scope for the specification.

Funding comes from individual donations and a pending grant application to the EU's Horizon programme under the call for "digital preservation of cultural food heritage." The application was rejected once already on the grounds that sandwiches are not digital, a characterization the Foundation disputes given that every sandwich under SBOM 1.0 is, by definition, a digital artifact with a hash.

This specification is dedicated to a small sandwich shop on Folsom Street in SoMa that made the best BLT the author has ever eaten, and which closed in 2019 without producing an SBOM or publishing its recipe in any machine-readable format.

This specification is provided AS IS without warranty of any kind, including but not limited to the warranties of edibility, fitness for a particular meal, and non-contamination. The SBOM Working Group is not responsible for any sandwich constructed in accordance with this specification that nonetheless tastes bad.

...

Read the original on nesbitt.io »
