10 interesting stories served every morning and every evening.




1 798 shares, 34 trendiness

I found a Vulnerability. They found a Lawyer.

I’m a div­ing in­struc­tor. I’m also a plat­form en­gi­neer who spends lots of his time think­ing about and im­ple­ment­ing in­fra­struc­ture se­cu­rity. Sometimes those two worlds col­lide in un­ex­pected ways.

A Sula sula (Red-footed Booby) and a dive flag on the actual boat where I found the vulnerability - somewhere off Cocos Island.

While on a 14-day dive trip around Cocos Island in Costa Rica, I stumbled across a vulnerability in the member portal of a major diving insurer - one that I'm personally insured through. What I found was so trivial, so fundamentally broken, that I genuinely couldn't believe it hadn't been exploited already.

I disclosed this vulnerability on April 28, 2025 with a standard 30-day embargo period. That embargo expired on May 28, 2025 - over eight months ago. I waited this long to publish because I wanted to give the organization every reasonable opportunity to fully remediate the issue and notify affected users. The vulnerability has since been addressed, but I have not received confirmation that affected users were notified. I have reached out to the organization to ask for clarification on this matter.

This is the story of what hap­pened when I tried to do the right thing.

To un­der­stand why this is so bad, you need to know how the reg­is­tra­tion process works. As a div­ing in­struc­tor, I reg­is­ter my stu­dents (to get them in­sured) through my ac­count on the por­tal. I en­ter their per­sonal in­for­ma­tion with their con­sent - name, date of birth, ad­dress, phone num­ber, email - and the sys­tem cre­ates an ac­count for them. The stu­dent then re­ceives an email with their new ac­count cre­den­tials: a nu­meric user ID and a de­fault pass­word. They might log in to com­plete ad­di­tional in­for­ma­tion, or they might never touch the por­tal again.

When I reg­is­tered three stu­dents in quick suc­ces­sion, they were sit­ting right next to me and checked their wel­come emails. The user IDs were nearly iden­ti­cal - se­quen­tial num­bers, one af­ter the other. That’s when it clicked that some­thing re­ally bad was go­ing on.

Now here’s the prob­lem: the por­tal used in­cre­ment­ing nu­meric user IDs for lo­gin. User XXXXXX0, XXXXXX1, XXXXXX2, and so on. That alone is a red flag, but it gets worse: every ac­count was pro­vi­sioned with a sta­tic de­fault pass­word that was never en­forced to be changed on first lo­gin. And many users - es­pe­cially stu­dents who had their ac­counts cre­ated for them by their in­struc­tors - never changed it.

So the "authentication" to access a user's full profile - name, address, phone number, email, date of birth - was:

Enter a numeric user ID - they're sequential, so just count up from a known one.

Type the same default password that every account shares on account creation.

There's a good chance you get in.

That’s it. No rate lim­it­ing. No ac­count lock­out. No MFA. Just an in­cre­ment­ing in­te­ger and a pass­word that might as well have been pass­word123.
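The remediation is as trivial as the bug: provision every account with a unique, cryptographically random one-time password and force a change on first login. A minimal sketch in Go - the function name and character set are mine, purely illustrative, not the portal's code:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// charset avoids visually ambiguous characters (0/O, 1/l/I).
const charset = "abcdefghijkmnpqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789"

// GenerateInitialPassword returns a cryptographically random one-time
// password of length n. The account record should also carry a
// "must change password on first login" flag.
func GenerateInitialPassword(n int) (string, error) {
	b := make([]byte, n)
	for i := range b {
		// crypto/rand, never math/rand, for credentials.
		idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
		if err != nil {
			return "", err
		}
		b[i] = charset[idx.Int64()]
	}
	return string(b), nil
}

func main() {
	pw, err := GenerateInitialPassword(16)
	if err != nil {
		panic(err)
	}
	// Unique per account - unlike a shared static default.
	fmt.Println("initial password:", pw)
}
```

Combine that with rate limiting and lockout on failed logins, and the incrementing user IDs stop being an attack surface.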

I ver­i­fied the is­sue with the min­i­mum ac­cess nec­es­sary to con­firm the scope - and stopped im­me­di­ately af­ter.

I did everything by the book. I contacted CSIRT Malta (MaltaCIP) first - since the organization is registered in Malta, this is the competent national authority. The Maltese National Coordinated Vulnerability Disclosure Policy (NCVDP) explicitly requires that confirmed vulnerabilities be reported to both the responsible organization and CSIRT Malta.

As a fel­low div­ing in­struc­tor in­sured through [the or­ga­ni­za­tion] and a full-time Linux Platform Engineer, I am con­tact­ing you to re­spon­si­bly dis­close a crit­i­cal vul­ner­a­bil­ity I iden­ti­fied within the [the or­ga­ni­za­tion]’s user ac­count sys­tem.

During recent testing, I discovered that user accounts - including those of underage students - are accessible through a combination of predictable user ID enumeration (incrementing user IDs) and the use of a static default password that is not enforced to be changed upon first login. This misconfiguration currently exposes sensitive personal data (e.g., names, addresses, contact information including phone numbers and emails, and dates of birth) and represents multiple GDPR violations.

Exposure of sen­si­tive and un­der­age user data with­out ad­e­quate safe­guards

For ini­tial con­fir­ma­tion, I am at­tach­ing a screen­shot from Member ID XXXXXXX show­ing the ex­posed data, partly redacted for pri­vacy rea­sons.

Additionally, for trans­parency and val­i­da­tion, I have shared my proof-of-con­cept code se­curely via an en­crypted paste ser­vice: [link redacted]

In the spirit of re­spon­si­ble dis­clo­sure, I have al­ready in­formed CSIRT Malta (in CC) to of­fi­cially ini­ti­ate a re­port­ing process, given [the or­ga­ni­za­tion]’s op­er­a­tional pres­ence in Malta.

I kindly request that [the organization] acknowledge receipt of this disclosure within 7 days.

I am offering a window of 30 days from today, the 28th of April 2025, for [the organization] to mitigate or resolve the vulnerability before I consider any public disclosure.

Please note that I am fully avail­able to as­sist your IT team with tech­ni­cal de­tails, ver­i­fi­ca­tion steps and rec­om­men­da­tions from a se­cu­rity per­spec­tive.

I strongly rec­om­mend as­sign­ing an IT-Security Point of Contact (PoC) for di­rect col­lab­o­ra­tion on this is­sue.

Thank you very much for your at­ten­tion to this crit­i­cal mat­ter. I am look­ing for­ward to work­ing with you to­wards a se­cure res­o­lu­tion.

Both of these time­lines are stan­dard - if any­thing, gen­er­ous - in re­spon­si­ble dis­clo­sure frame­works.

Two days later, I got a reply. Not from their IT team. From their Data Privacy Officer's (DPO's) law firm.

The let­ter opened po­litely enough - they ac­knowl­edged the is­sue and said they’d launched an in­ves­ti­ga­tion. They even men­tioned they were re­set­ting de­fault pass­words and plan­ning to roll out 2FA. Good.

But then the tone shifted:

While we gen­uinely ap­pre­ci­ate your seem­ingly good in­ten­tions and trans­parency in high­light­ing this mat­ter to our at­ten­tion, we must re­spect­fully note that no­ti­fy­ing the au­thor­i­ties prior to con­tact­ing the Group cre­ates ad­di­tional com­plex­i­ties in how the mat­ter is per­ceived and ad­dressed and also ex­poses us to un­fair li­a­bil­ity.

Let me translate: "We wish you hadn't told the government about our security issue."

It got bet­ter:

We also do not ap­pre­ci­ate your threat to make this mat­ter pub­lic […] and re­mind you that you may be held ac­count­able for any dam­age we, or the data sub­jects, may suf­fer as a re­sult of your own ac­tions, which ac­tions likely con­sti­tute a crim­i­nal of­fence un­der Maltese law.

So, to be clear: their portal had a default password on every account, exposing personal data including that of children, and I'm the one who "likely" committed a criminal offence by finding it and telling them.

They also sent a declaration they wanted me to sign - while requesting my passport ID - confirming I'd deleted all data, wouldn't disclose anything, and would keep the entire matter "strictly confidential." The deadline? End of business the same day they sent it.

This de­c­la­ra­tion in­cluded the fol­low­ing gem:

I also de­clare that I shall keep the con­tent of this de­c­la­ra­tion strictly con­fi­den­tial.

That’s an NDA with ex­tra steps: I was be­ing asked to sign away my right to dis­cuss the dis­clo­sure process it­self - in­clud­ing the fact that I found a vul­ner­a­bil­ity in their sys­tem - un­der threat of le­gal ac­tion.

Then came the reminders. One "friendly" reminder. Then an "urgent" one. Sign the declaration. De-escalate. Move on. Quietly.

I gen­er­ally refuse to sign con­fi­den­tial­ity clauses in cases in­volv­ing ex­po­sure of sen­si­tive in­for­ma­tion, and I did so here as well. Coordinated dis­clo­sure de­pends on trans­parency and trust be­tween re­searchers and or­ga­ni­za­tions: trust that af­fected users will be in­formed, and trust that a re­port leads to real re­me­di­a­tion.

Given that the organization in question had already breached that trust by exposing personal data through weak controls, I wasn't willing to grant blanket confidentiality that could be used to keep the incident out of public scrutiny. And by trying to actually silence me through legal threats, they had already made it clear that their priority was reputation management over user data protection. So I stood my ground.

Instead, I of­fered to sign a mod­i­fied de­c­la­ra­tion con­firm­ing data dele­tion. I had no in­ter­est in re­tain­ing any­one’s per­sonal data, but I was not go­ing to agree to si­lence about the dis­clo­sure process it­self.

I also pointed out that, un­der Malta’s NCVDP, in­volv­ing CSIRT Malta is part of the ex­pected re­port­ing path - not a hos­tile act - and that pub­lish­ing post-re­me­di­a­tion analy­ses is stan­dard prac­tice in the se­cu­rity com­mu­nity.

Their re­sponse dou­bled down. They cited Article 337E of the Maltese Criminal Code - com­puter mis­use - and help­fully re­minded me that:

Art. 337E of the Criminal Code also provides that "If any act is committed outside Malta which, had it been committed in Malta, would have constituted an offence [...] it shall [...] be deemed to have been committed in Malta." Meaning that your actions would be deemed a criminal offence in Malta, even if committed in another country.

They also made their po­si­tion on dis­clo­sure crys­tal clear, af­ter I re­it­er­ated my re­fusal to sign their NDA:

We ob­ject strongly to the use of [the or­ga­ni­za­tion’s name] in any such blogs or con­fer­ences you may write/​at­tend as this would be a dis­pro­por­tion­ate harm to [the or­ga­ni­za­tion’s] rep­u­ta­tion […]. We re­serve our rights at law to hold you re­spon­si­ble for any dam­ages [the or­ga­ni­za­tion] may suf­fer as a re­sult of any such pub­lic dis­clo­sures you may make.

That’s fine by me. Because here’s the thing: The vul­ner­a­bil­ity has been fixed. Default pass­words have been re­set. 2FA is be­ing rolled out. I feel sorry for the de­vel­oper(s) who had to clean up this mess, but at least the is­sue is no longer ex­ploitable. Sure, it would have been bet­ter if the or­ga­ni­za­tion had thanked me and taken re­spon­si­bil­ity for no­ti­fy­ing af­fected users. If the in­ci­dent qual­i­fied as a per­sonal data breach (which it does) and was likely to re­sult in a (high) risk to in­di­vid­u­als - es­pe­cially given mi­nors were in­volved - GDPR Articles 33 and 34 gen­er­ally re­quire no­ti­fi­ca­tion to the su­per­vi­sory au­thor­ity and com­mu­ni­ca­tion to af­fected data sub­jects.

GDPR Article 34(1) When the per­sonal data breach is likely to re­sult in a high risk to the rights and free­doms of nat­ural per­sons, the con­troller shall com­mu­ni­cate the per­sonal data breach to the data sub­ject with­out un­due de­lay.

GDPR Article 34(2) The com­mu­ni­ca­tion to the data sub­ject re­ferred to in para­graph 1 of this Article shall de­scribe in clear and plain lan­guage the na­ture of the per­sonal data breach and con­tain at least the in­for­ma­tion and mea­sures re­ferred to in points (b), (c) and (d) of Article 33(3).

I have not re­ceived con­fir­ma­tion that those no­ti­fi­ca­tions were ever car­ried out.

My favourite part was the or­ga­ni­za­tion’s po­si­tion on whose fault this ac­tu­ally was:

We con­tend that it is the re­spon­si­bil­ity of users to change their own pass­word (after we al­lo­cate a de­fault one).

Read that again. A com­pany that as­signed the same de­fault pass­word to every ac­count, never forced a pass­word change, and used in­cre­ment­ing nu­meric IDs as user­names is blam­ing the users for not se­cur­ing their own ac­counts. Accounts that in­clude those of mi­nors.

GDPR Article 5(1)(f) (integrity and con­fi­den­tial­ity): Personal data shall be processed in a man­ner that en­sures ap­pro­pri­ate se­cu­rity of the per­sonal data, in­clud­ing pro­tec­tion against unau­tho­rised or un­law­ful pro­cess­ing and against ac­ci­den­tal loss, de­struc­tion or dam­age, us­ing ap­pro­pri­ate tech­ni­cal or or­gan­i­sa­tional mea­sures.

Under GDPR, the data controller (namely: the organization) is responsible for implementing appropriate technical and organizational measures to ensure data security. A static default password on an IDOR-vulnerable portal is not an "appropriate measure" by any definition.

GDPR Article 24(1) (controller re­spon­si­bil­ity): Taking into ac­count the na­ture, scope, con­text and pur­poses of pro­cess­ing as well as the risks of vary­ing like­li­hood and sever­ity for the rights and free­doms of nat­ural per­sons, the con­troller shall im­ple­ment ap­pro­pri­ate tech­ni­cal and or­gan­i­sa­tional mea­sures to en­sure and to be able to demon­strate that pro­cess­ing is per­formed in ac­cor­dance with this Regulation. Those mea­sures shall be re­viewed and up­dated where nec­es­sary.

This is­n’t an iso­lated case. The se­cu­rity re­search com­mu­nity has been deal­ing with this pat­tern for decades: find a vul­ner­a­bil­ity, re­port it re­spon­si­bly, get threat­ened with le­gal ac­tion. It’s so com­mon it has a name - the chill­ing ef­fect.

Organizations that re­spond to dis­clo­sure with lawyers in­stead of en­gi­neers are telling the world some­thing im­por­tant: they care more about their rep­u­ta­tion than about the data they’re sup­posed to pro­tect.

And the real irony? The le­gal threats are the rep­u­ta­tion dam­age. Not the vul­ner­a­bil­ity it­self - vul­ner­a­bil­i­ties hap­pen to every­one. It’s the re­sponse that tells you every­thing about an or­ga­ni­za­tion’s se­cu­rity cul­ture.

What Should Have Happened

Acknowledge the re­port - they did this, to be fair.

Fix the vul­ner­a­bil­ity - they started on this too.

Thank the re­searcher - in­stead of threat­en­ing them with crim­i­nal pros­e­cu­tion.

Have a CVD pol­icy - so re­searchers know how to re­port is­sues and what to ex­pect.

Notify af­fected users - es­pe­cially the par­ents of un­der­age mem­bers whose data was ex­posed.

Not try to silence the researcher with NDAs disguised as "declarations."

What You Can Do

Publish a Coordinated Vulnerability Disclosure pol­icy. It does­n’t have to be com­plex - maybe be­gin with a se­cu­rity.txt file and a clear process that fa­vors trans­parency.

Thank re­searchers for help­ing you im­prove your se­cu­rity pos­ture.

Don’t shoot the mes­sen­ger. The per­son re­port­ing the bug is not your en­emy. The bug is.

Don’t blame your users for se­cu­rity fail­ures that are your re­spon­si­bil­ity as a data con­troller.

Always in­volve your na­tional CSIRT. It pro­tects you and cre­ates an of­fi­cial record.

Document every­thing. Every email, every time­stamp, every re­sponse.

Don’t sign NDAs that pre­vent you from dis­cussing the dis­clo­sure process. But you can agree to delete data (and MUST do so!) with­out agree­ing to si­lence.

Know your rights. Many jurisdictions have legal protections for good-faith security research. The EU's NIS2 Directive encourages coordinated vulnerability disclosure.
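The security.txt file mentioned above is standardized in RFC 9116 and is a good illustration of how little is needed to get started: a plain text file served at /.well-known/security.txt, with Contact and Expires as the only required fields. The addresses below are placeholders:

```
Contact: mailto:security@example.com
Expires: 2027-04-28T00:00:00.000Z
Policy: https://example.com/.well-known/disclosure-policy
Preferred-Languages: en
```

With this in place, a researcher knows where to send a report and which policy governs it - before anyone has to guess or involve lawyers.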

Because right now, in 2026, reporting a trivial vulnerability exposing personal data - including that of children - still gets met with legal threats instead of gratitude. And that's a problem for all of us.

...

Read the original on dixken.de »

2 587 shares, 28 trendiness

Turn Dependabot Off

Dependabot is a noise ma­chine. It makes you feel like you’re do­ing work, but you’re ac­tu­ally dis­cour­ag­ing more use­ful work. This is es­pe­cially true for se­cu­rity alerts in the Go ecosys­tem.

I rec­om­mend turn­ing it off and re­plac­ing it with a pair of sched­uled GitHub Actions, one run­ning gov­ul­ncheck, and the other run­ning your test suite against the lat­est ver­sion of your de­pen­den­cies.
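As a sketch, that pair of scheduled Actions could live in a single workflow file like the one below. The file name, cadence, and action versions are illustrative assumptions, not a prescription:

```yaml
# .github/workflows/deps.yml - illustrative sketch; pin versions to taste.
name: dependency-health
on:
  schedule:
    - cron: "0 6 * * 1" # weekly; pick your own cadence
  workflow_dispatch: {}

jobs:
  govulncheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      # Alerts only on vulnerabilities whose symbols are actually reachable.
      - run: go install golang.org/x/vuln/cmd/govulncheck@latest
      - run: govulncheck ./...

  test-latest-deps:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      # Upgrade all dependencies to their latest versions, then run the tests.
      - run: go get -u ./... && go mod tidy
      - run: go test ./...
```

When either job fails, you have an actionable signal: a reachable vulnerability, or a real incompatibility with a newer dependency.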

On Tuesday, I pub­lished a se­cu­rity fix for fil­ippo.io/​ed­ward­s25519. The (*Point).MultiScalarMult method would pro­duce in­valid re­sults if the re­ceiver was not the iden­tity point.

A lot of the Go ecosys­tem de­pends on fil­ippo.io/​ed­ward­s25519, mostly through github.com/​go-sql-dri­ver/​mysql (228k de­pen­dents only on GitHub). Essentially no one uses (*Point).MultiScalarMult.

Yesterday, Dependabot opened thousands of PRs against unaffected repositories to update filippo.io/edwards25519. These PRs were accompanied by a security alert with a nonsensical, made-up CVSS v4 score and by a worrying 73% compatibility score, allegedly based on the breakage the update is causing in the ecosystem. Note that the diff between v1.1.0 and v1.1.1 is one line in the method no one uses.

We even got one of these alerts for the Wycheproof repos­i­tory, which does not im­port the af­fected fil­ippo.io/​ed­ward­s25519 pack­age at all. Instead, it only im­ports the un­af­fected fil­ippo.io/​ed­ward­s25519/​field pack­age.

$ go mod why -m fil­ippo.io/​ed­ward­s25519

# fil­ippo.io/​ed­ward­s25519

github.com/​c2sp/​wyche­p­roof/​tools/​twistcheck

fil­ippo.io/​ed­ward­s25519/​field

We have turned Dependabot off.

But is­n’t this toil un­avoid­able, to pre­vent at­tack­ers from ex­ploit­ing old vul­ner­a­bil­i­ties in your de­pen­den­cies? Absolutely not!

Computers are per­fectly ca­pa­ble of do­ing the work of fil­ter­ing out these ir­rel­e­vant alerts for you. The Go Vulnerability Database has rich ver­sion, pack­age, and sym­bol meta­data for all Go vul­ner­a­bil­i­ties.

Here’s the en­try for the fil­ippo.io/​ed­ward­s25519 vul­ner­a­bil­ity, also avail­able in stan­dard OSV for­mat.

mod­ules:

- mod­ule: fil­ippo.io/​ed­ward­s25519

ver­sions:

- fixed: 1.1.1

vul­ner­a­ble_at: 1.1.0

pack­ages:

- pack­age: fil­ippo.io/​ed­ward­s25519

sym­bols:

- Point.MultiScalarMult

sum­mary: Invalid re­sult or un­de­fined be­hav­ior in fil­ippo.io/​ed­ward­s25519

de­scrip­tion: |-

Previously, if MultiScalarMult was in­voked on an

ini­tial­ized point who was not the iden­tity point, MultiScalarMult

pro­duced an in­cor­rect re­sult. If called on an

unini­tial­ized point, MultiScalarMult ex­hib­ited un­de­fined be­hav­ior.

cves:

- CVE-2026-26958

cred­its:

- sha­har­co­hen1

- WeebDataHoarder

ref­er­ences:

- ad­vi­sory: https://​github.com/​FiloSot­tile/​ed­ward­s25519/​se­cu­rity/​ad­vi­sories/​GHSA-fw7p-63qq-7hpr

- fix: https://​github.com/​FiloSot­tile/​ed­ward­s25519/​com­mit/​d1c650af­b95­fad0742b98d95f2e­b2cf031393abb

source:

id: go-se­cu­rity-team

cre­ated: 2026-02-17T14:45:04.271552-05:00

re­view_s­ta­tus: REVIEWED

Any decent vulnerability scanner will at the very least filter based on the package, which requires a simple go list -deps ./.... This already silences a lot of noise, because it's common and good practice for modules to separate functionality relevant to different dependents into different sub-packages. For example, it would have avoided the false alert against the Wycheproof repository.

If you use a third-party vul­ner­a­bil­ity scan­ner, you should de­mand at least pack­age-level fil­ter­ing.

Good vul­ner­a­bil­ity scan­ners will go fur­ther, though, and fil­ter based on the reach­a­bil­ity of the vul­ner­a­ble sym­bol us­ing sta­tic analy­sis. That’s what gov­ul­ncheck does!

$ go mod why -m fil­ippo.io/​ed­ward­s25519

# fil­ippo.io/​ed­ward­s25519

fil­ippo.io/​sun­light/​in­ter­nal/​ct­log

github.com/​google/​cer­tifi­cate-trans­parency-go/​tril­lian/​ctfe

github.com/​go-sql-dri­ver/​mysql

fil­ippo.io/​ed­ward­s25519

$ govulncheck ./...

=== Symbol Results ===

No vul­ner­a­bil­i­ties found.

Your code is af­fected by 0 vul­ner­a­bil­i­ties.

This scan also found 1 vul­ner­a­bil­ity in pack­ages you im­port and 2

vul­ner­a­bil­i­ties in mod­ules you re­quire, but your code does­n’t ap­pear to call

these vul­ner­a­bil­i­ties.

Use '-show verbose' for more details.

gov­ul­ncheck no­ticed that my pro­ject in­di­rectly de­pends on fil­ippo.io/​ed­ward­s25519 through github.com/​go-sql-dri­ver/​mysql, which does not make the vul­ner­a­ble sym­bol reach­able, so it chose not to no­tify me.

If you want, you can tell it to show the pack­age- and mod­ule-level matches.

$ govulncheck -show verbose,color ./...

Fetching vul­ner­a­bil­i­ties from the data­base…

Checking the code against the vul­ner­a­bil­i­ties…

The pack­age pat­tern matched the fol­low­ing 16 root pack­ages:

fil­ippo.io/​sun­light

fil­ippo.io/​sun­light/​in­ter­nal/​std­log

Govulncheck scanned the fol­low­ing 54 mod­ules and the go1.26.0 stan­dard li­brary:

fil­ippo.io/​sun­light

craw­shaw.io/​sqlite@v0.3.3-0.20220618202545-d1964889ea3c

fil­ippo.io/​big­mod@v0.0.3

fil­ippo.io/​ed­ward­s25519@v1.1.0

fil­ippo.io/​key­gen@v0.0.0-20240718133620-7f162ef­bb­d87

fil­ippo.io/​torch­wood@v0.8.0

=== Symbol Results ===

No vul­ner­a­bil­i­ties found.

=== Package Results ===

Vulnerability #1: GO-2026-4503

Invalid re­sult or un­de­fined be­hav­ior in fil­ippo.io/​ed­ward­s25519

More info: https://​pkg.go.dev/​vuln/​GO-2026-4503

Module: fil­ippo.io/​ed­ward­s25519

Found in: fil­ippo.io/​ed­ward­s25519@v1.1.0

Fixed in: fil­ippo.io/​ed­ward­s25519@v1.1.1

=== Module Results ===

Vulnerability #1: GO-2025-4135

Malformed con­straint may cause de­nial of ser­vice in

golang.org/​x/​crypto/​ssh/​agent

More info: https://​pkg.go.dev/​vuln/​GO-2025-4135

Module: golang.org/​x/​crypto

Found in: golang.org/​x/​crypto@v0.44.0

Fixed in: golang.org/​x/​crypto@v0.45.0

Vulnerability #2: GO-2025-4134

Unbounded mem­ory con­sump­tion in golang.org/​x/​crypto/​ssh

More info: https://​pkg.go.dev/​vuln/​GO-2025-4134

Module: golang.org/​x/​crypto

Found in: golang.org/​x/​crypto@v0.44.0

Fixed in: golang.org/​x/​crypto@v0.45.0

Your code is af­fected by 0 vul­ner­a­bil­i­ties.

This scan also found 1 vul­ner­a­bil­ity in pack­ages you im­port and 2

vul­ner­a­bil­i­ties in mod­ules you re­quire, but your code does­n’t ap­pear to call

these vul­ner­a­bil­i­ties.

...

Read the original on words.filippo.io »

3 547 shares, 22 trendiness

Wikipedia blacklists Archive.today, starts removing 695,000 archive links

The English-language edi­tion of Wikipedia is black­list­ing Archive.today af­ter the con­tro­ver­sial archive site was used to di­rect a dis­trib­uted de­nial of ser­vice (DDoS) at­tack against a blog.

In the course of dis­cussing whether Archive.today should be dep­re­cated be­cause of the DDoS, Wikipedia ed­i­tors dis­cov­ered that the archive site al­tered snap­shots of web­pages to in­sert the name of the blog­ger who was tar­geted by the DDoS. The al­ter­ations were ap­par­ently fu­eled by a grudge against the blog­ger over a post that de­scribed how the Archive.today main­tainer hid their iden­tity be­hind sev­eral aliases.

"There is consensus to immediately deprecate archive.today, and, as soon as practicable, add it to the spam blacklist (or create an edit filter that blocks adding new links), and remove all links to it," stated an update today on Wikipedia's Archive.today discussion. "There is a strong consensus that Wikipedia should not direct its readers towards a website that hijacks users' computers to run a DDoS attack (see WP:ELNO#3). Additionally, evidence has been presented that archive.today's operators have altered the content of archived pages, rendering it unreliable."

More than 695,000 links to Archive.today are distributed across 400,000 or so Wikipedia pages. The archive site is commonly used to bypass news paywalls, and the FBI has sought the site operator's identity with a subpoena to domain registrar Tucows.

"Those in favor of maintaining the status quo rested their arguments primarily on the utility of archive.today for verifiability," said today's Wikipedia update. "However, an analysis of existing links has shown that most of its uses can be replaced. Several editors started to work out implementation details during this RfC [request for comment] and the community should figure out how to efficiently remove links to archive.today."

Guidance published as a result of the decision asked editors to help remove and replace links to the following domain names used by the archive site: archive.today, archive.is, archive.ph, archive.fo, archive.li, archive.md, and archive.vn. The guidance says editors can remove Archive.today links when the original source is still online and has identical content; replace the archive link so it points to a different archive site, like the Internet Archive, Ghostarchive, or Megalodon; or change the original source to "something that doesn't need an archive (e.g., a source that was printed on paper), or for which a link to an archive is only a matter of convenience."

...

Read the original on arstechnica.com »

4 402 shares, 19 trendiness

Across the US, people are dismantling and destroying Flock surveillance cameras

Silicon Valley is tight­en­ing its ties with Trumpworld, the sur­veil­lance state is rapidly ex­pand­ing, and big tech’s AI data cen­ter build­out is boom­ing. Civilians are push­ing back.

In to­day’s edi­tion of Blood in the Machine:

* Across the na­tion, peo­ple are dis­man­tling and de­stroy­ing Flock cam­eras that con­duct war­rant­less ve­hi­cle sur­veil­lance, and whose data is shared with ICE.

* An Oklahoma man air­ing his con­cerns about a lo­cal data cen­ter pro­ject at a pub­lic hear­ing is ar­rested af­ter he ex­ceeded his al­lot­ted time by a cou­ple sec­onds.

* Uber and Lyft dri­vers de­liver a pe­ti­tion signed by 10,000 gig work­ers de­mand­ing that stolen wages be re­turned to them.

* PLUS: A cli­mate re­searcher has a new re­port that un­rav­els the AI will solve cli­mate change’ mythos, Tesla’s Robotaxis are crash­ing 4 times as of­ten as hu­mans, and AI-generated pub­lic com­ments helped kill a vote on air qual­ity.

A brief note that this re­port­ing, re­search, and writ­ing takes a lot of time, re­sources, and en­ergy. I can only do it thanks to the paid sub­scribers who chip in a few bucks each month; if you’re able, and you find value in this work, please con­sider up­grad­ing to a paid sub­scrip­tion so I can con­tinue on. Many thanks, ham­mers up, and on­wards.

Last week, in La Mesa, a small city just east of San Diego, California, ob­servers hap­pened upon a pair of de­stroyed Flock cam­eras. One had been smashed and left on the me­dian, the other had key parts re­moved. The de­struc­tion was ob­vi­ously in­ten­tional, and ap­pears per­haps even staged to leave a mes­sage: It came just weeks af­ter the city de­cided, in the face of pub­lic protest, to con­tinue its con­tracts with the sur­veil­lance com­pany.

Flock cam­eras are typ­i­cally mounted on 8 to 12 foot poles and pow­ered by a so­lar panel. The smashed re­mains of all of the above in La Mesa are the lat­est ex­am­ples of a widen­ing anti-Flock back­lash. In re­cent months, peo­ple have been smash­ing and dis­man­tling the sur­veil­lance de­vices, in in­ci­dents re­ported in at least five states, from coast to coast.

Bill Paul, who runs the lo­cal news out­let San Diego Slackers, and who first re­ported on the smashed Flock equip­ment, tells me that the sab­o­tage comes just a month or two af­ter San Diego held a rau­cous city coun­cil meet­ing over whether to keep op­er­at­ing the Flock cam­eras. A clear ma­jor­ity of pub­lic at­ten­dees pre­sent were in fa­vor of shut­ting them down.

"There was a huge turnout against them," he tells me, "but the council approved continuation of the contract."

The tenor of the meeting reflects a growing anger and concern over the surveillance technology that's gone nationwide: Flock, which is based in Atlanta and is currently valued at $7.5 billion, operates automatic license plate readers (ALPR) that have now been installed in some 6,000 US communities. They gather not just license plate images, but other identifying data used to "fingerprint" vehicles, their owners, and their movements. This data can be collected, stored, and accessed without a warrant, making it a popular workaround for law enforcement. Perhaps most controversially, Flock's vehicle data is routinely accessed by ICE.

If you’ve heard Flock’s name come up re­cently, it’s likely as a re­sult of their now-can­celed part­ner­ship with Ring, made in­stantly fa­mous by a par­tic­u­larly dystopian Super Bowl ad that promised to turn reg­u­lar neigh­bor­hoods into a sur­veil­lance drag­net.

Meanwhile, abuses have been preva­lent. A Georgia po­lice chief was ar­rested and charged with us­ing Flock data to stalk and ha­rass pri­vate cit­i­zens. Flock data has been used to track cit­i­zens who cross state lines for abor­tions when the pro­ce­dure is il­le­gal in their state. And mu­nic­i­pal­i­ties have found that fed­eral agen­cies have ac­cessed lo­cal flock data with­out their knowl­edge or con­sent. Critics claim that this war­rant­less data col­lec­tion is Orwellian and un­con­sti­tu­tional; a vi­o­la­tion of the 4th amend­ment. As a re­sult, civil­ians from Oregon to Virginia to California and be­yond are push­ing their gov­ern­ments to aban­don Flock con­tracts. In some cases, they’re suc­ceed­ing. Cities like Santa Cruz, CA, and Eugene, OR, have can­celled their con­tracts with Flock.

In Oregon’s case, the pub­lic out­cry was ac­com­pa­nied by a cam­paign of de­struc­tion against the sur­veil­lance de­vices: Last year, at least six Flock li­cense plate read­ers mounted on poles lo­cated in Eugene and Springfield were cut down and de­stroyed, ac­cord­ing to the Lookout Eugene-Springfield.

A note reading "Hahaha get wrecked ya surveilling fucks" was attached to one of the destroyed poles, and somewhat incredibly, broadcast on the local news.

In Greenview, Illinois, a Flock cam­era pole was sev­ered at the base and the de­vice de­stroyed. In Lisbon, Connecticut, po­lice are in­ves­ti­gat­ing an­other smashed Flock cam­era.

In Virginia, last December, a man was ar­rested for dis­man­tling and de­stroy­ing 13 Flock cam­eras through­out the state over the course of the year. He’s ap­par­ently al­ready ad­mit­ted to do­ing so, ac­cord­ing to lo­cal news:

Jefferey S. Sovern, 41, was arrested in October after detectives say he "intentionally destroyed" 13 Flock Safety cameras between April and October of this year. He was charged with 13 counts of destruction of property, six counts of petit larceny and six counts of possession of burglary tools. Sovern admitted to the crimes, according to a criminal complaint filed in Suffolk General District Court, going as far as to say he used vice grips to help him disassemble the two-piece poles. He also admitted to keeping some of the wiring, batteries and solar panels taken from the cameras. Some of the items were recovered by police after they searched the property.

After his ar­rest, Sovern cre­ated a GoFundMe to help cover his le­gal costs, in which he sheds a lit­tle light on his in­ten­tions:

My name is Jeff and I appreciate my privacy. I appreciate everyone’s right to privacy, enshrined in the fourth amendment. With the local news outlets finding my legal issues and creating a story that is starting to grow, there has been community support for me that I humbly welcome.

Sovern points his GoFundMe contributors to DeFlock, a website aimed at tracking and countering the rise of Flock cameras in US communities. It counts 46 cities that have officially rejected Flock and other ALPRs since its campaign began.

In fact, it’s hard to think of a tech product or project this side of generative AI that is more roundly opposed and reviled, on a bipartisan level, than Flock, and resistance takes many forms and stripes. Here’s the YouTuber Benn Jordan, showing his viewers how to Flock-proof their license plates and render their vehicles illegible to the company’s data ingestion systems:

In response to such Flock counter-tactics, Florida passed a law last year making it illegal to cover or alter your license plate.

In his GoFundMe, Sovern also mentioned the support he’d seen on forums online, so I went over to Reddit to get a sense of how his actions were being received. Here was the page that shared news of his arrest for destroying the Flock cameras:

There was, in other words, nearly universal support for Sovern’s Flock dismantling campaign. Bear in mind that this is r/Norfolk, and while it’s still Reddit users we’re talking about, it’s not like this is r/anarchism here:

The San Diego Reddit threads carrying news of the destroyed Flock equipment told a similar story:

There were plenty of outright endorsements of the sabotage:

Off the message boards and in real civic life, Bill Paul, the reporter with the San Diego Slacker, says anger is boiling over, too. He points again to that heated December 2025 city council meeting, in which public outrage was left unaddressed. The city, perhaps aware of the stigma Flock now carries, apparently tried to highlight that its focus was on the “smart streetlights” made by another company, while downplaying the fact that those streetlights run on Flock software.

“San Diego gets to hide behind a slight facade in that their contract is with Ubicquia,” the smart streetlight manufacturer, Paul says, “but the software layer is Flock. You can easily see Flock hardware on retail properties, looking at the same citizens, with zero oversight, and SDPD can claim they have clean hands.”

Weeks later, pieces of smashed Flock cameras littered the ground.

Across the country, in other words, municipal governments are overriding public will to make deals with a profiteering tech company to surveil their citizens and to collaborate with federal agencies like ICE. It might be taken as a sign of the times that in states and cities across the US, thousands of miles apart, those opposed to the technology are refusing to countenance what they view as violations of privacy and civil liberty, and are instead taking up vice grips and metal cutters. And in many cases, they’re getting hailed by their peers as heroes.

If you’ve heard stories of smashed Flock cameras or dismantled surveillance equipment in your neighborhood, please share: drop a link in the comments, or contact me on Signal or at briancmerchant@proton.me.

Thanks to Lilly Irani for the tip on the smashed Flock cams in San Diego.

In case you missed it, I shared my five takeaways on the most recent round of ultraheated AI discourse here:

The ex­change was filmed and recorded on YouTube:

Police in Claremore, Oklahoma arrested a local man after he went slightly over his time giving public remarks opposing a proposed data center during a city council meeting. Darren Blanchard showed up at a Claremore City Council meeting on Tuesday to talk about public records and the data center. When he went over his allotted 3 minutes by a few seconds, the city had him arrested and charged with trespassing.

The subject of the city council meeting was Project Mustang, a proposed data center that would be located within a local industrial park. In a mirror of fights playing out across the United States, developer Beale Infrastructure is attempting to build a large data center in a small town, and the residents are concerned about water rights, spiking electricity bills, and noise.

The public hearing was a chance for the city council to address some of these concerns, and all residents were given a strict three-minute time limit. The entire event was livestreamed and an archive of it is on YouTube. Blanchard was warned, barely, to “respect the process” by one of the council members, but he was clearly finishing reading from papers he had brought, was not belligerent, and went over time by just a few seconds. Anyone who has ever attended or watched a city council meeting anywhere will know that people go over their time at essentially any meeting that includes public comment.

Blanchard arrived with documents in hand and questions about public records requests he’d made. During his remarks, people clapped and cheered, and he asked that this not be counted against his three minutes. “There are major concerns about the public process in Claremore,” Blanchard said, referencing compliance documents and irregularities he’d uncovered in public records.

Blanchard was then arrested as the crowd jeered in disbelief. Also disconcerting was the way the local news framed the event, with a local anchor defending authorities by claiming he was “warned multiple times.” Seems like a pretty surefire way to make people hate data centers, and the governments protecting them, even more!

On Wednesday, I headed to Pershing Square in downtown Los Angeles, where dozens of gig workers and organizers with Rideshare Drivers United had assembled to deliver a petition to the California Labor Commission signed by thousands of workers, calling on the body to deliver a settlement on their behalf. Organizers made short speeches on the steps of the square while local radio and TV stations captured the moment.

The Labor Commission is suing the gig companies on drivers’ behalf, alleging that Uber and Lyft stole billions of dollars’ worth of wages from drivers before Prop 22 was enacted in 2020. The commission is believed to be in negotiations with the gig companies right now that will determine a settlement.

I spoke with one driver, Karen, who had traveled from San Diego to join the demonstration, and asked her why she came. “It’s important we build driver power,” she said. “Without driver power, we won’t get what we need, and we just want fairness.” She said she was hoping to claim at least $20,000 in stolen wages.

“We’re fighting for wages that were stolen from us and continue to be stolen from us every single day by these app companies from hell,” RDU organizer Nicole Moore told me. “So we’re marching in downtown L.A. to deliver 10,000 signatures of drivers demanding that the state fight hard for us, and don’t let these companies rip us off.”

According to Tesla’s own numbers, its new Robotaxis in Austin are crashing at a rate four times higher than human drivers. The EV trade publication Electrek reports:

With 14 crashes now on the books, Tesla’s “Robotaxi” crash rate in Austin continues to deteriorate. Extrapolating from Tesla’s Q4 2025 earnings mileage data, which showed roughly 700,000 cumulative paid miles through November, the fleet likely reached around 800,000 miles by mid-January 2026. That works out to one crash every 57,000 miles. The irony is that Tesla’s own numbers condemn it. Tesla’s Vehicle Safety Report claims the average American driver experiences a minor collision every 229,000 miles and a major collision every 699,000 miles. By Tesla’s own benchmark, its “Robotaxi” fleet is crashing nearly 4 times more often than what the company says is normal for a regular human driver in a minor collision, and virtually every single one of these miles was driven with a trained safety monitor in the vehicle who could intervene at any moment, which means they likely prevented more crashes that Tesla’s system wouldn’t have avoided. Using NHTSA’s broader police-reported crash average of roughly one per 500,000 miles, the picture is even worse: Tesla’s fleet is crashing at approximately 8 times the human rate.
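Electrek’s extrapolation is easy to check directly; the figures below are the article’s own (800,000 miles is their mid-January estimate):

```python
miles = 800_000      # Electrek's estimated cumulative paid Robotaxi miles
crashes = 14         # crashes on the books

miles_per_crash = miles / crashes
print(round(miles_per_crash))   # roughly one crash every 57,000 miles

# Tesla's own human-driver benchmarks, from its Vehicle Safety Report / NHTSA
human_minor = 229_000   # miles per minor collision (Tesla's figure)
nhtsa_avg = 500_000     # NHTSA's broader police-reported average

print(round(human_minor / miles_per_crash, 1))  # ~4x the minor-collision rate
print(round(nhtsa_avg / miles_per_crash, 1))    # ~8.8x the police-reported rate
```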

-“The Left Doesn’t Hate Technology, We Hate Being Exploited,” by Gita Jackson at Aftermath.

-“Meta drops $65 million into super PACs to boost tech-friendly state candidates,” by Christine Mui in Politico.

-A great new report from climate researcher Ketan Joshi, “The AI Climate Hoax: Behind the Curtain of How Big Tech Greenwashes Impacts,” has been making headlines and is well worth a read. Perhaps we’ll dig deeper into it in a future issue.

-The LA Times reports that the Southern California air board rejected new pollution rules after an AI-generated flood of made-up comments. Here’s UCLA’s Evan George on how AI poses a unique threat to the civic process.

Okay okay, that’s it for this week. Thanks as always for reading. Hammers up.

...

Read the original on www.bloodinthemachine.com »

5 271 shares, 13 trendiness

Every Company Building Your AI Assistant Is Now an Ad Company


On January 16, OpenAI quietly announced that ChatGPT would begin showing advertisements. By February 9th, ads were live. Eight months earlier, OpenAI spent $6.5 billion to acquire Jony Ive’s hardware startup io. They’re building a pocket-sized, screenless device with built-in cameras and microphones, “contextually aware,” designed to replace your phone.

But this isn’t a post about OpenAI. They’re just the latest. The problem is structural.

Every single company building AI assistants is now funded by advertising. (We can quibble about Apple.)

And every one of them is building hardware designed to see and hear everything around you, all day, every day. These two facts are on a collision course, and local on-device inference is the only way off the track.

Before we talk about who’s building it, let’s be clear about what’s being built.

Every mainstream voice assistant today works behind a gate. You say a magic word (“Hey Siri,” “OK Google,” “Alexa”) and only then does the system listen. Everything before the wake word is theoretically discarded.

This was a reasonable design in 2014. It is a dead end for where AI assistance needs to go.
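The gate is simple to picture. A minimal sketch, operating on already-transcribed text frames (the wake word and the frame format here are invented for illustration):

```python
WAKE_WORD = "hey juno"  # hypothetical wake word

def gate(frames, wake_word=WAKE_WORD):
    """Model of a wake-word assistant: drop every frame heard
    before the trigger, capture everything after it."""
    captured, listening = [], False
    for frame in frames:
        if listening:
            captured.append(frame)
        elif wake_word in frame.lower():
            listening = True  # frames before this point are already gone
    return captured

speech = [
    "we're out of milk",             # lost
    "soccer practice moved to 5pm",  # lost
    "hey juno",
    "set a timer for ten minutes",   # the only thing the assistant sees
]
print(gate(speech))
```

Everything before the trigger, which is where the useful household context lives, never reaches the assistant at all.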

Here’s what happens in a real kitchen at 6:30am (anonymized from one of our test homes; the real version was messier and included a toddler screaming about Cheerios):

Nobody is going to preface that with a wake word. The information is woven into natural speech between two flustered parents getting the family ready to leave the house. The moment you require a trigger, you lose the most valuable interactions: the ones that happen while people are living their lives, not thinking of how to give context to an AI assistant.

You cannot build proactive assistance behind a wake word. The AI has to be present in the room, continuously, accumulating context over days and weeks and months, to build the understanding that makes proactive help possible.

This is where every major AI company is heading. Not just audio: vision, presence detection, wearables, multi-room awareness. The next generation of AI assistants will hear and see everything. Some will be on your face or in your ears all day. They will be always on, always sensing, always building a model of your life.

The question is not whether always-on AI will happen. It’s who controls the data it collects. And right now, the answer to that question is: advertising companies.

Here’s where the industry’s response gets predictable. “We encrypt the data in transit.” “We delete it after processing.” “We anonymize everything.” “Ads don’t influence the AI’s answers.” “Read our privacy policy.” With cloud processing, every user is trusting:

• The company’s current privacy policy

• Every employee with production access

• Every third-party vendor in the processing pipeline

• Every government that can issue a subpoena or national security letter

• Every advertiser partnership that hasn’t been announced yet

• The company’s future privacy policy

OpenAI’s own ad announcement includes this language: “OpenAI keeps conversations with ChatGPT private from advertisers, and never sells data to advertisers.” It sounds reassuring. But Google scanned every Gmail for ad targeting for thirteen years before quietly stopping in 2017. Policies change. Architectures don’t.

When a device processes data locally, the data physically cannot leave the network. There is no API endpoint to call. There is no telemetry pipeline. There is no “anonymized usage data” that somehow still contains enough signal to be useful for ad targeting. The inference hardware sits inside the device or in the user’s home, on their network.

Your email is sensitive. A continuous audio and visual feed of your home is something else entirely. It captures arguments, breakdowns, medical conversations, financial discussions, intimate moments, parenting at its worst, the completely unguarded version of people that exists only when they believe nobody is watching. We wrote a deep dive on our memory system in Building Memory for an Always-On AI That Listens to Your Kitchen.

Amazon already showed us what happens. They eliminated local voice processing. They planned to feed Alexa conversations to advertisers. They partnered Ring with a surveillance network that had federal law enforcement access.

What happens when those same economic incentives are applied to devices that capture everything?

The counterargument is always the same: “Local models aren’t good enough.” Three years ago, that was true. It is no longer true.

You can run a complete ambient AI pipeline today (real-time speech-to-text, semantic memory, conversational reasoning, text-to-speech, and more) on a device that fits next to a cable box (remember those?). No fan noise. A one-time hardware purchase with no per-query fee and no data leaving the building. New model architectures, better compression, and open-source inference engines have converged to make this possible, and the silicon roadmap points in one direction: more capability per watt, every year. We’ve been running always-on prototypes in five homes. The complaints we get are about the AI misunderstanding context, not about raw model capability. That’s a memory architecture problem, not a model size problem.

Are local models as capable as the best cloud models? No. But we’re usually not asking our smart speaker to re-derive the Planck constant.
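The shape of such a pipeline, reduced to stubs — none of this is the authors’ actual code, and every function here is a stand-in for a local model:

```python
# Stand-ins for on-device stages; in a real system these would be local
# STT, memory/retrieval, LLM, and TTS engines. Nothing touches a network.
def speech_to_text(audio: bytes) -> str:
    return "we're out of coffee"        # canned transcript for the sketch

def remember(memory: list, utterance: str) -> list:
    memory.append(utterance)            # real systems would embed and index
    return memory

def reason(memory: list):
    if any("coffee" in u for u in memory):
        return "add coffee to the shopping list?"
    return None

def text_to_speech(reply: str) -> str:
    return f"[spoken] {reply}"

memory = []
for chunk in [b"\x00\x01"]:             # stands in for the continuous audio feed
    memory = remember(memory, speech_to_text(chunk))
    reply = reason(memory)              # local inference over accumulated context
    if reply:
        print(text_to_speech(reply))
```

The point of the sketch is that every arrow in the pipeline is a local function call; there is simply no stage at which data could be exfiltrated.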

Hardware that runs inference on-device. Models that process audio and video locally and never transmit it. There needs to be a business model based on selling the hardware and software, not the data the hardware collects. An architecture where the company that makes the device literally cannot access the data it processes, because there is no connection to access it through.

The most helpful AI will also be the most intimate technology ever built. It will hear everything. See everything. Know everything about the family. The only architecture that keeps that technology safe is one where it is structurally incapable of betraying that knowledge. Not policy. Not promises. Not a privacy setting that can be quietly removed in a March software update.

Choose local. Choose edge. Build the AI that knows everything but phones home nothing.

...

Read the original on juno-labs.com »

6 260 shares, 28 trendiness

Andrej Karpathy talks about “Claws”

Andrej Karpathy talks about “Claws”. Andrej Karpathy tweeted a mini-essay about buying a Mac Mini (“The apple store person told me they are selling like hotcakes and everyone is confused”) to tinker with Claws:

I’m definitely a bit sus’d to run OpenClaw specifically […] But I do love the concept and I think that just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls and a kind of persistence to a next level.

Looking around, and given that the high level idea is clear, there are a lot of smaller Claws starting to pop out. For example, on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code (fits into both my head and that of AI agents, so it feels manageable, auditable, flexible, etc.) and runs everything in containers by default. […]

Anyway there are many others - e.g. nanobot, zeroclaw, ironclaw, picoclaw (lol @ prefixes). […]

Not 100% sure what my setup ends up looking like just yet but Claws are an awesome, exciting new layer of the AI stack.

...

Read the original on simonwillison.net »

7 231 shares, 11 trendiness

CERN 2019 WorldWideWeb Rebuild

In December 1990, an application called WorldWideWeb was developed on a NeXT machine at the European Organization for Nuclear Research (known as CERN) just outside of Geneva. This program, WorldWideWeb, is the antecedent of most of what we consider or know of as “the web” today.

In February 2019, in celebration of the thirtieth anniversary of the development of WorldWideWeb, a group of developers and designers convened at CERN to rebuild the original browser within a contemporary browser, allowing users around the world to experience the rather humble origins of this transformative technology.

This project was supported by the US Mission in Geneva through the CERN & Society Foundation.

Ready to browse the World Wide Web using WorldWideWeb?

Select “Document” from the menu on the side.

Click here to jump in (and remember you need to double-click on links):

* History — a brief history of the application which was built in 1990 as a progenitor to what we know as “the web” today.

* Timeline — a timeline of the thirty years of influences leading up to (and the thirty years of influence leading out from) the publication of the memo that led to the development of the first web browser.

* The Browser — instructions for using the recreated WorldWideWeb browser, and a collection of its interface patterns.

* Typography — details of the NeXT computer’s fonts used by the WorldWideWeb browser.

* Inside the Code — a look at some of the original code of WorldWideWeb.

* Production Process — a behind-the-scenes look at how the WorldWideWeb browser was rebuilt for today.

* Related Links — links to additional historical and technical resources around the production of WorldWideWeb.

* Colophon — a bit of info about the folks behind the project.

...

Read the original on worldwideweb.cern.ch »

8 208 shares, 9 trendiness

Scan Gallery

DISCOUNTS: Instead of random discounts we prefer keeping the prices stable (already since early 2022)

US Shipping - Now all taxes and fees are included in the shipping cost at checkout


...

Read the original on openscan.eu »

9 194 shares, 18 trendiness

New law on more sustainable, circular and safe batteries enters into force

A new law to ensure that batteries are collected, reused and recycled in Europe enters into force today. The new Batteries Regulation will ensure that, in the future, batteries have a low carbon footprint, use minimal harmful substances, need fewer raw materials from non-EU countries, and are collected, reused and recycled to a high degree in Europe. This will support the shift to a circular economy, increase security of supply for raw materials and energy, and enhance the EU’s strategic autonomy.

In line with the circularity ambitions of the European Green Deal, the Batteries Regulation is the first piece of European legislation taking a full life-cycle approach in which sourcing, manufacturing, use and recycling are addressed and enshrined in a single law.

Batteries are a key technology to drive the green transition, support sustainable mobility and contribute to climate neutrality by 2050. To that end, starting from 2025, the Regulation will gradually introduce declaration requirements, performance classes and maximum limits on the carbon footprint of electric vehicles, light means of transport (such as e-bikes and scooters) and rechargeable industrial batteries.

The Batteries Regulation will ensure that batteries placed on the EU single market will be allowed to contain only a restricted amount of necessary harmful substances. Substances of concern used in batteries will be regularly reviewed.

Targets for recycling efficiency, material recovery and recycled content will be introduced gradually from 2025 onwards. All collected waste batteries will have to be recycled and high levels of recovery will have to be achieved, in particular of critical raw materials such as cobalt, lithium and nickel. This will guarantee that valuable materials are recovered at the end of their useful life and brought back into the economy by adopting stricter targets for recycling efficiency and material recovery over time.

Starting in 2027, consumers will be able to remove and replace the portable batteries in their electronic products at any point in the life cycle. This will extend the life of these products before their final disposal, encourage re-use, and contribute to the reduction of post-consumer waste.

To help consumers make informed decisions on which batteries to purchase, key data will be provided on a label. A QR code will give access to a digital passport with detailed information on each battery, helping consumers and especially professionals along the value chain in their efforts to make the circular economy a reality for batteries.

Under the new law’s due diligence obligations, companies must identify, prevent and address social and environmental risks linked to the sourcing, processing and trading of raw materials such as lithium, cobalt, nickel and natural graphite contained in their batteries. The expected massive increase in demand for batteries in the EU should not contribute to an increase of such environmental and social risks.

Work will now focus on the application of the law in the Member States, and on the drafting of secondary legislation (implementing and delegated acts) providing more detailed rules.

Since 2006, batteries and waste batteries have been regulated at EU level under the Batteries Directive. The Commission proposed to revise this Directive in December 2020 due to new socioeconomic conditions, technological developments, markets, and battery uses.

Demand for batteries is increasing rapidly. It is set to increase 14-fold globally by 2030, and the EU could account for 17% of that demand. This is mostly driven by the electrification of transport. Such exponential growth in demand for batteries will lead to an equivalent increase in demand for raw materials, hence the need to minimise their environmental impact.

In 2017, the Commission launched the European Battery Alliance to build an innovative, sustainable and globally competitive battery value chain in Europe, and ensure supply of the batteries needed for decarbonising the transport and energy sectors.

...

Read the original on environment.ec.europa.eu »

10 182 shares, 10 trendiness

What is OAuth?

I desperately need a Matt Levine style explanation of how OAuth works. What is the historical cascade of requirements that got us to this place?

There are plenty of explanations of the inner mechanical workings of OAuth, and lots of explanations about how various flows etc. work, but Geoffrey is asking a different question:

What I need is to understand why it is designed this way: concrete examples of use cases that motivate the design.

In the 19 years (!) since I wrote the first sketch of an OAuth specification, there has been a lot of minutiae and cruft added, but the core idea remains the same. Thankfully, it’s a very simple core. Geoffrey’s a very smart guy, and the fact that he’s asking this question made me think it’s time to write down an answer to it.

It’s maybe easiest to start with the Sign-In use case, which is a much more complicated specification (OpenID Connect) than core OAuth. OIDC uses OAuth under the hood, but helps us get to the heart of what’s actually happening.

We send a secret to a place that only the person trying to identify themselves can access, and they prove that they can access that place by showing us the secret.

The rest is just accumulated consensus: part bikeshedding (agreeing on vocabulary, etc.), part UX, and part making sure that all the specific mechanisms are secure.

There’s also an historical reason to start with OIDC to explain how all this works: in late 2006, I was working on Twitter, and we wanted to support OpenID (then 1.0) so that, ahem, Twitter wouldn’t become a centralized holder of online identities. After chatting with the OpenID folks, we quickly realized that, as it was constructed, we wouldn’t be able to support both desktop clients and web sign-in, since our users wouldn’t have passwords anymore! (Mobile apps didn’t exist yet, but weren’t far out.) So, in order to allow OpenID sign-in, we needed a way for folks using Twitter via alternative clients to sign in without a password.

There were plenty of solutions for this: Flickr had an approach, AWS had one, Delicious had one, lots of sites just let random other apps sign in to your account with your password, etc., but virtually every site in the “Web 2.0” cohort needed a way to do this. They were all insecure and all fully custom.

Rather than building TwitterAuth, I figured it was time to have a standard. Insert XKCD 927:

Fortunately, the charging one has been solved now that we've all standardized on mini-USB. Or is it micro-USB? Shit.

Thankfully, against all odds, we now have one standard for delegated auth. What it does is very simple:

At its core, OAuth for delegation is a standard way to do the following:

* The first half exists to send, with consent, a multi-use secret to a known delegate.

* The other half of OAuth details how the delegate can use that secret to make subsequent requests on behalf of the person that gave the consent in the first place.

That’s it. The rest is (sadly, mostly necessary) noise.
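Stripped of redirect URIs, PKCE, scopes, and token expiry, the two halves reduce to this toy sketch (all names here are hypothetical; a real authorization server does far more checking):

```python
import secrets

auth_codes, access_tokens = {}, {}

def authorize(user: str, client_id: str) -> str:
    """Half one: with the user's consent, hand a single-use secret
    (the authorization code) to a known delegate."""
    code = secrets.token_urlsafe(16)
    auth_codes[code] = (user, client_id)
    return code  # in reality, delivered via the client's registered redirect URI

def exchange(code: str, client_id: str) -> str:
    """The delegate trades the single-use code for a multi-use secret
    (the access token)."""
    user, expected_client = auth_codes.pop(code)  # pop: the code only works once
    assert client_id == expected_client
    token = secrets.token_urlsafe(32)
    access_tokens[token] = user
    return token

def api_request(token: str) -> str:
    """Half two: the delegate makes requests on the user's behalf."""
    return f"acting on behalf of {access_tokens[token]}"

code = authorize("alice", "print-shop")
token = exchange(code, "print-shop")
print(api_request(token))  # acting on behalf of alice
```

Note that the user’s password never appears anywhere in the exchange, which was the whole point of the Twitter-era problem described above.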

Obviously, the above elides absolute volumes of detail about how this is done securely and in a consistent, interoperable way. This is the unenviable work of standards bodies. I have plenty of opinions on the pros and cons of our current standards bodies, but that’s for another time.

There are very credible arguments that the set-of-IETF-standards-that-describe-OAuth is less a standard than a framework. I’m not sure that’s a bad thing, though. HTML is a framework, too: not all browsers need to implement all features, by design.

OIDC itself is an interesting thing: immediately after creating OAuth, we realized that we could compose OpenID’s behaviour out of OAuth, even though it was impossible to use OpenID to do what OAuth did. For various social, political, technical, and operational reasons it took the better part of a decade to write down the bits to make that insight a thing that was true in the world. I consider it one of my biggest successes with OAuth that I was in no way involved in that work. I don’t have children, but I know all the remarkable and complicated feelings of having created something that takes on a life of its own.

More generally, though, authentication and authorization are complicated, situated beasts, impossible to separate from the UX and architectural concerns of the systems that incorporate them.

The important thing when implementing a standard like OAuth is to understand first what you’re trying to do and why. Once that’s in place, the how is usually a “simple” question of mechanics with fairly constrained requirements. I think that’s what makes Geoffrey’s question so powerful: it digs into the core of why OAuth is often so inscrutable to so many. The complicated machinery of the standard means that the actual goals it encodes are lost.

Hopefully, this post helps clear that up!

...

Read the original on leaflet.pub »
