10 interesting stories served every morning and every evening.

Belgium stops decommissioning nuclear power plants

dpa-international.com

30.04.2026, 11:37

Belgium will stop decommissioning its nuclear power plants, Prime Minister Bart De Wever announced on Thursday.

The government is going to negotiate with operator ENGIE over the nationalization of the plants, De Wever said.

"This government chooses safe, affordable, and sustainable energy. With less dependence on fossil imports and more control over our own supply," he wrote on X.

ENGIE said it signed a letter of intent with the Belgian government on exclusive negotiations.

"The agreement covers the potential acquisition of the complete nuclear fleet of seven reactors, the associated personnel, all nuclear subsidiaries, as well as all associated assets and liabilities, including decommissioning and dismantling obligations," a press release said.

A basic agreement is expected to be reached by October, it said.

Belgium originally decided in 2003 to phase out nuclear power production by 2025, but political debate and energy security concerns have led to delays.

Last year the Belgian parliament voted by a large majority to end the nuclear phase-out. De Wever's government also aims to build new nuclear power plants.

Belgium has seven nuclear reactors at two different sites, although three reactors have already been taken off the grid.

The fate of the ageing installations has been debated for decades. The country is currently heavily dependent on gas imports to cover its electricity needs, as it has been struggling to expand renewable power generation significantly.

Bart De Wever on X

ENGIE press release

(c) 2026 dpa Deutsche Presse Agentur GmbH

Prompt API · Issue #1213 · mozilla/standards-positions

github.com


Rivian Support

rivian.com

Dispute over fate of Kenyan workers who saw Meta AI glasses films

www.bbc.com

Meta in row after workers who say they saw smart glasses users having sex lose jobs

20 hours ago

Chris Vallance, Senior technology reporter

Meta is under pressure to explain why it cancelled a major contract with a company it was using to train AI, shortly after some of its Kenya-based workers alleged they had to view graphic content captured by Meta smart glasses.

Less than two months later, Meta ended its contract with Sama, which Sama said would result in 1,108 workers being made redundant.

Meta says it's because Sama did not meet its standards, a criticism Sama rejects. A Kenyan workers' organisation alleges Meta's decision was caused by the staff speaking out.

Meta has not addressed that allegation but told BBC News in a statement it had "decided to end our work with Sama because they don't meet our standards".

Sama has defended its work.

"Sama has consistently met the operational, security and quality standards required across our client engagements, including with Meta," it said in a statement.

"At no point were we notified of any failure to meet those standards, and we stand firmly behind the quality and integrity of our work."

'Naked bodies'

In late February, Swedish newspapers Svenska Dagbladet (SvD) and Goteborgs-Posten (GP) published an investigation which included the accounts of unnamed workers who had been asked to review videos filmed by Meta's glasses.

"We see everything - from living rooms to naked bodies," one worker reportedly said.

At the time of publication, Meta admitted subcontracted workers might sometimes review content filmed on its smart glasses when people shared it with Meta AI.

It said this was for the purpose of improving the customer experience, and was a common practice among other companies.

However, the revelations have prompted regulators to act.

Shortly after the Swedish investigation, the UK data watchdog, the Information Commissioner's Office (ICO), wrote to Meta about what it called a "concerning" report.

The Office of the Data Protection Commissioner in Kenya also announced it was commencing an investigation into privacy concerns raised by the glasses.

In a statement in response to news of the redundancies, a Meta spokesperson told the BBC: "Last month, we paused our work with Sama while we looked into these claims.

"We take them seriously. Photos and videos are private to users. Humans review AI content to improve product performance, for which we get clear user consent."

'Standards of secrecy'

The glasses' features can include translating text, or responding to questions about what the user is looking at - particularly useful for those who are blind or partially sighted.

However, as the devices have grown in popularity, so too have concerns about their misuse.

The workers the Swedish newspapers spoke to were data annotators, teaching Meta's AI to interpret images by manually labelling content.

The workers said they also reviewed transcripts of interactions with the AI to check it had answered questions adequately.

In one instance, a worker told the newspapers, a man's glasses were left recording in a bedroom where they later filmed a woman, apparently the man's wife, undressing.

Meta's glasses have a light in the corner of the frames that is turned on when the built-in camera is recording.

Sama, a US-headquartered outsourcing business which began as a non-profit organisation with the aim of increasing employment through the provision of tech jobs, is now an "ethical" B-corp.

But this is not the first time a contract with Meta has soured.

An earlier deal to moderate Facebook posts attracted criticism, alongside legal action by former employees - some of whom described being exposed to graphic, traumatising content.

Sama later said it regretted taking the work.

Naftali Wambalo of the Africa Tech Workers Movement, who is a petitioner in the continuing legal action around that case, told the BBC he had also spoken with workers involved in the smart glasses contract.

Wambalo believed the reason for Meta's ending the work was that it didn't want workers speaking out about humans sometimes reviewing content captured by the smart glasses.

"What I think are the standards they are talking about here are standards of secrecy," he told BBC News.

The BBC has asked Meta to respond to this point.

The tech giant has previously said that users were made aware of the possibility of human review in its terms of service.

Mercy Mutemi, a lawyer representing the petitioners, who is also executive director of campaign group the Oversight Lab, said Meta's statement should be a warning to the Kenyan government.

"We've been told that this is our entry route into the AI ecosystem," she told the BBC. "This is a very flimsy foundation to build your entire industry on."


security - Re: CVE-2026-31431: CopyFail: linux local privilege scalation

www.openwall.com


Message-ID: <87se8dgicq.fsf@gentoo.org>

Date: Thu, 30 Apr 2026 05:52:37 +0100

From: Sam James <sam@…too.org>

To: oss-se­cu­rity@…ts.open­wall.com

Cc: Jan Schaumann <jschauma@…meister.org>

Subject: Re: CVE-2026-31431: CopyFail: linux local privilege scalation

Eddie Chapman <eddie@…k.net> writes:

> On 29/04/2026 21:23, Jan Schaumann wrote:
>> Affected and fixed versions
>> ===========================
>> Issue introduced in 4.14 with commit
>> 72548b093ee38a6d4f2a19e6ef1948ae05c181f7 and fixed in
>> 6.18.22 with commit
>> fafe0fa2995a0f7073c1c358d7d3145bcc9aedd8
>> Issue introduced in 4.14 with commit
>> 72548b093ee38a6d4f2a19e6ef1948ae05c181f7 and fixed in
>> 6.19.12 with commit
>> ce42ee423e58dffa5ec03524054c9d8bfd4f6237
>> Issue introduced in 4.14 with commit
>> 72548b093ee38a6d4f2a19e6ef1948ae05c181f7 and fixed in
>> 7.0 with commit
>> a664bf3d603dc3bdcf9ae47cc21e0daec706d7a5
>> https://git.kernel.org/stable/c/fafe0fa2995a0f7073c1c358d7d3145bcc9aedd8
>> https://git.kernel.org/stable/c/ce42ee423e58dffa5ec03524054c9d8bfd4f6237
>> https://git.kernel.org/stable/c/a664bf3d603dc3bdcf9ae47cc21e0daec706d7a5
>
> So this is one of the worst make-me-root vulnerabilities in the kernel
> in recent times. I see that on the 11th of April 6.19.12 & 6.18.22

LinkedIn Is Scanning Your Browser Extensions. This Is How They Use the Data. — 404

404privacy.com

When companies get caught doing this sort of thing, the response is almost always the same: "we're using this technology to combat fraud," or "ensure positive user experience," or "save computing resources," or some other hogwash.

The simple truth: there's no reason to be collecting data that can be used to identify a user across the web if they're not signed in to your service.

The harm of companies like Experian or LinkedIn being able to correlate all of your web traffic back to you is not hard to imagine. It raises a simple question: should a company involved in my professional life have access to my personal information obtained without my explicit consent?

No. Full stop.

This is not new

According to records documented by browsergate.eu and a GitHub repository tracking the extension list, LinkedIn's extension scanning dates to at least 2017, when the list contained 38 entries. My count? As of April 2026, LinkedIn has identified and tracks 6,278 extensions.

The list is actively maintained and expanding.

At this scale the catalog was not built by hand. Someone wrote tooling to crawl Chrome Web Store extension packages, parse each manifest for web-accessible resources, identify a probe target, and add the entry to the list. This is infrastructure that has been in place for nearly a decade.
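The cataloging step can be sketched. This is a hypothetical reconstruction, not LinkedIn's actual tooling: given an extension's manifest.json, pick a concrete web-accessible file that a page could later probe for.

```javascript
// Sketch: choose a probe target from a Chrome extension's manifest.json.
// In Manifest V3, web_accessible_resources is a list of groups like
// { resources: [...], matches: [...] }; any listed resource is fetchable
// from matching pages at chrome-extension://<id>/<path>.
function pickProbeTarget(manifestJson) {
  const manifest = JSON.parse(manifestJson);
  const war = manifest.web_accessible_resources || [];
  for (const group of war) {
    // Manifest V2 used a flat array of path strings; V3 uses objects.
    const resources =
      typeof group === "string" ? [group] : group.resources || [];
    for (const path of resources) {
      if (!path.includes("*")) return path; // need a concrete, non-glob path
    }
  }
  return null; // nothing exposed: extension can't be detected this way
}
```

Extensions that expose no concrete web-accessible file simply can't be probed by this method, which is presumably why the list, large as it is, doesn't cover the whole Web Store.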

I verified this myself

I opened LinkedIn in Chrome. I opened developer tools (F12 or Inspect) and the console filled with errors.

Each one of those errors is LinkedIn asking your computer if you have a specific extension installed.

Skip to the bottom for more technical details.

LinkedIn already knows so much about you, why tell them more?

Most fingerprinting operations work against anonymous visitors. The fingerprint allows a site to recognize a returning browser without cookies.

The profile that results is technically identified but not necessarily personally identified. The site knows a device, not a person. Still an issue, but not inherently linked to any personal information.

LinkedIn is not working with anonymous visitors.

LinkedIn knows your name. Employer. Job title. Career history. Salary range. Professional network. Location.

You provided them with all of it.

When LinkedIn's extension scan runs in your browser, it is not building a device profile for an unknown visitor. It is appending a detailed software inventory to a profile that already contains your verified professional identity.

The harm is specific.

Hundreds of job search extensions are in the scan list. LinkedIn knows which of its users are quietly looking for work before they've told their employer.

Extensions tied to political content, religious practice, disability accommodation, and neurodivergence are in the list. Your browser software becomes a source of inferences about your personal life, attached without your knowledge to your professional identity.

And because LinkedIn knows where each user works, none of this is linked only to an individual. The scan results from one employee contribute to a picture of their organization. Across enough employees, LinkedIn can map a company's internal tooling, security products, competitor subscriptions, and workflows, without that organization's knowledge or consent. Your browser becomes a window into your employer.

None of this is disclosed in LinkedIn's privacy policy. There is no mention of extension scanning in any public-facing document. No user was asked for consent. No user was informed.

Why this matters beyond LinkedIn

The precedent

LinkedIn is using these extension lists to make inferences and take enforcement actions against users who have them installed. According to browsergate, Milinda Lakkam confirmed this under oath, saying, "LinkedIn took action against users who had specific extensions installed."

Users who had no idea their software was being inventoried, no idea the inventory was being used against them, and no way to know it was happening, because none of it appears in LinkedIn's privacy policy.

The fingerprinting ecosystem problem

Browser fingerprinting is usually discussed as a tracking problem contained to one site. A site collects signals, builds a profile, recognizes you across sessions. The problem stays local.

That framing understates what's actually happening.

LinkedIn's extension scan produces a detailed software inventory linked to a verified identity. That profile doesn't have to stay at LinkedIn to be useful.

If LinkedIn purchases a third-party behavioral dataset and your fingerprint appears in it, they can append that data to what they already know about you. Your browsing behavior off LinkedIn, your purchase history, your location patterns, your interests, all of it becomes part of a profile that is linked to your LinkedIn account.

The reverse is also true. LinkedIn integrates third-party scripts, including Google's reCAPTCHA Enterprise, loaded on every page visit. Data flows between platforms. A fingerprint that LinkedIn has linked to your verified identity can inform advertising and tracking systems far outside linkedin.com.

You log into LinkedIn once, and the fingerprint that visit produces can follow you across the web.

This is the larger ecosystem problem. Browser fingerprinting is the connective tissue of the modern surveillance economy. It is how profiles built on one platform get enriched with data from another. It is why you get Instagram or Facebook ads for the item you were just looking up on Google.

It is how your professional identity, your browsing behavior, your installed software, and your location history get stitched together into something none of those individual platforms could build alone.

The people this is a real threat to

For journalists, lawyers, researchers, and human rights investigators, that distinction is operationally significant. Your LinkedIn profile is one of the most detailed verified identity documents that exists about you online. You built it deliberately, for professional purposes, with your real name attached. The extension scan means that profile now includes a record of every privacy tool, security extension, research tool, and productivity application installed in your browser, collected without your knowledge, linked to your verified identity, and transmitted encrypted to LinkedIn's servers with every action you take on the platform.

If you use LinkedIn and Chrome, this is happening to you right now.

Advanced JavaScript fingerprinting

The extension scan is not a standalone feature. It is part of a broader device fingerprinting system LinkedIn calls APFC (Anti-fraud Platform Features Collection), internally also referred to as DNA (Device Network Analysis).

While LinkedIn is a little more forthcoming about these tracking methods, as they are commonly used on commercial websites, together they establish a pattern of behavior.

That system collects 48 browser and device characteristics on every visit: canvas fingerprint, WebGL renderer and parameters, audio processing behavior, installed fonts, screen resolution, pixel ratio, hardware concurrency, device memory, battery level, local IP address via WebRTC, time zone, language, and more.

The extension scan is one input into a much larger profile.

Technically, what's happening?

LinkedIn's code fires a fetch() request to a chrome-extension:// URL, looking for a specific file inside an extension's package. When the extension isn't installed, Chrome blocks the request and logs the failure. When it is installed, the request resolves silently and LinkedIn records it.
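A minimal sketch of that probe, with fetch injectable so the logic can run outside a browser; the extension ID and file path in any real call would come from the hardcoded list, and the names here are illustrative:

```javascript
// Sketch of the per-extension probe described above. A real page would use
// the browser's global fetch; it is a parameter here so the logic can be
// exercised without a browser.
async function probeExtension(id, file, fetchFn = fetch) {
  try {
    await fetchFn(`chrome-extension://${id}/${file}`);
    return true; // request resolved: the extension is installed
  } catch {
    return false; // blocked by the browser: not installed (the console error)
  }
}
```

The blocked request is exactly the red console error you see when you watch the scan run.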

The scan ran for around 15 minutes on my computer, and it searched for over 6,000 extensions.

You can verify this yourself. Open LinkedIn in Chrome. Open developer tools. Go to the console tab. Watch what happens. Every red error is a part of your fingerprint.

The code

The system responsible for this lives in JavaScript code that LinkedIn runs in every Chrome visitor's browser. The file is approximately 1.6 megabytes (it's changed since browsergate's analysis) of minified and partially obfuscated JavaScript.

Standard minification compresses code for performance. Obfuscation is a separate step that makes code harder to read and understand. LinkedIn chose to obfuscate the exact module containing the extension scanning system, while also burying it in a JavaScript file thousands of lines long.

Inside that file, there is a hardcoded array of browser extension IDs. As of February 2026 that array contained 6,278 entries. Each entry has two fields: a Chrome Web Store extension ID and a specific file path inside that extension's package.

The file path is not incidental. Chrome extensions expose internal files to web pages through the web_accessible_resources field. When an extension is installed and has declared a file as accessible, a fetch() request to chrome-extension://{id}/{file} succeeds. When it isn't installed, Chrome blocks the request. LinkedIn has identified a specific accessible file for each of the 6,278 extensions in its list and probes for it directly.

The scan runs in two modes. The first fires all requests simultaneously using Promise.allSettled(), probing all of the extensions in parallel. The second fires them sequentially with a configurable delay between each request, spreading network activity over time and reducing its visibility in monitoring tools. LinkedIn can switch between modes using internal feature flags. The scan can also be deferred to requestIdleCallback, which delays execution until the browser is idle so the user sees no performance impact.
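The two modes might look roughly like this; the function names and structure are my reconstruction, not LinkedIn's code, and `probe` stands for a per-extension check that resolves true or false:

```javascript
// Parallel mode: fire every probe at once and keep the entries whose
// probe resolved true.
async function scanParallel(entries, probe) {
  const results = await Promise.allSettled(
    entries.map((e) => probe(e.id, e.file))
  );
  return entries.filter(
    (_, i) => results[i].status === "fulfilled" && results[i].value
  );
}

// Sequential mode: one probe at a time with a configurable pause,
// spreading the network activity out so it is less visible.
async function scanSequential(entries, probe, delayMs) {
  const found = [];
  for (const e of entries) {
    if (await probe(e.id, e.file)) found.push(e);
    await new Promise((r) => setTimeout(r, delayMs));
  }
  return found;
}
```

Both return the same detections; the only difference is the traffic shape, which is presumably why the mode is switchable by feature flag.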

A second detection system called Spectroscopy operates independently of the extension list. It walks the entire DOM tree, inspecting every text node and element attribute for references to chrome-extension:// URLs. This catches extensions that modify the page even if they aren't in LinkedIn's hardcoded list. Together the two systems cover extensions that are merely installed and extensions that actively interact with the page.
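A simplified, hypothetical version of that sweep: the real system walks the live DOM, but scanning serialized markup for chrome-extension:// URLs shows the same idea (Chrome extension IDs are 32 lowercase letters from a through p):

```javascript
// Collect the extension IDs referenced anywhere in a chunk of markup.
// An extension that injected an <img>, <script>, or style pointing at its
// own chrome-extension:// URL reveals itself this way, even if it is not
// in any hardcoded probe list.
function findExtensionReferences(html) {
  const ids = new Set();
  for (const m of html.matchAll(/chrome-extension:\/\/([a-p]{32})/g)) {
    ids.add(m[1]);
  }
  return [...ids];
}
```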

Both systems feed into the same telemetry pipeline. Detected extension IDs are packaged into AedEvent and SpectroscopyEvent objects, encrypted with an RSA public key, and transmitted to LinkedIn's li/track endpoint. The encrypted fingerprint is then injected as an HTTP header into every subsequent API request made during your session. LinkedIn receives it with every action you take for the duration of your visit.

The legal context

browsergate.eu has documented the legal arguments in detail and their work is worth reading in full. The relevant context here is this: in 2024, Microsoft was designated as a gatekeeper under the EU's Digital Markets Act. LinkedIn is one of the regulated products. The DMA requires gatekeepers to allow third-party tools access to user data and prohibits gatekeepers from taking action against users of those tools.

browsergate.eu argues that LinkedIn's systematic enforcement against third-party tool users, combined with the covert extension scanning used to identify them, constitutes non-compliance with that regulation. Whether that argument prevails is a legal question.

What is not in question is that a criminal investigation is now open. The Cybercrime Unit of the Bavarian Central Cybercrime Prosecution Office in Bamberg confirmed an investigation. That office handles serious cybercrime cases with cross-jurisdictional reach. This is not a compliance dispute. It is a criminal matter.

I contacted browsergate.eu directly while preparing this piece. They confirmed the criminal investigation, provided the case number, and indicated the full court documents are being prepared for public release.

I will update this article when they are available.

How an Oil Refinery Works

www.construction-physics.com

Though wind and solar continue to carve out larger and larger shares of world energy supply, the modern world still runs on petroleum, and will continue to do so for the foreseeable future. The world consumes over 100 million barrels of oil a day. As of 2023, oil was responsible for 30% of all energy use worldwide, higher than any other energy source (though its share has been gradually falling). In chemical manufacturing, petroleum is even more critical: an astounding 90% of chemical feedstocks are derived from oil or gas. Virtually all plastic comes from chemicals extracted from oil or gas, and petrochemicals are used to produce everything from lubricants to paint to plywood to synthetic fabrics to fertilizer.

Our enormous consumption of petroleum is made possible by oil refineries. When oil comes out of the ground, it's a complex mixture of thousands of different chemicals. Oil refineries take in this mixture and process it, turning it into chemicals we can actually use. Because of the scale of worldwide petroleum consumption, oil refineries are some of the largest industrial facilities in the world. A large oil refinery will occupy thousands of acres and cost billions of dollars to construct, ultimately refining hundreds of thousands of barrels of oil each day.

Oil is a liquid produced from decomposing organic materials, mostly plankton and algae that died and sank to the bottom of ancient oceans. This dead organic matter gradually got covered with sediment, and over millions of years it transformed into crude oil. Crude oil is a mixture of thousands of different chemicals, most of which are hydrocarbons: molecules that are various arrangements of carbon and hydrogen atoms. The molecules in crude oil range from the simple, such as propane (three carbons and eight hydrogens) and butane (four carbons and ten hydrogens), to the complex — some asphaltene molecules in crude oil can contain thousands of individual atoms.1

Crude oils extracted from different parts of the Earth will have different mixtures of hydrocarbons and other molecules, which has given rise to a sort of crude oil taxonomy. "Heavy" crude oils, found in places like Canada's oil sands, will have more heavy molecules, while "light" crude oils found in places like Saudi Arabia's Ghawar field will have more light molecules. "Sweet" crudes, like the crudes extracted from the Brent oil field in the North Sea, have lower sulfur content, while "sour" crudes, like some of the crudes extracted from the Gulf of Mexico, have greater sulfur content.

The job of an oil refinery is to process this mixture of hydrocarbons and other molecules: separating the mixture into individual chemicals or groups of chemicals, and using various chemical reactions to change low-value chemicals into more valuable, useful ones.

A refinery makes use of several different methods to separate and process crude oil, but the most important process of all is probably distilling. Different molecules within crude oil boil at different temperatures, and condense back into liquid at different temperatures. Smaller, lighter molecules boil and condense at lower temperatures, while larger and heavier molecules boil and condense at higher temperatures. You can describe this range of boiling points with a distillation curve, which shows what fraction of the crude oil boils at different temperatures. In the example curve below, we can see that at about 350°C half the crude has boiled off, and at 525°C about 80% of the crude has boiled off. Different crude oils will have slightly different distillation curves, depending on the proportion of different molecules within them.
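Reading a value off such a curve is simple linear interpolation between measured points. A small sketch, using made-up curve points chosen only to loosely match the figures above (about 50% boiled at 350°C, about 80% at 525°C):

```javascript
// Illustrative distillation curve: [temperature °C, cumulative fraction boiled].
// These points are invented for the example, not real assay data.
const curve = [
  [100, 0.1],
  [200, 0.25],
  [350, 0.5],
  [525, 0.8],
  [650, 0.9],
];

// Linearly interpolate the fraction of crude boiled off at a temperature.
function fractionBoiledAt(tempC) {
  if (tempC <= curve[0][0]) return curve[0][1];
  for (let i = 1; i < curve.length; i++) {
    const [t1, f1] = curve[i - 1];
    const [t2, f2] = curve[i];
    if (tempC <= t2) return f1 + ((tempC - t1) / (t2 - t1)) * (f2 - f1);
  }
  return curve[curve.length - 1][1]; // beyond the last point: the residuals
}
```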

Substances derived from crude oil are often mixtures of chemicals defined by their range of boiling points. Gasoline, for instance, isn't just one chemical: it's a mixture of hydrocarbons, mostly molecules with between four and 12 carbon atoms. The EIA defines finished gasoline as "having a boiling range of 122 to 158 degrees Fahrenheit at the 10 percent recovery point to 365 to 374 degrees Fahrenheit at the 90 percent recovery point."2

Oil refineries can use this range of boiling and condensation to separate crude oil into different groups of chemicals, or fractions, using a distillation column. When crude oil enters a refinery, the salt gets removed from it, and it's then heated to around 650–750°F, which turns most of the oil into a vapor. The vapor is then fed into a tall column containing trays at different heights, each filled with liquid. As the hot vapor rises through the column, at each tray it passes through the liquid, which cools it slightly. When the vapor cools enough, it condenses back into liquid. The heaviest molecules with the highest boiling points condense first, at the bottom of the column, while the lighter ones condense last, at the top. The very lightest molecules don't condense at all: they exit the top of the column while remaining a gas. At the same time, the very heaviest molecules remain a liquid the entire time, and exit the bottom of the column. Thus, molecules of different weights can be separated out.

Essentially every oil refinery first distills crude oil into various fractions in a distillation column, though the exact fractions separated might vary from refinery to refinery. Because this distillation is done at atmospheric pressure, this first step in the refining process is referred to as "atmospheric distillation." The simplest refineries might only do atmospheric distillation, but most refineries will then send these various fractions along for further processing. There are a LOT of processes that a refinery might use, depending on what it's designed to produce, so we'll just look at some of the most widely used ones.

The gas that comes out of the top of atmospheric distillation will be a mixture of several different light molecules — propane, methane, butane, isobutane (butane with a slightly different molecular arrangement) and so on. To separate this mixture into its component gases, a refinery can send it to a gas plant, which contains a series of distillation columns designed to condense various substances out of the mixture. So gas might flow through a "debutanizing tower" to separate butane, propane and lighter gases from the rest of the mixture; the butane-and-lighter gases might then be sent to a "depropanizing tower" to separate the propane from the butane.3

While light gases come out of the top of a dis­til­la­tion col­umn, heavy liq­uids come out the bot­tom. The very heav­i­est mol­e­cules, which emerge from dis­til­la­tion with­out ever hav­ing evap­o­rated at all, are known as resid­u­als. Many of the heav­ier mol­e­cules aren’t par­tic­u­larly valu­able by them­selves, and thus one of the most im­por­tant func­tions of a re­fin­ery is crack­ing — split­ting heavy frac­tions, such as heavy fuel oil, into lighter, more valu­able ones such as gaso­line.

Cracking was in­vented in the early 20th cen­tury as a way to ex­tract more gaso­line from a bar­rel of crude oil to meet ris­ing de­mand from car us­age. Over the years crack­ing meth­ods have evolved, and to­day most re­finer­ies use some fla­vor of cat­alytic crack­ing (or cat crack­ing”). In cat­alytic crack­ing, the heavy frac­tions from at­mos­pheric dis­til­la­tion are mixed with a cat­a­lyst (a ma­te­r­ial de­signed to speed up chem­i­cal re­ac­tions) and sub­jected to heat and pres­sure, split­ting the heavy mol­e­cules into lighter ones. The cat­a­lyst is then sep­a­rated from the mix­ture us­ing a cy­clonic sep­a­ra­tor — es­sen­tially, the mix­ture is spun around, sep­a­rat­ing out the heav­ier cat­a­lyst from the rest of the mix­ture — cleaned, and reused, while the now-cracked (and there­fore va­por-iz­able) oil is sent to an­other dis­til­la­tion col­umn which splits it into var­i­ous frac­tions.

Most cat­alytic crack­ing is fluid cat­alytic crack­ing, which uses a sand-like cat­a­lyst that be­haves as a fluid when mixed with the heavy frac­tions. Different com­pa­nies have de­vel­oped dif­fer­ent fluid cat­alytic crack­ing processes, and dif­fer­ent re­finer­ies might use mul­ti­ple cat­alytic crack­ers in dif­fer­ent parts of the process.

Catalytic crackers are designed to encourage the chemical reactions that break apart heavy hydrocarbons, but these reactions can also occur within the distillation column if the heat is high enough. Because cracking is disruptive to the distillation process, refineries limit the temperature in atmospheric distillation to around 650–750°F. This leaves behind a mixture of heavy, unboiled hydrocarbons at the bottom of the column. It would be useful to further separate this mixture into different fractions so that it could be reclaimed, but atmospheric distillation can’t do that without raising the temperature to the point where cracking starts to occur.

The so­lu­tion is to send this mix­ture to an­other dis­til­la­tion col­umn that’s kept at very low pres­sure, near vac­uum — this process is thus known as vac­uum dis­til­la­tion or vac­uum flash­ing. Lower pres­sure means lower boil­ing points, al­low­ing the heavy frac­tions to be dis­tilled with­out heat­ing them to the point where crack­ing starts to oc­cur.
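The effect of reduced pressure on boiling points can be roughly estimated with the Clausius–Clapeyron relation. The sketch below uses an assumed, illustrative boiling point and enthalpy of vaporization for a generic heavy fraction, not data for any real refinery stream:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def boiling_point_at_pressure(t_boil_k, p_ref_atm, p_new_atm, dh_vap_j_mol):
    """Estimate the boiling point at a new pressure from the boiling point at
    a reference pressure, via the Clausius-Clapeyron relation (assumes the
    enthalpy of vaporization is roughly constant over the range)."""
    inv_t = 1.0 / t_boil_k - (R / dh_vap_j_mol) * math.log(p_new_atm / p_ref_atm)
    return 1.0 / inv_t

# Illustrative heavy fraction: boils at ~800 K at 1 atm, with an assumed
# enthalpy of vaporization of 60 kJ/mol. Near-vacuum pressure (0.05 atm)
# drops the estimated boiling point by roughly 200 K.
t_vac = boiling_point_at_pressure(800.0, 1.0, 0.05, 60_000.0)
```

Lower pressure shifts the whole boiling curve down, which is exactly why the vacuum column can distill fractions that would crack at atmospheric pressure.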

Some of the heavy frac­tions that come out of vac­uum dis­til­la­tion might be sent di­rectly to a cat­alytic crack­ing unit to split them into lighter ones. But the very heav­i­est mol­e­cules that come out of the bot­tom of the vac­uum dis­til­la­tion col­umn aren’t suit­able for cat­alytic crack­ing — many of them con­tain heavy met­als that would poi­son the cat­a­lyst, and the chem­i­cal re­ac­tions of these mol­e­cules tend to pro­duce coke (a car­bon-rich solid), which would gum up the cat­a­lyst. Because it’s use­ful to crack these very heavy mol­e­cules, some re­finer­ies will use ther­mal crack­ing processes, which use heat to split mol­e­cules apart. Cokers are ther­mal crack­ers that use heat to crack the heav­i­est mol­e­cules into lighter ones and coke. The lighter mol­e­cules are sent to a dis­til­la­tion col­umn to be sep­a­rated; the coke can be burned as fuel, or as a man­u­fac­tur­ing in­put (the elec­trodes used in alu­minum smelt­ing, for in­stance, are made from coke). Another type of ther­mal crack­ing, vis­break­ing (short for vis­cos­ity break­ing), is used to crack some mol­e­cules and re­duce the vis­cos­ity of the re­main­ing frac­tions.

Besides crack­ing, a re­fin­ery might em­ploy a va­ri­ety of other processes to mod­ify the chem­i­cal struc­ture of var­i­ous mol­e­cules. Catalytic re­form­ing takes the naph­tha frac­tion (the part of the crude oil with a boil­ing point be­tween ~122°F and ~400°F) and ex­poses it to heat and pres­sure in the pres­ence of a cat­a­lyst to pro­duce a new mix­ture of chem­i­cals called re­for­mate that is used to make gaso­line. Isomerization processes take var­i­ous mol­e­cules, such as bu­tane, and mod­ify their phys­i­cal arrange­ment to pro­duce iso­mers — mol­e­cules with iden­ti­cal chem­i­cal for­mu­las but dif­fer­ent struc­tural arrange­ments. Hydrotreating re­acts var­i­ous crude oil frac­tions with hy­dro­gen in the pres­ence of a cat­a­lyst to re­move im­pu­ri­ties and im­prove their qual­ity. (Hydrotreating can be done on its own, but it’s also of­ten com­bined with other processes. Hydrocracking com­bines hy­drotreat­ing with cat­alytic crack­ing, and residue hy­dro­con­ver­sion com­bines hy­drotreat­ing with ther­mal crack­ing.)

To store the var­i­ous in­puts and out­puts of these processes, oil re­finer­ies also have huge num­bers of stor­age tanks called tank farms, which are ca­pa­ble of stor­ing mil­lions of gal­lons of var­i­ous liq­uids. Gases like propane and bu­tane will typ­i­cally be stored as pres­sur­ized liq­uids, ei­ther in above-ground tanks or in un­der­ground cav­erns or salt domes.

To get a sense of how these var­i­ous processes might be arranged, we can look at how they’re im­ple­mented in an ac­tual re­fin­ery. The map be­low shows Chevron’s Richmond, California re­fin­ery, a mod­er­ately large re­fin­ery ca­pa­ble of pro­cess­ing about a quar­ter mil­lion bar­rels of crude oil a day. The tank farm oc­cu­pies the south half of the site, while the pro­cess­ing area wraps around the north and east.

The chart be­low shows the daily ca­pac­ity of var­i­ous processes at the re­fin­ery.

We can see that Chevron Richmond has many of the processes that we de­scribed above: in ad­di­tion to ~257,000 bar­rels of at­mos­pheric dis­til­la­tion, it has ~123,000 bar­rels of vac­uum dis­til­la­tion, ~90,000 bar­rels of cat­alytic crack­ing, and ~71,000 bar­rels of cat­alytic re­form­ing. (Chevron Richmond does­n’t have any cok­ing ca­pac­ity, but Chevron’s slightly larger El Segundo re­fin­ery in Los Angeles does.)

To see how these processes are ac­tu­ally arranged, we can look at a process flow di­a­gram for the re­fin­ery. (This di­a­gram is avail­able be­cause sev­eral years ago Chevron ex­ten­sively mod­i­fied this re­fin­ery, which re­quired them to sub­mit a very de­tailed en­vi­ron­men­tal im­pact re­port to com­ply with California’s en­vi­ron­men­tal qual­ity laws.)

We can see that the re­fin­ing process starts with at­mos­pheric dis­til­la­tion (though the re­fin­ery also processes some heavy gas oil that can skip the dis­til­la­tion process), which sep­a­rates the crude into var­i­ous frac­tions. These frac­tions then get routed to var­i­ous other processes. The light gas gets sent to the gas plant, while the naph­tha gets sent to hy­drotreat­ing, cat­alytic re­form­ing, and iso­mer­iza­tion. Jet fuel and diesel fuel are sent to their own hy­drotreat­ing processes, and the heav­ier frac­tions get sent to var­i­ous cat­alytic crack­ing processes. The out­put of all these processes is var­i­ous crude oil prod­ucts: heavy fuel oil, diesel, jet fuel, lu­bri­cants, and, of course, gaso­line.

Chevron Richmond is just one of 132 op­er­a­ble oil re­finer­ies in the U.S., which col­lec­tively can re­fine over 18 mil­lion bar­rels of crude oil each day. The lo­ca­tion of these re­finer­ies is highly con­cen­trated: most of them are on the Gulf Coast of Texas and Louisiana, with other clus­ters in New Jersey, the Midwest, and in California.

If we look at the dis­tri­b­u­tion of re­fin­ery ca­pac­ity we can see that Chevron Richmond is on the larger side, but far from the largest. Around a fifth of US re­finer­ies are roughly as large or larger than Chevron Richmond. Six US re­finer­ies are more than twice as large, with the ca­pac­ity to re­fine more than half a mil­lion bar­rels a day. And some re­finer­ies around the world are even big­ger: the Jamnagar re­fin­ery in India, the world’s largest re­fin­ery by raw ca­pac­ity, can re­fine 1.4 mil­lion bar­rels of crude per day.

But look­ing at ca­pac­ity in bar­rels per day (which is es­sen­tially at­mos­pheric dis­til­la­tion ca­pac­ity) only tells part of the story. As we noted, dif­fer­ent re­finer­ies will have dif­fer­ent pro­cess­ing equip­ment in­stalled de­pend­ing on what they’re de­signed to pro­duce. Simple re­finer­ies will have lit­tle more than at­mos­pheric dis­til­la­tion, while more com­plex ones will em­ploy long se­quences of processes to pro­duce a wide range of highly re­fined prod­ucts. The chart be­low shows the col­lec­tive US re­fin­ing ca­pac­ity of var­i­ous processes.

We can look at the relative complexity of different US refineries using the Nelson Complexity Index, which is intended to measure how complex a refinery is. The index is constructed by taking each process a refinery employs, multiplying its refining capacity by a “complexity factor” that compares the cost of that process to atmospheric distillation, and then dividing by the refinery’s atmospheric distillation capacity. So a refinery that has 100,000 barrels of atmospheric distillation capacity (complexity factor of 1) and 50,000 barrels of vacuum distillation capacity (complexity factor of 2) would have a Complexity Index of 1 + 2 × 50,000 / 100,000 = 2. If it then added 25,000 barrels of catalytic cracking capacity (complexity factor of 6), its Complexity Index would rise to 1 + 1 + 6 × 25,000 / 100,000 = 3.5.
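As a sketch, the index calculation just described can be written out directly; the capacities and factors below are the hypothetical ones from the worked example, not figures for any real refinery:

```python
def nelson_complexity_index(atmospheric_bpd, downstream_units):
    """Nelson Complexity Index: atmospheric distillation contributes a factor
    of 1; each downstream unit contributes capacity * complexity_factor,
    normalized by atmospheric distillation capacity.

    downstream_units: list of (capacity_bpd, complexity_factor) pairs.
    """
    downstream = sum(cap * factor for cap, factor in downstream_units)
    return 1 + downstream / atmospheric_bpd

# 100,000 bpd atmospheric, plus 50,000 bpd vacuum distillation (factor 2):
base = nelson_complexity_index(100_000, [(50_000, 2)])  # → 2.0
# Adding 25,000 bpd of catalytic cracking (factor 6):
expanded = nelson_complexity_index(100_000, [(50_000, 2), (25_000, 6)])  # → 3.5
```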

Most re­finer­ies in the US are fairly com­plex. As of 2014, less than 3% of re­finer­ies had a com­plex­ity in­dex of 2 or less, and the av­er­age com­plex­ity in­dex was 8.7. As of 2014 the Chevron Richmond re­fin­ery had a com­plex­ity in­dex of 14, above av­er­age for US re­finer­ies. The Jamnagar re­fin­ery, in ad­di­tion to be­ing the world’s largest, is also par­tic­u­larly com­plex: its com­plex­ity in­dex of 21 would make it more com­plex than vir­tu­ally any US re­fin­ery.

What strikes me most about oil re­fin­ing is­n’t the com­plex­ity of the process — in­deed, while the arrange­ments of var­i­ous processes are of­ten ex­ceed­ingly com­plex, many of the processes them­selves are of­ten sur­pris­ingly sim­ple (conceptually, at least). What strikes me is the sheer scale of it. Refining is an ex­pen­sive un­der­tak­ing not nec­es­sar­ily be­cause the processes are so com­plex, but be­cause the vol­ume of ma­te­r­ial that has to be processed is so high. Chevron’s Richmond re­fin­ery is the size of a small city, and can process the en­tire con­tents of a Very Large Crude Carrier in a lit­tle over a week. And Richmond is­n’t even a par­tic­u­larly large re­fin­ery: the US has 25 re­finer­ies that size or larger, and six re­finer­ies that are more than twice as large. Worldwide, it takes 400 Richmond-size re­finer­ies to keep the world fed with pe­tro­leum.

If you live in Texas or Louisiana these as­pects are prob­a­bly ob­vi­ous to you, but most of us are able to go about our lives with­out ever think­ing about the huge in­dus­trial ma­chine that keeps the blood of civ­i­liza­tion flow­ing. But the US con­sumes over 20 mil­lion bar­rels of oil a day, every day, and it takes a vast com­plex of oil re­finer­ies to make that pos­si­ble.

1

Asphaltenes aren’t tech­ni­cally hy­dro­car­bons: they con­sist mostly of car­bon and hy­dro­gen, but they can also in­cor­po­rate other atoms, such as sul­fur or heavy met­als.

2

The re­cov­ery point is the tem­per­a­ture at which that frac­tion of the liq­uid has been va­por­ized and then col­lected.

3

Most of the gases sent to the gas plant will have no double bonds in them. Hydrocarbons without double bonds are known as saturated, because they have the maximum number of hydrogen atoms that they can, and so this type of plant is called a “sats gas plant”.

Shai-Hulud Themed Malware Found in the PyTorch Lightning AI Training Library

semgrep.dev

The PyPI package ‘lightning’, a widely used deep learning framework, was compromised in a supply chain attack affecting versions 2.6.2 and 2.6.3, published on April 30, 2026. Teams building image classifiers, fine-tuning LLMs, running diffusion models, or developing time-series forecasters frequently have lightning somewhere in their dependency tree.

Running pip install lightning is all that is needed to activate the malware. The malicious versions contain a hidden _runtime directory with an obfuscated JavaScript payload that executes automatically upon module import. The attack steals credentials, authentication tokens, environment variables, and cloud secrets, while also attempting to poison GitHub repositories. It carries Shai-Hulud themes, including commit messages prefixed with EveryBoiWeBuildIsAWormyBoi.

We be­lieve that this at­tack is the work of the same threat ac­tor be­hind the mini Shai-Hulud cam­paign. The IOC struc­ture is con­sis­tent with that op­er­a­tion: the ma­li­cious com­mit mes­sages fol­low the same Dune-themed nam­ing con­ven­tion, with this cam­paign us­ing the pre­fix EveryBoiWeBuildIsAWormyBoi to dis­tin­guish it from the orig­i­nal Mini Shai-Hulud at­tack.

Affected Packages

- light­ning ver­sion 2.6.2

- light­ning ver­sion 2.6.3

For Semgrep Customers

Semgrep has published an advisory and a rule covering this attack so you can check your projects.

Trigger a new scan on your projects if you haven’t recently.

Check the advisories page to see if any projects have installed these package versions recently: https://semgrep.dev/orgs/-/advisories

Check your dependency filter for matches. If you see “No matching dependencies” you are not actively using the malicious dependency in any of your projects. If you did match, additional advice on remediation and indicators of compromise are below.

If you matched: Also au­dit your repos­i­to­ries for the in­jected files listed in the IOCs be­low (.claude/ and .vscode/ di­rec­to­ries with un­ex­pected con­tents), and ro­tate any GitHub to­kens, cloud cre­den­tials, or API keys that may have been pre­sent in the af­fected en­vi­ron­ment.

For general advice about how to deal with supply chain attacks and cool-down periods, our standard advice is covered in the posts $foo compromised in $packagemanager and Attackers are Still Coming for Security Companies.

Cross-Ecosystem Spread: PyPI to npm

Unlike mini Shai-Hulud, which tar­geted npm di­rectly, the en­try point here is PyPI. The mal­ware pay­load is still JavaScript, and the worm prop­a­ga­tion hap­pens through npm.

Once running, if the malware finds npm publish credentials, it injects a setup.mjs dropper and router_runtime.js into every package that token can publish to, sets scripts.preinstall to execute the dropper, bumps the patch version, and republishes. Any downstream developer who installs one of those packages then runs the full malware on their machine, has their tokens stolen, and has their own packages wormed in turn.
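A defensive check for this propagation pattern might look for a preinstall script that invokes the dropper. This is only a sketch; the filenames come from the IOCs in this post, and real tooling should match the published indicators exactly:

```python
import json

# Dropper filenames from the IOC list in this post.
DROPPER_NAMES = ("setup.mjs", "router_runtime.js")

def package_json_is_suspicious(package_json_text):
    """Return True if a package.json preinstall script references the
    dropper files this campaign injects into wormed packages."""
    try:
        manifest = json.loads(package_json_text)
    except json.JSONDecodeError:
        return False
    scripts = manifest.get("scripts") or {}
    preinstall = scripts.get("preinstall", "")
    return any(name in preinstall for name in DROPPER_NAMES)

wormed = '{"name": "pkg", "scripts": {"preinstall": "node setup.mjs"}}'
clean = '{"name": "pkg", "scripts": {"test": "jest"}}'
# package_json_is_suspicious(wormed) → True; package_json_is_suspicious(clean) → False
```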

How it Works

The exfiltration component shares its design with the “Mini Shai-Hulud” mechanism from their last campaign, using four parallel channels so stolen data gets out even if individual paths are blocked.

HTTPS POST to C2. Stolen data is im­me­di­ately POSTed to an at­tacker-con­trolled server over port 443. The do­main and path are stored as en­crypted strings in the pay­load, mak­ing sta­tic analy­sis harder.

GitHub com­mit search dead-drop. The mal­ware polls the GitHub com­mit search API for com­mit mes­sages pre­fixed with EveryBoiWeBuildIsAWormyBoi, which carry a dou­ble-base64-en­coded to­ken in the for­mat EveryBoiWeBuildIsAWormyBoi:<base64(base64(token))>. Once de­coded, the to­ken is used to au­then­ti­cate an Octokit client for fur­ther op­er­a­tions.

Attacker-controlled public GitHub repo. A new public repository is created with a randomly chosen Dune-word name and the description “A Mini Shai-Hulud has Appeared”, which is directly searchable on GitHub. Stolen credentials are committed as results/results-<timestamp>-<n>.json (base64-encoded via the API, plain JSON inside), with files over 30 MB split into numbered chunks. Commit messages use “chore: update dependencies” as cover.

Push to vic­tim’s own repo. If the mal­ware ob­tains a ghs_ GitHub server to­ken, it pushes stolen data di­rectly to all branches of the vic­tim’s own GITHUB_REPOSITORY.
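Of these channels, the commit-search dead-drop is straightforward for responders to reproduce and check. The decoder below is a sketch based on the format described in this post, round-tripping a placeholder string (not a real credential):

```python
import base64

PREFIX = "EveryBoiWeBuildIsAWormyBoi"

def decode_dead_drop(commit_message):
    """Decode a dead-drop commit message of the form
    PREFIX:<base64(base64(token))>; returns the token, or None if the
    message doesn't match the format."""
    if not commit_message.startswith(PREFIX + ":"):
        return None
    payload = commit_message[len(PREFIX) + 1:]
    try:
        return base64.b64decode(base64.b64decode(payload)).decode("utf-8")
    except Exception:
        return None

# Round-trip a placeholder token:
encoded = base64.b64encode(base64.b64encode(b"ghp_placeholder")).decode("ascii")
token = decode_dead_drop(PREFIX + ":" + encoded)  # → "ghp_placeholder"
```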

What Gets Stolen

The mal­ware tar­gets cre­den­tials across lo­cal files, en­vi­ron­ment, CI/CD pipelines, and cloud providers:

Filesystem: Scans 80+ cre­den­tial file paths for ghp_, gho_, and npm_ to­kens (up to 5 MB per file).

Shell / Environment: Runs gh auth to­ken and dumps all en­vi­ron­ment vari­ables from process.env.

GitHub Actions: On Linux runners, dumps Runner.Worker process memory via embedded Python and extracts all secrets marked "isSecret": true, along with GITHUB_REPOSITORY and GITHUB_WORKFLOW.

GitHub orgs: Checks to­ken scopes (repo, work­flow) and it­er­ates GitHub Actions org se­crets.

GitHub orgs: Checks to­ken scopes (repo, work­flow) and it­er­ates GitHub Actions org se­crets.

AWS: Tries environment variables, ~/.aws/credentials profiles, IMDSv2 (169.254.169.254), and ECS (169.254.170.2) to call sts:GetCallerIdentity; additionally enumerates and fetches all Secrets Manager values and SSM parameters.

Azure: Uses DefaultAzureCredential to enu­mer­ate sub­scrip­tions and ac­cess Key Vault se­crets.

GCP: Authenticates via GoogleAuth and enu­mer­ates and fetches all Secret Manager se­crets.

The tar­get­ing cov­ers lo­cal dev en­vi­ron­ments, CI run­ners, and all three ma­jor cloud providers. Any ma­chine that im­ported the ma­li­cious pack­age dur­ing the af­fected win­dow should be treated as fully com­pro­mised.
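For hunting, a rough scan for the token prefixes named above (ghp_, gho_, npm_) can be sketched as a regex pass over file contents. The pattern and length threshold are illustrative approximations, not the exact patterns the malware (or GitHub's own secret scanning) uses:

```python
import re

# Token prefixes named in this post; the length threshold is an assumption.
TOKEN_PATTERN = re.compile(r"\b(?:ghp_|gho_|npm_)[A-Za-z0-9_]{16,}")

MAX_CHARS = 5 * 1024 * 1024  # mirror the malware's ~5 MB per-file limit

def find_token_like_strings(text):
    """Return token-like strings found in a blob of text."""
    return TOKEN_PATTERN.findall(text[:MAX_CHARS])

hits = find_token_like_strings("export GH_TOKEN=ghp_abcdefghijklmnop1234  # .bashrc")
# → ["ghp_abcdefghijklmnop1234"]
```

Anything such a scan surfaces should be rotated regardless of whether exfiltration can be confirmed.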

Persistence via Developer Tooling

Once in­side a repos­i­tory, the mal­ware plants per­sis­tence hooks tar­get­ing two of the most com­mon de­vel­oper tools: Claude Code and VS Code. This may be among the first doc­u­mented in­stances of mal­ware abus­ing Claude Code’s hook sys­tem in a real-world at­tack.

Claude Code: .claude/settings.json. The malware writes a SessionStart hook with "matcher": "*" into the repository’s Claude Code settings, pointing to node .vscode/setup.mjs. It fires every time a developer opens Claude Code in the infected repo — no tool use or user action required beyond launching the session.

VS Code: .vscode/tasks.json. A par­al­lel hook tar­gets VS Code users via a runOn: folderOpen task that runs node .claude/setup.mjs every time the pro­ject folder is opened.

The dropper: setup.mjs. Both hooks invoke setup.mjs, a self-contained Bun runtime bootstrapper. If Bun isn’t installed, it silently downloads bun-v1.3.13 from GitHub releases, handling Linux x64/arm64/musl, macOS x64/arm64, and Windows x64/arm64. It then executes .claude/router_runtime.js (the full 14.8 MB payload) and cleans up from /tmp.

Bonus payload: malicious GitHub Actions workflow. If the malware holds a GitHub token with write access, it pushes a workflow named Formatter to the victim’s repository. On every push it dumps all repository secrets via ${{ toJSON(secrets) }} and uploads them as a downloadable Actions artifact named format-results. The actions are pinned to specific commit SHAs to appear legitimate.

Any repos­i­tory that re­ceived the in­fected light­ning pack­age dur­ing CI and held a to­ken with write ac­cess should be au­dited for these files.
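Such an audit can be sketched as a simple filesystem scan. The paths and markers below come from this post's IOC list; treat this as a starting point, not a complete detector:

```python
from pathlib import Path

# Injected files named in the IOC list.
DROPPER_PATHS = (".claude/setup.mjs", ".vscode/setup.mjs",
                 ".claude/router_runtime.js")

def find_persistence_artifacts(repo_root):
    """Return paths of suspicious persistence artifacts in a checkout:
    a Claude Code settings file or VS Code folderOpen task that references
    the setup.mjs dropper, plus any dropper files themselves."""
    root = Path(repo_root)
    findings = []

    settings = root / ".claude" / "settings.json"
    if settings.is_file() and "setup.mjs" in settings.read_text(errors="ignore"):
        findings.append(str(settings))

    tasks = root / ".vscode" / "tasks.json"
    if tasks.is_file():
        text = tasks.read_text(errors="ignore")
        if "folderOpen" in text and "setup.mjs" in text:
            findings.append(str(tasks))

    for rel in DROPPER_PATHS:
        path = root / rel
        if path.is_file():
            findings.append(str(path))

    return findings
```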

Indicators of Compromise

Look for a few in­di­ca­tors:

A com­mit mes­sage pre­fixed with EveryBoiWeBuildIsAWormyBoi (dead-drop to­ken car­rier, search­able via GitHub com­mit search)

GitHub repos with the description “A Mini Shai-Hulud has Appeared” (attacker exfil repos, directly searchable)

Packages

- light­ning@2.6.2

- light­ning@2.6.3

Files / System Artifacts

- _runtime/start.py: Python loader that initializes the payload on import

- _runtime/router_runtime.js: Obfuscated JavaScript payload (14.8 MB, Bun runtime)

- _runtime/: Directory added to the malicious package versions

- .claude/router_runtime.js: Malware copy injected into victim repos

- .claude/settings.json: Claude Code hook config injected into victim repos

- .claude/setup.mjs: Dropper injected into victim repos

- .vscode/tasks.json: VS Code auto-run task injected into victim repos

- .vscode/setup.mjs: Dropper injected into victim repos
