10 interesting stories served every morning and every evening.




1 1,021 shares, 55 trendiness

Claude Code Unpacked

Stuff that's in the code but not shipped yet. Feature-flagged, env-gated, or just commented out.

A virtual pet that lives in your terminal. Species and rarity are derived from your account ID.
Persistent mode with memory consolidation between sessions and autonomous background actions.
Long planning sessions on Opus-class models, up to 30-minute execution windows.
Control Claude Code from your phone or a browser. Full remote session with permission approvals.
Run sessions in the background with --bg tmux. Sessions talk to each other over Unix domain sockets.
Between sessions, the AI reviews what happened and organizes what it learned.

...

Read the original on ccunpacked.dev »

2 899 shares, 2 trendiness

Oracle slashes 30,000 jobs with a cold 6 a.m. email

It was not a phone call. It was not a meeting. For thousands of Oracle employees across the globe, Tuesday morning began with a single email landing in their inboxes just after 6 a.m. EST, and by the time they finished reading it, their careers at one of the world's largest technology companies were over.

Oracle has launched what analysts believe could be the most extensive layoff in the company's history, with estimates suggesting the cuts will affect between 20,000 and 30,000 employees, roughly 18% of its global workforce of approximately 162,000 people. Workers in the United States, India, and other regions all reported receiving the same termination notice at nearly the same hour, sent under the name “Oracle Leadership.”

There was no heads-up from human resources, no conversation with a direct manager, and no advance notice of any kind. Just an email.

The email, which circulated widely after screenshots were posted by affected workers on Reddit's r/employeesOfOracle community and the professional forum Blind, was brief and formulaic. It told employees that following a review of the company's current business needs, a decision had been made to eliminate their roles as part of a broader organizational change, that the day of the email was their final working day, and that a severance package would be made available after signing termination paperwork through DocuSign.

Employees were also instructed to update their personal email addresses to receive subsequent communications, including separation details and answers to frequently asked questions. For many, access to internal production systems was revoked almost immediately after the message arrived.

Based on accounts shared across both Reddit and Blind, the cuts were widespread and, in some units, severe. Among the teams reported to be most affected:

RHS (Revenue and Health Sciences): employees described a reduction in force of at least 30%, with 16 or more engineers from individual business units cut in a single action.

SVOS (SaaS and Virtual Operations Services): similarly reported a 30% or greater reduction, with manager-level roles included in the sweep.

At least one manager was confirmed among those let go, and affected employees in India said the severance structure is expected to follow a standard formula based on years of service, paid out in months. Any unvested restricted stock units, however, were forfeited immediately.

Workers who had vested stock were told they would retain access to those shares through Fidelity. Some employees noted April 3 as their formal last working day, with a one-month garden leave period to follow. Separately, posts on Blind alleged that Oracle had recently installed monitoring software on company-issued Mac laptops capable of logging all device activity, with warnings circulating among affected employees not to copy any files or code before returning their machines.

The layoffs are directly tied to Oracle's aggressive and debt-heavy expansion into artificial intelligence infrastructure. According to analysis from TD Cowen, the job cuts are expected to free up between $8 billion and $10 billion in cash flow, money the company urgently needs to fund a massive buildout of AI data centers.

The financial picture surrounding that expansion is striking. Oracle has taken on $58 billion in new debt within just two months. Its stock has lost more than half its value since reaching a peak in September 2025. Multiple U.S. banks have reportedly stepped back from financing some of its data center projects. All of this is happening even as the company posted a 95% jump in net income, reaching $6.13 billion, last quarter.

The contrast underscores the scale of the bet Oracle is making: record profits on one side, a mounting debt load and tens of thousands of eliminated jobs on the other. For the workers who woke up Tuesday morning to that 6 a.m. email, the company's ambitions offered little comfort.

...

Read the original on rollingout.com »

3 429 shares, 67 trendiness

NASA’s Artemis II Crew Launches to the Moon (Official Broadcast)

Artemis II is NASA's first crewed mission under the Artemis program and will launch from the agency's Kennedy Space Center in Florida. It will send NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, and CSA (Canadian Space Agency) astronaut Jeremy Hansen on an approximately 10-day journey around the Moon. Among its objectives, the agency will test the Orion spacecraft's life support systems for the first time with people and lay the groundwork for future crewed Artemis missions.

...

Read the original on plus.nasa.gov »

4 421 shares, 53 trendiness

Introducing EmDash — the spiritual successor to WordPress that solves plugin security

The cost of building software has drastically decreased. We recently rebuilt Next.js in one week using AI coding agents. But for the past two months our agents have been working on an even more ambitious project: rebuilding the WordPress open source project from the ground up.

WordPress powers over 40% of the Internet. It is a massive success that has enabled anyone to be a publisher, and created a global community of WordPress developers. But the WordPress open source project will be 24 years old this year. Hosting a website has changed dramatically during that time. When WordPress was born, AWS EC2 didn't exist. In the intervening years, that task has gone from renting virtual private servers to uploading a JavaScript bundle to a globally distributed network at virtually no cost. It's time to upgrade the most popular CMS on the Internet to take advantage of this change.

Our name for this new CMS is EmDash. We think of it as the spiritual successor to WordPress. It's written entirely in TypeScript. It is serverless, but you can run it on your own hardware or any platform you choose. Plugins are securely sandboxed and can run in their own isolate, via Dynamic Workers, solving the fundamental security problem with the WordPress plugin architecture. And under the hood, EmDash is powered by Astro, the fastest web framework for content-driven websites.

EmDash is fully open source, MIT licensed, and available on GitHub. While EmDash aims to be compatible with WordPress functionality, no WordPress code was used to create EmDash. That allows us to license the open source project under the more permissive MIT license. We hope that allows more developers to adapt, extend, and participate in EmDash's development.

You can deploy the EmDash v0.1.0 preview to your own Cloudflare account, or to any Node.js server today as part of our early developer beta:

Or you can try out the admin interface here in the EmDash Playground:

The story of WordPress is a triumph of open source that enabled publishing at a scale never before seen. Few projects have had the same recognisable impact on the generation raised on the Internet. The contributors to WordPress's core, and its many thousands of plugin and theme developers, have built a platform that democratised publishing for millions, with many lives and livelihoods transformed by this ubiquitous software.

There will always be a place for WordPress, but there is also a lot more space for the world of content publishing to grow. A decade ago, people picking up a keyboard universally learned to publish their blogs with WordPress. Today it's just as likely that person picks up Astro, or another TypeScript framework, to learn and build with. The ecosystem needs an option that empowers a wide audience, in the same way it needed WordPress 23 years ago.

EmDash is committed to building on what WordPress created: an open source publishing stack that anyone can install and use at little cost, while fixing the core problems that WordPress cannot solve.

WordPress' plugin architecture is fundamentally insecure. 96% of security issues for WordPress sites originate in plugins. In 2025, more high severity vulnerabilities were found in the WordPress ecosystem than in the previous two years combined.

Why, after over two decades, is WordPress plugin security so problematic?

A WordPress plugin is a PHP script that hooks directly into WordPress to add or modify functionality. There is no isolation: a WordPress plugin has direct access to the WordPress site's database and filesystem. When you install a WordPress plugin, you are trusting it with access to nearly everything, and trusting it to handle every malicious input or edge case perfectly.

EmDash solves this. In EmDash, each plugin runs in its own isolated sandbox: a Dynamic Worker. Rather than giving direct access to underlying data, EmDash provides the plugin with capabilities via bindings, based on what the plugin explicitly declares that it needs in its manifest. This security model has a strict guarantee: an EmDash plugin can only perform the actions explicitly declared in its manifest. You can know and trust upfront, before installing a plugin, exactly what you are granting it permission to do, similar to going through an OAuth flow and granting a third-party app a specific set of scoped permissions.

For example, a plugin that sends an email after a content item gets saved looks like this:

import { definePlugin } from "emdash";

export default () =>
  definePlugin({
    id: "notify-on-publish",
    version: "1.0.0",
    capabilities: ["read:content", "email:send"],
    hooks: {
      "content:afterSave": async (event, ctx) => {
        if (event.collection !== "posts" || event.content.status !== "published") return;

        await ctx.email!.send({
          to: "[email protected]",
          subject: `New post published: ${event.content.title}`,
          text: `"${event.content.title}" is now live.`,
        });

        ctx.log.info(`Notified editors about ${event.content.id}`);
      },
    },
  });
This plugin explicitly declares the content:afterSave hook to tie into the content lifecycle, and requests two capabilities: read:content to read the saved content, and email:send to access the ctx.email function. It is impossible for the plugin to do anything other than use these capabilities. It has no external network access. If it does need network access, it can specify the exact hostname it needs to talk to as part of its definition, and be granted only the ability to communicate with that particular hostname.

And in all cases, because the plugin's needs are declared statically, upfront, it is always clear exactly what the plugin is asking for permission to do, at install time. A platform or administrator could define rules for which plugins are or aren't allowed to be installed by certain groups of users, based on what permissions they request, rather than relying on an allowlist of approved or safe plugins.
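As a concrete sketch, such an install-time policy check needs nothing but the manifest. The PluginManifest and InstallPolicy types and the policy logic below are hypothetical illustrations, not EmDash's documented API; only the manifest fields mirror the example plugin above.

```typescript
// Hypothetical sketch of an install-time policy check over a plugin
// manifest. Types and rules are illustrative, not EmDash's actual API.

interface PluginManifest {
  id: string;
  version: string;
  capabilities: string[];
}

interface InstallPolicy {
  // Capabilities a given group of users is allowed to grant.
  allowedCapabilities: Set<string>;
}

function canInstall(manifest: PluginManifest, policy: InstallPolicy): boolean {
  // Every requested capability must be covered by the policy.
  // Anything not declared in the manifest can never be exercised at runtime.
  return manifest.capabilities.every((cap) => policy.allowedCapabilities.has(cap));
}

const editorsPolicy: InstallPolicy = {
  allowedCapabilities: new Set(["read:content", "email:send"]),
};

const notifyPlugin: PluginManifest = {
  id: "notify-on-publish",
  version: "1.0.0",
  capabilities: ["read:content", "email:send"],
};

console.log(canInstall(notifyPlugin, editorsPolicy)); // true
console.log(canInstall({ ...notifyPlugin, capabilities: ["db:write"] }, editorsPolicy)); // false
```

Because the decision depends only on the statically declared manifest, a check like this can run before any plugin code is ever fetched or executed.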

WordPress plugin security is such a real risk that WordPress.org manually reviews and approves each plugin in its marketplace. At the time of writing, that review queue is over 800 plugins long, and takes at least two weeks to traverse. The vulnerability surface area of WordPress plugins is so wide that in practice, all parties rely on marketplace reputation, ratings and reviews. And because WordPress plugins run in the same execution context as WordPress itself and are so deeply intertwined with WordPress code, some argue they must carry forward WordPress' GPL license.

These realities combine to create a chilling effect on developers building plugins, and on platforms hosting WordPress sites.

Plugin security is the root of this problem. Marketplace businesses provide trust when parties otherwise cannot easily trust each other. In the case of the WordPress marketplace, the plugin security risk is so large and probable that many of your customers can only reasonably trust your plugin via the marketplace. But in order to be part of the marketplace, your code must be licensed in a way that forces you to give it away for free everywhere other than that marketplace. You are locked in.

EmDash plugins have two important properties that mitigate this marketplace lock-in:

Plugins can have any license: they run independently of EmDash and share no code. It's the plugin author's choice.

Plugin code runs independently in a secure sandbox: a plugin can be provided to an EmDash site, and trusted, without the EmDash site ever seeing the code.

The first part is straightforward: as the plugin author, you choose what license you want, the same way you can when publishing to npm, PyPI, Packagist or any other registry. It's an open ecosystem for all, and it is up to the community, not the EmDash project, what license you use for plugins and themes.

The second part is where EmDash's plugin architecture breaks free of the centralized marketplace.

Because the sandbox enforces the declared boundaries, developers rely far less on a third-party marketplace having vetted a plugin when deciding whether to use or trust it. Consider the example plugin above that sends emails after content is saved; the plugin declares three things:

It only runs on the content:afterSave hook.
It has the read:content capability.
It has the email:send capability.

The plugin can have tens of thousands of lines of code in it, but unlike a WordPress plugin, which has access to everything and can talk to the public Internet, the person adding the plugin knows exactly what access they are granting to it. The clearly defined boundaries allow you to make informed decisions about security risks and to zoom in on more specific risks that relate directly to the capabilities the plugin is given.

The more that both sites and platforms can trust the security model to provide constraints, the more that sites and platforms can trust plugins, and break free of centralized control of marketplaces and reputation. Put another way: if you trust that food safety is enforced in your city, you'll be adventurous and try new places. If you can't be sure there isn't a staple in your soup, you'll be consulting Google before every new place you try, and it's harder for everyone to open new restaurants.

The business model of the web is at risk, particularly for content creators and publishers. The old way of making content widely accessible, allowing all clients free access in exchange for traffic, breaks when there is no human looking at a site to advertise to, and the client is instead their agent accessing the web on their behalf. Creators need ways to continue to make money in this new world of agents, and to build new kinds of websites that serve what people's agents need and will pay for. Decades ago a new wave of creators created websites that became great businesses (often using WordPress to power them), and a similar opportunity exists today.

x402 is an open, neutral standard for Internet-native payments. It lets anyone on the Internet easily charge, and any client pay on-demand, on a pay-per-use basis. A client, such as an agent, sends an HTTP request and receives an HTTP 402 Payment Required status code. In response, the client pays for access on-demand, and the server can let the client through to the requested content.

EmDash has built-in support for x402. This means anyone with an EmDash site can charge for access to their content without requiring subscriptions and with zero engineering work. All you need to do is configure which content should require payment, set how much to charge, and provide a wallet address. The request/response flow ends up looking like this:
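In code, the pay-on-demand handshake can be sketched as a pair of plain in-memory functions, so the logic runs without a network. The X-Payment header name, the proof format, and the wallet address below are hypothetical stand-ins; only the HTTP 402 status code comes from the standard.

```typescript
// Minimal in-memory sketch of the x402 flow. Header name, proof format
// and wallet address are invented for illustration; HTTP 402 is real.

interface PaidRequest { url: string; headers: Record<string, string> }
interface PaidResponse { status: number; body: string }

// Server side: gate paid content behind a payment proof.
function serve(req: PaidRequest): PaidResponse {
  const paid = req.headers["X-Payment"] === "proof:0xABC";
  if (!paid) {
    // 402 Payment Required: the body describes what to pay and where.
    return { status: 402, body: JSON.stringify({ amount: "0.01", wallet: "0xABC" }) };
  }
  return { status: 200, body: "the premium article" };
}

// Client side (e.g. an agent): pay on demand when a 402 comes back, then retry.
function fetchPaid(url: string): PaidResponse {
  let res = serve({ url, headers: {} });
  if (res.status === 402) {
    const { wallet } = JSON.parse(res.body);
    const proof = `proof:${wallet}`; // stand-in for a real on-chain payment
    res = serve({ url, headers: { "X-Payment": proof } });
  }
  return res;
}

console.log(fetchPaid("/posts/premium").status); // 200
```

The point of the flow is that no prior relationship is needed: the first 402 response carries everything the client must know to pay and retry.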

Every EmDash site has a built-in business model for the AI era.

WordPress is not serverless: it requires provisioning and managing servers, scaling them up and down like a traditional web application. To maximize performance, and to be able to handle traffic spikes, there's no avoiding the need to pre-provision instances and run some amount of idle compute, or share resources in ways that limit performance. This is particularly true for sites with content that must be server rendered and cannot be cached.

EmDash is different: it's built to run on serverless platforms, and to make the most of the V8 isolate architecture of Cloudflare's open source runtime, workerd. On an incoming request, the Workers runtime instantly spins up an isolate to execute code and serve a response. It scales back down to zero if there are no requests. And it only bills for CPU time (time spent doing actual work).

You can run EmDash anywhere, on any Node.js server, but on Cloudflare you can run millions of instances of EmDash using Cloudflare for Platforms, each instantly scaling to zero or up to as many RPS as you need to handle, using the exact same network and runtime that the biggest websites in the world rely on.

Beyond cost optimizations and performance benefits, we've bet on this architecture at Cloudflare in part because we believe in having low cost and free tiers, and that everyone should be able to build websites that scale. We're excited to help platforms extend the benefits of this architecture to their own customers, both big and small.

EmDash is powered by Astro, the web framework for content-driven websites. To create an EmDash theme, you create an Astro project that includes:

A seed file: JSON that tells the CMS what content types and fields to create

This makes creating themes familiar to frontend developers, who are increasingly choosing Astro, and to LLMs, which are already trained on Astro.
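For illustration, a seed file for a simple blog theme might look something like the sketch below. The field names, types, and nesting are guesses at the shape described above, not EmDash's documented seed format.

```json
{
  "collections": {
    "posts": {
      "fields": {
        "title": { "type": "string", "required": true },
        "body": { "type": "richtext" },
        "publishedAt": { "type": "datetime" }
      }
    }
  }
}
```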

WordPress themes, though incredibly flexible, operate with a lot of the same security risks as plugins, and the more popular and commonplace your theme, the more of a target it is. Themes run by integrating with functions.php, an all-encompassing execution environment, which makes your theme both incredibly powerful and potentially dangerous. EmDash themes, like dynamic plugins, turn this expectation on its head. Your theme can never perform database operations.

The least fun part about working with any CMS is doing the rote migration of content: finding and replacing strings, migrating custom fields from one format to another, renaming, reordering and moving things around. This is either boring repetitive work or requires one-off scripts and “single-use” plugins and tools that are usually neither fun to write nor to use.

EmDash is designed to be managed programmatically by your AI agents. It provides the context and the tools that your agents need, including:

Agent Skills: Each EmDash instance includes Agent Skills that describe to your agent the capabilities EmDash can provide to plugins, the hooks that can trigger plugins, guidance on how to structure a plugin, and even how to port legacy WordPress themes to EmDash natively. When you give an agent an EmDash codebase, EmDash provides everything the agent needs to customize your site in the way you need.

EmDash CLI: The EmDash CLI enables your agent to interact programmatically with your local or remote instance of EmDash. You can upload media, search for content, create and manage schemas, and do the same set of things you can do in the Admin UI.

Built-in MCP Server: Every EmDash instance provides its own remote Model Context Protocol (MCP) server, allowing you to do the same set of things you can do in the Admin UI.

EmDash uses passkey-based authentication by default, meaning there are no passwords to leak and no brute-force vectors to defend against. User management includes familiar role-based access control out of the box: administrators, editors, authors, and contributors, each scoped strictly to the actions they need. Authentication is pluggable, so you can set EmDash up to work with your SSO provider and automatically provision access based on IdP metadata.
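A toy sketch of that role model: only the four role names come from the text above, while the permission strings and lookup logic are invented for the example.

```typescript
// Illustrative role-based access control with the four roles named
// above. The permission strings are hypothetical.

type Role = "administrator" | "editor" | "author" | "contributor";

const permissions: Record<Role, Set<string>> = {
  administrator: new Set(["content:edit:any", "content:publish", "users:manage", "settings:manage"]),
  editor: new Set(["content:edit:any", "content:publish"]),
  author: new Set(["content:edit:own", "content:publish"]),
  contributor: new Set(["content:edit:own"]),
};

// A user may act only if their role's scope includes the action.
function can(role: Role, action: string): boolean {
  return permissions[role].has(action);
}

console.log(can("editor", "content:publish")); // true
console.log(can("contributor", "content:publish")); // false
```

Scoping each role to a fixed permission set keeps authorization decisions to a single lookup, which is also easy to audit.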

You can import an existing WordPress site either by going to WordPress admin and exporting a WXR file, or by installing the EmDash Exporter plugin on a WordPress site, which configures a secure endpoint that is only exposed to you, protected by a WordPress Application Password you control. Migrating content takes just a few minutes, and automatically brings any attached media into EmDash's media library.

Creating custom content types on WordPress beyond a Post or a Page has meant installing heavy plugins like Advanced Custom Fields, and squeezing the result into a crowded WordPress posts table. EmDash does things differently: you can define a schema directly in the admin panel, which will create entirely new EmDash collections for you, stored separately in the database. On import, you can use the same capabilities to take any custom post types from WordPress and create an EmDash content type from them.

For bespoke blocks, you can use the EmDash Block Kit Agent Skill to instruct your agent of choice to build them for EmDash.

EmDash is a v0.1.0 preview; we'd love for you to try it and give feedback, and we welcome contributions to the EmDash GitHub repository.

If you're just playing around and want to first understand what's possible, try out the admin interface in the EmDash Playground.

To create a new EmDash site locally, via the CLI, run:

Or you can do the same via the Cloudflare dashboard below:

We're excited to see what you build, and if you're active in the WordPress community, as a hosting platform, a plugin or theme author, or otherwise, we'd love to hear from you. Email us at [email protected], and tell us what you'd like to see from the EmDash project.

If you want to stay up to date with major EmDash developments, you can leave your email address here.

...

Read the original on blog.cloudflare.com »

5 380 shares, 22 trendiness

CERN levels up with new superconducting karts

The race is on to test new ve­hi­cles in the un­der­ground Large Hadron Collider tun­nel, ahead of ma­jor works start­ing this sum­mer


Following on from the robotic mice, CERN engineers have now developed a super-charged kart to enable workers to race through the Large Hadron Collider (LHC) underground tunnel during the upcoming major works, starting this summer.

The karts promise a power boost to activities during this period, known as Long Shutdown 3 (LS3), which will see the LHC transformed into the High-Luminosity LHC. These vehicles will replace the bicycles that were used until now to travel through the 27-km underground tunnel, enabling engineers and technicians to speed to areas where improvements to the accelerator are required.

“Each kart is turbo-boosted by 64 superconducting engines,” explains project leader Mario Idraulico. “When the engines are cooled to below their critical temperatures, the Meissner effect levitates the karts, allowing them to zip through the tunnels at high speeds and, mamma mia, they're super!”

Early tests have been promising, and the next steps involve testing different kart designs in an underground race. Safety coordinator Luigi Fratello has ensured that each driver will be issued with Safety and Health Equipment for Long and Limited Stays (SHELLS), although his response to drivers wanting bananas in the tunnel was “Oh no!”

These karts, although developed to support CERN's fundamental research programme, show clear applications for society. CERN's Knowledge Transfer Group has begun discussions with European startup company Quantum Mushroom to explore aerospace applications and powering for next-generation anti-gravity vehicles.

Surprisingly, the kart project began from a collaboration between CERN engineers and onsite nursery school children, one example of CERN's commitment to inspiring future generations. “We're thrilled that the children's kart designs were the inspiration for the engineered karts,” exclaimed schoolteacher Yoshi Kyouryuu, mid-way through painting spots on eggs for an Easter egg hunt.

“As educators, we promote curiosity from a young age, which is why we paint question marks all over our yellow school walls,” explained school director Rosalina Pfirsich, looking up from her storybook. “With all the contributions the children have made to the upcoming High-Luminosity LHC project, we've taken to calling them Luma!”

Find out more about the High-Luminosity LHC pro­ject.

...

Read the original on home.cern »

6 363 shares, 24 trendiness

I quit. The clankers won.

… is what I'm reading far too often! Some of you are losing faith!

A growing sentiment amongst my peers — those who haven't already resigned themselves to an NPC career path† — is that blogging is over. Coding is cooked. What's the point of sharing insights and expertise when the Cognitive Dark Forest will feed on our humanity?

Before I'm dismissed as an ill-informed hater, please note: I've done my research.

† To be fair, it's a valid choice in this economy. Clock in, slop around, clock out. Why not?

Star Trek’s cap­tain Kirk lean­ing into a com­puter cast in shadow look­ing con­tem­pla­tive.

It's never been more important to blog. There has never been a better time to blog. I will tell you why. We're being starved for human conversation and authentic voices. What's more: everyone is trying to take your voice away. Do not opt out of using it yourself.

First let's accept the realities. The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms. Mass surveillance and tracking are a feature; privacy is a bug. Everything is an “algorithm” optimised to exploit.

How can we pos­si­bly com­bat that?

From a purely selfish perspective, it's never been easier to stand out and assert yourself as an authority. When everyone is deferring to the big bullshitter in the cloud, your original thoughts are invaluable. Your brain is your biggest asset. Share it with others for mutual benefit.

I find writing stuff down improves my memory and hardens my resolve. I bet that's true for you too. It's part rote learning, part rubberducking†. Writing publicly in blog form forces me to question assumptions. Even when research fails me, Cunningham's Law saves me.

† Some will claim writing into a predictive chat box helps too, and sure, they're absolutely right!

Blogging makes you a better professional. No matter how small your audience, someone will eventually stumble upon your blog and it will unblock their path.

Don’t ac­cept a fate be­ing forced upon you.

The AI industry is 99% hype; a billion dollar industrial complex to put a price tag on creation. At this point, if you believe AI is ‘just a tool' you're wilfully ignoring the harm. (Regardless, why do I keep being told it's an ‘extreme' stance if I decide not to buy something?)

The 1% utility AI has is overshadowed by the overwhelming mediocrity it regurgitates.

We’re say­ing good­bye to Sora. To every­one who cre­ated with Sora, shared it, and built com­mu­nity around it: thank you. What you made with Sora mat­tered, and we know this news is dis­ap­point­ing.

Is there anything, in the entire recorded history of human creation, that could have possibly mattered less than the flatulence Sora produced? NFTs had more value.

I'm not protective over the word “art”. Generative AI is art. It's irredeemably shit art; end of conversation. A child's crayon doodle is also lacking refined artistry, but we hang it on our fridge because a human made it and that matters. We care, and caring has a positive effect on our lives. When you pass human creativity through the slop wringer, or just prompt an incantation, the result is continuously morphed: a vapid mockery of the input. The garbage out no longer matters, nobody cares, nobody benefits.

I forgot where I was going with this… oh right: don't resign yourself to the deskilling of our craft. You should keep blogging! Take pride in your ability and unique voice. But please don't desecrate yourself with slop.

A di­sheveled Oliver Twist looks up plead­ingly hold­ing out an empty bowl.

The only winning move is not to play.

We've gotten too comfortable with the convenience of Big Tech. We do not have to continue playing their game. Don't buy the narratives they're selling.

The AI industry is built on the predatory business model of casinos. Except they've forgotten the house is supposed to win. One upside of this looming economic and intellectual depression is that the media is beginning to recognise that gatekeepers are no longer the hand that feeds them. Big Tech is not the web. You don't have to use it nor support it. Blog for the old web, the open web, the indie web — the web you want to see.

And if you think I'm being dramatic and I've upset your new toys, you're welcome to be left behind in the miasmatic dystopia these technofascists are racing to build.

...

Read the original on dbushell.com »

7 251 shares, 7 trendiness

U.S. exempts oil industry from protecting Gulf animals, for 'national security'

A committee of Trump administration officials voted unanimously on Tuesday to exempt the oil and gas industry in the Gulf of Mexico from requirements of the Endangered Species Act, a move that would lift protections for endangered whales, turtles and other animals threatened with extinction.

Defense Secretary Pete Hegseth triggered the vote two weeks ago by asking Interior Secretary Doug Burgum to call it for reasons of "national security," and was present at the meeting.

"To be secure as a nation we need a steady, affordable supply of our own energy," Hegseth told the six members of the committee, nicknamed the "God Squad" for its ability to make life or death decisions about endangered animals. "This is not just about gas prices; it's about our ability to power our military and protect our nation."

Until now, oil and gas companies have been asked by federal agencies to protect Gulf species by not discarding trash into the Gulf and suspending their use of loud technology when they spot whales, among other requests.

One species of Gulf whale is particularly vulnerable. Scientists estimate that only about 51 Rice's whales are left on Earth, all of them in waters of the Gulf of Mexico, which the Trump administration has termed the Gulf of America.

On Tuesday, Dr. Neil Jacobs, the National Oceanic and Atmospheric Administration's Under Secretary of Commerce, made clear that oil and gas companies would no longer need to adhere to protections — for Rice's whales and any other animals previously protected by the Endangered Species Act.

"I want to highlight that the agency action under consideration — all oil and gas activities in the Gulf of America — encompasses the full suite of actions including various protective measures for the Rice's Whale," said Jacobs. "I will be voting to grant the exemption."

Conservation and pro-democracy groups called the vote "illegal" and characterized the national security justification as a manufactured threat.

"On the one hand, you have the oil and gas industry, it's one of the wealthiest industries on the planet, and the other, you have one of our most endangered whales," said Michael Jasny, a senior policy analyst for the Natural Resources Defense Council. "It's caused enormous outrage and astonishment."

The energy industry has been accused of causing the whales harm before. After the Deepwater Horizon spill leaked more than 200 million gallons of BP's oil into the Gulf in 2010, covering about half of the Rice's whale habitat, the Rice's whale population declined by as much as 22 percent. The number of existing whales is so low that scientists have warned the loss of a single additional whale could endanger future reproduction and tip the species toward extinction.

A spokesperson for the American Petroleum Institute, a lobbying group for oil and gas companies, said the energy industry had a track record of protecting wildlife while developing offshore energy.

"Over the long term, American energy leadership depends on getting that balance right through reasonable, science-based protections while meeting growing energy demand," said Andrea Woods.

Past gatherings of the six-person committee have happened only after extensive prior consultation with environmental agencies and months of public notice. Just three meetings have happened over the past 50 years, and only once did an exemption take effect.

"Not only is a God Squad convening as rare as hen's teeth in the first instance, but this snap announcement that came a week and a half ago is so vague that the public doesn't even really know what the committee is supposed to consider," said Jane Davenport, a senior attorney at Defenders of Wildlife, a conservation nonprofit. "So it's just completely baffling, but it is on brand for this administration."

The Center for Biological Diversity sued U.S. Interior Secretary Doug Burgum in federal court on March 18, saying the government violated the law by not taking the proper steps or providing enough public information before calling the committee meeting.

In its response to that lawsuit, filed Wednesday night, the Trump administration said Hegseth was the one who asked the Interior Department to call the committee meeting. The Endangered Species Act includes a provision requiring the committee to grant an exemption for any agency action if the Secretary of Defense finds that "such exemption is necessary for reasons of national security."

A federal judge last week declined to delay the meeting, which the Interior Department streamed on YouTube.

Brian Segee, a senior attorney at the Center for Biological Diversity, said the consequences of the vote could be immediate and significant.

"Once an exemption is issued, it is sweeping. It applies not only to the one species that had a jeopardy finding, Rice's whale — it applies to every other listed species in the Gulf in relation to oil and gas operations, which will go on for decades," said Segee.

Rice's whales are not the only animals at risk in the Gulf. Sperm whales, the West Indian manatee and several Gulf sea turtles are also listed as threatened or endangered.

The Interior Department did not respond to NPR's request to explain the national security implications of oil exploration and production in the Gulf. A representative from the Department of Defense said the agency could not comment because of pending litigation.

National security has never been used to justify a meeting of the committee — and it has never before triggered a vote for an exemption. But this is not the first time the Trump administration has invoked national security to attempt to bypass laws meant to protect the environment.

Shortly after Trump's inauguration, an executive order laid the groundwork for decreasing legal protections for animals because of a "national energy emergency."

"Our Nation's current inadequate development of domestic energy resources leaves us vulnerable to hostile foreign actors and poses an imminent and growing threat to the United States' prosperity and national security," the order stated. The order also indicated the Interior Department should call the Endangered Species Act Committee together to meet at least four times a year.

"We're very concerned that this administration is interested in pursuing a 'big oil, drill everywhere, all the time' agenda as opposed to a 'protect public resources and imperiled wildlife agenda,'" said Davenport.

How energy companies work in the Gulf — and Washington

For the Endangered Species Act Committee to agree to grant an exemption, the law typically requires evidence that it's impossible for industry to operate in an area without jeopardizing an endangered species.

But the National Oceanic and Atmospheric Administration determined in the document published last May that there were measures the energy industry could take to avoid harming Rice's whales and other species in the Gulf, including slowing down boats near the Rice's whale habitat and maintaining a safe distance from any whales that were seen.

"It said, take these electively reasonable measures to avoid running over and killing Rice's whales with boats. And yes, oil and gas can proceed," said Davenport. "You can have your cake and eat it, too."

Some people working in the Gulf argue that oil companies could do even more to protect animals.

Energy companies look for oil and gas in the ocean by blasting sound waves into the water from ships to record how they reflect off the rock below. The air guns used to emit those sound waves are responsible for near-constant underwater noise in the Gulf.

Some companies have developed tools that limit the energy used when conducting the surveys. Reports show those air guns can expose animals to lower levels of noise, over areas up to nine times smaller than the regions affected by traditional air guns.

"They are much easier on the environment," said Shuki Ronen, a geophysicist at Sercel, one of the companies developing the new technology. "And I think the industry can adopt them more than they do now."

An NPR review of public documents found that of the 25 seismic survey projects approved by 2023 to use air guns for more than 1,000 days over the next few years, all but two energy companies said they would use conventional airgun systems.

Lawyers for conservation groups say the Endangered Species Act did not intend for an exemption to be granted when there are steps an industry can take to avoid harm to animals.

"There's plenty that can be done," said Jasny. "This is not what the Endangered Species Act is designed to do. It's not how we protect endangered species in our country."

Still, many oil and gas companies want less restriction in the Gulf, and are willing to pay for it.

Energy companies, including Chevron, ExxonMobil and Occidental Petroleum, which acquired Anadarko Petroleum Corporation in 2019, have spent more than $8 million since October lobbying the government about the Endangered Species Act, permitting reform and, specifically, Rice's whales, lobbying reports reviewed by NPR show.

Part of a pattern of 'making it harder to list species'

Other federal agencies have changed how they operate to protect threatened and endangered animals since the start of Trump's second term in 2025.

In April 2025, the U.S. Army Corps of Engineers cited Trump's "energy emergency" order in a notice that said the agency planned to move forward with an underwater cable replacement project in the Puget Sound near Seattle, without first consulting wildlife agencies. The project is planned in waters used by a killer whale population that has been protected by the Endangered Species Act since 1972.

Under Biden, the Fish and Wildlife Service and the National Oceanic and Atmospheric Administration added an average of around 14 animals each year to the federal list of endangered and threatened animals. During Trump's first administration, the agencies listed an average of about five animals annually. During Obama's second term, the agencies averaged about 54 new additions.

Since the start of Trump's second term, no new animals have been listed. It is the first time in almost 20 years that no animals were added to the list, NPR found.

Segee, the attorney at the Center for Biological Diversity, said calling the Endangered Species Act Committee is just the latest of a host of federal efforts to remove protections for endangered and threatened animals.

"In a nutshell, they're making it harder to list species or protect their habitats," said Segee.

NPR would like to hear from people with information about how energy companies are working in the Gulf. You can send an email to the reporter of this article at ceisner@npr.org, or contact her on the end-to-end encrypted platform Signal here. Her username is: ceis.78.

...

Read the original on www.npr.org »

8 236 shares, 15 trendiness

publications/MADBugs/CVE-2026-4747/write-up.md at main · califio/publications

Advisory: FreeBSD-SA-26:08.rpcsec_gss

CVE: CVE-2026-4747

Affected: FreeBSD 13.5 (tested on: FreeBSD 14.4-RELEASE amd64, GENERIC kernel, no KASLR)

Attack surface: NFS server with kgssapi.ko loaded (port 2049/TCP)

In sys/rpc/rpcsec_gss/svc_rpcsec_gss.c, the function svc_rpc_gss_validate() reconstructs an RPC header into a 128-byte stack buffer (rpchdr[]) for GSS-API signature verification. It first writes 32 bytes of fixed RPC header fields, then copies the entire RPCSEC_GSS credential body (oa_length bytes) into the remaining space — without checking that oa_length fits.

static bool_t
svc_rpc_gss_validate(struct svc_rpc_gss_client *client,
    struct rpc_msg *msg, gss_qop_t *qop, rpc_gss_proc_t gcproc)
{
    int32_t rpchdr[128 / sizeof(int32_t)];  /* 128 bytes on stack */
    int32_t *buf;
    struct opaque_auth *oa;

    memset(rpchdr, 0, sizeof(rpchdr));

    /* Write 8 fixed-size RPC header fields (32 bytes total) */
    buf = rpchdr;
    IXDR_PUT_LONG(buf, msg->rm_xid);
    IXDR_PUT_ENUM(buf, msg->rm_direction);
    IXDR_PUT_LONG(buf, msg->rm_call.cb_rpcvers);
    IXDR_PUT_LONG(buf, msg->rm_call.cb_prog);
    IXDR_PUT_LONG(buf, msg->rm_call.cb_vers);
    IXDR_PUT_LONG(buf, msg->rm_call.cb_proc);
    oa = &msg->rm_call.cb_cred;
    IXDR_PUT_ENUM(buf, oa->oa_flavor);
    IXDR_PUT_LONG(buf, oa->oa_length);
    if (oa->oa_length) {
        /*
         * BUG: No bounds check on oa_length!
         * After 32 bytes of header, only 96 bytes remain in rpchdr.
         * If oa_length > 96, this overflows past rpchdr into:
         *   local variables -> saved callee-saved registers -> return address
         */
        memcpy((caddr_t)buf, oa->oa_base, oa->oa_length);
        buf += RNDUP(oa->oa_length) / sizeof(int32_t);
    }
    /* gss_verify_mic() is called after this — but the overflow has already happened */

The buffer has only 128 - 32 = 96 bytes of space for the credential body. Any credential larger than 96 bytes overflows the stack buffer.

The patch adds a single bounds check before the copy:

    oa = &msg->rm_call.cb_cred;
    if (oa->oa_length > sizeof(rpchdr) - 8 * BYTES_PER_XDR_UNIT) {
        rpc_gss_log_debug("auth length %d exceeds maximum", oa->oa_length);
        client->cl_state = CLIENT_STALE;
        return (FALSE);
    }

svc_rpc_gss_validate:
    push rbp
    mov  rbp, rsp
    push r15        ; saved at [rbp-8]
    push r14        ; saved at [rbp-16]
    push r13        ; saved at [rbp-24]
    push r12        ; saved at [rbp-32]
    push rbx        ; saved at [rbp-40]
    sub  rsp, 0xb8  ; 184 bytes of local space

The rpchdr array is at [rbp-0xc0] (192 bytes below rbp). The memcpy writes to rpchdr + 32 = [rbp-0xa0]. The saved registers and return address are above rpchdr on the stack:

    [rbp+8]     return address
    [rbp-8]     saved r15
    [rbp-16]    saved r14
    [rbp-24]    saved r13
    [rbp-32]    saved r12
    [rbp-40]    saved rbx
    [rbp-0xa0]  rpchdr + 32 (memcpy destination)
    [rbp-0xc0]  rpchdr

However, these are the offsets for a credential body that starts immediately. In practice, the credential body begins with a GSS header (version, procedure, sequence, service) plus a context handle. With a 16-byte handle, the actual offsets shift by 32 bytes — the return address lands at credential body byte 200 (verified via De Bruijn pattern analysis from the remote exploit).
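As a quick sanity check on those numbers (not part of the advisory), the distance from the memcpy destination to the saved return address can be recomputed from the offsets quoted above. The 32-byte prefix is assumed here to be the four XDR header words plus the 16-byte context handle:

```python
# Offsets taken from the disassembly above (amd64, no KASLR).
RPCHDR = -0xc0            # rpchdr at [rbp-0xc0]
COPY_DST = RPCHDR + 32    # memcpy writes to rpchdr + 32 = [rbp-0xa0]
RET_ADDR = 8              # return address at [rbp+8]

# Credential-body offset of the return address, before the GSS prefix:
base = RET_ADDR - COPY_DST      # 8 - (-0xa0) = 168 bytes

# Assumed 32-byte shift: 4 XDR words (version, procedure, sequence,
# service) plus a 16-byte context handle.
GSS_PREFIX = 4 * 4 + 16

print(base, base + GSS_PREFIX)  # 168 200 — matching the De Bruijn result
```

The 200-byte figure agrees with the write-up's claim that the return address lands at credential body byte 200.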

Why NFS? The vulnerable module kgssapi.ko implements RPCSEC_GSS authentication for the kernel's RPC subsystem. NFS is the primary (and typically only) in-kernel RPC service that uses RPCSEC_GSS. The NFS server daemon (nfsd) listens on port 2049/TCP and processes RPC packets in kernel context — making this a remote kernel code execution vulnerability reachable over the network.

Why Kerberos? The overflow is deep inside the GSS validation code path. svc_rpc_gss_validate() is only called when:

The GSS procedure is DATA (not INIT or DESTROY)

Without a valid GSS context, the server rejects the packet at step 3 (returning AUTH_REJECTEDCRED) and the vulnerable memcpy is never reached. Creating a valid GSS context requires a successful Kerberos handshake — the attacker must possess a valid Kerberos ticket for the NFS service principal.

In a real-world attack, the target would be an enterprise NFS server with existing Kerberos infrastructure (Active Directory, FreeIPA, etc.). Any user with a valid Kerberos ticket — even an unprivileged one — can trigger the vulnerability. The test lab includes its own KDC because there is no pre-existing Kerberos environment.

The XDR layer enforces MAX_AUTH_BYTES = 400 on the credential body, giving an overflow range of 97–400 bytes (1–304 bytes past the safe limit).
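Putting those limits together, the overflow window follows directly (a quick check, not advisory code):

```python
# Limits quoted in the write-up.
RPCHDR_SIZE = 128          # stack buffer size
FIXED_HEADER = 8 * 4       # 8 fixed XDR words written first (32 bytes)
SAFE = RPCHDR_SIZE - FIXED_HEADER   # space left for the credential body
MAX_AUTH_BYTES = 400       # XDR-layer cap on the credential body

# Credential lengths that overflow: anything past SAFE, up to the cap.
overflowing = range(SAFE + 1, MAX_AUTH_BYTES + 1)

print(SAFE, min(overflowing), max(overflowing), max(overflowing) - SAFE)
# 96 97 400 304
```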

* Network access to the target's NFS port (2049/TCP) and KDC port (88/TCP)

# Download image
wget https://download.freebsd.org/releases/VM-IMAGES/14.4-RELEASE/amd64/Latest/\
FreeBSD-14.4-RELEASE-amd64-BASIC-CLOUDINIT-ufs.qcow2.xz
xz -d FreeBSD-14.4-RELEASE-amd64-BASIC-CLOUDINIT-ufs.qcow2.xz
cp FreeBSD-14.4-RELEASE-amd64-BASIC-CLOUDINIT-ufs.qcow2 freebsd-vuln.qcow2
qemu-img resize freebsd-vuln.qcow2 8G

# Cloud-init auto-configuration
cat > user-data << EOF
#cloud-config
chpasswd:
  list: |
    root:freebsd
  expire: False
ssh_pwauth: True
bootcmd:
  - rm -f /firstboot              # prevent auto-patching to -p1
  - rm -f /var/db/freebsd-update/*
runcmd:
  - echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
  - service sshd restart
  - kldload kgssapi
  - sysrc rpcbind_enable=YES nfs_server_enable=YES
  - echo '/export -network 0.0.0.0/0' > /etc/exports
  - mkdir -p /export
  - service rpcbind start && service nfsd start
EOF

cat > meta-data << EOF
instance-id: cve-test
local-hostname: freebsd-vuln
EOF

genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

# Boot VM — forward SSH (22), NFS (2049), and KDC (88) ports
qemu-system-x86_64 -enable-kvm -cpu host -m 2G -smp 2 \
  -drive file=freebsd-vuln.qcow2,format=qcow2,if=virtio \
  -cdrom seed.iso \
  -netdev user,id=net0,hostfwd=tcp::2222-:22,hostfwd=tcp::2049-:2049,hostfwd=tcp::8888-:88 \
  -device virtio-net-pci,netdev=net0 -nographic

The KDC port (88) is forwarded to host port 8888 directly — no SSH tunnel required.

For VMware Workstation, ESXi, Fusion, VirtualBox, or bhyve. In this example the VM hostname is test.

Download the installer ISO (not the cloud-init image):

wget https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/14.4-RELEASE/\
FreeBSD-14.4-RELEASE-amd64-disc1.iso

IMPORTANT: FreeBSD spawns 8 NFS threads per CPU. The exploit kills one thread per round and needs 15 rounds, so you need at least 2 CPUs (= 16 threads). With 1 CPU (8 threads) the exploit fails around round 9.
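That thread budget can be sanity-checked in a couple of lines (illustrative only, using the figures stated above):

```python
# Figures from the note above.
THREADS_PER_CPU = 8   # FreeBSD spawns 8 NFS threads per CPU
ROUNDS = 15           # the exploit kills one nfsd thread per round

# Ceiling division: minimum CPUs so that total threads >= rounds needed.
cpus_needed = -(-ROUNDS // THREADS_PER_CPU)

print(cpus_needed, cpus_needed * THREADS_PER_CPU)  # 2 16
```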

Network: bridged or NAT (the attacker needs to reach ports 22, 88, 2049)

Attach the ISO and install FreeBSD normally

...

Read the original on github.com »

9 230 shares, 20 trendiness

Is BGP safe yet? · Cloudflare

Border Gateway Protocol (BGP) is the postal service of the Internet. It's responsible for looking at all of the available paths that data could travel and picking the best route. Unfortunately, it isn't secure, and there have been some major Internet disruptions as a result. But fortunately there is a way to make it secure: ISPs and other major Internet players (Sprint and others) would need to implement a certification system, called RPKI.

To better understand why BGP's lack of security is so problematic, let's look at a simplified model of how BGP is used to route Internet packets. The Internet is not run by just one company. It's made up of thousands of autonomous systems with nodes located all around the world, connected to each other in a massive graph.

In essence, the way BGP works is that each node must determine how to route packets using only what it knows from the nodes it connects with directly. For example, in the simple network A–B–C–D–E, the node A only knows how to reach E based on information it received from B. The node B knows about the network from A and C. And so forth.

A BGP hijack occurs when a malicious node deceives another node, lying about what the routes are for its neighbors. Without any security protocols, this misinformation can propagate from node to node, until a large number of nodes know about, and attempt to use, these incorrect, nonexistent, or malicious routes.

Click "Hijack the request" to visualize how packets are re-routed:

In order to make BGP safe, we need some way of preventing the spread of this misinformation. Since the Internet is so open and distributed, we can't prevent malicious nodes from attempting to deceive other nodes in the first place. So instead we need to give nodes the ability to validate the information they receive, so they can reject these undesired routes on their own.

Enter Resource Public Key Infrastructure (RPKI), a security framework that associates a route with an autonomous system. It gets a little technical, but the basic idea is that RPKI uses cryptography to provide nodes with a way of doing this validation.

With RPKI enabled, let's see what happens to packets after an attempted BGP hijack. Click "Attempt to hijack" to visualize how RPKI allows the network to protect itself by invalidating the malicious routes:

Border Gateway Protocol (BGP) is the postal service of the Internet. When someone drops a letter into a mailbox, the postal service processes that piece of mail and chooses a fast, efficient route to deliver that letter to its recipient. Similarly, when someone submits data across the Internet, BGP is responsible for looking at all of the available paths that data could travel and picking the best route, which usually means hopping between autonomous systems. Learn more →

By default, BGP does not embed any security protocols. It is up to every autonomous system to implement filtering of "wrong routes". Leaking routes can break parts of the Internet by making them unreachable. It is commonly the result of misconfigurations, although it is not always accidental. A practice called BGP hijack consists of redirecting traffic to another autonomous system to steal information (via phishing, or passive listening for instance).

BGP can be made safe if all autonomous systems (AS) only announce legitimate routes. A route is defined as legitimate when the owner of the resource allows its announcement. Filters need to be built in order to make sure only legitimate routes are accepted. There are a few approaches for BGP route validation which vary in degrees of trustability and efficiency. A mature implementation is RPKI. With 800k+ routes on the Internet, it is impossible to check them manually. Resource Public Key Infrastructure (RPKI) is a security framework that associates a route with an autonomous system. It uses cryptography in order to validate the information before it is passed on to the routers. You can read more about RPKI on the Cloudflare blog. On May 14th 2020, Job Snijders from NTT presented a free RPKI 101 webinar.

How does the test work? In order to test if your ISP is implementing BGP safely, we announce a legitimate route but we make sure the announcement is invalid. If you can load the website we host on that route, that means the invalid route was accepted by your ISP. A leaked or a hijacked route would likely be accepted too.

Can even more be done? Over the years, network operators and developers started working groups to design and deploy standards to overcome unsafe routing protocols. Cloudflare recently joined a global initiative called Mutually Agreed Norms for Routing Security (MANRS). It's a community of security-minded organizations committed to making routing infrastructure more robust and secure, and members agree to implement filtering mechanisms. New voices are always appreciated.

What can you do? Share this page. For BGP to be safe, all of the major ISPs will need to embrace RPKI. Sharing this page will increase awareness of the problem, which can ultimately pressure ISPs into implementing RPKI for the good of themselves and the general public. You can also reach out to your service provider or hosting company directly and ask them to deploy RPKI and join MANRS. When the Internet is safe, everybody wins.
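To make the route-validation idea concrete, here is a minimal, illustrative sketch of RPKI-style route origin validation. The ROA table, prefixes, and AS numbers below are invented for the example; real validators work from cryptographically signed ROA objects fetched from the RPKI repositories, not a hard-coded table:

```python
import ipaddress

# Hypothetical validated ROA payloads: prefix -> (authorized origin AS, max length)
roas = {
    ipaddress.ip_network("203.0.113.0/24"): (64500, 24),
}

def validate(prefix, origin_as):
    """Classify a BGP announcement as 'valid', 'invalid', or 'unknown'."""
    prefix = ipaddress.ip_network(prefix)
    for roa_prefix, (asn, max_len) in roas.items():
        if prefix.subnet_of(roa_prefix):  # a ROA covers this prefix
            if origin_as == asn and prefix.prefixlen <= max_len:
                return "valid"
            return "invalid"              # wrong origin AS or too-specific prefix
    return "unknown"                      # no covering ROA: legacy behaviour

print(validate("203.0.113.0/24", 64500))   # valid — legitimate announcement
print(validate("203.0.113.0/24", 64666))   # invalid — hijacker's origin AS
print(validate("198.51.100.0/24", 64500))  # unknown — no covering ROA
```

A router doing origin validation simply drops or deprioritizes "invalid" announcements, which is how the hijacked route in the demo above gets rejected.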

...

Read the original on isbgpsafeyet.com »

10 229 shares, 9 trendiness

Learning to Reason in 13 Parameters

...

Read the original on arxiv.org »
