10 interesting stories served every morning and every evening.
Stuff that’s in the code but not shipped yet. Feature-flagged, env-gated, or just commented out.
A virtual pet that lives in your terminal. Species and rarity are derived from your account ID.
Persistent mode with memory consolidation between sessions and autonomous background actions.
Long planning sessions on Opus-class models, up to 30-minute execution windows.
Control Claude Code from your phone or a browser. Full remote session with permission approvals.
Run sessions in the background with --bg.
tmux sessions talk to each other over Unix domain sockets.
Between sessions, the AI reviews what happened and organizes what it learned.
...
Read the original on ccunpacked.dev »
It was not a phone call. It was not a meeting. For thousands of Oracle employees across the globe, Tuesday morning began with a single email landing in their inboxes just after 6 a.m. EST — and by the time they finished reading it, their careers at one of the world’s largest technology companies were over.
Oracle has launched what analysts believe could be the most extensive layoff in the company’s history, with estimates suggesting the cuts will affect between 20,000 and 30,000 employees — roughly 18% of its global workforce of approximately 162,000 people. Workers in the United States, India, and other regions all reported receiving the same termination notice at nearly the same hour, sent under the name “Oracle Leadership.”
There was no heads-up from human resources, no conversation with a direct manager, and no advance notice of any kind. Just an email.
The email that circulated widely after screenshots were posted by affected workers on Reddit’s r/employeesOfOracle community and the professional forum Blind was brief and formulaic. It told employees that following a review of the company’s current business needs, a decision had been made to eliminate their roles as part of a broader organizational change, that the day of the email was their final working day, and that a severance package would be made available after signing termination paperwork through DocuSign.
Employees were also instructed to update their personal email addresses to receive subsequent communications, including separation details and answers to frequently asked questions. For many, access to internal production systems was revoked almost immediately after the message arrived.
Based on accounts shared across both Reddit and Blind, the cuts were widespread and, in some units, severe. Among the teams reported to be most affected:
RHS (Revenue and Health Sciences) — employees described a reduction in force of at least 30%, with 16 or more engineers from individual business units cut in a single action.
SVOS (SaaS and Virtual Operations Services) — similarly reported a 30% or greater reduction, with manager-level roles included in the sweep.
At least one manager was confirmed among those let go, and affected employees in India said the severance structure is expected to follow a standard formula based on years of service, paid out in months. Any unvested restricted stock units, however, were forfeited immediately.
Workers who had vested stock were told they would retain access to those shares through Fidelity. Some employees noted April 3 as their formal last working day, with a one-month garden leave period to follow. Separately, posts on Blind alleged that Oracle had recently installed monitoring software on company-issued Mac laptops capable of logging all device activity, with warnings circulating among affected employees not to copy any files or code before returning their machines.
The layoffs are directly tied to Oracle’s aggressive and debt-heavy expansion into artificial intelligence infrastructure. According to analysis from TD Cowen, the job cuts are expected to free up between $8 billion and $10 billion in cash flow — money the company urgently needs to fund a massive buildout of AI data centers.
The financial picture surrounding that expansion is striking. Oracle has taken on $58 billion in new debt within just two months. Its stock has lost more than half its value since reaching a peak in September 2025. Multiple U.S. banks have reportedly stepped back from financing some of its data center projects. All of this is happening even as the company posted a 95% jump in net income — reaching $6.13 billion — last quarter.
The contrast underscores the scale of the bet Oracle is making: record profits on one side, a mounting debt load and tens of thousands of eliminated jobs on the other. For the workers who woke up Tuesday morning to that 6 a.m. email, the company’s ambitions offered little comfort.
...
Read the original on rollingout.com »
Artemis II is NASA’s first crewed mission under the Artemis program and will launch from the agency’s Kennedy Space Center in Florida. It will send NASA astronauts Reid Wiseman, Victor Glover, Christina Koch, and CSA (Canadian Space Agency) astronaut Jeremy Hansen on an approximately 10-day journey around the Moon. Among its objectives, the agency will test the Orion spacecraft’s life support systems for the first time with people and lay the groundwork for future crewed Artemis missions.
...
Read the original on plus.nasa.gov »
The cost of building software has drastically decreased. We recently rebuilt Next.js in one week using AI coding agents. But for the past two months our agents have been working on an even more ambitious project: rebuilding the WordPress open source project from the ground up.
WordPress powers over 40% of the Internet. It is a massive success that has enabled anyone to be a publisher, and created a global community of WordPress developers. But the WordPress open source project will be 24 years old this year. Hosting a website has changed dramatically during that time. When WordPress was born, AWS EC2 didn’t exist. In the intervening years, that task has gone from renting virtual private servers, to uploading a JavaScript bundle to a globally distributed network at virtually no cost. It’s time to upgrade the most popular CMS on the Internet to take advantage of this change.
Our name for this new CMS is EmDash. We think of it as the spiritual successor to WordPress. It’s written entirely in TypeScript. It is serverless, but you can run it on your own hardware or any platform you choose. Plugins are securely sandboxed and can run in their own isolate, via Dynamic Workers, solving the fundamental security problem with the WordPress plugin architecture. And under the hood, EmDash is powered by Astro, the fastest web framework for content-driven websites.
EmDash is fully open source, MIT licensed, and available on GitHub. While EmDash aims to be compatible with WordPress functionality, no WordPress code was used to create EmDash. That allows us to license the open source project under the more permissive MIT license. We hope that allows more developers to adapt, extend, and participate in EmDash’s development.
You can deploy the EmDash v0.1.0 preview to your own Cloudflare account, or to any Node.js server today as part of our early developer beta:
Or you can try out the admin interface here in the EmDash Playground:
The story of WordPress is a triumph of open source that enabled publishing at a scale never before seen. Few projects have had the same recognisable impact on the generation raised on the Internet. The contributors to WordPress’s core, and its many thousands of plugin and theme developers, have built a platform that democratised publishing for millions, with many lives and livelihoods transformed by this ubiquitous software.
There will always be a place for WordPress, but there is also a lot more space for the world of content publishing to grow. A decade ago, people picking up a keyboard universally learned to publish their blogs with WordPress. Today it’s just as likely that person picks up Astro, or another TypeScript framework to learn and build with. The ecosystem needs an option that empowers a wide audience, in the same way it needed WordPress 23 years ago.
EmDash is committed to building on what WordPress created: an open source publishing stack that anyone can install and use at little cost, while fixing the core problems that WordPress cannot solve.
WordPress’ plugin architecture is fundamentally insecure. 96% of security issues for WordPress sites originate in plugins. In 2025, more high severity vulnerabilities were found in the WordPress ecosystem than the previous two years combined.
Why, after over two decades, is WordPress plugin security so problematic?
A WordPress plugin is a PHP script that hooks directly into WordPress to add or modify functionality. There is no isolation: a WordPress plugin has direct access to the WordPress site’s database and filesystem. When you install a WordPress plugin, you are trusting it with access to nearly everything, and trusting it to handle every malicious input or edge case perfectly.
EmDash solves this. In EmDash, each plugin runs in its own isolated sandbox: a Dynamic Worker. Rather than giving direct access to underlying data, EmDash provides the plugin with capabilities via bindings, based on what the plugin explicitly declares that it needs in its manifest. This security model has a strict guarantee: an EmDash plugin can only perform the actions explicitly declared in its manifest. You can know and trust upfront, before installing a plugin, exactly what you are granting it permission to do, similar to going through an OAuth flow and granting a 3rd party app a specific set of scoped permissions.
For example, a plugin that sends an email after a content item gets saved looks like this:
import { definePlugin } from "emdash";

export default () =>
  definePlugin({
    id: "notify-on-publish",
    version: "1.0.0",
    capabilities: ["read:content", "email:send"],
    hooks: {
      "content:afterSave": async (event, ctx) => {
        if (event.collection !== "posts" || event.content.status !== "published") return;
        await ctx.email!.send({
          to: "[email protected]",
          subject: `New post published: ${event.content.title}`,
          text: `"${event.content.title}" is now live.`,
        });
        ctx.log.info(`Notified editors about ${event.content.id}`);
      },
    },
  });
This plugin explicitly requests two capabilities: content:afterSave to hook into the content lifecycle, and email:send to access the ctx.email function. It is impossible for the plugin to do anything other than use these capabilities. It has no external network access. If it does need network access, it can specify the exact hostname it needs to talk to, as part of its definition, and be granted only the ability to communicate with a particular hostname.
And in all cases, because the plugin’s needs are declared statically, upfront, it can always be clear exactly what the plugin is asking for permission to be able to do, at install time. A platform or administrator could define rules for what plugins are or aren’t allowed to be installed by certain groups of users, based on what permissions they request, rather than an allowlist of approved or safe plugins.
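As a sketch of what such a policy could look like — the rule shape, group names, and capability strings here are invented for illustration, not EmDash’s actual API — a platform might filter install requests against a per-group capability allowlist:

```typescript
// Hypothetical policy check: decide whether a user group may install a
// plugin based solely on the capabilities its manifest declares.
// The manifest and policy shapes below are illustrative, not EmDash's real API.
type Manifest = { id: string; capabilities: string[] };

const groupPolicies: Record<string, Set<string>> = {
  editors: new Set(["read:content", "email:send"]),
  contributors: new Set(["read:content"]),
};

function canInstall(group: string, manifest: Manifest): boolean {
  const allowed = groupPolicies[group];
  if (!allowed) return false;
  // Every declared capability must be covered by the group's allowlist.
  return manifest.capabilities.every((cap) => allowed.has(cap));
}

const notifyPlugin: Manifest = {
  id: "notify-on-publish",
  capabilities: ["read:content", "email:send"],
};

console.log(canInstall("editors", notifyPlugin)); // true
console.log(canInstall("contributors", notifyPlugin)); // false
```

Because the decision depends only on the static manifest, the check runs before any plugin code executes.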
WordPress plugin security is such a real risk that WordPress.org manually reviews and approves each plugin in its marketplace. At the time of writing, that review queue is over 800 plugins long, and takes at least two weeks to traverse. The vulnerability surface area of WordPress plugins is so wide that in practice, all parties rely on marketplace reputation, ratings and reviews. And because WordPress plugins run in the same execution context as WordPress itself and are so deeply intertwined with WordPress code, some argue they must carry forward WordPress’ GPL license.
These realities combine to create a chilling effect on developers building plugins, and on platforms hosting WordPress sites.
Plugin security is the root of this problem. Marketplace businesses provide trust when parties otherwise cannot easily trust each other. In the case of the WordPress marketplace, the plugin security risk is so large and probable that many of your customers can only reasonably trust your plugin via the marketplace. But in order to be part of the marketplace your code must be licensed in a way that forces you to give it away for free everywhere other than that marketplace. You are locked in.
EmDash plugins have two important properties that mitigate this marketplace lock-in:
Plugins can have any license: they run independently of EmDash and share no code. It’s the plugin author’s choice. Plugin code runs independently in a secure sandbox: a plugin can be provided to an EmDash site, and trusted, without the EmDash site ever seeing the code.
The first part is straightforward — as the plugin author, you choose what license you want, the same way you can when publishing to NPM, PyPI, Packagist or any other registry. It’s an open ecosystem for all, and it’s up to the community, not the EmDash project, what license you use for plugins and themes.
The second part is where EmDash’s plugin architecture breaks free of the centralized marketplace.
Developers rely far less on a third-party marketplace having vetted a plugin when deciding whether to use or trust it. Consider the example plugin above that sends emails after content is saved; the plugin declares three things:
It only runs on the content:afterSave hook
It has the read:content capability
It has the email:send capability
The plugin can have tens of thousands of lines of code in it, but unlike a WordPress plugin that has access to everything and can talk to the public Internet, the person adding the plugin knows exactly what access they are granting to it. The clearly defined boundaries allow you to make informed decisions about security risks and to zoom in on more specific risks that relate directly to the capabilities the plugin is given.
The more that both sites and platforms can trust the security model to provide constraints, the more that sites and platforms can trust plugins, and break free of centralized control of marketplaces and reputation. Put another way: if you trust that food safety is enforced in your city, you’ll be adventurous and try new places. If you can’t trust that there might be a staple in your soup, you’ll be consulting Google before every new place you try, and it’s harder for everyone to open new restaurants.
The business model of the web is at risk, particularly for content creators and publishers. The old way of making content widely accessible, allowing all clients free access in exchange for traffic, breaks when there is no human looking at a site to advertise to, and the client is instead their agent accessing the web on their behalf. Creators need ways to continue to make money in this new world of agents, and to build new kinds of websites that serve what people’s agents need and will pay for. Decades ago a new wave of creators created websites that became great businesses (often using WordPress to power them) and a similar opportunity exists today.
x402 is an open, neutral standard for Internet-native payments. It lets anyone on the Internet easily charge, and any client pay on-demand, on a pay-per-use basis. A client, such as an agent, sends a HTTP request and receives a HTTP 402 Payment Required status code. In response, the client pays for access on-demand, and the server can let the client through to the requested content.
EmDash has built-in support for x402. This means anyone with an EmDash site can charge for access to their content without requiring subscriptions and with zero engineering work. All you need to do is configure which content should require payment, set how much to charge, and provide a Wallet address. The request/response flow ends up looking like this:
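In outline, the flow is: request without payment, receive a 402 with the server’s payment terms, pay, and retry. The following is a minimal simulation of that loop — the header names and the payment token are simplified assumptions, not the exact x402 wire format:

```typescript
// Simplified simulation of the x402 request/response flow.
// Header names and the payment token are illustrative assumptions,
// not the exact x402 wire format.
type HttpResponse = { status: number; body: string; headers: Record<string, string> };

// A toy server: content requires a small on-demand payment.
function server(headers: Record<string, string>): HttpResponse {
  if (headers["payment"] !== undefined) {
    return { status: 200, body: "premium article text", headers: {} };
  }
  // No payment attached: respond with 402 and the payment terms.
  return {
    status: 402,
    body: "Payment Required",
    headers: { "accept-payment": "usdc; amount=0.01" },
  };
}

// A toy client/agent: retry with payment when it sees a 402.
function fetchWithPayment(): string {
  const first = server({});
  if (first.status === 402) {
    // In a real flow the client constructs a signed payment
    // matching the terms advertised by the server.
    const paid = server({ payment: "signed-payment-token" });
    return paid.body;
  }
  return first.body;
}

console.log(fetchWithPayment()); // "premium article text"
```

The point of the standard is that this loop needs no prior relationship between client and server: any agent that understands 402 can pay and proceed.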
Every EmDash site has a built-in business model for the AI era.
WordPress is not serverless: it requires provisioning and managing servers, scaling them up and down like a traditional web application. To maximize performance, and to be able to handle traffic spikes, there’s no avoiding the need to pre-provision instances and run some amount of idle compute, or share resources in ways that limit performance. This is particularly true for sites with content that must be server rendered and cannot be cached.
EmDash is different: it’s built to run on serverless platforms, and make the most out of the v8 isolate architecture of Cloudflare’s open source runtime workerd. On an incoming request, the Workers runtime instantly spins up an isolate to execute code and serve a response. It scales back down to zero if there are no requests. And it only bills for CPU time (time spent doing actual work).
You can run EmDash anywhere, on any Node.js server — but on Cloudflare you can run millions of instances of EmDash using Cloudflare for Platforms that each instantly scale fully to zero or up to as many RPS as you need to handle, using the exact same network and runtime that the biggest websites in the world rely on.
Beyond cost optimizations and performance benefits, we’ve bet on this architecture at Cloudflare in part because we believe in having low cost and free tiers, and that everyone should be able to build websites that scale. We’re excited to help platforms extend the benefits of this architecture to their own customers, both big and small.
EmDash is powered by Astro, the web framework for content-driven websites. To create an EmDash theme, you create an Astro project that includes:
A seed file: JSON that tells the CMS what content types and fields to create
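A seed file might look something like the following — expressed here as a TypeScript object for illustration, with invented collection and field names; this is a guess at the shape, not EmDash’s actual seed schema:

```typescript
// Hypothetical seed file contents. Real EmDash seed files are JSON;
// the collection and field names below are invented for illustration,
// not EmDash's actual seed schema.
const seed = {
  collections: [
    {
      name: "posts",
      fields: [
        { name: "title", type: "string", required: true },
        { name: "body", type: "richtext" },
        { name: "publishedAt", type: "datetime" },
      ],
    },
    {
      name: "authors",
      fields: [
        { name: "name", type: "string", required: true },
        { name: "bio", type: "text" },
      ],
    },
  ],
};

// The CMS reads a file like this at setup time and creates the
// declared content types and fields.
console.log(seed.collections.map((c) => c.name)); // ["posts", "authors"]
```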
This makes creating themes familiar to frontend developers who are increasingly choosing Astro, and to LLMs which are already trained on Astro.
WordPress themes, though incredibly flexible, carry many of the same security risks as plugins, and the more popular and commonplace your theme, the bigger a target it is. Themes integrate through functions.php, an all-encompassing execution environment that makes a theme both incredibly powerful and potentially dangerous. EmDash themes, like EmDash plugins, turn this expectation on its head: your theme can never perform database operations.
The least fun part about working with any CMS is doing the rote migration of content: finding and replacing strings, migrating custom fields from one format to another, renaming, reordering and moving things around. This is either boring repetitive work or requires one-off scripts and “single-use” plugins and tools that are usually neither fun to write nor to use.
EmDash is designed to be managed programmatically by your AI agents. It provides the context and the tools that your agents need, including:
Agent Skills: Each EmDash instance includes Agent Skills that describe to your agent the capabilities EmDash can provide to plugins, the hooks that can trigger plugins, guidance on how to structure a plugin, and even how to port legacy WordPress themes to EmDash natively. When you give an agent an EmDash codebase, EmDash provides everything the agent needs to be able to customize your site in the way you need.
EmDash CLI: The EmDash CLI enables your agent to interact programmatically with your local or remote instance of EmDash. You can upload media, search for content, create and manage schemas, and do the same set of things you can do in the Admin UI.
Built-in MCP Server: Every EmDash instance provides its own remote Model Context Protocol (MCP) server, allowing you to do the same set of things you can do in the Admin UI.
EmDash uses passkey-based authentication by default, meaning there are no passwords to leak and no brute-force vectors to defend against. User management includes familiar role-based access control out of the box: administrators, editors, authors, and contributors, each scoped strictly to the actions they need. Authentication is pluggable, so you can set EmDash up to work with your SSO provider, and automatically provision access based on IdP metadata.
You can import an existing WordPress site by either going to WordPress admin and exporting a WXR file, or by installing the EmDash Exporter plugin on a WordPress site, which configures a secure endpoint that is only exposed to you, protected by a WordPress Application Password you control. Migrating content takes just a few minutes and automatically brings any attached media into EmDash’s media library.
Creating any custom content types on WordPress that are not a Post or a Page has meant installing heavy plugins like Advanced Custom Fields, and squeezing the result into a crowded WordPress posts table. EmDash does things differently: you can define a schema directly in the admin panel, which will create entirely new EmDash collections for you, separately ordered in the database. On import, you can use the same capabilities to take any custom post types from WordPress, and create an EmDash content type from it.
For bespoke blocks, you can use the EmDash Block Kit Agent Skill to instruct your agent of choice and build them for EmDash.
EmDash is v0.1.0 preview, and we’d love you to try it, give feedback, and we welcome contributions to the EmDash GitHub repository.
If you’re just playing around and want to first understand what’s possible — try out the admin interface in the EmDash Playground.
To create a new EmDash site locally, via the CLI, run:
Or you can do the same via the Cloudflare dashboard below:
We’re excited to see what you build, and if you’re active in the WordPress community, as a hosting platform, a plugin or theme author, or otherwise — we’d love to hear from you. Email us at [email protected], and tell us what you’d like to see from the EmDash project.
If you want to stay up to date with major EmDash developments, you can leave your email address here.
...
Read the original on blog.cloudflare.com »
The race is on to test new vehicles in the underground Large Hadron Collider tunnel, ahead of major works starting this summer
The race is on to test new vehicles in the underground Large Hadron Collider tunnel, ahead of major works starting this summer
Following on from the robotic mice, CERN engineers have now developed a super-charged kart to enable workers to race through the Large Hadron Collider (LHC) underground tunnel during the upcoming major works, starting this summer.
The karts promise a power boost to activities during this period, known as Long Shutdown 3 (LS3), which will see the LHC transformed into the High-Luminosity LHC. These vehicles will replace the bicycles that were used until now to travel through the 27-km underground tunnel, enabling engineers and technicians to speed to areas where improvements to the accelerator are required.
“Each kart is turbo-boosted by 64 superconducting engines,” explains project leader Mario Idraulico. “When the engines are cooled to below their critical temperatures, the Meissner effect levitates the karts, allowing them to zip through the tunnels at high speeds and, mamma mia, they’re super!”
Early tests have been promising, and the next steps involve testing different kart designs in an underground race. Safety coordinator Luigi Fratello has ensured that each driver will be issued with Safety and Health Equipment for Long and Limited Stays (SHELLS), although his response to drivers wanting bananas in the tunnel was “Oh no!”
These karts, although developed to support CERN’s fundamental research programme, show clear applications for society. CERN’s Knowledge Transfer Group has begun discussions with European startup company Quantum Mushroom to explore aerospace applications and powering for next-generation anti-gravity vehicles.
Surprisingly, the kart project began from a collaboration between CERN engineers and onsite nursery school children — one example of CERN’s commitment to inspiring future generations. “We’re thrilled that the children’s kart designs were the inspiration for the engineered karts,” exclaimed schoolteacher Yoshi Kyouryuu, mid-way through painting spots on eggs for an Easter egg hunt.
“As educators, we promote curiosity from a young age, which is why we paint question marks all over our yellow school walls,” explained school director, Rosalina Pfirsich, looking up from her storybook. “With all the contributions the children have made to the upcoming High-Luminosity LHC project, we’ve taken to calling them Luma!”
Find out more about the High-Luminosity LHC project.
...
Read the original on home.cern »
… is what I’m reading far too often! Some of you are losing faith!
A growing sentiment amongst my peers — those who haven’t already resigned themselves to an NPC career path† — is that blogging is over. Coding is cooked. What’s the point of sharing insights and expertise when the Cognitive Dark Forest will feed on our humanity?
Before I’m dismissed as an ill-informed hater please note: I’ve done my research.
† To be fair it’s a valid choice in this economy. Clock in, slop around, clock out. Why not?
Star Trek’s captain Kirk leaning into a computer cast in shadow looking contemplative.
It’s never been more important to blog. There has never been a better time to blog. I will tell you why. We’re being starved for human conversation and authentic voices. What’s more: everyone is trying to take your voice away. Do not opt-out of using it yourself.
First let’s accept the realities. The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms. Mass surveillance and tracking are a feature, privacy is a bug. Everything is an “algorithm” optimised to exploit.
How can we possibly combat that?
From a purely selfish perspective it’s never been easier to stand out and assert yourself as an authority. When everyone is deferring to the big bullshitter in the cloud your original thoughts are invaluable. Your brain is your biggest asset. Share it with others for mutual benefit.
I find writing stuff down improves my memory and hardens my resolve. I bet that’s true for you too. It’s part rote learning part rubberducking†. Writing publicly in blog form forces me to question assumptions. Even when research fails me Cunningham’s Law saves me.
† Some will claim writing into a predictive chat box helps too, and sure, they’re absolutely right!
Blogging makes you a better professional. No matter how small your audience, someone will eventually stumble upon your blog and it will unblock their path.
Don’t accept a fate being forced upon you.
The AI industry is 99% hype; a billion dollar industrial complex to put a price tag on creation. At this point if you believe AI is ‘just a tool’ you’re wilfully ignoring the harm. (Regardless, why do I keep being told it’s an ‘extreme’ stance if I decide not to buy something?)
The 1% utility AI has is overshadowed by the overwhelming mediocrity it regurgitates.
We’re saying goodbye to Sora. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing.
Is there anything, in the entire recorded history of human creation, that could have possibly mattered less than the flatulence Sora produced? NFTs had more value.
I’m not protective over the word “art”. Generative AI is art. It’s irredeemably shit art; end of conversation. A child’s crayon doodle is also lacking refined artistry but we hang it on our fridge because a human made it and that matters. We care and caring has a positive effect on our lives. When you pass human creativity through the slop wringer, or just prompt an incantation, the result is continuously mangled; a vapid mockery of the input. The garbage out no longer matters, nobody cares, nobody benefits.
I forgot where I was going with this… oh right: don’t resign yourself to the deskilling of our craft. You should keep blogging! Take pride in your ability and unique voice. But please don’t desecrate yourself with slop.
A disheveled Oliver Twist looks up pleadingly holding out an empty bowl.
The only winning move is not to play.
We’ve gotten too comfortable with the convenience of Big Tech. We do not have to continue playing their game. Don’t buy the narratives they’re selling.
The AI industry is built on the predatory business model of casinos. Except they’ve forgotten the house is supposed to win. One upside of this looming economic and intellectual depression is that the media is beginning to recognise gatekeepers are no longer the hand that feeds them. Big Tech is not the web. You don’t have to use it nor support it. Blog for the old web, the open web, the indie web — the web you want to see.
And if you think I’m being dramatic and I’ve insulted your new toys, you’re welcome to be left behind in the miasmatic dystopia these technofascists are racing to build.
...
Read the original on dbushell.com »
A committee of Trump administration officials voted unanimously on Tuesday to exempt the oil and gas industry in the Gulf of Mexico from requirements of the Endangered Species Act, a move that would lift protections for endangered whales, turtles and other animals threatened with extinction.
Defense Secretary Pete Hegseth triggered the vote two weeks ago by asking Interior Secretary Doug Burgum to call it “for reasons of national security,” and was present at the meeting.
“To be secure as a nation we need a steady, affordable supply of our own energy,” Hegseth told the six members of the committee, nicknamed the God Squad for its ability to make life or death decisions about endangered animals. “This is not just about gas prices; it’s about our ability to power our military and protect our nation.”
Until now, oil and gas companies have been asked by federal agencies to protect Gulf species by not discarding trash into the Gulf and suspending their use of loud technology when they spot whales, among other requests.
One species of Gulf whale is particularly vulnerable. Scientists estimate that only about 51 Rice’s whales are left on Earth, all of them in waters of the Gulf of Mexico, which the Trump administration has termed the Gulf of America.
On Tuesday, Dr. Neil Jacobs, the National Oceanic and Atmospheric Administration’s Under Secretary of Commerce, made clear that oil and gas companies would no longer need to adhere to protections — for Rice’s whales and any other animals previously protected by the Endangered Species Act.
“I want to highlight that the agency action under consideration — all oil and gas activities in the Gulf of America — encompasses the full suite of actions including various protective measures for the Rice’s Whale,” said Jacobs. “I will be voting to grant the exemption.”
Conservation and pro-democracy groups called the vote “illegal” and characterized the national security justification as a manufactured threat.
“On the one hand, you have the oil and gas industry, it’s one of the wealthiest industries on the planet, and the other, you have one of our most endangered whales,” said Michael Jasny, a senior policy analyst for the Natural Resources Defense Council. “It’s caused enormous outrage and astonishment.”
The energy industry has been accused of causing the whales harm before. After the Deepwater Horizon spill leaked more than 200 million gallons of BP’s oil into the Gulf in 2010, covering about half of the Rice’s whale habitat, the Rice’s whale population declined by as much as 22 percent. The number of existing whales is so low that scientists have warned the loss of a single additional whale could endanger future reproduction and tip the species toward extinction.
A spokesperson for the American Petroleum Institute, a lobbying group for oil and gas companies, said the energy industry had a track record of protecting wildlife while developing offshore energy.
“Over the long term, American energy leadership depends on getting that balance right through reasonable, science-based protections while meeting growing energy demand,” said Andrea Woods.
Past gatherings of the six-person committee have happened only after extensive consultation with environmental agencies and months of public notice. Just three meetings have taken place over the past 50 years, and only once did an exemption take effect.
“Not only is a God Squad convening as rare as hen’s teeth in the first instance, but this snap announcement that came a week and a half ago is so vague that the public doesn’t even really know what the committee is supposed to consider,” said Jane Davenport, a senior attorney at Defenders of Wildlife, a conservation nonprofit. “So it’s just completely baffling, but it is on brand for this administration.”
The Center for Biological Diversity sued U.S. Interior Secretary Doug Burgum in federal court on March 18, saying the government violated the law by not taking the proper steps or providing enough public information before calling the committee meeting.
In its response to that lawsuit, filed Wednesday night, the Trump administration said Hegseth was the one who asked the Interior Department to call the committee meeting. The Endangered Species Act includes a provision requiring the committee to “grant an exemption for any agency action if the Secretary of Defense finds that such exemption is necessary for reasons of national security.”
A federal judge last week declined to delay the meeting, which the Interior Department streamed on YouTube.
Brian Segee, a senior attorney at the Center for Biological Diversity, said the consequences of the vote could be immediate and significant.
“Once an exemption is issued, it is sweeping. It applies not only to the one species that had a jeopardy finding, Rice’s whale — it applies to every other listed species in the Gulf in relation to oil and gas operations, which will go on for decades,” said Segee.
Rice’s whales are not the only animals at risk in the Gulf. Sperm whales, the West Indian manatee and several Gulf sea turtles are also listed as threatened or endangered.
The Interior Department did not respond to NPR’s request to explain the national security implications of oil exploration and production in the Gulf. A representative from the Department of Defense said the agency could not comment because of pending litigation.
National security has never been used to justify a meeting of the committee — and it has never before triggered a vote for an exemption. But this is not the first time the Trump administration has invoked national security to attempt to bypass laws meant to protect the environment.
Shortly after Trump’s inauguration, an executive order laid the groundwork for decreasing legal protections for animals because of a “national energy emergency.”
“Our Nation’s current inadequate development of domestic energy resources leaves us vulnerable to hostile foreign actors and poses an imminent and growing threat to the United States’ prosperity and national security,” the order stated. The order also indicated the Interior Department should call the Endangered Species Act Committee together to meet at least four times a year.
“We’re very concerned that this administration is interested in pursuing a ‘big oil, drill everywhere, all the time’ agenda as opposed to a ‘protect public resources and imperiled wildlife agenda,’” said Davenport.
How energy companies work in the Gulf — and Washington
For the Endangered Species Act Committee to agree to grant an exemption, the law typically requires evidence that it’s impossible for industry to operate in an area without jeopardizing an endangered species.
But the National Oceanic and Atmospheric Administration determined in the document published last May that there were measures the energy industry could take to avoid harming Rice’s whales and other species in the Gulf, including slowing down boats near the Rice’s whale habitat and maintaining a safe distance from any whales that were seen.
“It said, ‘Take these electively reasonable measures to avoid running over and killing Rice’s whales with boats. And yes, oil and gas can proceed,’” said Davenport. “You can have your cake and eat it, too.”
Some people working in the Gulf argue that oil companies could do even more to protect animals.
Energy companies look for oil and gas in the ocean by blasting sound waves into the water from ships to record how they reflect off the rock below. The air guns used to emit those sound waves are responsible for near-constant underwater noise in the Gulf.
Some companies have developed tools that limit the energy used when conducting the surveys. Reports show those air guns can expose animals to lower levels of noise, over areas up to nine times smaller than the regions affected by traditional air guns.
“They are much easier on the environment,” said Shuki Ronen, a geophysicist at Sercel, one of the companies developing the new technology. “And I think the industry can adopt them more than they do now.”
An NPR review of public documents found that of the 25 seismic survey projects approved by 2023 to use air guns for more than 1,000 days over the next few years, all but two of the companies involved said they would use conventional air gun systems.
Lawyers for conservation groups say the Endangered Species Act did not intend for an exemption to be granted when there are steps an industry can take to avoid harm to animals.
“There’s plenty that can be done,” said Jasny. “This is not what the Endangered Species Act is designed to do. It’s not how we protect endangered species in our country.”
Still, many oil and gas companies want less restriction in the Gulf, and are willing to pay for it.
Energy companies, including Chevron, ExxonMobil and Occidental Petroleum, which acquired Anadarko Petroleum Corporation in 2019, have spent more than $8 million since October lobbying the government about the Endangered Species Act, permitting reform and, specifically, Rice’s whales, lobbying reports reviewed by NPR show.
Part of a pattern of ‘making it harder to list species’
Other federal agencies have changed how they operate to protect threatened and endangered animals since the start of Trump’s second term in 2025.
In April 2025, the U.S. Army Corps of Engineers cited Trump’s “energy emergency” order in a notice that said the agency planned to move forward with an underwater cable replacement project in the Puget Sound near Seattle, without first consulting wildlife agencies. The project is planned in waters used by a killer whale population that has been protected by the Endangered Species Act since 1972.
Under Biden, the Fish and Wildlife Service and the National Oceanic and Atmospheric Administration added an average of around 14 animals each year to the federal list of endangered and threatened animals. During Trump’s first administration, the agencies listed an average of about five animals annually. During Obama’s second term, the agencies averaged about 54 new additions.
Since the start of Trump’s second term, no new animals have been listed. It is the first time in almost 20 years that no animals were added to the list, NPR found.
Segee, the attorney at the Center for Biological Diversity, said calling the Endangered Species Act Committee is just the latest of a host of federal efforts to remove protections for endangered and threatened animals.
“In a nutshell, they’re making it harder to list species or protect their habitats,” said Segee.
NPR would like to hear from people with information about how energy companies are working in the Gulf. You can send an email to the reporter of this article at ceisner@npr.org, or contact her on the end-to-end encrypted platform Signal here. Her username is: ceis.78.
...
Read the original on www.npr.org »
Advisory: FreeBSD-SA-26:08.rpcsec_gss
CVE: CVE-2026-4747
Affected: FreeBSD 13.5 (tested on FreeBSD 14.4-RELEASE amd64, GENERIC kernel, no KASLR)
Attack surface: NFS server with kgssapi.ko loaded (port 2049/TCP)
In sys/rpc/rpcsec_gss/svc_rpcsec_gss.c, the function svc_rpc_gss_validate() reconstructs an RPC header into a 128-byte stack buffer (rpchdr[]) for GSS-API signature verification. It first writes 32 bytes of fixed RPC header fields, then copies the entire RPCSEC_GSS credential body (oa_length bytes) into the remaining space — without checking that oa_length fits.
static bool_t
svc_rpc_gss_validate(struct svc_rpc_gss_client *client,
    struct rpc_msg *msg, gss_qop_t *qop, rpc_gss_proc_t gcproc)
{
	int32_t rpchdr[128 / sizeof(int32_t)];	/* 128 bytes on stack */
	int32_t *buf;
	struct opaque_auth *oa;

	memset(rpchdr, 0, sizeof(rpchdr));

	/* Write 8 fixed-size RPC header fields (32 bytes total). */
	buf = rpchdr;
	IXDR_PUT_LONG(buf, msg->rm_xid);
	IXDR_PUT_ENUM(buf, msg->rm_direction);
	IXDR_PUT_LONG(buf, msg->rm_call.cb_rpcvers);
	IXDR_PUT_LONG(buf, msg->rm_call.cb_prog);
	IXDR_PUT_LONG(buf, msg->rm_call.cb_vers);
	IXDR_PUT_LONG(buf, msg->rm_call.cb_proc);
	oa = &msg->rm_call.cb_cred;
	IXDR_PUT_ENUM(buf, oa->oa_flavor);
	IXDR_PUT_LONG(buf, oa->oa_length);
	if (oa->oa_length) {
		/*
		 * BUG: no bounds check on oa_length!
		 * After 32 bytes of header, only 96 bytes remain in rpchdr.
		 * If oa_length > 96, this overflows past rpchdr into:
		 * local variables -> saved callee-saved registers -> return address.
		 */
		memcpy((caddr_t)buf, oa->oa_base, oa->oa_length);
		buf += RNDUP(oa->oa_length) / sizeof(int32_t);
	}
	/* gss_verify_mic() is called later -- but the overflow has already happened. */
The buffer has only 128 - 32 = 96 bytes of space for the credential body. Any credential larger than 96 bytes overflows the stack buffer.
The patch adds a single bounds check before the copy:
	oa = &msg->rm_call.cb_cred;
	if (oa->oa_length > sizeof(rpchdr) - 8 * BYTES_PER_XDR_UNIT) {
		rpc_gss_log_debug("auth length %d exceeds maximum",
		    oa->oa_length);
		client->cl_state = CLIENT_STALE;
		return (FALSE);
	}
svc_rpc_gss_validate:
push rbp
mov rbp, rsp
push r15 ; saved at [rbp-8]
push r14 ; saved at [rbp-16]
push r13 ; saved at [rbp-24]
push r12 ; saved at [rbp-32]
push rbx ; saved at [rbp-40]
sub rsp, 0xb8 ; 184 bytes of local space
The rpchdr array is at [rbp-0xc0] (192 bytes below rbp). The memcpy writes to rpchdr + 32 = [rbp-0xa0]. The saved registers and return address are above rpchdr on the stack:
However, these are the offsets for a credential body that starts immediately. In practice, the credential body begins with a GSS header (version, procedure, sequence, service) plus a context handle. With a 16-byte handle, the actual offsets shift by 32 bytes — the return address lands at credential body byte 200 (verified via De Bruijn pattern analysis from the remote exploit).
Why NFS? The vulnerable module kgssapi.ko implements RPCSEC_GSS authentication for the kernel’s RPC subsystem. NFS is the primary (and typically only) in-kernel RPC service that uses RPCSEC_GSS. The NFS server daemon (nfsd) listens on port 2049/TCP and processes RPC packets in kernel context — making this a remote kernel code execution vulnerability reachable over the network.
Why Kerberos? The overflow is deep inside the GSS validation code path. svc_rpc_gss_validate() is only called when:
The GSS procedure is DATA (not INIT or DESTROY)
Without a valid GSS context, the server rejects the packet at step 3 (returning AUTH_REJECTEDCRED) and the vulnerable memcpy is never reached. Creating a valid GSS context requires a successful Kerberos handshake — the attacker must possess a valid Kerberos ticket for the NFS service principal.
In a real-world attack, the target would be an enterprise NFS server with existing Kerberos infrastructure (Active Directory, FreeIPA, etc.). Any user with a valid Kerberos ticket — even an unprivileged one — can trigger the vulnerability. The test lab includes its own KDC because there is no pre-existing Kerberos environment.
The XDR layer enforces MAX_AUTH_BYTES = 400 on the credential body, giving an overflow range of 97–400 bytes (1–304 bytes past the safe limit).
* Network access to the target’s NFS port (2049/TCP) and KDC port (88/TCP)
# Download image
wget https://download.freebsd.org/releases/VM-IMAGES/14.4-RELEASE/amd64/Latest/\
FreeBSD-14.4-RELEASE-amd64-BASIC-CLOUDINIT-ufs.qcow2.xz
xz -d FreeBSD-14.4-RELEASE-amd64-BASIC-CLOUDINIT-ufs.qcow2.xz
cp FreeBSD-14.4-RELEASE-amd64-BASIC-CLOUDINIT-ufs.qcow2 freebsd-vuln.qcow2
qemu-img resize freebsd-vuln.qcow2 8G
# Cloud-init auto-configuration
cat > user-data << 'EOF'
#cloud-config
chpasswd:
list: |
root:freebsd
expire: False
ssh_pwauth: True
bootcmd:
- rm -f /firstboot # prevent auto-patching to -p1
- rm -f /var/db/freebsd-update/*
runcmd:
- echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
- service sshd restart
- kldload kgssapi
- sysrc rpcbind_enable=YES nfs_server_enable=YES
- echo '/export -network 0.0.0.0/0' > /etc/exports
- mkdir -p /export
- service rpcbind start && service nfsd start
EOF
cat > meta-data << 'EOF'
instance-id: cve-test
local-hostname: freebsd-vuln
EOF
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
# Boot VM — forward SSH (22), NFS (2049), and KDC (88) ports
qemu-system-x86_64 -enable-kvm -cpu host -m 2G -smp 2 \
-drive file=freebsd-vuln.qcow2,format=qcow2,if=virtio \
-cdrom seed.iso \
-netdev user,id=net0,hostfwd=tcp::2222-:22,hostfwd=tcp::2049-:2049,hostfwd=tcp::8888-:88 \
-device virtio-net-pci,netdev=net0 -nographic
The KDC port (88) is forwarded to host port 8888 directly — no SSH tunnel required.
For VMware Workstation, ESXi, Fusion, VirtualBox, or bhyve. In this example the VM hostname is test.
Download the installer ISO (not the cloud-init image):
wget https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/14.4-RELEASE/\
FreeBSD-14.4-RELEASE-amd64-disc1.iso
IMPORTANT: FreeBSD spawns 8 NFS threads per CPU. The exploit kills one thread per round and needs 15 rounds, so you need at least 2 CPUs (= 16 threads). With 1 CPU (8 threads) the exploit fails around round 9.
Network: bridged or NAT (the attacker needs to reach ports 22, 88, 2049)
Attach the ISO and install FreeBSD normally
...
Read the original on github.com »
Border Gateway Protocol (BGP) is the postal service of the Internet. It’s responsible for looking at all of the available paths that data could travel and picking the best route. Unfortunately, it isn’t secure, and there have been some major Internet disruptions as a result. But fortunately there is a way to make it secure: ISPs and other major Internet players (Sprint and others) would need to implement a certification system called RPKI.
To better understand why BGP’s lack of security is so problematic, let’s look at a simplified model of how BGP is used to route Internet packets. The Internet is not run by just one company. It’s made up of thousands of autonomous systems with nodes located all around the world, connected to each other in a massive graph.
In essence, the way BGP works is that each node must determine how to route packets using only what it knows from the nodes it connects with directly. For example, in the simple network A–B–C–D–E, the node A only knows how to reach E based on information it received from B. The node B knows about the network from A and C. And so forth.
A BGP hijack occurs when a malicious node deceives another node, lying about what the routes are for its neighbors. Without any security protocols, this misinformation can propagate from node to node until a large number of nodes know about, and attempt to use, these incorrect, nonexistent, or malicious routes.
In order to make BGP safe, we need some way of preventing the spread of this misinformation. Since the Internet is so open and distributed, we can’t prevent malicious nodes from attempting to deceive other nodes in the first place. So instead we need to give nodes the ability to validate the information they receive, so they can reject these undesired routes on their own. Enter Resource Public Key Infrastructure (RPKI), a security framework that associates a route with an autonomous system. It gets a little technical, but the basic idea is that RPKI uses cryptography to provide nodes with a way of doing this validation. With RPKI enabled, after an attempted BGP hijack, the network protects itself by invalidating the malicious routes.
Border Gateway Protocol (BGP) is the postal service of the Internet. When someone drops a letter into a mailbox, the postal service processes that piece of mail and chooses a fast, efficient route to deliver that letter to its recipient. Similarly, when someone submits data across the Internet, BGP is responsible for looking at all of the available paths that data could travel and picking the best route, which usually means hopping between autonomous systems.
By default, BGP does not embed any security protocols. It is up to every autonomous system to implement filtering of “wrong routes”. Leaking routes can break parts of the Internet by making them unreachable. It is commonly the result of misconfiguration, though it is not always accidental. A practice called BGP hijacking consists of redirecting traffic to another autonomous system to steal information (via phishing or passive listening, for instance).
BGP can be made safe if all autonomous systems (AS) only announce legitimate routes. A route is legitimate when the owner of the resource allows its announcement. Filters need to be built to make sure only legitimate routes are accepted. There are a few approaches to BGP route validation, which vary in trustability and efficiency. A mature implementation is RPKI. With 800k+ routes on the Internet, it is impossible to check them manually. Resource Public Key Infrastructure (RPKI) is a security framework that associates a route with an autonomous system. It uses cryptography to validate the information before it is passed on to the routers. You can read more about RPKI on the Cloudflare blog. On May 14th 2020, Job Snijders from NTT presented a free RPKI 101 webinar.
How does the test work?
In order to test if your ISP is implementing BGP safely, we announce a legitimate route but we make sure the announcement is invalid.
If you can load the website we host on that route, that means the invalid route was accepted by your ISP. A leaked or a hijacked route would likely be accepted too.
Can even more be done?
Over the years, network operators and developers have started working groups to design and deploy standards to overcome unsafe routing protocols. Cloudflare recently joined a global initiative called Mutually Agreed Norms for Routing Security (MANRS). It’s a community of security-minded organizations committed to making routing infrastructure more robust and secure, and members agree to implement filtering mechanisms. New voices are always appreciated.
What can you do?
Share this page. For BGP to be safe, all of the major ISPs will need to embrace RPKI. Sharing this page will increase awareness of the problem, which can ultimately pressure ISPs into implementing RPKI for the good of themselves and the general public. You can also reach out to your service provider or hosting company directly and ask them to deploy RPKI and join MANRS. When the Internet is safe, everybody wins.
...
Read the original on isbgpsafeyet.com »
...
Read the original on arxiv.org »