10 interesting stories served every morning and every evening.
De-dollarization: Is the US dollar losing its dominance?
Top dollar no more? Learn more about the factors threatening the dominance of the world’s reserve currency.
While the U.S.’s share in global exports and output has declined, the dollar’s transactional dominance is still evident in areas including FX volumes and trade invoicing.
On the other hand, de-dollarization is unfolding in central bank FX reserves, where the share of USD has slid to a two-decade low.
In fixed income, the share of foreign ownership in the U.S. Treasury market has fallen over the last 15 years, pointing to reduced reliance on the dollar.
De-dollarization is most visible in commodity markets, where a large and growing proportion of energy is being priced in non-dollar-denominated contracts.
The U.S. dollar is the world’s primary reserve currency, and it is also the most widely used currency for trade and other international transactions. However, its hegemony has come into question in recent times due to geopolitical and geostrategic shifts. As a result, de-dollarization has increasingly become a substantive topic of discussion among investors, corporates and market participants more broadly.
What are the potential implications of de-dollarization, and how is it playing out in global markets and trade?
In short, de-dollarization entails a significant reduction in the use of dollars in world trade and financial transactions, decreasing national, institutional and corporate demand for the greenback.
“The concept of de-dollarization relates to changes in the structural demand for the dollar that would relate to its status as a reserve currency. This encompasses areas that relate to the longer-term use of the dollar, such as transactional dominance in FX volumes or commodities trade, denomination of liabilities and share in central bank FX reserves,” said Luis Oganes, head of Global Macro Research at J.P. Morgan.
Importantly, this structural shift is distinct from the cyclical demand for the greenback, which is shorter term and has in recent times been driven by U.S. exceptionalism, including the relative outperformance of the U.S. equity market. “The world has become long on the dollar in recent years, but as U.S. exceptionalism erodes, it should be reasonable to expect the overhang in USD longs to diminish as well,” Oganes said.
What are the causes and implications of de-dollarization?
There are two main factors that could erode the dollar’s status. The first includes adverse events that undermine the perceived safety and stability of the greenback — and the U.S.’s overall standing as the world’s leading economic, political and military power. For instance, increased polarization in the U.S. could jeopardize its governance, which underpins its role as a global safe haven. Ongoing U.S. tariff policy could also cause investors to lose confidence in American assets.
The second factor involves positive developments outside the U.S. that boost the credibility of alternative currencies — economic and political reforms in China, for example. “A candidate reserve currency must be perceived as safe and stable and must provide a source of liquidity that is sufficient to meet growing global demand,” said Alexander Wise, who covers Long-Term Strategy at J.P. Morgan.
Fundamentally, de-dollarization could shift the balance of power among countries, and this could, in turn, reshape the global economy and markets. The impact would be most acutely felt in the U.S., where de-dollarization would likely lead to a broad depreciation and underperformance of U.S. financial assets versus the rest of the world.
“For U.S. equities, outright and relative returns would be negatively impacted by divestment or reallocation away from U.S. markets and a severe loss in confidence. There would also likely be upward pressure on real yields due to the partial divestment of U.S. fixed income by investors, or the diversification or reduction of international reserve allocations,” Wise said.
The U.S.’s share in global exports and output has declined over the past three decades, while China’s has increased substantially. Nonetheless, the transactional dominance of the dollar is still evident in FX volumes, trade invoicing, cross-border liabilities denomination and foreign currency debt issuance.
In 2022, the greenback was on one side of 88% of traded FX volumes — close to record highs — while the Chinese yuan (CNY) made up just 7%, according to data from the Bank for International Settlements (BIS).
Likewise, there is little sign of USD erosion in trade invoicing. “The share of USD and EUR has held steady over the past two decades at around 40–50%. While the share of CNY is increasing in China’s cross-border transactions as it moves to conduct bilateral trade in its own currency terms, it is still low from a global standpoint,” Oganes observed.
The dollar has also stoutly maintained its superiority when it comes to cross-border liabilities, where its market share stands at 48%. And in foreign currency debt issuance, its share has remained constant since the global financial crisis, at around 70%. “The daylight from the euro, whose share is at 20%, is even greater on this front,” Oganes added.
On the other hand, de-dollarization is unfolding in central bank FX reserves, where the share of USD has slid to a two-decade low in tandem with its macro footprint. “However, the dollar share in FX reserves was lower in the early 1990s, so the recent decline to just under 60% is not completely out of the norm,” said Meera Chandan, co-head of Global FX Strategy at J.P. Morgan.
While much of the reallocation of FX reserves has gone to CNY and other currencies, USD and EUR still dominate in level terms. “The CNY footprint is still very small, even if growing, and its push for bilateral invoicing is likely to keep this trend on the upswing,” Chandan noted.
The main de-dollarization trend in FX reserves, however, pertains to the growing demand for gold. Gold is seen as an alternative to heavily indebted fiat currencies, and its share in FX reserves has increased, led by emerging market (EM) central banks; China, Russia and Türkiye have been the largest buyers in the last decade. Overall, while the share of gold in EM FX reserves is still low at 9%, the figure is more than double the 4% seen a decade ago; the corresponding share for developed market (DM) countries is much larger at 20%. This increased demand has in turn partly driven the current bull market in gold, with prices forecast to climb toward $4,000/oz by mid-2026.
[Chart: USD share of FX reserves (lhs; unadjusted, %) vs. the average of the z-scores of the U.S. share of global GDP and exports (rhs; 1960–2023). The dollar’s share of FX reserves has declined in tandem with the U.S.’s share of global GDP and exports.]
In a sign of de-dollarization in bond markets, the share of foreign ownership in the U.S. Treasury market has been declining over the last 15 years.
USD assets, principally liquid Treasuries, account for the majority of allocated FX reserves. However, demand for Treasuries has stagnated among foreign official institutions, as the growth of FX reserves has slowed and the USD’s share of reserves has dropped from its recent peak. Similarly, the backdrop for foreign private demand has weakened — as yields have risen across DM government bond markets, Treasuries have become relatively less attractive. While foreign investors remain the largest constituent within the Treasury market, their share of ownership has fallen to 30% as of early 2025 — down from a peak of above 50% during the GFC.
“Although foreign demand has not kept pace with the growth of the Treasury market for more than a decade, we must consider what more aggressive action could mean. Japan is the largest foreign creditor and alone holds more than $1.1 trillion Treasuries, or nearly 4% of the market. Accordingly, any significant foreign selling would be impactful, driving yields higher,” said Jay Barry, head of Global Rates Strategy at J.P. Morgan.
According to estimates by J.P. Morgan Research, each 1-percentage-point decline in foreign holdings relative to GDP (or approximately $300 billion of Treasuries) would result in yields rising by more than 33 basis points (bp). “While this is not our base case, it nonetheless underscores the impact of foreign investment on risk-free rates,” Barry added.
De-dollarization is most visible in commodity markets, where the greenback’s influence on pricing has diminished. “Today, a large and growing proportion of energy is being priced in non-dollar-denominated contracts,” said Natasha Kaneva, head of Global Commodities Strategy at J.P. Morgan.
For example, due to Western sanctions, Russian oil products exported eastward and southward are being sold in the local currencies of buyers, or in the currencies of countries Russia perceives as friendly. Among buyers, India, China and Turkey are all either using or seeking alternatives to the dollar. Saudi Arabia is also considering adding yuan-denominated futures contracts in the pricing model of Saudi Arabian oil, though progress has been slow.
Notably, cross-border trade settlement in yuan is gaining ground outside of oil too. Some Indian companies have started paying for Russian coal imports in yuan, even without the involvement of Chinese intermediaries. Bangladesh also recently decided to pay Russia for its 1.4 GW nuclear power plant in yuan.
“The de-dollarization trend in the commodity trade is a boon for countries like India, China, Brazil, Thailand and Indonesia, which can now not only buy oil at a discount, but also pay for it with their own local currencies,” Kaneva noted. “This reduces the need for precautionary reserves of U.S. dollars, U.S. Treasuries and oil, which might in turn free up capital to be deployed in growth-boosting domestic projects.”
At the other end of the spectrum, deposit dollarization — where a significant portion of a country’s bank deposits are denominated in the U.S. dollar instead of the local currency — is still evident in many EM countries. “The tendency of EM residents to dollarize in times of stress appears to be correlated across markets,” said Jonny Goulden, head of EM Fixed Income Strategy at J.P. Morgan.
According to J.P. Morgan Research, dollar deposits have grown mostly uninterrupted over the last decade in EM, reaching around $830 billion for a sample set of 18 EM countries (excluding China, Singapore and Hong Kong). “While there are large regional divergences in deposit dollarization across EM, all regions are more dollarized now than they were a decade ago,” Goulden noted. Latin America is the most dollarized region, with an aggregate dollarization rate of 19.1%. EMEA’s rate stands at 15.2%, while Asia (excluding China, Singapore and Hong Kong) has the lowest rate at 9.7%.
China is the exception, as its dollarization rate has been persistently falling since 2017. “This is not surprising, as this was around the time when U.S.–China relations began shifting into their current state, marked by the trade war and growing diplomatic, security and geopolitical tensions,” Goulden said. “This suggests that China, alongside progress on de-dollarizing its own cross-border transactions, has effectively been de-dollarizing the deposits of Chinese residents, adding another dimension to its efforts to separate from U.S. dominance.”
...
Read the original on www.jpmorgan.com »
The Incredible Overcomplexity of the Shadcn Radio Button
The other day I was asked to update the visual design of radio buttons in a web app at work. I figured it couldn’t be that complicated. It’s just a radio button, right?

<input type="radio">

Boom! Done. Radio buttons are a built-in HTML element. They’ve been around for 30 years. The browser makes it easy. Time for a coffee.
I dug into our codebase and realized we were using two React components from Shadcn to power our radio buttons: <RadioGroup> and <RadioGroupItem>.
For those unfamiliar with Shadcn, it’s a UI framework that provides a bunch of prebuilt UI components for use in your websites. Unlike traditional UI frameworks like Bootstrap, you don’t import it with a script tag or npm install. Instead you run a command that copies the components into your codebase.
Here’s the code that was exported from Shadcn into our project:
"use client";

import * as React from "react";
import * as RadioGroupPrimitive from "@radix-ui/react-radio-group";
import { CircleIcon } from "lucide-react";

import { cn } from "@/lib/utils";

function RadioGroup({
  className,
  ...props
}: React.ComponentProps<typeof RadioGroupPrimitive.Root>) {
Woof… 3 imports and 45 lines of code. And it’s importing a third party icon library just to render a circle. (Who needs CSS border-radius or the SVG <circle> element when you can add a third party dependency instead?)
All of the styling is done by the 30 different Tailwind classes in the markup. I should probably just tweak those to fix the styling issues.
But now I’m distracted, annoyed, and curious. Where’s the actual <input>? What’s the point of all this? Let’s dig a little deeper.
The Shadcn components import components from another library called Radix. For those unfamiliar with Radix, it’s a UI framework that provides a bunch of prebuilt UI components…
Wait a second! Isn’t that what I just said about Shadcn? What gives? Why do we need both? Let’s see what the Radix docs say:
Radix Primitives is a low-level UI component library with a focus on accessibility, customization and developer experience. You can use these components either as the base layer of your design system, or adopt them incrementally.
So Radix provides unstyled components, and then Shadcn adds styles on top of that. How does Radix work? You can see for yourself on GitHub:
https://github.com/radix-ui/…
This is getting even more complicated: 215 lines of React code importing 7 other files. But what does it actually do?
Taking a look in the browser
Let’s look in the browser dev tools to see if we can tell what’s going on.
Okay, instead of a radio input it’s rendering a button with an SVG circle inside it? Weird.
It’s also using ARIA attributes to tell screen readers and other assistive tools that the button is actually a radio button.
ARIA attributes allow you to change the semantic meaning of HTML elements. For example, you can say that a button is actually a radio button. (If you wanted to do that for some strange reason.)
The W3C’s first rule of ARIA use is explicit on this point:

If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.
Despite that, Radix is repurposing an element and adding an ARIA role instead of using a native HTML element.
Finally, the component also includes a hidden <input>, but only if it’s used inside of a <form> element. Weird!
This is getting pretty complicated to just render a radio button. Why would you want to do this?
Styling radio buttons is hard (Wait, is it?)
My best guess is that Radix rebuilds the radio button from scratch in order to make it easier to style. Radio buttons used to be difficult to style consistently across browsers. But for several years we’ve been able to style radio buttons however we want using a few CSS tools:
appearance: none removes the radio button’s default styling, allowing us to do whatever we want.

We can use the ::before pseudo-element to render a “dot” inside of the unstyled radio button.

We can use the :checked pseudo-class to show and hide that dot depending on whether the radio button is checked.
input[type="radio"] {
  /* Disable the browser's default radio button styles */
  appearance: none;
  margin: 0;

  /* Recreate the circle container */
  border: 1px solid black;
  background: white;
  border-radius: 50%;

  /* Center our dot in the container */
  display: inline-grid;
  place-content: center;

  /* Use a pseudo-element to display our "dot" */
  &::before {
    content: "";
    width: 0.75rem;
    height: 0.75rem;
    border-radius: 50%;
  }

  /* And display it when the radio button is checked */
  &:checked::before {
    background: black;
  }
}
This doesn’t require any dependencies, JavaScript, or ARIA roles. It’s just an input element with some styles. (You can do the same thing with Tailwind if that’s your jam.)
It does require knowledge of CSS but this isn’t some arcane secret.
Googling “how to style a radio button” shows several blog posts explaining these techniques. You may say this is a lot of CSS, but the Shadcn component we were using had 30 Tailwind classes!
I’m not trying to convince you to write your own component styles
Look, I get it. You’ve got a lot going on. You’re not big on CSS. You just want to grab some prebuilt components so you can focus on the actual problem you’re solving.
I totally understand why people reach for component libraries like Shadcn and I don’t blame them at all. But I wish these component libraries would keep things simple and reuse the built-in browser elements where possible.
Web development is hard. There’s inherent complexity in building quality sites that solve problems and work well across a wide range of devices and browsers.
But some things don’t have to be hard. Browsers make things like radio buttons easy. Let’s not overcomplicate it.
To understand how our radio buttons work I need to understand two separate component libraries and hundreds of lines of React.
Website visitors need to wait for JavaScript to load, parse, and run in order to be able to toggle a radio button. (In my testing, just adding these components added several KB of JS to a basic app.)
Why am I making such a big deal out of this? It’s just a radio button.
But these small decisions add up to more complexity, more cognitive load, more bugs, and worse website performance.
We have strayed so far from the light
Look at it. It’s beautiful:
...
Read the original on paulmakeswebsites.com »
For the past week, I’ve found myself playing the same 23-second CNN clip on repeat. I’ve watched it in bed, during my commute to work, at the office, midway through making carrot soup, and while brushing my teeth. In the video, Harry Enten, the network’s chief data analyst, stares into the camera and breathlessly tells his audience about the gambling odds that Donald Trump will buy any of Greenland. “The people who are putting their money where their mouth is—they are absolutely taking this seriously,” Enten says. He taps the giant touch screen behind him and pulls up a made-for-TV graphic: Based on how people were betting online at the time, there was a 36 percent chance that the president would annex Greenland. “Whoa, way up there!” Enten yells, slapping his hands together. “My goodness gracious!” The ticker at the bottom of the screen speeds through other odds: Will Gavin Newsom win the next presidential election? 19 percent chance. Will Viktor Orbán be out as the leader of Hungary before the end of the year? 48 percent chance.
These odds were pulled from Kalshi, which hilariously claims not to be a gambling platform: It’s a “prediction market.” People go to sites such as Kalshi and Polymarket—another big prediction market—in order to put money down on a given news event. Nobody would bet on something that they didn’t believe would happen, the thinking goes, and so the markets are meant to forecast the likelihood of a given outcome.
Prediction markets let you wager on basically anything. Will Elon Musk father another baby by June 30? Will Jesus return this year? Will Israel strike Gaza tomorrow? Will the longevity guru Bryan Johnson’s next functional sperm count be greater than “20.0 M/ejac”? These sites have recently boomed in popularity—particularly among terminally online young men who trade meme stocks and siphon from their 401(k)s to buy up bitcoin. But now prediction markets are creeping into the mainstream. CNN announced a deal with Kalshi last month to integrate the site’s data into its broadcasts, which has led to betting odds showing up in segments about Democrats possibly retaking the House, credit-card interest rates, and Federal Reserve Chair Jerome Powell. At least twice in the past two weeks, Enten has told viewers about the value of data from people who are “putting their money where their mouth is.”
On January 7, the media giant Dow Jones announced its own collaboration with Polymarket and said that it will begin integrating the site’s odds across its publications, including The Wall Street Journal. CNBC has a prediction-market deal, as does Yahoo Finance, Sports Illustrated, and Time. Last week, MoviePass announced that it will begin testing a betting platform. On Sunday, the Golden Globes featured Polymarket’s forecasts throughout the broadcast—because apparently Americans wanted to know whether online gamblers favored Amy Poehler or Dax Shepard to win Best Podcast.
Media is a ruthless, unstable business, and revenue streams are drying up; if you squint, you can see why CNN or Dow Jones might sign a contract that, after all, provides its audience with some kind of data. On air, Enten cites Kalshi odds alongside Gallup polls and Google searches—what’s the difference? “The data featured through our partnership with Kalshi is just one of many sources used to provide context around the stories or topics we are covering and has no impact on editorial judgment,” Brian Poliakoff, a CNN spokesperson, told me in a statement. Nolly Evans, the Journal’s digital general manager, told me that Polymarket provides the newspaper’s journalists with “another way to quantify collective expectations—especially around financial or geopolitical events.” In an email, Jack Suh, a Kalshi spokesperson, told me that the company’s partnerships are designed to inform the public, not to encourage more trading. Polymarket declined to comment.
The problem is that prediction markets are ushering in a world in which news becomes as much about gambling as about the event itself. This kind of thing has already happened to sports, where the language of “parlays” and “covering the spread” has infiltrated every inch of commentary. ESPN partners with DraftKings to bring its odds to SportsCenter and Monday Night Football; CBS Sports has a betting vertical; FanDuel runs its own streaming network. But the stakes of Greenland’s future are more consequential than the NFL playoffs.
The more that prediction markets are treated like news, especially heading into another election, the more every dip and swing in the odds may end up wildly misleading people about what might happen, or influencing what happens in the real world. Yet it’s unclear whether these sites are meaningful predictors of anything. After the Golden Globes, Polymarket CEO Shayne Coplan excitedly posted that his site had correctly predicted 26 of 28 winners, which seems impressive—but Hollywood awards shows are generally predictable. One recent study found that Polymarket’s forecasts in the weeks before the 2024 election were not much better than chance.
These markets are also manipulable. In 2012, one bettor on the now-defunct prediction market Intrade placed a series of huge wagers on Mitt Romney in the two weeks preceding the election, generating a betting line indicative of a tight race. The bettor did not seem motivated by financial gain, according to two researchers who examined the trades. “More plausibly, this trader could have been attempting to manipulate beliefs about the odds of victory in an attempt to boost fundraising, campaign morale, and turnout,” they wrote. The trader lost at least $4 million but might have shaped media attention of the race for less than the price of a prime-time ad, they concluded.
A billionaire congressional candidate can’t just send a check to Quinnipiac University and suddenly find himself as the polling front-runner, but he can place enormous Polymarket bets on himself that move the odds in his favor. Or consider this hypothetical laid out by the Stanford political scientist Andrew Hall: What if, a month before the 2028 presidential election, the race is dead even between J. D. Vance and Mark Cuban? Inexplicably, Vance’s odds of winning surge on Kalshi, possibly linked to shady overseas bets. CNN airs segment after segment about the spike, turning it into an all-consuming national news story. Democrats and Republicans point fingers at each other, and no one knows what’s really going on. Such a scenario is “plausible—maybe even likely—in the coming years,” Hall writes. It doesn’t help that the Trump Media and Technology Group, the owner of the president’s social-media platform, Truth Social, is set to launch its own platform, Truth Predict. (Donald Trump Jr. is an adviser to both Kalshi and Polymarket.)
The irony of prediction markets is that they are supposed to be a more trustworthy way of gleaning the future than internet clickbait and half-baked punditry, but they risk shredding whatever shared trust we still have left. The suspiciously well-timed bets that one Polymarket user placed right before the capture of Nicolás Maduro may have been just a stroke of phenomenal luck that netted a roughly $400,000 payout. Or maybe someone with inside information was looking for easy money. Last week, when White House Press Secretary Karoline Leavitt abruptly ended her briefing after 64 minutes and 30 seconds, many traders were outraged, because they had predicted (with 98 percent odds) that the briefing would run past 65 minutes. Some suspected, with no evidence, that Leavitt had deliberately stopped before the 65-minute mark to turn a profit. (When I asked the White House about this, the spokesperson Davis Ingle told me in a statement, “This is a 100% Fake News narrative.”)
Unintentionally or not, this is what happens when media outlets normalize treating every piece of news and entertainment as something to wager on. As Tarek Mansour, Kalshi’s CEO, has said, his long-term goal is to “financialize everything and create a tradable asset out of any difference in opinion.” (Kalshi means “everything” in Arabic.) What could go wrong? As one viral post on X recently put it, “Got a buddy who is praying for world war 3 so he can win $390 on Polymarket.” It’s a joke. I think.
...
Read the original on www.theatlantic.com »
With a balanced sales structure across individual markets, Dr. Ing. h.c. F. Porsche AG, Stuttgart, delivered a total of 279,449 cars to customers around the world in 2025. The figure was 310,718 for the previous year, representing a decline of 10 per cent. Porsche’s top priority remains a value-oriented derivative mix.
“After several record years, our deliveries in 2025 were below the previous year’s level. This development is in line with our expectations and is due to supply gaps for the 718 and Macan combustion-engined models, the continuing weaker demand for exclusive products in China, and our value-oriented supply management,” says Matthias Becker, Member of the Executive Board for Sales and Marketing at Porsche AG. “In 2025, we delighted our customers with outstanding cars — such as the 911 Turbo S with its T-Hybrid drive system.” The response to the launch of the Cayenne Electric at the end of 2025 also shows, Becker adds, that Porsche is meeting customer expectations with its innovative and high-performance products.
With 84,328 deliveries, the Macan was the best-selling model line. North America remains the largest sales region with 86,229 deliveries — a figure that is in line with the previous year.
Porsche repositioned itself in 2025 and made forward-looking strategic product decisions. The delivery mix in 2025 underscores that the sports car manufacturer is consistently responding to global customer preferences by expanding its drivetrain strategy to offer combustion-engined, plug-in hybrid, and fully electric cars. In 2025, 34.4 per cent of Porsche cars delivered worldwide were electrified (+7.4 percentage points), with 22.2 per cent being fully electric and 12.1 per cent being plug-in hybrids. This puts the global share of fully electric vehicles at the upper end of the target range of 20 to 22 per cent for 2025. In Europe, for the first time, more electrified cars were delivered than pure combustion-engined models (57.9 per cent electrification share), with every third car being fully electric. Among the Panamera and Cayenne models, plug-in hybrid derivatives dominate the European delivery figures. At the same time, the combustion-engined and T-Hybrid 911 set a new benchmark with 51,583 deliveries worldwide.
With 86,229 deliveries, North America remains the largest sales region, as it was the year prior. After record deliveries in 2024, the Overseas and Emerging Markets region also largely maintained its previous-year level, with 54,974 cars delivered (-1 per cent). In Europe (excluding Germany), Porsche delivered 66,340 cars by the end of the year, down 13 per cent year-on-year. In the German home market, 29,968 customers took delivery of new cars — a decline of 16 per cent. Reasons for the decrease in both regions include supply gaps for the combustion-engined 718 and Macan models due to EU cybersecurity regulations.
In China, 41,938 cars were delivered to customers (-26 per cent). Key reasons for the decline remain challenging market conditions, especially in the luxury segment, as well as intense competition in the Chinese market, particularly for fully electric models. Porsche continues to focus on value-oriented sales.
Deliveries of the Macan totaled 84,328 units (+2 per cent), with fully electric versions accounting for over half at 45,367 vehicles. In most markets outside the EU, the combustion-engined Macan continues to be offered, with 38,961 of these being delivered. Some 27,701 Panamera models were delivered by the end of December (-6 per cent).
The 911 sports car icon recorded 51,583 deliveries by year-end (+1 per cent), setting another delivery record. The 718 Boxster and 718 Cayman totaled 18,612 deliveries, down 21 per cent from the previous year due to the model line’s phase-out. Production ended in October 2025.
The Taycan accounted for 16,339 deliveries (-22 per cent), mainly due to the slowdown in the adoption of electromobility. The keys to 80,886 Cayenne models were handed to customers in 2025, a decline of 21 per cent, partly due to catch-up effects the previous year. The new fully electric Cayenne celebrated its world premiere in November, with the first markets to offer the model beginning to deliver to customers from this spring. It will be offered alongside combustion-engined and plug-in hybrid versions of the Cayenne.
Looking ahead, Matthias Becker says: “In 2026, we have a clear focus; we want to manage demand and supply according to our ‘value over volume’ strategy. At the same time, we are planning our volumes for 2026 realistically, considering the production phase-out of the combustion-engined 718 and Macan models.” In parallel, Porsche is consistently investing in its three-pronged powertrain strategy and will continue to inspire customers with unique sports cars in 2026. An important component is the expansion of the brand’s customization offering — via both the Exclusive Manufaktur and Sonderwunsch program. In doing so, the company is responding to customers’ ever-increasing desire for individualization.
All amounts are individually rounded to the nearest cent; this may result in minor discrepancies when summed.
This press release contains forward-looking statements and information on the currently expected business development of Porsche AG. These statements are subject to risks and uncertainties. They are based on assumptions about the development of economic, political and legal conditions in individual countries, economic regions and markets, in particular for the automotive industry, which we have made based on the information available to us and which we consider to be realistic at the time of publication. If any of these or other risks materialise, or if the assumptions underlying these statements prove incorrect, the actual results could be significantly different from those expressed or implied by such statements. Forward-looking statements in this presentation are based solely on the information pertaining on the day of publication.
These forward-looking statements will not be updated later. Such statements are valid on the day of publication and may be overtaken by later events.
This information does not constitute an offer to exchange or sell or offer to exchange or purchase securities.
...
Read the original on newsroom.porsche.com »
When I get together with my friends in the industry, I feel a little guilty about how much I love my job. This is a tough time to be a software engineer. The job was less stressful in the late 2010s than it is now, and I sympathize with anyone who is upset about the change. There are a lot of objective reasons to feel bad about work. But despite all that, I’m still having a blast. I enjoy pulling together projects, figuring out difficult bugs, and writing code in general. I like spending time with computers. But what I really love is being useful.
The main character in Gogol’s short story The Overcoat is a man called Akaky Akaievich. Akaky’s job is objectively terrible: he’s stuck in a dead-end copyist role, being paid very little, with colleagues who don’t respect him. Still, he loves his work, to the point that if he has no work to take home with him, he does some recreational copying just for his own sake. Akaky is a dysfunctional person. But his dysfunction makes him a perfect fit for his job.
It’s hard for me to see a problem and not solve it. This is especially true if I’m the only person (or one of a very few people) who could solve it, or if somebody is asking for my help. I feel an almost physical discomfort about it, and a corresponding relief and satisfaction when I do go and solve the problem. The work of a software engineer - or at least my work as a staff software engineer - is perfectly tailored to this tendency. Every day people rely on me to solve a series of technical problems.
In other words, like Akaky Akaievich, I don’t mind the ways in which my job is dysfunctional, because it matches the ways in which I myself am dysfunctional: specifically, my addiction to being useful. (Of course, it helps that my working conditions are overall much better than Akaky’s). I’m kind of like a working dog, in a way. Working dogs get rewarded with treats, but they don’t do it for the treats. They do it for the work itself, which is inherently satisfying.
This isn’t true of all software engineers. But it’s certainly true of many I’ve met: if not an addiction to being useful, then they’re driven by an addiction to solving puzzles, or to the complete control over your work product that you only really get in software or mathematics. If they weren’t working as a software engineer, they would be getting really into Factorio, or crosswords, or tyrannically moderating some internet community.
A lot of the advice I give about working a software engineering job is really about how I’ve shaped my need to be useful in a way that delivers material rewards, and how I try to avoid the pitfalls of such a need. For instance, Protecting your time from predators in large tech companies is about how some people in tech companies will identify people like me and wring us out in ways that only benefit them. Crushing JIRA tickets is a party trick, not a path to impact is about how I need to be useful to my management chain, not to the ticket queue. Trying to impress people you don’t respect is about how I cope with the fact that I’m compelled to be useful to some people who I may not respect or even like.
There’s a lot of discussion on the internet about what ought to motivate software engineers: money and power, producing real value, ushering in the AI machine god, and so on. But what actually does motivate software engineers is often more of an internal compulsion. If you’re in that category - as I suspect most of us are - then it’s worth figuring out how you can harness that compulsion most effectively.
...
Read the original on www.seangoedecke.com »
One of the questions of the 2026 prediction contest is whether Nvidia’s stock price will close below $100 on any day in 2026. At the time of writing, it trades at $184 and a bit, so going down to $100 would be a near halving of the stock value of the highest valued company in the world.
It’s an interesting question, and it’s worth spending some time on it.
If you just want the answer, my best prediction is that the probability is around 10 %. I didn’t expect to get such a high answer, but read on to see how we can find out.
When we predicted the Dow Jones index crossing a barrier in 2023, we treated the index as an unbiased random walk. That was convenient, but we cannot do it with the Nvidia question because of one major difference: the time scale.
Over short time spans, the volatility (or noise, or variation, or standard deviation) of stock movements dominates the return (or signal, or drift, or average change). This happens because noise grows with the square root of time, while signal grows linearly with time.
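In symbols (writing \(\mu\) for the yearly log-return and \(\sigma\) for the yearly volatility; notation mine), the ratio of the two over a horizon of \(t\) years grows only with the square root of time:

\[\frac{\text{signal}(t)}{\text{noise}(t)} = \frac{\mu t}{\sigma \sqrt{t}} = \frac{\mu}{\sigma} \sqrt{t}.\]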
The plot below illustrates an imaginary amazing investment which has a yearly log-return of 0.3 and a yearly volatility of 0.3. (Readers aware that stonks go up will recognise this as an unrealistic Sharpe ratio of 1.0.) The middle line follows our best guess for how the investment will grow after each year, and the outer curves illustrate our uncertainty around the exact value of it.
Early on, we can see that the uncertainty is much bigger than the height to the trend line. Before a year has passed, the exact result is determined more by noise than by growth. Toward the end, growth has taken over and the noise has a smaller effect.
One measure of how much volatility there is compared to expected return is the signal-to-noise ratio. It’s computed as

\[\mathrm{SNR} = 10 \log_{10} \left( \frac{\mu}{\sigma} \right)^{2} \,\text{dB},\]

where \(\mu\) is the expected return (the signal) and \(\sigma\) the volatility (the noise). For the Dow Jones question, we were looking at a signal-to-noise ratio of −8 dB. That is already a little too high to safely assume it behaves like an unbiased random walk, but for a low-stakes prediction contest it works out.
Using return data for the Nvidia stock from 2025, the signal-to-noise ratio is −1.4 dB. Although the movement in this period is still dominated by noise (evidenced by the negative signal-to-noise ratio), the expected return is going to matter, and we shouldn’t assume it behaves like an unbiased random walk.
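As a quick sanity check on the decibel arithmetic, here is a minimal sketch in Haskell, the article’s own language; the example drift and volatility figures are illustrative assumptions, not the article’s data.

-- Signal-to-noise ratio in decibels, from a yearly
-- log-return mu and a yearly volatility sigma.
snr_db :: Double -> Double -> Double
snr_db mu sigma = 10 * logBase 10 ((mu / sigma) ^ (2 :: Int))

-- A drift of 0.06 against a volatility of 0.15 gives
-- roughly -8 dB; equal drift and volatility (a Sharpe
-- ratio of 1.0) gives exactly 0 dB.
examples :: (Double, Double)
examples = (snr_db 0.06 0.15, snr_db 0.3 0.3)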
Even if we ignore the problematic signal-to-noise ratio and pretend the Nvidia stock price is an unbiased random walk, we’ll run into what’s perhaps the bigger problem: the theory of unbiased random walks assumes constant volatility throughout the year. The computer will happily tell us there is a near-zero percent chance of the stock closing under $100 at any point next year.
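That near-zero figure is easy to reproduce. Below is a minimal sketch of the driftless-walk shortcut via the reflection principle: the chance of touching a barrier below the current price is twice the chance of finishing below it. The normal-CDF approximation and the function names are my own, and the article’s exact volatility input is not given.

-- Standard normal CDF via the Abramowitz-Stegun
-- approximation (base Haskell has no built-in erf).
norm_cdf :: Double -> Double
norm_cdf x
  | x < 0     = 1 - norm_cdf (negate x)
  | otherwise = 1 - pdf * poly
  where
    t    = 1 / (1 + 0.2316419 * x)
    poly = t * (0.319381530 + t * (-0.356563782
         + t * (1.781477937 + t * (-1.821255978
         + t * 1.330274429))))
    pdf  = exp (negate x * x / 2) / sqrt (2 * pi)

-- Probability that a driftless geometric walk starting
-- at s0 touches barrier b (with b < s0) within t years,
-- given a yearly volatility sigma.
hit_barrier :: Double -> Double -> Double -> Double -> Double
hit_barrier s0 b sigma t =
  2 * norm_cdf (log (b / s0) / (sigma * sqrt t))

-- e.g. hit_barrier 184 100 sigma 1 for the $100 barrier.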
The computer does grant a 23 % probability that the stock price drops to $130, and that might get us thinking. If we assume the stock price has dropped to $130, that tells us something about the market environment we’re in. Nvidia might drop to $130 due to random chance alone, but it’s more likely to do that if we’re in a market with a higher volatility than we assumed based on the 2025 returns. In such a market, a further drop to $100 isn’t so strange anymore.
Our simple random walk model does not account for this. When forecasting stock prices over longer periods, we need a better understanding of how the volatility might change in the future.
Fortunately for us, there are people who continuously estimate the volatility of specific stock prices. They even do it in relation to barriers like the $100 price we’re interested in. They’re options traders!
The expected volatility of the stock price is one of the variables that go into pricing an option. This means we can look up a December 2026 Nvidia call option with a strike price of $100 in the market, see what it costs, and then reverse the option pricing process to get an implied volatility out.
To do this, we first need to learn how to price an option, and to do that, we need to know what an option is.
In this article, we’re going to focus on call options because they are more thickly traded. Assume we have an un-expired Nvidia call option with a strike price of $100. We can then exercise it, which means we trade in the option plus the strike price for one share in the underlying Nvidia stock. If we did that today, we would earn $84, because we lose the $100, but the share in Nvidia we get in exchange is worth $184.
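In symbols, with \(S\) the current share price and \(K\) the strike price (notation mine), the exercise value of a call is

\[\max(S - K, 0) = \max(\$184 - \$100, 0) = \$84.\]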
We don’t have to exercise the option, though. If the price of Nvidia goes up tomorrow, we would earn more from exercising the option tomorrow. We can delay exercising it right up until it expires, when it becomes invalid.
If we were able to buy the $100 option for less than $84, we would get free profit. The chart above tells us, however, that the $100 option costs $92.90, meaning the market expects there to be a better opportunity for exercising that option before it expires.
To keep things computationally simple, we are going to use a binomial model for the price of the underlying Nvidia stock. We don’t know the daily volatility, so we’ll keep that as a variable we call \(\sigma\). We will pretend that each day, the Nvidia stock price can either grow with a factor of \(e^\sigma\) or shrink with a factor of \(e^{-\sigma}\). (This is a geometric binomial walk. We could transform everything in the reasoning below with the logarithm and get an additive walk in log-returns.)
Thus, on day zero, the Nvidia stock trades for $184. On day one, it can take one of two values:
\(184e^\sigma\) because it went up, or
\(184e^{-\sigma}\) because it went down.
On day two, it can have one of three values:
\(184e^{2\sigma}\) (went up both in the first and second day),
\(184e^{\sigma - \sigma} = 184\) (went up and then down, or vice versa), or
\(184e^{-2\sigma}\) (went down both days).
If it’s easier, we can visualise this as a tree. Each day, the stock price branches into two possibilities, one where it rises, and one where it goes down. In the graph below, each column of bubbles represents the closing value for a day.
This looks like a very crude approximation, but it actually works if the time steps are fine-grained enough. The uncertainties involved in some of the other estimations we’ll do dwarf the inaccuracies introduced by this model. (Even for fairly serious use, I wouldn’t be unhappy with daily time steps when the analysis goes a year out.)
It is important to keep in mind that the specific numbers in the bubbles depend on which number we selected for the daily volatility \(\sigma\). Any conclusion we draw from this tree is a function of the specific \(\sigma\) chosen to construct the tree.
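In general, the price at day \(t\), node \(i\) (counting \(i\) up-moves so far) depends only on those two coordinates and on \(\sigma\); this is the observation the code further down exploits. Writing \(S_0\) for the day-zero price of $184 (notation mine):

\[S_{t,i} = S_0 \, e^{i\sigma} \, e^{-(t-i)\sigma} = S_0 \, e^{(2i-t)\sigma}.\]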
When we have chosen an initial \(\sigma\) and constructed this tree, we can price an option using it. Maybe we have a call option expiring on day three, with a strike price of $180. On day four, the last day, the option has expired, so it is worth nothing. We’ll put that into the tree.
We have already seen what the value of the option is on the day it expires: it’s what we would profit from exercising it. If the stock is valued at $191, the option is worth $11, the difference between the stock value and the strike price. On the other hand, if the stock is valued at $177, it is worth less than the strike price of the option, so we will not exercise the option, instead letting it expire.
The day before the expiration day is when we have the first interesting choice to make. We can still exercise the option, with the exercise value of the option calculated the same way.
Or we could hold on to the option. If we hold on to the option for a day, the value of the option will either go up or down, depending on the value of the underlying stock price. We will compute a weighted average of these movement possibilities as

\[\tilde{p} V_u + (1 - \tilde{p}) V_d,\]

where \(V_u\) and \(V_d\) are the values the option will have on the next day when the underlying moves up or down in the tree, respectively. Then we’ll discount this with a safe interest rate to account for the fact that by holding the option, we are foregoing cash that could otherwise be used to invest elsewhere. The general equation for the hold value of the option at any time before the expiration day is

\[V_{\text{hold}} = e^{-r} \left( \tilde{p} V_u + (1 - \tilde{p}) V_d \right),\]

where \(r\) is the safe interest rate per time step.

Let’s look specifically at the node where the stock value is $199. We’ll assume a safe interest rate of 3.6 % annually, which translates to 0.01 % daily. (In the texts I’ve read, 4 % is commonly assumed, but more accurate estimations can be derived from Treasury bills and similar extremely low-risk interest rates.) The value of holding on to the option is, then,

\[V_{\text{hold}} = e^{-0.0001} \left( \tilde{p} V_u + (1 - \tilde{p}) V_d \right),\]

and now we only need to know what \(\tilde{p}\) is. That variable looks and behaves a lot like a probability, but it’s not. There’s an arbitrage argument that fixes the value of \(\tilde{p}\) to

\[\tilde{p} = \frac{e^{r} - e^{-\sigma}}{e^{\sigma} - e^{-\sigma}},\]

where \(\sigma\) is the same time step volatility we assumed when creating the tree — in our case, 4 %. This makes \(\tilde{p} = 0.491\), and with this, we can compute the hold value of the option when the underlying is $199:

\[V_{\text{hold}} = e^{-0.0001} \left( 0.491 \, V_u + 0.509 \, V_d \right) \approx \$19.03.\]
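As a check on the 0.491 figure, substituting the daily safe rate \(r = 0.0001\) and \(\sigma = 0.04\) into the expression above gives

\[\tilde{p} = \frac{e^{0.0001} - e^{-0.04}}{e^{0.04} - e^{-0.04}} \approx \frac{1.0001 - 0.9608}{1.0408 - 0.9608} \approx 0.491.\]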
The value of the option at any point in time is the maximum of the hold value and the exercise value. So we replace the stock value of $199 in the tree with the option value of $19.03. We perform the same calculation for the other nodes in day two.
Then we do the same for the day before that, then before that, etc., until we get to day zero.
We learn that if someone asks us on day zero to buy a call option with a strike price of $180 and expiry three days later, when the underlying stock currently trades for $184, and has an expected daily volatility of 0.04, then we should be willing to pay $7.38 for that option.
What’s weird is this number has nothing to do with the probability we are assigning to up or down movements. Go through the calculations again. We never involved any probability in the calculation of the price. Although I won’t go through the argument — see Shreve’s excellent Stochastic Calculus for Finance I: The Binomial Asset Pricing Model (Springer, 2005) for that — this price for the option is based on what it would cost to hedge the option with a portfolio of safe investments, borrowing, and long or short positions in the underlying stock.
Even without going through the detailed theory, we can fairly quickly verify that this is indeed how options are priced. Above, we made educated guesses as to the safe interest rate, a reasonable volatility, etc. We calculated with a spot price of $184, a strike price of $180, and expiry three days out. We got an option price of $7.38.
At the time of writing, the Nvidia stock trades at $184.94. It has options that expire in four days. The ones with a strike price of $180 currently sell for $6.20. That’s incredibly close, given the rough estimations and the slight mismatch in duration. (The main inaccuracy comes from the volatility we used to construct the tree. The actual volatility of the Nvidia stock on such short time periods and small differences in price is lower.)
When we constructed the tree above, we assumed a daily volatility of 4 %. If we write code that takes the volatility as a parameter and computes the option price for that volatility, we can try various volatilities until we find one where our price matches the market price for that option.
We write the following code to perform the price calculation faster than we can do it manually. (Note that we don’t actually construct the full binomial tree. We can compute the value of the underlying stock at any node given only its coordinates, and the option value only depends on the next time step in a way that lets us optimise the computation with dynamic programming.)
import Control.Monad.ST
import Data.Foldable (forM_)
import qualified Data.Vector as Vector
import qualified Data.Vector.Mutable as Vector

-- | Given the current price of the underlying,
-- and the duration (in days) and strike price
-- of the option, take a daily volatility and
-- compute the option value.
option_value :: Double -> Int -> Double -> Double -> Double
option_value spot duration strike sigma =
  let
    -- Shorthand: u = e^σ
    u = exp sigma
    -- Shorthand: d = e^(−σ)
    d = exp (negate sigma)
    -- Assuming yearly safe interest of 4 %,
    -- this is the weighting factor tilde-p.
    p = (exp 0.00016 - d) / (u - d)
    -- The value of the underlying stock at
    -- day t, node i.
    s t i = spot * u^i * d^(t-i)
    -- The exercise value of the option depends
    -- only on the strike and the price of the
    -- underlying stock.
    v_e t i = max 0 (s t i - strike)
    -- The hold value of the option depends on
    -- the two possible future values of the
    -- option v_d and v_u.
    v_h v_d v_u =
      exp (negate 0.00016)
        * (p * v_u + (1-p) * v_d)
  in
    runST $ do
      -- Create a mutable vector and fill it with
      -- the option's exercise values at expiry.
      nodes <- Vector.new (duration + 1)
      forM_ [0 .. duration] $ \i ->
        Vector.write nodes i (v_e duration i)
      -- Walk the tree backwards from the day
      -- before expiration.
      forM_ (reverse [0 .. duration - 1]) $ \t -> do
        -- For each node, calculate hold value
        -- based on option value in the next
        -- time step (which was just calculated
        -- in the iteration before).
        forM_ [0 .. t] $ \i -> do
          v_d <- Vector.read nodes i
          v_u <- Vector.read nodes (i+1)
          -- The option is worth the better of
          -- exercising now or holding a day.
          Vector.write nodes i
            (max (v_e t i) (v_h v_d v_u))
      Vector.read nodes 0
Here we are valuing a 31-day call option for Nvidia, with a strike price of $170. The market price is $18.68, but our code returns $24.74. This means our guess for the implied daily volatility of 4 % is too high. If we try various values for the volatility, we’ll eventually find that 2.2 % leads to an option price of $18.53, which is fairly close to the market price. This daily volatility corresponds to a yearly volatility of 35 %. If we look up other people’s calculations for the 30-day at-the-money implied volatility of the Nvidia stock, we’ll find they’re at something like 36 %. Definitely close enough.
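Rather than trying volatilities by hand, we can let the computer search. Here is a minimal bisection sketch on top of option_value; the bracket and iteration count are my own assumptions, and any monotone root-finder works, since the option price increases with volatility.

-- Find the daily volatility at which option_value
-- reproduces a quoted market price.
implied_vol :: Double -> Int -> Double -> Double -> Double
implied_vol spot duration strike market_price =
  go 0.001 0.2 (60 :: Int)
  where
    go lo hi 0 = (lo + hi) / 2
    go lo hi n
      | option_value spot duration strike mid > market_price =
          go lo mid (n - 1)  -- model price too high: lower volatility
      | otherwise =
          go mid hi (n - 1)  -- model price too low: higher volatility
      where mid = (lo + hi) / 2

-- implied_vol 184 31 170 18.68 lands near the 2.2 % daily
-- figure found above; multiplying by sqrt 252 converts it
-- to the yearly 35 %.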
For answering the question about Nvidia dropping below $100, we don’t want the 30-day at-the-money volatility, though, but the 340-day far out-of-the-money volatility.
The 340-day $100 strike call options sell for $92.90 in the market. To get that price we need to feed our model a daily volatility of 3.1 %. In other words, the 340-day $100 strike call options imply a daily volatility of 3.1 %. Because options so far out of the money are more thinly traded, we might want to confirm this volatility by computing it for other options with nearby strike prices.
We expect the implied volatility to go up as the strike price is further out of the money, which it does. It seems that 3.1 % is a reasonable implied volatility for such large movements.
The Bank of England has published a method (Working Paper No. 455)
...
Read the original on entropicthoughts.com »
In March 1770, as Boston boiled with outrage over the killing of five colonists by British soldiers, John Adams did something few could comprehend: he volunteered to defend the enemy. Adams believed that the very idea of liberty depended on ensuring that even the reviled had counsel; that a free country could not exist without an independent and impartial bar willing to defend the despised.
But Adams did not believe that his charge in defending his client was to win at all costs. “Every lawyer,” he reflected in his autobiography, “must hold himself responsible not only to his Country, but to the highest and most infallible of all Tribunals for the part he should act.” The moral map Adams established in this case became a foundation for the practice of law in America. Indeed, today’s legal ethics codes still speak of lawyers’ threefold duty: to the client, to the court, and to the country.
Imagine if Adams had decided that defending his clients meant winning at all costs. Can you imagine Bostonians’ outrage if Adams had, say, withheld evidence that the British soldiers did have murderous intent? What would Adams’s legal legacy be if he’d tried not to discover the truth of what happened outside the Custom House, but to sow doubt and uncertainty among the people of Boston? How different would our legal system be if the British soldiers were acquitted not because they were innocent, but because they had a lawyer who was willing to hide the truth?
Such a hypothetical has become our reality two and a half centuries later, only the victims are children, and its ethical corruption and harm operate at an industrial scale. What has emerged from inside Meta over recent months reveals how vacuous the characterization of the lawyer’s ethical obligations has become: Meta lawyers ordering evidence of child exploitation destroyed and research findings buried, while they hid behind attorney-client privilege. Meta’s lawyers do not follow Adams’ precedent, but, rather, the example set by Big Tobacco lawyers in the 1970s and ’80s. These lawyers collapsed Adams’ threefold duty into one — serve the client alone, whatever the cost to the courts and the country.
The story of this ethical erosion begins not in Menlo Park but in the tobacco boardrooms of two generations ago, when Big Tobacco attorney Ernest Pepples outlined what he called the “honesty option”: admitting that smoking killed people. He conceded this would expose tobacco companies to catastrophic liability, and the companies ultimately rejected honesty in favor of profit. In the decades that followed, tobacco lawyers counseled document destruction, abused attorney-client privilege to suppress research, and intimidated scientists whose findings threatened litigation defenses. Big Tobacco’s attorneys perfected hiding the truth from the American people, abandoning their duties to the court and to the country. The cost of that abandonment can be measured in millions of lives and billions of dollars. The cost to the public trust is incalculable.
Fast forward 30 years to Meta headquarters where the multi-trillion dollar company’s attorneys are following Big Tobacco’s playbook, aiding and abetting the company’s disregard for public welfare and children’s safety. Meta’s leadership and legal team have hidden “mountains of evidence,” as Jonathan Haidt and Zach Rausch put it, of direct and indirect harms to kids and teens.
The latest revelations about Meta’s malfeasance come from newly unsealed court documents. In 2020, the company discovered through its own experimental research — an initiative known as Project Mercury — that when users reduced the amount of time they spent on Facebook, their levels of depression, anxiety, and loneliness decreased. Meta’s lawyers buried the findings.
But Project Mercury, and Meta’s suppression of its damning research on the mental health effects of Instagram, is only the beginning. Deeper revelations come from whistleblowers Jason Sattizahn and Kayce Savage and their testimony before the U.S. Senate this past September. Sattizahn and Savage had been researching child exploitation in Meta’s VR ecosystem, where they discovered coordinated pedophile rings operating inside games like Roblox. Sattizahn and Savage described an immersive experience where children regularly encounter what Sattizahn called “the transmission of the motion and the audio of sex acts” from adult users — not just sexual words or adult “content,” but the physical experience of “adults sexually gratifying themselves” while “surrounding and hounding minors,” complete with immersive audio.
Sattizahn also testified how, after his research on Meta’s VR platform uncovered children under the age of 10 in Germany being propositioned for “sex acts, nude photos, and other acts that no child should ever be exposed to,” Meta’s in-house lawyers demanded the erasure of any and all evidence of this finding. When asked by Senator Josh Hawley how often she’d witnessed an underage user being exposed to inappropriate sexual content on Meta VR, Savage replied, “every time I use the headset.” The permissiveness by the company that Savage and Sattizahn testified to is mirrored by the more recently unsealed court documents, which included that Meta maintained a 17-strike policy for sex trafficking accounts — removing predators only after they were caught attempting to traffic people 17 separate times. Meta’s own internal documents called this threshold “very, very, very high.”
According to Sattizahn, Meta’s legal department created what he called a “funnel of manipulation” in response to these identified risks to children, a comprehensive system for controlling every aspect of safety research. Legal representatives embedded in research teams demanded destruction of findings deemed too sensitive. Researchers were forbidden to use words like “illegal” or “non-compliant” even when plainly applicable. Sattizahn and Savage’s testimony is complemented by internal communications, now public, showing that Meta employees worried they were behaving like tobacco executives “doing research and knowing cigs were bad and then keeping that info to themselves.”
On October 23, 2025, a judge in a separate case validated what the whistleblowers and court documents had described. Invoking the rarely used crime-fraud exception to pierce Meta’s attorney-client privilege, District of Columbia Superior Court Judge Yvonne Williams found Meta’s lawyers had coached researchers to hide, block, and sanitize studies on teen mental-health harm in order to shield the company from liability. Judge Williams determined there was probable cause that these communications were “fundamentally inconsistent with the basic premises of the adversary system.”
Attorney-client privilege was originally meant to protect candor in service of truth, but in Meta’s hands it has become a means of hiding the truth — a transformation that marks how far the legal profession has drifted. John Adams believed that truth was the lawyer’s surest refuge, the one place where all three duties could coexist. He wrote in his autobiography that his British client “must therefore expect from me no Art or Address, No Sophistry or Prevarication in such a Cause; nor any thing more than Fact, Evidence and Law would justify.” When lawyers abandon fact, evidence, and law, and turn their craft toward suppression instead, they corrode the foundation of public trust on which the entire legal system depends. Judge Williams’ ruling is thus more than a procedural rebuke; it is a reminder that the law’s legitimacy survives only so long as truth remains discoverable.
Yet Judge Williams’ ruling alone cannot stop Meta’s institutional misdeeds. Meta has thrived in an environment of passivity, thanks to lawyers who refuse to report ethical misconduct, bar associations that decline to investigate despite court findings of probable cause and reams of evidence in the public domain, legislators who prefer theater to legislation, and influential business leaders from other sectors who remain silent bystanders as tech lawyers remake the legal system into one that rewards grift and exploitation rather than enterprise and innovation.
Impunity is not inevitable. State bar associations should open investigations tomorrow and revoke reciprocity for Meta attorneys licensed in other jurisdictions. The evidence is public: testimony under oath, a judge’s finding of probable cause, court documents that speak for themselves. Investigations for potential disbarment should begin with senior leaders like Jennifer Newstead and Joel Kaplan, Meta’s respective heads of legal and public policy, who bear responsibility under ethics rules for the attorneys working under them.
Junior lawyers at the company who may have witnessed this systematic obstruction and failed to report it should also be scrutinized, as the rules of professional responsibility generally require lawyers to report professional misconduct by another lawyer “that raises a substantial question as to that lawyer’s honesty, trustworthiness or fitness.” That none reported what they witnessed demands investigation at the very least. A stint in Meta’s legal department on a lawyer’s resume should be considered disqualifying by law firms and other future employers, making that lawyer unhireable if they cannot show that they spoke up about, or were otherwise unaware of, the suppression of evidence or harm. The fear of real consequences for playing a role in perpetrating such massive harm to American children should force Meta’s attorneys to either leave the company or to begin to stand up for what’s right.
Congress and state legislatures should also examine whether legal ethics rules require reform: whether attorney-client privilege has been extended too far when it shields corporations’ most questionable activities, and whether the benefits of encouraging candid consultations with clients justify the costs of facilitating cover-ups. Meta is not the only bad actor here, but it did get caught in the most egregious behavior. The unsealed court filings demonstrate similar behavior by Snap and by Google. OpenAI’s lawyers “accidentally” erased evidence compiled by The New York Times’s attorneys in its copyright lawsuit against the company. Judges have caught attorneys from Google and Apple withholding or destroying documents relevant to antitrust trials. As Stuart Taylor wrote of the tobacco lawyers in The Atlantic two decades ago, we should invite the profession’s leaders to “explain why lawyers should remain free to hide evidence of corporate wrongdoing, mislead courts, and mangle the truth.”
The machinery for accountability exists. State bars can act tomorrow to investigate and suspend Meta’s attorneys. Judges can continue piercing false privilege claims and issue sanctions against bad-faith advocates. Legislators can demand bar associations justify their continued self-regulation and reform the rules of attorney-client privilege for corporations. Law firms can fire clients that ask them to violate their broader duties to the country and its courts. Importantly, holding corrupt, unethical lawyers accountable for enabling harms to children does not mean that we sacrifice the foundational tenet of the American legal system that John Adams championed: that even those we may despise — the redcoat soldier then, the billionaire and his exploitative empires now — will remain entitled to counsel who will zealously defend them, provided they follow the rules that the rest of us do.
Meta’s attorneys have forgotten that the law’s legitimacy derives from the integrity of those who practice it. For that reason, accountability for failing to follow the rules of professional ethics cannot be left in the hands of those who would pervert the principles at the heart of their profession so casually. Instead, it’s up to the rest of us — those who still believe law should serve justice — to ensure Meta’s attorneys are reminded of their obligations through real, material, swift, individual consequences. They are the architects of a system that harms children at an industrial scale, and every Meta lawyer who participated or stood by silently shares the moral stain of what the company has perpetrated.
Holding Meta accountable includes holding its lawyers accountable; the harm the company inflicts on young people could not exist without lawyers willing to enable it. Defrocking those lawyers could be what ends the impunity for Mark Zuckerberg, his lieutenants, and his empire.
The truth will out for Meta’s lawyers — eventually — as it did for Big Tobacco’s, but the stakes reach beyond any single company’s malfeasance or any one attorney’s lack of conscience. Just as tobacco lawyers’ corruption poisoned public trust, Meta’s attorneys threaten to complete the transformation of law into a service available only to those wealthy enough to corrupt it and shameless enough to ignore the wreckage. Whether courts can function, and whether Americans believe law serves justice rather than a rigged system, depends on whether those in power repudiate this conduct decisively or continue, through their inaction, to tacitly endorse it.
...
Read the original on www.afterbabel.com »
I’ve been using Claude Code more and more recently. At some point I realized that rather than do something else until it finishes, I would constantly check on it to see if it was asking for yet another permission, which felt like it was missing the point of having an agent do stuff. So I wanted to use Claude Code with the --dangerously-skip-permissions flag.
If you haven’t used it, this flag does exactly what it says: it lets Claude Code do whatever it wants without asking permission first. No more “May I install this package?”, “Should I modify this config?”, “Can I delete these files?”
It just… does it.
Which is great for flow since I don’t have to worry that it stopped doing stuff just to ask a permission question.
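For reference, enabling it is just a flag on the normal invocation:

```bash
claude --dangerously-skip-permissions
```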
But also, you know, dangerous.
I like my filesystem intact, so the obvious solution is to not run this thing directly on my OS account.
First instinct: throw it in a Docker container. Containers are for isolation, right?
Except I want Claude to be able to build Docker images. And run containers. And maybe orchestrate some stuff.
So now you need Docker-in-Docker, which means --privileged mode, which defeats the entire purpose of sandboxing. That means trading “Claude might mess up my filesystem” for “Claude has root-level access to my container runtime.”
There’s also the nested networking weirdness, volume mounting permissions that make you question your life choices, and the general feeling that you’re fighting the tool instead of using it.
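(To make the --privileged point concrete, here is a sketch of the standard Docker-in-Docker pattern; it is illustrative, not the author’s setup:)

```bash
# Docker-in-Docker: the inner daemon refuses to run without --privileged,
# which grants the container sweeping access to the host kernel and devices,
# undoing the isolation that motivated the container in the first place.
docker run -d --name dind --privileged docker:dind

# The agent could then drive the nested daemon, e.g.:
docker exec dind docker run --rm alpine echo "hello from the inner Docker"
```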
The alternatives I considered:
* #yolo run it bare metal: no, no and no
* sandbox-runtime: more of an ACL approach; I want Claude to be able to do anything, precisely because it doesn’t have access to anything except the code
* firejail or similar: same problem as Docker-in-Docker
* cloud VM: costs money, has latency, need to upload my code somewhere
Then I remembered a project I’d used before Docker became all the rage: Vagrant.
If you weren’t around back then, Vagrant gives you proper VM isolation with a reproducible config file. It’s basically infrastructure as code for your local dev environment, with shared folders that make it feel local enough.
I hadn’t used VirtualBox in years, since Docker containers had covered all my requirements until now, so I grabbed the latest version (7.2.4) and got started.
First vagrant up and… the VM is pegging my CPU at 100%+ while completely idle.
I spent an hour turning off various VM features, tweaking settings, asking LLMs random combinations of “virtualbox high cpu idle”, you know, the usual.
Eventually I found this GitHub issue. VirtualBox 7.2.4 shipped with a regression that causes high CPU usage on idle guests. What are the odds.
Here’s what my simple Vagrantfile looks like:
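(The file itself didn’t survive the page extraction; below is a minimal sketch of a Vagrantfile matching the setup described. The box name, resource sizes, and install commands are my assumptions, not the author’s exact config.)

```ruby
# Minimal sketch, not the original file.
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-24.04"      # assumed base box

  config.vm.provider "virtualbox" do |vb|
    vb.cpus   = 4                           # assumed sizing
    vb.memory = 8192
  end

  # Default two-way shared folder: the project shows up at /vagrant.
  config.vm.synced_folder ".", "/vagrant"

  # Provision Docker, Node.js and Claude Code (assumed install steps).
  config.vm.provision "shell", inline: <<-SHELL
    curl -fsSL https://get.docker.com | sh
    usermod -aG docker vagrant
    curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
    apt-get install -y nodejs
    npm install -g @anthropic-ai/claude-code
  SHELL
end
```

From inside the VM it’s then vagrant ssh, cd /vagrant, and claude --dangerously-skip-permissions.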
First boot takes a few minutes to provision everything, and you need to sign in to Claude once for each project, but after that, vagrant up is quite fast.
Then, when you are done for the day:
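(The command was also lost in extraction; going by the standard Vagrant lifecycle, it’s presumably one of:)

```bash
vagrant halt       # shut the VM down, keeping its provisioned state
vagrant destroy -f # or wipe it entirely and re-provision next time
```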
So, what can Claude do with these newfound powers?
Since it’s running in a VM, I also gave it sudo access and told it that it has the power to do anything: install system packages, modify configs, create files, run Docker containers, whatever.
* manually started a webapp API and inspected it with curl requests
* installed a browser and manually inspected the app, then built end-to-end tests based on that
All things I’d be nervous about on my host machine, especially with the “just do it” flag enabled.
And now I feel Claude is much more effective since it has the extra context, it’s not relying on me to run the command, return the output or error message, and then iterate. It just does it by itself.
Claude Code isn’t exactly a resource hog, and the VM has plenty of headroom. The shared folder sync works fine, no lag or weirdness when files change. This is under Linux with VirtualBox, YMMV for other platforms.
What you’re protecting against:
* general “oops I didn’t mean Claude to do that”
What you’re NOT protecting against:
* deleting the actual project, since the file sync is two-way
* a malicious AI trying to escape the VM (VM escape vulnerabilities exist, but they’re rare and require deliberate exploitation)
* data exfiltration: the VM still has internet access, but besides the code there shouldn’t really be any data to exfiltrate
Threat model: I don’t trust myself to always catch what the agent is doing when I’m in the zone and just want stuff to work. This setup is about preventing accidents, not sophisticated attacks.
Since all my projects are in git I don’t care if it messes something up in the project. Plus you get the benefit of being able to use your regular git tooling/flows/whatever, without having to add credentials to the VM.
But if you need something stricter, config.vm.synced_folder also supports type: “rsync”, a one-time, one-way sync from the host machine to the VM it starts; after that, it’s on you to sync anything back.
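In Vagrantfile terms that’s a one-line change (a sketch using Vagrant’s documented rsync synced-folder type):

```ruby
# One-time host-to-guest copy at boot; re-run manually with `vagrant rsync`.
config.vm.synced_folder ".", "/vagrant", type: "rsync"
```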
This took a bit to get right, mostly because of the VirtualBox CPU bug. But now it’s frictionless. I can let Claude Code do whatever it wants without fear, and if something goes sideways, I just nuke the VM and start fresh.
The Vagrantfile is short and reproducible. Drop it in any project directory, vagrant up, and you’re sandboxed.
If you’re using Claude Code with the dangerous flag, I’d recommend something like this. Even if you’re careful about what you approve, it only takes one moment to mess things up.
...
Read the original on blog.emilburzo.com »
A high resolution drawing of the terrazzo layout. (Courtesy of US Bureau of Reclamation)
The western flank of the Hoover Dam holds a celestial map that marks the time of the dam’s creation based on the 25,772-year axial precession of the earth.
One of the two massive bronze cast sculptures that flank Hoover Dam’s Monument Plaza. (Photo by Alexander Rose)
On the western flank of the Hoover Dam stands a little-understood monument, commissioned by the US Bureau of Reclamation when construction of the dam began in 01931. The most noticeable parts of this corner of the dam, now known as Monument Plaza, are the massive winged bronze sculptures and central flagpole which are often photographed by visitors. The most amazing feature of this plaza, however, is under their feet as they take those pictures.
The plaza’s terrazzo floor is actually a celestial map that marks the time of the dam’s creation based on the 25,772-year axial precession of the earth.
Marking in the terrazzo floor of Monument Plaza showing the location of Vega, which will be our North Star in roughly 12,000 years. (Photo by Alexander Rose)
I was particularly interested in this monument because this axial precession is also the slowest cycle that we track in Long Now’s 10,000 Year Clock. Strangely, little to no documentation of this installation seemed to be available, except for a few vacation pictures on Flickr. So the last time I was in Las Vegas, I made a special trip out to Hoover Dam to see if I could learn more about this obscure 26,000-year monument.
I parked my rental car on the Nevada side of the dam on a day pushing 100 degrees. I quickly found Monument Plaza just opposite the visitor center where tours of the dam are offered. While the plaza is easy to find, it stands apart from all the main tours and stories about the dam. With the exception of the writing in the plaza floor itself, the only information I could find came from a speaker running on loop, broadcasting a basic description of the monument while visitors walked around the area. When I asked my tour guide about it, he suggested that there may be some historical documentation and directed me to Emme Woodward, the dam’s historian.
Left: Monument Plaza with access road on left. (Image courtesy of US Bureau of Reclamation). Right: Hansen laying out the axial precession. (Image courtesy of US Bureau of Reclamation)
I was able to get in touch with her after returning home. Once she sent me a few items, I began to see why the Bureau of Reclamation doesn’t explain very much about the monument’s background. The first thing she sent me was a description of the plaza by Oskar J. W. Hansen, the artist himself, which I thought would tell me everything I wanted to know. While parts of it were helpful, the artist’s statement of intention was also highly convoluted and opaque. An excerpt:
These [human] postures may be matched to their corresponding reflexes in terms of angle and degree much as one would join cams in a worm-gear drive. There is an angle for doubt, for sorrow, for hate, for joy, for contemplation, and for devotion. There are as many others as there are fleeting emotions within the brain of each individual who inhabits the Earth. Who knows not all these postures of the mind if he would but stop to think of them as usable factors for determining proclivities of character? It is a knowledge bred down to us through the past experience of the whole race of men.
It is pretty hard to imagine the US Bureau of Reclamation using this type of write-up to interpret the monument… and they don’t. And so there it stands, a 26,000-year clock of sorts, for all the world to see, and yet still mired in obscurity.
Markings on the floor showing that Thuban was the North Star for the ancient Egyptians at the time of the Great Pyramids. (Photo by Alexander Rose)
While I may never totally understand the inner motivations of the monument’s designer, I did want to understand it on a technical level. How did Hansen create a celestial clock face frozen in time that we can interpret and understand as the date of the dam’s completion? The earth’s axial precession is a rather obscure piece of astronomy, and our understanding of it through history has been spotty at best. That this major engineering feat was celebrated through this monument to the axial precession still held great interest to me, and I wanted to understand it better.
The giant bronze statues being craned into place. (Image courtesy of US Bureau of Reclamation)
I pressed for more documentation, and the historian sent me instructions for using the Bureau of Reclamation’s image archive site as well as some keywords to search for. The black and white images you see here come from this resource. Using the convoluted web site was a challenge, and at first I had difficulty finding any photos of the plaza before or during its construction. As I discovered, the problem was that I was searching with the term “Monument Plaza,” a name only given to it after its completion in 01936. In order to find images during its construction, I had to search for “Safety Island,” so named because at the time of the dam’s construction, it was an island in the road where workers could stand behind a berm to protect themselves from the never-ending onslaught of cement trucks.
Hansen next to the completed axial precession layout before the terrazzo was laid in. (Image courtesy of US Bureau of Reclamation)
I now had some historical text and photos, but I was still missing a complete diagram of the plaza that would allow me to really understand it. I contacted the historian again, and she obtained permission from her superiors to release the actual building plans. I suspect that they generally don’t like to release technical plans of the dam for security reasons, but it seems they deemed my request a low security risk as the monument is not part of the structure of the dam. The historian sent me a tube full of large blueprints and a CD of the same prints already scanned. With this in hand I was finally able to re-construct the technical intent of the plaza and how it works.
In order to understand how the plaza marks the date of the dam’s construction in the nearly 26,000-year cycle of the earth’s precession, it is worth explaining what exactly axial precession is. In the simplest terms, it is the earth “wobbling” on its tilted axis like a gyroscope — but very, very slowly. This wobbling effectively moves what we see as the center point that stars appear to revolve around each evening.
Long exposure of star trails depicting how all the stars appear to revolve around the earth’s celestial axis, which is currently pointed close to our current North Star — Polaris. Note that when I say that the stars of the night sky “appear to” rotate around Polaris, it is because this apparent rotation is only due to our vantage point on a rotating planet. (Image courtesy of NASA)
Presently, this center point lies very close to the conveniently bright star Polaris. The reason we have historically paid so much attention to this celestial center, or North Star, is because it is the star that stays put all through the course of the night. Having this one fixed point in the sky is the foundation of all celestial navigation.
Figure 1. The earth sits at roughly a 23 degree tilt. Axial precession is that tilt slowly wobbling around in a circle, changing what we perceive as the celestial pole or “North Star.” (Image from Wikipedia entry on Axial Precession, CC3.0.)
But that point near Polaris, which we call the North Star, is actually slowly moving and tracing a circle through the night sky. While Polaris is our North Star, Hansen’s terrazzo floor points out that the North Star of the ancient Egyptians, as they built the great pyramids, was Thuban. And in about 12,000 years, our North Star will be Vega. The workings of this precession are best explained with an animation, as in figure 1. Here you can see how the axis of the earth traces a circle in the sky over the course of 25,772 years.
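The arithmetic behind those dates is simple division:

    360° ÷ 25,772 years ≈ 50.3 arcseconds per year, or about 1° every 72 years

which is why the drift is invisible within a human lifetime, and why the roughly 12,000 years until Vega takes its turn amounts to just under half of the full circle.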
Unfortunately it is a bit difficult to see how this all works in the inlaid floor at Monument Plaza. The view that you really want to have of the plaza is directly from above. You would need a crane to get this view of the real thing, but by using the original technical drawing as an underlay I was able to mark up a diagram which hopefully clarifies it (Fig. 2).
Figure 2. Description overlaid on the original technical drawing for the layout of terrazzo floor. (Underlay courtesy of US Bureau of Reclamation, color notations by Alexander Rose.)
In this diagram, you can see that the center of the circle traced by the axial precession is actually the massive flag pole in the center of the plaza. This axial circle is prominently marked around the pole, and the angle of Polaris was depicted as precisely as possible to show where it would have been on the date of the dam’s opening. Hansen used the rest of the plaza floor to show the location of the planets visible that evening, and many of the bright stars that appear in the night sky at that location.
By combining planet locations with the angle of precession, we are able to pinpoint the time of the dam’s completion down to within a day. We are now designing a similar system — though with moving parts — in the dials of the 10,000 Year Clock. It is likely that at least major portions of the Hoover Dam will still be in place hundreds of thousands of years from now. Hopefully the Clock will still be ticking and Hansen’s terrazzo floor will still be there, even if it continues to baffle visitors.
A drawing of the terrazzo layout. Click here for a high resolution version. (Courtesy of US Bureau of Reclamation)
I would like to thank Emme Woodward of the US Bureau of Reclamation for all her help in finding the original images and plans of Monument Plaza. If you have further interest in reading Hansen’s original writings about the plaza or in seeing the plans, I have uploaded all the scans to the Internet Archive.
...
Read the original on longnow.org »