10 interesting stories served every morning and every evening.
👋 Hello! If you print this page, you’ll get a nifty calendar that displays all of the year’s dates on a single page. It will automatically fit on a single sheet of paper of any size. For best results, adjust your print settings to landscape orientation and disable the header and footer.
Take in the year all at once. Fold it up and carry it with you. Jot down your notes on it. Plan things out and observe the passage of time. Above all else, be kind to others.
Looking for 2026? Here you go!
...
Read the original on neatnik.net »
A few days ago, I wrote a controversial blog post about Pixelfed hurting the Fediverse. I defended the theory that, in a communication network, you hurt trust in the whole network if you create clients that arbitrarily drop messages, something that Pixelfed is doing deliberately. It gathered a lot of reactions.
When I originally wrote this post, nearly one year ago, I thought that either I was missing something or Dansup, Pixelfed’s creator, was missing it. We could not both be right. But as the reactions piled in on the Fediverse, I realised that such irreconcilable opinions do not arise only from ignorance or oversight. It usually means that both parties have vastly different assumptions about the world. They don’t live in the same world.
I started to see a pattern in the two kinds of reactions to my blog post.
There were people like me, often above 40, who like sending emails and browsing old-fashioned websites. We think of ActivityPub as a “communication protocol” between humans. As such, anything that implies losing messages without feedback is the worst thing that could happen. Not losing messages is the top priority of a communication protocol.
And then there are people like Dansup, who believe that ActivityPub is a content consumption protocol. It’s there for entertainment. You create as many accounts as the kinds of media you want to consume. Dansup himself is communicating through a Mastodon account, not a Pixelfed one. Many Pixelfed users also have a Mastodon account, and they never questioned that. They actually want multiple accounts for different use cases.
On the Fediverse threads, nearly all the people defending the Pixelfed philosophy posted from Mastodon accounts. They usually boasted about having both a Mastodon and a Pixelfed account.
To me, the very goal of interoperability is not to force you into creating multiple accounts. Big Monopolies have managed to convince people that they need one account on each platform. This was done, on purpose, for purely unethical reasons in order to keep users captive.
That brainwashing/marketing is so deeply entrenched that most people cannot see an alternative anymore. It looks like a natural law: you need an account on a platform to communicate with someone on that platform. That also explains why most politicians want to “regulate” Facebook or X. They think it is impossible not to be on those platforms. They believe those platforms are “public spaces” while they truly are “private spaces trying to destroy all other public spaces in order to get a monopoly.”
People flock to the Fediverse with this philosophy of “one platform, one account”, which makes no sense if you truly want to create a federated communication protocol like email or XMPP.
But Manuel Moreale cracked it for me: the Fediverse is not a communication network. ActivityPub is not a communication protocol. The spec says it: ActivityPub is a protocol to build a “social platform” whose goal is “to deliver content.”
But aren’t social networks also communication networks? That’s what I thought. That’s how they historically were marketed. That’s what we all believed during the “Arab Spring.”
But that was a lie. Communication networks are not profitable. Social networks are entertainment platforms, media consumption protocols. Historically, they disguised themselves as communication platforms to attract users and keep them captive.
The point was never to avoid missing a message sent from a fellow human being. The point was always to fill your time with “content.”
We dreamed of decentralised social networks as “email 2.0.” They truly are “television 2.0.”
They are entertainment platforms that delegate media creation to the users themselves, the same way Uber replaced taxis by having people drive others in their own cars.
But what was created as “ride-sharing” was in fact a way to 1) destroy competition and 2) make a shittier service while people producing the work were paid less and lost labour rights. It was never about the social!
My own interpretation is that social media users don’t mind losing messages because they were raised on algorithmic platforms that did that all the time. They don’t see the point in trusting a platform because they never experienced a trusted means of communication.
Now that I write it, it may also explain why instant messaging became the dominant communication medium: because if you don’t receive an immediate answer, you don’t even trust the recipient to have received your messages. In fact, even if the message was received, you don’t even trust the recipient’s attention span to remember the message.
Multiple studies have confirmed that we don’t remember the vast majority of what we see while doomscrolling. While the “view” was registered to increase statistics, we don’t have the slightest memory of most of that content, even after only a few seconds. It thus makes sense not to consider social media as a means of communication at all.
There’s no need for a reliable communication protocol if we assume that human brains are not reliable enough to handle asynchronous messages.
It’s not Dansup who is missing something. It is me who is ill-adapted to the current society. I understand now that Pixelfed was only following some design decisions and protocol abuses fathered by Mastodon. Pixelfed was my own “gotcha” moment because I never understood Instagram in the first place, and, in my eyes, Pixelfed was no better. But if you take that route, Mastodon is no better than Twitter.
Many reactions rightly pointed out that other Fediverse tools such as PeerTube, WriteFreely, or Mobilizon just don’t display messages at all.
I didn’t consider it a big problem because they never pretended to do it in the first place. Nobody uses those tools to follow others. There’s no expectation. Those platforms are “publish only.” But this is still a big flaw in the Fediverse! Someone could, using autocompletion, send a message pinging your PeerTube address and you will never see it. Try autocompleting “@ploum” from your Mastodon account and guess which suggestion is the only one that will send me a valid notification!
On a more positive note, I should give credit to Dansup for announcing that Pixelfed will soon allow people to optionally “not drop” text messages.
I cling to asynchronous reliable communications, but those are disappearing. I use email a lot because I see it as a true means of communication: reliable, asynchronous, decentralised, standardised, manageable offline with my own tools. But many people, even barely younger than me, tell me that email is “too formal” or “for old people” or “even worse than social network feeds.”
And they are probably right. I like it because I’ve learned to use it. I apply a strong inbox 0 methodology. If I don’t reply or act on your email, it is because I decided not to. I’m actively keeping my inbox clean by sharing only disposable email addresses that I disable once they start to be spammed.
But for most people, their email inbox is simply one more feed full of bad advertising. They have a 4- or 5-digit unread count. They scroll through their inbox like they do through their social media feeds.
The main problem with reliable communication protocols? It is a mostly solved problem. Build simple websites, read RSS feeds, write emails. Use IRC and XMPP if you truly want real-time communication. They work, and they work great.
And because of that, they are boring.
Communication protocols are boring. They don’t give you that well-studied random hit of dopamine. They don’t make you addicted.
They don’t make you addicted, which means they are not hugely profitable and thus are not advertised. They are not new. They are not as shiny as a new app or a new random chatbot.
The problem with communication protocols was never the protocol part. It’s the communication part. A few sad humans never wanted to communicate in the first place and managed to become billionaires by convincing the rest of mankind that being entertained is better than communicating with other humans.
We believe that a communication network must reach a critical mass to be really useful. People stay on Facebook to “stay in touch with the majority.” I don’t believe that lie anymore. I’m falling back to good old mailing lists. I’m reading the Web and Gemini while offline through Offpunk. I also handle my emails asynchronously while offline.
I may be part of an endangered species.
It doesn’t matter. I made peace with the fact that I will never get in touch with everyone. As long as there are people posting on their gemlogs or blogs with RSS feeds, as long as there are people willing to read my emails without automatically summarising them, there will be a place for those who want to simply communicate. A protected reserve.
You are welcome to join!
I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
...
Read the original on ploum.net »
For many years now, JavaScript has been the workhorse of the web. If you wanted to do something that couldn’t be done with just HTML and CSS, you could usually find a way to do it with JS.
And that is great! JS has helped push user experiences forward, and honestly helped push HTML and CSS forward!
But as time marches on, and the HTML and CSS methods gain traction, we need to start replacing the old JS methods that feel so comfy with new methods that require less JS.
Nothing against JS, but it has better things to do than set up and manage your accordions or offscreen navigation menus… Plus, JS needs to be downloaded, decompressed, evaluated, processed, and then often consumes memory to monitor and maintain features. If we can hand off any JS functionality to native HTML or CSS, then users can download less stuff, and the remaining JS can pay attention to more important tasks that HTML and CSS can’t handle (yet).
Below are a few examples; any you care to add?
The details and summary HTML elements provide an HTML-only replacement to the typical JS accordion:
CodePen: Accordion / Expanding Content
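The CodePen isn’t embedded here, but a minimal sketch of the markup looks like this:

<details>
  <summary>Section title</summary>
  <p>Panel content that expands and collapses with no JS at all.</p>
</details>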
Add an open attribute to set the default appearance as “open”:
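For instance:

<details open>
  <summary>Open by default</summary>
  <p>This panel is visible on page load.</p>
</details>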
Use the same name attribute on all related details (like radio buttons) to restrict only one open panel at a time:
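A sketch of an exclusive accordion (the group name “faq” is just illustrative):

<details name="faq">
  <summary>First question</summary>
  <p>Opening this panel closes the other one.</p>
</details>
<details name="faq">
  <summary>Second question</summary>
  <p>Only one panel in the group can be open at a time.</p>
</details>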
You can also customize the appearance with CSS and trigger the open/close via JS.
Learn more about the details element in the previously-published “For the Love of <details>”.
Combining the HTML input and datalist elements can create a dropdown of options that autofilters as you type:
CodePen: Input with Autofilter Suggestions Dropdown
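The demo isn’t embedded here, but the basic pattern is an input whose list attribute points at a datalist of options (ids and values are illustrative):

<label for="city">Choose a city</label>
<input id="city" type="text" list="cities">
<datalist id="cities">
  <option value="Amsterdam">
  <option value="Berlin">
  <option value="Brussels">
</datalist>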
You can also use other input types:
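For example, pairing a range input with a datalist suggests tick values (see the support note below):

<input type="range" list="ticks" min="0" max="100">
<datalist id="ticks">
  <option value="0">
  <option value="50">
  <option value="100">
</datalist>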
Note that, at the time of this writing, Firefox was limited to only text-based input types, so no date, time, range or color for now… :-(
Also note that, at the time of this writing, there are limitations on mobile, and accessibility concerns.
The popover and popovertarget attributes can replace the traditional JS-driven modal/popover/overlay:
CodePen: Modal / Popover
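Again, the demo isn’t embedded here; a minimal sketch of the pattern (the id is illustrative):

<button popovertarget="info">Open popover</button>
<div id="info" popover>
  <p>Popover content, no JS required.</p>
  <button popovertarget="info" popovertargetaction="hide">Close</button>
</div>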
An auto popover (default) can be “light dismissed” (clicking outside of it or hitting the esc key). Opening an auto automatically closes any other auto popovers that were open. Clicking the button a second time will close the one it opened.
A hint popover can also be “light dismissed”. It does not close other hint popovers when opened. Clicking the button a second time will close the one it opened.
Note that, at the time of this writing, Firefox and all iOS varieties do not support hint popovers.
A manual popover can not be “light dismissed”. It does not close other manual popovers when opened. Clicking the button a second time will close the one it opened.
Learn more about opening and closing dialogs and popovers in the previously-published “Controlling dialogs and popovers with the Invoker Commands API”.
The above Modal / Popover functionality can also be used to create an offscreen navigation that requires no JS:
#menu {
  margin: 0;
  height: 100vh;
  translate: -100vw;
}

#menu:popover-open {
  translate: 0;
}
I use a nav element to give it semantic value, but you can use any HTML element (div, section, aside, etc.).
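A minimal sketch of the markup, matching the #menu selector above:

<button popovertarget="menu">Menu</button>
<nav id="menu" popover>
  <a href="/">Home</a>
  <a href="/about">About</a>
</nav>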
A popover defaults to position: fixed per the User Agent Stylesheet, and is simply pushed off screen when closed, and pulled back onscreen when it is open. Note that margin: 0 is required if you want to override the User Agent center-alignment.
Clicking outside of the above menu closes it. You can force the panel to stay open, requiring a manual/explicit close, by using popover="manual".
You can also add a backdrop pseudo element and style it as you wish:
#menu::backdrop {
  background: rgb(190 190 190 / 75%);
}
While we all love the power and flexibility JS provides, we should also respect it, and our users, by limiting its use to what it needs to do.
There is so much more that has changed in recent years, including a ton of options that CSS now covers. If you are now hungry for more, have a look at [my longer article that covers those as well](https://aarontgrogg.com/blog/2023/05/31/replace-js-with-no-js-or-lo-js-options/).
...
Read the original on www.htmhell.dev »
Say no to Palantir in the NHS
NHS England is rolling out software to run our health records from Palantir — a US spy-tech firm that has supported mass deportation in the US and enabled genocide in Gaza.
One of Palantir’s founders is also openly against the NHS. Peter Thiel claimed it “makes people sick” and said that the British people love the NHS because we’re suffering from Stockholm syndrome.
We can’t let a company like this take control of our healthcare system.
But we can fight back right now. We’ve set up a simple tool so you can find out if your local NHS trust has started using the software, and then send an email demanding they say no to Palantir.
With the government putting NHS trusts under pressure to adopt the software, we need to act right now. If you want to keep Palantir out of our NHS, send an email to your local trust and Wes Streeting, secretary of state for health.
...
Read the original on notopalantir.goodlawproject.org »
“You’re taking on a giant. What gives you the audacity?”
On November 5th, 2025, Groq CEO Jonathan Ross was asked why he was even bothering to challenge Nvidia. He didn’t blink:
“I think that was a polite way to ask why in the world are we competing with Nvidia, so we’re not. Competition is a waste of money; competition fundamentally means you are taking something someone else is doing and trying to copy it. You’re wasting R&D dollars trying to do the exact same thing they’ve done instead of using them to differentiate.”
49 days later, Nvidia paid $20 billion for Groq’s assets and hired Ross along with his entire executive team.
Except this wasn’t actually an acquisition, at least not in the traditional sense. Nvidia paid $20 billion for Groq’s IP and people, but explicitly did NOT buy the company. Jensen Huang’s statement was surgical: “While we are adding talented employees to our ranks and licensing Groq’s IP, we are not acquiring Groq as a company.”
That phrasing is the entire story. Because what Nvidia carved out of the deal tells you everything about why this happened.
Forget the AI doomer takes about a bubble forming; let’s look into the actual reasons.
* All of Groq’s intellectual property and patents
GroqCloud continues as an independent company under CFO Simon Edwards. This is Nvidia’s largest acquisition ever (previous record was Mellanox at $7B in 2019), and they structured it to leave the actual operating business behind. That doesn’t happen by accident.
To understand why Nvidia paid anything for Groq, you need to understand the architectural bet Ross made when he left Google.
CPUs and GPUs are built around external DRAM/HBM (High Bandwidth Memory). Every compute operation requires shuttling data between the processor and off-chip memory. This works fine for general-purpose computing, but for inference workloads, that constant round-trip creates latency and energy overhead. Since GPUs evolved from graphics rendering, they’re optimized for parallel training workloads and not SEQUENTIAL inference.
TPUs (Google’s Tensor Processing Units) reduce this overhead by passing data directly between processors like an assembly line. But they still rely on external memory (HBM) to store the model. This means the calculations are predictable, but the speed of fetching data from memory is not.
LPUs (Groq’s Language Processing Units) take a different approach: even larger on-chip SRAM instead of external DRAM/HBM. The entire model (for models that fit) lives in SRAM with 80 TB/s of bandwidth and 230 MB capacity per chip. No off-chip memory bottleneck or dynamic scheduling. The architecture is entirely deterministic from compilation to execution. You know exactly what happens at each cycle on each chip at each moment.
And 10x better energy efficiency because you’re not constantly moving data across a memory bus.
Compare this to SOTA model tokens/sec throughput on GPU inference
Serious trade-off though: only 14 GB of SRAM per rack means you can’t run Llama 3.1 405B. And LPUs can’t train models at all. This is an inference-only architecture with limited model size support.
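Back-of-the-envelope on the sizes involved (8-bit weights is my assumption, and this ignores KV cache and activations):

params = 405e9            # Llama 3.1 405B parameters
bytes_per_param = 1       # assuming 8-bit quantized weights
sram_per_rack_gb = 14

weights_gb = params * bytes_per_param / 1e9        # ~405 GB of weights alone
racks_for_weights = weights_gb / sram_per_rack_gb  # ~29 racks just to hold them
print(f"{weights_gb:.0f} GB of weights, ~{racks_for_weights:.0f} racks")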
But here’s what makes this interesting: if DRAM/HBM prices continue climbing (DRAM has tripled in a year; I should’ve gone all in on DRAM at the start of the year; I’m done with indexes), and if inference becomes the dominant AI workload (which it is), SRAM-based architectures become economically compelling despite the size limitations. AND production AI applications seem to be moving from 405B-1T models to running 7B-70B models that need low latency and high throughput.
Groq raised $750 million in September 2025 at a post-money valuation of $6.9 billion. Three months later on Xmas Eve, Nvidia paid $20 billion through a “non-exclusive licensing agreement” that acquired all IP and talent while explicitly NOT buying the company.
Just crunched the numbers, that’s a $13.1 billion premium (3x the September valuation) for a company valued at 40x target revenue (double Anthropic’s recent 20x multiple) with slashed projections (The Information reported Groq cut over $1B from 2025 revenue forecasts).
The structure is the story. Traditional M&A (mergers and acquisitions for those blessed not to have dealt with this crowd) would trigger:
Non-exclusive licensing bypasses all of it. No acquisition means no CFIUS review. “Non-exclusive” means no monopoly concerns (anyone can license Groq’s tech). No shareholder votes, minimal disclosure.
But in practice: Nvidia gets the IP (can integrate before anyone else), the talent (Ross + team can’t work for competitors now), and the elimination of GroqCloud (will likely die without IP or leadership). The “non-exclusive” label is legal fiction. When you acquire all the IP and hire everyone who knows how to use it, exclusivity doesn’t matter.
The question isn’t just why Nvidia paid $13.1B more than market rate for technology they could build themselves (they have the PDK, volume, talent, infrastructure, and cash). The question is why they structured it this way.
Where the premium was spent:
Regulatory arbitrage: Non-exclusive licensing avoids years of antitrust review. Structure the deal as IP licensing + talent acquisition, and regulators have no grounds to block it. This alone is worth billions in time and certainty.
Neutralizing Meta/Llama: The April 2025 partnership gave Groq distribution to millions of developers. If Llama + Groq became the default open-source inference stack, Nvidia’s ecosystem gets commoditized. Kill the partnership before it scales.
Eliminating GroqCloud without inheriting Saudi contracts: Nvidia has invested in other cloud providers (CoreWeave, brev.dev). GroqCloud was a competitor. Traditional acquisition would mean inheriting $1.5B worth of contracts to build AI infrastructure for Saudi Arabia, triggering CFIUS scrutiny. The carve out kills GroqCloud while avoiding geopolitical entanglement.
Political access: Chamath makes ~$2B (Social Capital’s ~10% stake). Sacks looks good (major AI deal under his watch as AI Czar). Nvidia gets favorable regulatory treatment from the Trump administration. Timing it for Christmas Eve ensures minimal media scrutiny of these connections.
Blocking Google/Amazon/Microsoft from partnering with Groq: Both are developing custom AI chips (Trainium, Maia). If either had hired Ross + team or licensed Groq’s tech, Nvidia’s inference dominance faces a real challenger. If Google had acquired Groq and brought Ross back, they’d have the original TPU inventor plus LPU IP.
Chiplet integration for future products: Nvidia might integrate LPU as a chiplet alongside GPUs in Blackwell or future architectures. Having Ross’s team makes that possible. You can’t integrate IP you don’t own, and you can’t build it without the people who invented it.
That’s how business works when regulation hasn’t caught up to structural innovation. Nvidia paid $6.9B for technology and $13.1B to solve everything else using a deal structure that traditional antitrust can’t touch.
In February 2025, Saudi Arabia committed $1.5 billion to expand Groq’s Dammam data center. The publicly stated goal was supporting SDAIA’s ALLaM, Saudi Arabia’s Arabic large language model. The actual goal was Vision 2030: positioning the Kingdom as an AI superpower. (Wait so its not just about influencer boxing?)
Groq built the region’s largest inference cluster in eight days in December 2024. From that Dammam facility, GroqCloud serves “nearly four billion people regionally adjacent to the KSA.” This is critical AI infrastructure, funded by the Public Investment Fund, processing inference workloads at national scale.
According to Ross in the Series E announcement, Groq powers Humain’s services including the Humain chat product and supported OpenAI’s GPT-OSS model release in Saudi Arabia. Groq operates 13 facilities across the US, Canada, Europe, and the Middle East. Ross noted that capacity expanded more than 10% in the month before the funding announcement and all of that capacity was already in use. Customers were asking for more capacity than Groq could satisfy.
That creates a CFIUS (Committee on Foreign Investment in the United States) (I know, I know, I took a while to unabbreviate it) problem. A U.S. chip company, venture-backed by American investors, building sovereign AI capability for Saudi Arabia. If Nvidia had acquired GroqCloud outright, they would inherit those contracts and the regulatory scrutiny that comes with them. Foreign investment reviews, export control questions, congressional inquiries about why an American company is providing cutting-edge AI to a Middle Eastern monarchy.
By carving out GroqCloud, Nvidia gets the technology and the talent without the geopolitical mess. The Saudi contracts stay with Edwards and the independent GroqCloud entity. Clean separation. No CFIUS entanglement.
The Financial Times reported that “despite the loss of much of its leadership team, Groq said it will continue to operate as an independent company.” That’s corporate speak for: executives and VCs are cashing out while regular employees watch the company they built get hollowed out.
Here’s how the $20B probably breaks down (we’ll never know the exact numbers since Groq is private and this isn’t a traditional acquisition):
VCs (Chamath, BlackRock, Neuberger Berman, Deutsche Telekom, etc.): They own equity in Groq Inc. Depending on how the deal is structured, they get paid based on their ownership percentage. Social Capital’s ~10% stake (after dilution) is worth $1.6-2.4B. BlackRock, Neuberger Berman, and other Series E investors get their cut. They’re protected regardless of structure.
* Retention packages from Nvidia (likely massive given the $20B deal size)
* Their existing Groq equity gets paid out at the $20B valuation
Jensen Huang’s email to Nvidia staff (obtained by the FT) said they’re “adding talented employees to our ranks.” When you’re talent important enough to be mentioned in a $20B deal, you’re getting paid.
Who might get paid (depending on deal structure):
Regular Groq employees with vested equity: This is where it gets murky. There are three possible scenarios:
Scenario 1: The IP licensing fee goes to Groq Inc.
If the $20B (or a significant portion) is structured as a licensing fee paid to Groq Inc. for the IP rights, that money gets distributed to all shareholders based on ownership percentage. Employees with vested stock options or RSUs get their pro-rata share. This is the best case for employees.
Example: Engineer with 0.01% fully vested equity gets $2M ($20B × 0.01%). Not bad for an engineer who’s been there since 2018-2020.
Scenario 2: Most of the $20B goes to retention packages
If the deal is structured so that the bulk of the money goes to retention/hiring packages for Ross, Madra, and the senior team joining Nvidia, with a smaller licensing fee to Groq Inc., employees get less. Maybe the split is $15B retention, $5B licensing fee. Now that same engineer with 0.01% gets $500K instead of $2M.
Scenario 3: The IP licensing is separate from talent acquisition
Nvidia pays Groq Inc. for the IP (say $5-7B, roughly the Sept 2024 valuation), and separately pays Ross + team retention packages directly. Regular employees get their share of the IP licensing fee only. That same engineer might get $500-700K.
The critical question: Is the $20B figure the total cost to Nvidia (including retention packages), or is it just the IP licensing fee? If it’s total cost and most goes to retention, regular employees get scammed.
Who definitely gets done over:
Employees staying at GroqCloud: These are the people who:
* Weren’t important enough to be hired by Nvidia
* Have equity tied to GroqCloud’s future value
* Just watched their CEO, President, and entire engineering leadership leave
* Are now working for a company with no IP rights, no technical leadership, and no future
Their equity is worthless. GroqCloud will wind down over 12-18 months. They’ll either get laid off or jump ship to wherever they can land. They built the LPU architecture, contributed to the compiler stack, supported the infrastructure, and got nothing while Chamath made $2B.
This gets messier when you look at who was involved. Chamath Palihapitiya, through Social Capital, led Groq’s initial $10 million investment in 2017 at a $25 million pre-money valuation. Social Capital secured 28.57% of the company and a board seat for Chamath.
David Sacks, Chamath’s co-host on the All-In podcast, became Trump’s AI and Crypto Czar in late 2024. In July 2025, Sacks co-authored “America’s AI Action Plan,” a White House strategy document positioning AI as a matter of national security. The plan called for exporting “the full AI technology stack to all countries willing to join America’s AI alliance” while preventing adversarial nations from building independent AI capabilities.
Two months later at the All-In Summit in September 2025, Tareq Amin (CEO of HUMAIN, Saudi Arabia’s state-backed AI company) presented Groq as “the American AI stack in action.” This was seven months after the $1.5B Saudi deal.
Sunny Madra, Groq’s President and COO, was actively promoting the All-In narrative during this period. He appeared on the All-In podcast in March 2024 to provide a “Groq update” and joined Sacks on “This Week in Startups” in November 2023. When Anthropic raised AI safety regulation concerns in October 2025, Madra publicly sided with Sacks, suggesting “one company is causing chaos for the entire industry” and echoing Sacks’s accusation that Anthropic was engaged in “regulatory capture.”
So you have Sacks pushing an “America First” AI policy from the White House while Chamath’s portfolio company (where Madra is President) is building AI infrastructure for Saudi Arabia. Then Groq gets presented at the All-In Summit as an example of American AI leadership. Three months later, announced on Christmas Eve when media coverage is minimal, Nvidia pays $20 billion to clean up the geopolitical contradiction.
Chamath walks away with $1.6B to $2.4B. Sacks gets a major AI deal under his watch. Nvidia gets favorable regulatory treatment and eliminates multiple problems. The timing ensures minimal scrutiny of these connections.
After dilution from raising $1.7 billion across Series C, D, and E rounds, Social Capital’s stake in Groq was probably 8-12% by the time of the Nvidia deal. At a $20 billion exit, that’s $1.6 billion to $2.4 billion.
Chamath after using you as exit liquidity and bankrolling it into a 200x win for himself
Let’s look at the sh*t he dumped on retail with his abysmal SPAC track record:
* IPOB (Opendoor): -62.9% (was -95% before a brief spike)
Chamath personally dumped $213 million of Virgin Galactic stock before it crashed, using PIPE structures that let him exit while retail investors stayed locked up. In October 2025, when launching a new SPAC, he posted a warning telling retail investors not to buy it: “these vehicles are not ideal for most retail investors.”
The Groq bet was classic venture capital: concentrated bet on an exceptional founder (Jonathan Ross, the engineer who invented Google’s TPU) building non-obvious technology. Social Capital’s 2017 internal memo projected a “High” exit scenario of $3.2 billion. They landed within range despite dilution.
But retail investors never got access to deals like Groq. They got Virgin Galactic. LOL.
Nvidia paid $20 billion for a company valued at $6.9 billion three months earlier, structured the deal to avoid traditional M&A oversight, killed the cloud business without inheriting Saudi contracts, and enriched the exact people (Chamath, Sacks) who spent the last year promoting “American AI leadership” while cutting deals with foreign governments. The employees who built the technology either got hired by Nvidia or have been utterly shafted.
This was fun to look into. If you have any questions or comments, shout me -> ossamachaib.cs@gmail.com.
...
Read the original on ossa-ma.github.io »
...
Read the original on substack.com »
The Trump administration says it plans to dismantle the National Center for Atmospheric Research in Colorado, which is the nation’s premier atmospheric science center. The center was founded in 1960 and has facilitated generations of breakthroughs in climate and weather science. William Brangham discussed the move with climate scientist Kim Cobb and meteorologist Matthew Cappucci.
Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.
The Trump administration says it plans to dismantle the National Center for Atmospheric Research in Colorado, which is the nation’s premier atmospheric science center.
In announcing the closing, Budget Director Russell Vought called the center — quote — “one of the largest sources of climate alarmism in the country.”
NCAR, as the center is known, was founded in 1960 and has facilitated generations of breakthroughs in climate and weather science. The announcement has drawn outcry from meteorologists and climate scientists across the country.
Earlier this week, I spoke with two of them who are very familiar with the center’s work. Brown University’s Kim Cobb is a climate scientist and director of the Institute at Brown for Environment and Society, and Matthew Cappucci is senior meteorologist at MyRadar.
Thank you both so much for being here.
Kim Cobb, to you first. What is NCAR and why, as a climate scientist, is it so important and seemingly precious to this community?
Thanks for having me, William.
NCAR is a really historic institution in our field. It, of course, dates back decades now. And, over that time, it has really woven itself into the fabric of both weather and climate science across the country and around the world.
We’re talking about unique, one-of-a-kind facilities like supercomputers, tricked-out airplanes, and most importantly, a staff of over 800 people who are at the top of their game in innovating in weather and climate science for public good, putting out data that is on every single climate scientist’s computer around the country, if not around the world, and a nexus of collaboration as well that is important training grounds for the next generation of leaders.
And, Matthew Cappucci, as a meteorologist, how important is NCAR to your profession?
I mean, NCAR is really the birthplace of all the tools we use, both technologically.
And, really, the discoveries that are made at NCAR are crucial to our understanding of how the atmosphere works. That’s where we first learned about the MJO, the Madden-Julian Oscillation, one of the biggest overturning circulations in the atmosphere that governs how so many things, for example, hurricanes behave.
It’s where we first created a special product used by airplanes when they’re landing to avoid wind shear, disruptive changing winds with height that could cause plane crashes. They invented a system there to prevent that. That’s where dropsondes were invented, those little probes that are dropped out of the belly of airplanes in the middle of hurricanes to figure out how strong the hurricanes are.
So, so many different tools and discoveries have come from NCAR. In addition, the modeling is incredibly important. And if we’re sort of putting the brakes on that, I worry about the implications for weather forecasting.
Kim Cobb, well, what more on that? Well, let’s say NCAR is broken up. What are the impacts, both for the scientific community and for Americans who benefit from its research?
I think what’s really important to remember is that NCAR focuses on the entire continuum from weather that ranges over hours and days that’s designed to aid the forecasts that keep people safe and protect infrastructure and our economy.
But they also go all the way out to looking decades into the future and really understanding that most important intersection right now, how weather is responding to ongoing climate change. It’s these kinds of questions at the very forefront of our field that they’re focused on right now.
And these are innovations that are going to reap absolutely untold dividends through time. So, by breaking this — these up and its component parts, if you will, first of all, the administration has made clear that it’s the climate portion of the portfolio which they are taking squarely in aim.
And, of course, that is the portion that is right now so important to invest in as we seek to understand more about the coming threats and impacts of ongoing climate change, 2025, of course, wrapping up to be tied for the second warmest year ever.
And, Matthew, the White House has argued that NCAR and its undertakings and its work is somehow contaminated with woke ideology or climate alarmism.
Is there research or data that is coming out of NCAR that is politicized or ideological in any way?
Truth be told, no. The atmosphere does not possess a voter registration card. I think it’s important to remember that scientists do science. And, really, the only people doing the politicization are, for the most part, politicians and the general public and the media.
It worries me, though, that this fits into an overall pattern of the demonization of both science and academia. The idea that we’re trying to shut down science that produces results we don’t like, it’s a very worrisome trend. And we have seen this other times in history, and it never really ended well.
And I’m just very concerned about the tone that’s being taken that simply because one political party or even one political person doesn’t like the fact that we’re learning about the atmosphere.
I mean, Kim, as Matthew is saying, we know that President Trump has called climate change a hoax. He believes it’s nothing that we need to focus on. He has directed so much of the administration to move away from climate research, from renewable energy, doubling down on fossil fuels.
I’m just curious as to what — are you worried similarly about the future of climate research if we continue to chip away at these foundational projects?
Well, obviously, as Matthew said, we turn our backs on science to great risk and peril.
And I think most Americans get it right now. In the headlines every year are these horrific climate-fueled disasters that have taken such a toll on communities and our national economy, to the tune of billions of dollars per year. So this is not a controversial subject in that regard. People do want the best available science information to protect themselves.
And that’s exactly what NCAR and its scientists are focused on. And I think the Trump administration is just not understanding how long an investment has been made to get us to this point and the many dividends that we have already reached and that, once you break something like this, it’s really going to be hard to put it back together again. That’s my concern.
Matthew, the OMB director in his announcement said they will do a systematic review of the work that NCAR does, and critical weather information and research will just get sent somewhere else or done in another location.
How confident are you that a breakup of NCAR could still keep some of the critical elements intact?
I think so many times this administration has historically put the cart before the horse. And what I mean by that is, it seems like just a year ago, they were cutting a huge chunk of National Weather Service forecasters.
And then, after they were laid off, the government realized, oh, wait, we kind of need them, and brought them back. This seems like another really shortsighted decision made by the administration without realizing or fully researching the potential implications of what could happen.
And I also just sort of think this strikes the wrong tone for just about everybody. Weather and climate affect everybody. And, realistically, you’re much more likely to be hit by a tornado or hurricane in a red state. And so I’m surprised by sort of Trump and his administration doing something that could potentially have negative impacts on his own core audience and following.
All right, that is Matthew Cappucci and Kim Cobb.
Thank you both so much for being here. We really appreciate it.
...
Read the original on www.pbs.org »
Rainbow Six Siege is currently facing a major crisis, with mounting evidence pointing to a large scale hack or exploit, even as Ubisoft continues to describe the situation as a server incident. The disruption has now escalated into full service outages across all platforms, according to Ubisoft’s own service status page.
Players across PC and console are being urged by the community to stay offline, as reports continue to surface of accounts receiving billions of in-game credits and rare, developer-only skins, and experiencing random bans.
BREAKING: Ubisoft Rainbow Six Siege servers have been breached.
Players are reporting massive amounts of R6 Credits, Renown, Alpha Packs, and exclusive items unexpectedly.
Numerous accounts even Ubisoft, including streamers’ and possibly official ones, have received random or… pic.twitter.com/9hGNbBCMAm
The official service status page for Rainbow Six Siege now shows critical issues across every major platform.
On PC, PS4, PS5, Xbox One, and Xbox Series X|S, core services including authentication, in-game store access, and matchmaking are all listed as being in outage, while overall connectivity is marked as degraded. Ubisoft states that issues are being investigated, but has not provided a timeline for recovery.
The scale of the outages has further fuelled claims that this is far more serious than routine server instability.
Earlier, the official Rainbow Six account acknowledged that it was aware of an incident affecting the game and said teams were working on a resolution. No mention was made of a security breach, hack, or exploit.
We’re aware of an incident currently affecting Rainbow Six Siege. Our teams are working on a resolution.
We will share further updates once available.
That wording has been met with heavy backlash from players, many of whom believe Ubisoft is attempting to downplay the severity of the situation. Community responses have accused the publisher of calling it a server issue while core systems appear compromised.
Some players also questioned why the game remained online for hours while accounts were being altered in real time.
As the incident unfolded, players reported logging in to find their accounts flooded with billions of R6 Credits and Renown, thousands of Alpha Packs, and access to exclusive cosmetics such as developer skins and Glaciers.
Why is nobody talking about what is happening on Siege right now? Everyone that logs in gets 2 Billion R6 credits, Renown, Developer Skins and Glaciers 😭 pic.twitter.com/9SBiLdrirm
There were also widespread claims that the in-game ban feed had been taken over, displaying arbitrary messages. Thousands of accounts were reportedly banned and unbanned at random, including those belonging to streamers and high profile players.
Prominent Siege creator KingGeorge described the situation as completely broken, warning players not to log in and strongly advising against spending any currency, as it could lead to bans or rollbacks once Ubisoft regains control.
Btw as a side note I would not login right now, definitely don’t spend any credits or renown could lead to a ban.
With authentication and account systems affected, many players now expect Ubisoft to roll accounts back to a previous state. However, fears remain that innocent players could still face penalties, especially those who unknowingly spent credits thinking the issue was a visual glitch.
Others argue that banning a significant portion of the active player base would be disastrous, adding further pressure on Ubisoft to handle the aftermath carefully.
The lack of clear communication has only intensified frustration, with players demanding transparency about what actually happened.
What Players Should Do Right Now
Until Ubisoft confirms the issue is fully resolved and explains the cause, players are strongly advised to stay offline, avoid logging in, and not spend any credits or Renown if access is restored.
With Rainbow Six Siege’s core services offline worldwide and player accounts potentially compromised, this incident could have lasting implications for the game if not handled properly. Ubisoft is expected to provide further updates as its investigation continues.
...
Read the original on www.shanethegamer.com »
The standard sperm-meets-egg story posits that sperm cells are hardly more than bundles of shrink-wrapped DNA with tails. Their mission is simple: Deliver a father’s genes into a mother’s egg for sexual reproduction. Just about all other aspects of a developing embryo, including its cellular and environmental components, have nothing to do with dad. Those all come from mom.
But nearly two decades of studies from multiple independent labs threaten to rewrite that story. They suggest that dad’s gametes shuttle more than DNA: Within a sperm’s minuscule head are stowaway molecules, which enter the egg and convey information about the father’s fitness, such as diet, exercise habits and stress levels, to his offspring. These non-DNA transfers may influence genomic activity that boots up during and after fertilization, exerting some control over the embryo’s development and influencing the adult they will become.
The findings, so far largely described in mouse models, could end up changing the way we think about heredity. They suggest “that what we do in this life affects the next generation,” said Qi Chen, a reproductive and developmental biologist at the University of Utah Medical School who is among the pioneers of this research. In other words: What a father eats, drinks, inhales, is stressed by or otherwise experiences in the weeks and months before he conceives a child might be encoded in molecules, packaged into his sperm cells and transmitted to his future kid. The researchers have largely zeroed in on RNA molecules, those short-lived copies of DNA that reflect genetic activity at a given time.
It’s a tantalizing notion. But the mechanistic details — how experience is encoded, how it’s transferred from sperm to egg, and whether and how it affects a developing embryo — are not easy to unpack, especially given the challenges of conducting research in human subjects. For this reason, and because of the potentially textbook-rewriting implications of the findings, researchers, including those spearheading the work, are cautious about overselling their results.
“It’s still very hand-wavy,” said the epigeneticist Colin Conine of the University of Pennsylvania Perelman School of Medicine and Children’s Hospital of Philadelphia, who has been trying to uncover the mechanics of how sperm RNA can contribute nongenetic information to progeny. Some elements of the story are clear, he said: Researchers have significant evidence that the environment can regulate sperm RNAs, that these molecules transmit traits to offspring and that they can regulate embryonic development after fertilization. “We just don’t have really any understanding of how RNAs can do this, and that’s the hand-wavy part,” Conine said.
But evidence keeps piling up. Most recently, in November 2025, a comprehensive paper published in Cell Metabolism traced the downstream molecular effects of a father mouse’s exercise regimen on sperm microRNAs that target genes “critical for mitochondrial function and metabolic control” in a developing embryo. The researchers found many of those same RNAs overexpressed in the sperm of well-exercised human men.
“This study shows that paternal exercise can confer benefits — enhanced endurance and metabolic health — to offspring,” said Chen, who was not involved in the study. “It’s a powerful reminder that many sperm-mediated epigenetic effects are deeply adaptive in nature.”
The possibility that a previously undocumented avenue of inheritance is at play is too important to ignore. That’s why the researchers are now hunkering down in their labs to trace out the molecular processes that would have to operate for a father’s here-and-now experience to be transferred as developmental instructions to his partner’s egg.
In most animals, a sperm cell is tiny compared to an egg cell. In humans, an egg contains 10 million times the volume of a sperm and contributes most cellular components — nutrition, cytoplasm, mitochondria and other organelles, the molecular machinery to make proteins, and more — to a zygote (a newly fertilized egg that hasn’t started dividing). Plus, a mother provides the environment within which an embryo and then fetus develops and grows. As a result, the effect of a mother’s health on her children has long been scrutinized, including at the molecular level. But over the past 15 years or so, the evidence for some kind of non-DNA inheritance of paternal experience has also been strengthening.
“There are many different labs that have done diet and stress studies, and typically the readouts of those in the next generation are either metabolism or behavioral changes,” Conine said. Feed a male mouse a high-fat or low-protein diet, or take him away from his mom when he is young, and his offspring will inherit traits, such as changes in mitochondrial function, related to those environmental conditions. These traits aren’t necessarily detrimental. For instance, mouse fathers exposed to nicotine sire male pups with livers that are good at disarming not just nicotine but cocaine and other toxins as well.
There is a survival logic here, said Oliver Rando, an epigeneticist at the University of Massachusetts Chan Medical School who led the nicotine study. It’s reasonable to expect that offspring will experience an environmental context similar to that of their parents. Biologically priming them for those conditions could therefore help them survive.
...
Read the original on www.quantamagazine.org »
Time seems simple. But we engineers lose sleep over something as basic as keeping clocks in sync. Here’s why…
The answer lies in this one simple statement - there is no global clock. When you have thousands of machines spread across data centers, continents, and time zones, each operating independently, the simple question of “what time is it?” becomes surprisingly complex.
Clock synchronization sits at the core of some of the most challenging problems in distributed systems, affecting everything from database consistency to debugging to financial transactions.
Every computer has an internal clock, typically driven by a quartz crystal oscillator. These oscillators work by vibrating at a specific frequency when voltage is applied. The standard frequency for most computer clocks is 32768 Hz, chosen because it is a power of two and makes counting down to one second straightforward.
The catch: quartz crystals are not perfect. Their oscillation frequency varies based on many factors; here are a few…
Temperature is the biggest culprit. Standard quartz crystals exhibit frequency drift in the tens of parts per million when temperature changes. A temperature deviation of ~10 degrees Celsius can cause drift equivalent to about 110 seconds per year. The crystal vibrates faster or slower depending on ambient temperature, and data center environments are not perfectly controlled.
Another culprit is manufacturing variation. No two crystals are identical. Even crystals from the same production batch will have slightly different characteristics. Aging compounds this problem as crystals change properties over time.
The result is that two computers started at exactly the same time, never communicating with each other, will inevitably drift apart. After just one day, they might differ by hundreds of milliseconds. After a month, they could be seconds apart.
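To put numbers on that, a rough sketch (the 5 ppm frequency error is an assumed, illustrative figure):

# Back-of-the-envelope drift for an assumed 5 ppm frequency error
ppm_error = 5
seconds_per_day = 24 * 60 * 60

drift_per_day = seconds_per_day * ppm_error / 1_000_000
print(f"{drift_per_day * 1000:.0f} ms of drift per day")   # ~432 ms
print(f"{drift_per_day * 30:.1f} s apart after a month")   # ~13.0 s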
Clock skew = the difference in time between two clocks at any given instant. Clock drift = the rate at which clocks diverge over time. Both cause serious problems in distributed systems.
Consider a simple example with a distributed make system. You edit a source file on your client machine, which has a clock slightly behind the server where the compiled object file lives. When make runs, it compares timestamps. If the server clock is ahead, the object file appears newer than the source file you just edited, and make does not recompile. Your changes silently disappear from the build.
Scenario: UNIX make with unsynchronized clocks
Client machine clock: 10:00:00 (lagging)
Server machine clock: 10:00:05 (ahead)
1. Edit util.c at client time 10:00:00
2. util.o on server has timestamp 10:00:03
3. Make compares: util.o (10:00:03) vs util.c (10:00:00)
4. Conclusion: util.o is newer, skip recompilation
5. Result: Your changes are ignored
Database systems face even more critical timestamp issues. When two transactions happen at nearly the same time on different nodes, the database must determine which happened first. If clocks are out of sync, the database might order them incorrectly, violating consistency guarantees.
Imagine a banking system where a customer deposits money at one branch (Node A) and immediately withdraws at another branch (Node B). If Node B clock is behind Node A, the withdrawal transaction might get a timestamp earlier than the deposit. A snapshot read at the wrong time could show the withdrawal but not the deposit, making it appear the customer withdrew money they did not have.
Logging and debugging become nearly impossible when clocks disagree. Distributed tracing relies on timestamps to reconstruct the sequence of events across services. When clocks are skewed, the resulting traces show impossible sequences where effects appear before causes.
The simplest approach to clock synchronization is to periodically query a trusted time server and adjust local clocks accordingly. Let’s look at different algorithms and approaches based on this…
Cristian's algorithm, proposed in 1989, works with a centralized time server assumed to have accurate time. A client requests the time, the server responds with its current time, and the client adjusts.
The challenge is network delay. By the time the response arrives, the server time is stale. Cristian's algorithm estimates the one-way delay as half the round-trip time.
# Cristian's Algorithm
def synchronize_clock():
    t0 = local_time()  # Record time before request
    server_time = request_time_from_server()
    t1 = local_time()  # Record time after response

    round_trip = t1 - t0
    one_way_delay = round_trip / 2

    # Adjust local clock
    new_time = server_time + one_way_delay
    set_local_clock(new_time)

    # Error bound: +/- (t1 - t0) / 2
This works reasonably well when network delays are symmetric, meaning request and response take the same time. In practice, delays are often asymmetric due to different routing paths, varying network congestion, and processing delays.
The Berkeley algorithm takes a different approach, assuming no single machine has an accurate time. Instead, it uses consensus among multiple machines.
A designated time daemon periodically polls all machines for their clock values. It computes the average, discards outliers, and tells each machine how much to adjust. Rather than sending absolute times, which would suffer from network delays, it sends relative adjustments.
Berkeley Algorithm Steps:
1. Time daemon polls machines: “What time do you have?”
2. Responses: Machine A: 10:00:05, Machine B: 10:00:02, Machine C: 10:00:08
3. Time daemon clock: 10:00:04
4. Average: (5 + 2 + 8 + 4) / 4 = 4.75 → 10:00:05
5. Adjustments sent:
- Machine A: slow down by 0s (already at target)
- Machine B: speed up by 3s
- Machine C: slow down by 3s
- Daemon: speed up by 1s
A critical detail: computers should never jump their clocks backward. Doing so violates the assumption of monotonic time that many algorithms depend on. Instead of rewinding, the Berkeley algorithm slows clocks gradually to let them catch up.
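A minimal sketch of the daemon side, assuming it has already collected each machine's offset relative to its own clock (positive meaning the machine is ahead); the outlier cut-off is simplified:

def berkeley_adjustments(offsets, tolerance=10.0):
    """offsets maps machine -> clock offset in seconds relative to the daemon.
    Returns a relative adjustment for every machine (applied gradually)."""
    # Simplified outlier cut-off; the real algorithm uses fault-tolerant averaging
    usable = {m: off for m, off in offsets.items() if abs(off) <= tolerance}
    target = sum(usable.values()) / len(usable)
    return {m: target - off for m, off in offsets.items()}

# The scenario above, as offsets from the daemon's 10:00:04 clock
print(berkeley_adjustments({"daemon": 0, "A": 1, "B": -2, "C": 4}))
# {'daemon': 0.75, 'A': -0.25, 'B': 2.75, 'C': -3.25}  -> rounds to +1, 0, +3, -3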
NTP uses a hierarchical system of time servers organized into strata.
Stratum 0 devices are high precision time sources like atomic clocks and GPS receivers. Stratum 1 servers connect directly to stratum 0 sources. Each lower stratum synchronizes with the level above, with stratum numbers increasing up to 15.
NTP can typically maintain time within tens of milliseconds over the public internet and can achieve sub-millisecond accuracy on local area networks. However, several factors limit its precision.
NTP Accuracy Limitations:
- Public internet: 10-100 ms typical
- LAN with good conditions: 100-500 µs
- Network asymmetry: can cause 100+ ms errors
- Variable latency: introduces jitter
- Operating system delays: software timestamps add microseconds
Network asymmetry is particularly problematic. If the path from client to server differs from server to client, the assumption that one-way delay equals half the round-trip breaks down. Satellite links where uplink and downlink have different latencies are a classic example.
Operating system overhead adds uncertainty. When an NTP packet arrives, it passes through the network stack, gets timestamped by the kernel, and eventually reaches the NTP daemon. Each step introduces variable delays measured in microseconds.
For many applications, NTP accuracy is sufficient. Web servers, file systems, and most business applications tolerate clocks being tens of milliseconds apart. But some domains demand much tighter synchronization.
Financial trading systems measure latency in microseconds. A trade timestamped incorrectly by even a few milliseconds can have significant legal and financial implications. High-frequency trading strategies depend on knowing the precise order of events.
Telecommunications systems require synchronization for TDM (Time Division Multiplexing) where different users share a channel by taking turns. If timing drifts, transmissions from different users collide.
Scientific experiments, particularly in physics, need nanosecond precision to correlate measurements across instruments.
PTP, defined by IEEE 1588, achieves sub-microsecond accuracy by using hardware timestamping. Instead of the operating system recording when a packet arrived, specialized network interface cards timestamp packets as they cross the wire, eliminating software delays.
PTP vs NTP Precision:
- NTP: milliseconds (software timestamping)
- PTP: nanoseconds (hardware timestamping)
Key PTP improvements:
- Hardware timestamps at NIC level
- Boundary clocks at switches maintain precision
- Two-way message exchange calculates asymmetric delays
PTP requires support throughout the network path. Switches must be PTP aware, acting as boundary clocks that maintain synchronization hop by hop. This makes PTP expensive to deploy but essential for applications requiring nanosecond precision.
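For reference, the offset and delay math behind a two-way timestamp exchange is short. Here is a sketch using PTP's usual four timestamps, under the simplifying assumption of a symmetric path:

def offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes the one-way delay is the same in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2          # slave clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

# Example: slave runs 150 ns ahead, one-way delay is 500 ns (times in ns)
print(offset_and_delay(t1=0, t2=650, t3=1000, t4=1350))  # (150.0, 500.0)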
Meta announced in 2022 that they were migrating from NTP to PTP across their data centers. The investment in PTP infrastructure paid off in reduced errors and better debugging capability.
Lamport introduced the concept of logical clocks based on a simple observation: if two events are causally related, we should be able to order them. If event A sends a message that event B receives, A happened before B. If both events happen on the same process, the earlier one happens before the later one.
Events that are not connected by any chain of causality are concurrent. They could have happened in either order, and from the system’s perspective, there is no meaningful way to distinguish.
Lamport timestamps implement this intuition with a simple algorithm. Each process maintains a counter. Before any event, increment the counter. When sending a message, include the counter value. When receiving a message, set your counter to the maximum of your current value and the received value, then increment.
class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send_event(self):
        self.time += 1
        return self.time  # Include in message

    def receive_event(self, received_time):
        self.time = max(self.time, received_time) + 1
        return self.time
If event A has a lower Lamport timestamp than event B, we know one of two things: either A happened before B, or they are concurrent. The guarantee runs in the other direction: if A happened before B, then A always has a lower timestamp than B.
Process P1: [0] --> send m (1) --> local event (2)
                       |
                       v
Process P2: [0] ----------> receive m (2) --> local event (3)

P1 events: (1, 2)
P2 events: (2, 3)
The receive on P2 happens after the send on P1 (causality preserved)
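Using the LamportClock class above, the same exchange plays out as:

p1, p2 = LamportClock(), LamportClock()

m = p1.send_event()       # P1 sends m, timestamp (1)
p1.local_event()          # P1 local event, timestamp (2)

p2.receive_event(m)       # P2 receives m, timestamp (2) = max(0, 1) + 1
p2.local_event()          # P2 local event, timestamp (3)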
The limitation is that Lamport timestamps cannot tell you if two events are concurrent. Events with timestamps 5 and 7 might be causally related or might have happened independently on different processes with no communication between them.
Vector clocks extend Lamport timestamps to capture full causality information. Instead of a single counter, each process maintains a vector with an entry for every process in the system.
class VectorClock:
    def __init__(self, process_id, num_processes):
...
Read the original on arpitbhayani.me »