10 interesting stories served every morning and every evening.




1 737 shares, 56 trendiness

Calendar

👋 Hello! If you print this page, you’ll get a nifty calendar that displays all of the year’s dates on a single page. It will automatically fit on a single sheet of paper of any size. For best results, adjust your print settings to landscape orientation and disable the header and footer.

Take in the year all at once. Fold it up and carry it with you. Jot down your notes on it. Plan things out and observe the passage of time. Above all else, be kind to others.

Looking for 2026? Here you go!

...

Read the original on neatnik.net »

2 580 shares, 26 trendiness

How We Lost Communication to Entertainment

A few days ago, I wrote a controversial blog post about Pixelfed hurting the Fediverse. I defended the theory that, in a communication network, you hurt trust in the whole network if you create clients that arbitrarily drop messages, something Pixelfed does deliberately. It gathered a lot of reactions.

When I originally wrote this post, nearly one year ago, I thought that either I was missing something or Dansup, Pixelfed’s creator, was missing it. We could not both be right. But as the reactions piled in on the Fediverse, I realised that such irreconcilable opinions do not arise only from ignorance or oversight. They usually mean that both parties have vastly different assumptions about the world. They don’t live in the same world.

I started to see a pattern in the two kinds of reactions to my blog post.

There were people like me, often above 40, who like sending emails and browsing old-fashioned websites. We think of ActivityPub as a “communication protocol” between humans. As such, anything that implies losing messages without feedback is the worst thing that could happen. Not losing messages is the top priority of a communication protocol.

And then there are people like Dansup, who believe that ActivityPub is a content consumption protocol. It’s there for entertainment. You create as many accounts as the kinds of media you want to consume. Dansup himself communicates through a Mastodon account, not a Pixelfed one. Many Pixelfed users also have a Mastodon account, and they never questioned that. They actually want multiple accounts for different use cases.

On the Fediverse threads, nearly all the people defending the Pixelfed philosophy posted from Mastodon accounts. They usually boasted about having both a Mastodon and a Pixelfed account.

To me, the very goal of interoperability is not to force you into creating multiple accounts. Big monopolies have managed to convince people that they need one account on each platform. This was done on purpose, for purely unethical reasons, in order to keep users captive.

That brainwashing/marketing is so deeply entrenched that most people cannot see an alternative anymore. It looks like a natural law: you need an account on a platform to communicate with someone on that platform. That also explains why most politicians want to “regulate” Facebook or X. They think it is impossible not to be on those platforms. They believe those platforms are “public spaces” while they truly are “private spaces” trying to destroy all other public spaces in order to get a monopoly.

People flock to the Fediverse with this philosophy of “one platform, one account”, which makes no sense if you truly want to create a federated communication protocol like email or XMPP.

But Manuel Moreale cracked it for me: the Fediverse is not a communication network. ActivityPub is not a communication protocol. The spec says it: ActivityPub is a protocol to build a “social platform” whose goal is to “deliver content.”

But aren’t social networks also communication networks? That’s what I thought. That’s how they historically were marketed. That’s what we all believed during the “Arab Spring.”

But that was a lie. Communication networks are not profitable. Social networks are entertainment platforms, media consumption protocols. Historically, they disguised themselves as communication platforms to attract users and keep them captive.

The point was never to avoid missing a message sent from a fellow human being. The point was always to fill your time with “content.”

We dreamed of decentralised social networks as “email 2.0.” They truly are “television 2.0.”

They are entertainment platforms that delegate media creation to the users themselves, the same way Uber replaced taxis by having people drive others in their own cars.

But what was created as “ride-sharing” was in fact a way to 1) destroy competition and 2) make a shittier service while the people producing the work were paid less and lost labour rights. It was never about the social!

My own interpretation is that social media users don’t mind losing messages because they were raised on algorithmic platforms that did that all the time. They don’t see the point in trusting a platform because they never experienced a trusted means of communication.

Now that I write it, this may also explain why instant messaging became the dominant communication medium: if you don’t receive an immediate answer, you don’t even trust the recipient to have received your message. In fact, even if the message was received, you don’t trust the recipient’s attention span to remember it.

Multiple studies have confirmed that we don’t remember the vast majority of what we see while doomscrolling. While the “view” was registered to increase statistics, we don’t have the slightest memory of most of that content, even after only a few seconds. It thus makes sense not to consider social media a means of communication at all.

There’s no need for a reliable communication protocol if we assume that human brains are not reliable enough to handle asynchronous messages.

It’s not Dansup who is missing something. I am the one who is unadapted to current society. I understand now that Pixelfed was only following design decisions and protocol abuses fathered by Mastodon. Pixelfed was my own “gotcha” moment because I never understood Instagram in the first place, and, in my eyes, Pixelfed was no better. But if you take that route, Mastodon is no better than Twitter.

Many reactions justly pointed out that other Fediverse tools such as PeerTube, WriteFreely, or Mobilizon simply do not display messages at all.

I didn’t consider it a big problem because they never pretended to do so in the first place. Nobody uses those tools to follow others. There’s no expectation. Those platforms are “publish only.” But this is still a big flaw in the Fediverse! Someone could, using autocompletion, send a message pinging your PeerTube address, and you will never see it. Try autocompleting “@ploum” from your Mastodon account and guess which suggestion is the only one that will send me a valid notification!

On a more positive note, I should give credit to Dansup for announcing that Pixelfed will soon allow people to optionally “not drop” text messages.

I cling to asynchronous, reliable communications, but those are disappearing. I use email a lot because I see it as a true means of communication: reliable, asynchronous, decentralised, standardised, manageable offline with my own tools. But many people, even barely younger than me, tell me that email is “too formal” or “for old people” or even “worse than social network feeds.”

And they are probably right. I like it because I’ve learned to use it. I apply a strict inbox-zero methodology. If I don’t reply or act on your email, it is because I decided not to. I’m actively keeping my inbox clean by sharing only disposable email addresses that I disable once they start to be spammed.

But for most people, their email inbox is simply one more feed full of bad advertising. They have a four- or five-digit unread count. They scroll through their inbox like they do through their social media feeds.

The main problem with reliable communication protocols? They are a mostly solved problem. Build simple websites, read RSS feeds, write emails. Use IRC and XMPP if you truly want real-time communication. Those are working, and working great.

And because of that, they are boring.

Communication protocols are boring. They don’t give you that well-studied random hit of dopamine. They don’t make you addicted.

They don’t make you addicted, which means they are not hugely profitable and thus are not advertised. They are not new. They are not as shiny as a new app or a new random chatbot.

The problem with communication protocols was never the protocol part. It’s the communication part. A few sad humans never wanted to communicate in the first place and managed to become billionaires by convincing the rest of mankind that being entertained is better than communicating with other humans.

We believe that a communication network must reach a critical mass to be really useful. People stay on Facebook to “stay in touch with the majority.” I don’t believe that lie anymore. I’m falling back to good old mailing lists. I’m reading the Web and Gemini while offline through Offpunk. I also handle my emails asynchronously while offline.

I may be part of an endangered species.

It doesn’t matter. I made peace with the fact that I will never get in touch with everyone. As long as there are people posting on their gemlogs or blogs with RSS feeds, as long as there are people willing to read my emails without automatically summarising them, there will be a place for those who want to simply communicate. A protected reserve.

You are welcome to join!

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

...

Read the original on ploum.net »

3 565 shares, 32 trendiness

Replacing JS with just HTML

For many years now, JavaScript has been the workhorse of the web. If you wanted to do something that couldn’t be done with just HTML and CSS, you could usually find a way to do it with JS.

And that is great! JS has helped push user experiences forward, and honestly helped push HTML and CSS forward!

But as time marches on, and the HTML and CSS methods gain traction, we need to start replacing the old JS methods that feel so comfy with new methods that require less JS.

Nothing against JS, but it has better things to do than set up and manage your accordions or offscreen navigation menus… Plus, JS needs to be downloaded, decompressed, evaluated, and processed, and then it often consumes memory to monitor and maintain features. If we can hand off any JS functionality to native HTML or CSS, then users can download less stuff, and the remaining JS can pay attention to more important tasks that HTML and CSS can’t handle (yet).

Below are a few examples; any you care to add?

The details and summary HTML elements provide an HTML-only replacement for the typical JS accordion:

CodePen: Accordion / Expanding Content
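A minimal sketch of the pattern shown in that CodePen (the element names are standard HTML; the text content here is illustrative):

<details>
  <summary>What is an accordion?</summary>
  <p>Content that expands and collapses without any JavaScript.</p>
</details>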

Add an open attribute to set the default appearance as “open”:
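For example (content again illustrative):

<details open>
  <summary>Open by default</summary>
  <p>This panel starts expanded.</p>
</details>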

Use the same name attribute on all related details (like radio buttons) to restrict the group to only one open panel at a time:
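Something like this (the name value is arbitrary):

<details name="faq">
  <summary>First question</summary>
  <p>Opening another panel in the group closes this one.</p>
</details>
<details name="faq">
  <summary>Second question</summary>
  <p>Only one panel in the "faq" group can be open.</p>
</details>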

You can also customize the appearance with CSS and trigger the open/close via JS.

Learn more about the details element in the previously published “For the Love of <details>”.

Combining the HTML input and datalist elements can create a dropdown of options that autofilters as you type:

CodePen: Input with Autofilter Suggestions Dropdown
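A sketch of the combination (the input’s list attribute points at the datalist’s id; the option values are illustrative):

<input type="text" list="browsers" placeholder="Pick a browser">
<datalist id="browsers">
  <option value="Firefox"></option>
  <option value="Chrome"></option>
  <option value="Safari"></option>
</datalist>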

You can also use other input types:
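For instance, suggestions on a non-text input (support varies by browser, per the notes below; the values are illustrative):

<input type="color" list="brand-colors">
<datalist id="brand-colors">
  <option value="#ff6600"></option>
  <option value="#0066ff"></option>
</datalist>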

Note that, at the time of this writing, Firefox was limited to only textual-based input types, so no date, time, range or color for now… :-(

Also note that, at the time of this writing, there are limitations on mobile, and accessibility concerns.

The popover and popovertarget attributes can replace the traditional JS-driven modal/popover/overlay:

CodePen: Modal / Popover
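A minimal sketch (the id/popovertarget pairing is what wires the button to the popover; the content is illustrative):

<button popovertarget="greeting">Toggle popover</button>
<div id="greeting" popover>
  <p>Hello! No JavaScript required.</p>
</div>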

An auto popover (the default) can be “light dismissed” (clicking outside of it or hitting the Esc key). Opening an auto popover automatically closes any other auto popovers that were open. Clicking the button a second time will close the one it opened.

A hint popover can also be “light dismissed”. It does not close other hint popovers when opened. Clicking the button a second time will close the one it opened.

Note that, at the time of this writing, Firefox and all iOS varieties do not support hint popovers.

A manual popover cannot be “light dismissed”. It does not close other manual popovers when opened. Clicking the button a second time will close the one it opened.

Learn more about opening and closing dialogs and popovers in the previously published “Controlling dialogs and popovers with the Invoker Commands API”.

The above Modal / Popover functionality can also be used to create an offscreen navigation that requires no JS:

#menu {
  margin: 0;
  height: 100vh;
  translate: -100vw;
}

#menu:popover-open {
  translate: 0;
}

I use a nav element to give it semantic value, but you can use any HTML element (div, section, aside, etc.).
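The markup side might look something like this (a sketch to pair with the CSS above; the links and labels are illustrative):

<button popovertarget="menu">☰ Menu</button>
<nav id="menu" popover>
  <a href="/">Home</a>
  <a href="/about">About</a>
</nav>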

A popover defaults to position: fixed per the User Agent Stylesheet, and is simply pushed offscreen when closed and pulled back onscreen when it is open. Note that margin: 0 is required if you want to override the User Agent center-alignment.

Clicking outside of the above menu closes it. You can force the panel to stay open, requiring a manual/explicit close, by using popover="manual".

You can also add a ::backdrop pseudo-element and style it as you wish:

#menu::backdrop {
  background: rgb(190 190 190 / 75%);
}

While we all love the power and flexibility JS provides, we should also respect it, and our users, by limiting its use to what it needs to do.

There is so much more that has changed in recent years, including a ton of options that CSS now covers. If you are now hungry for more, have a look at [my longer article that covers those as well](https://aarontgrogg.com/blog/2023/05/31/replace-js-with-no-js-or-lo-js-options/).

...

Read the original on www.htmhell.dev »

4 515 shares, 28 trendiness

Say no to Palantir in the NHS


NHS England is rolling out software to run our health records from Palantir — a US spy-tech firm that has supported mass deportation in the US and enabled genocide in Gaza.

One of Palantir’s founders is also openly against the NHS. Peter Thiel claimed it “makes people sick” and said that the British people love the NHS because we’re “suffering from Stockholm syndrome”.

We can’t let a company like this take control of our healthcare system.

But we can fight back right now. We’ve set up a simple tool so you can find out if your local NHS trust has started using the software, and then send an email demanding they say no to Palantir.

With the government putting NHS trusts under pressure to adopt the software, we need to act right now. If you want to keep Palantir out of our NHS, send an email to your local trust and Wes Streeting, secretary of state for health.


...

Read the original on notopalantir.goodlawproject.org »

5 515 shares, 22 trendiness

Nvidia's $20B Antitrust Loophole

“You’re taking on a giant. What gives you the audacity?”

On November 5th, 2025, Groq CEO Jonathan Ross was asked why he was even bothering to challenge Nvidia. He didn’t blink:

“I think that was a polite way to ask why in the world are we competing with Nvidia, so we’re not. Competition is a waste of money; competition fundamentally means you are taking something someone else is doing and trying to copy it. You’re wasting R&D dollars trying to do the exact same thing they’ve done instead of using them to differentiate.”

49 days later, Nvidia paid $20 billion for Groq’s assets and hired Ross along with his entire executive team.

Except this wasn’t actually an acquisition, at least not in the traditional sense. Nvidia paid $20 billion for Groq’s IP and people, but explicitly did NOT buy the company. Jensen Huang’s statement was surgical: “While we are adding talented employees to our ranks and licensing Groq’s IP, we are not acquiring Groq as a company.”

That phrasing is the entire story. Because what Nvidia carved out of the deal tells you everything about why this happened.

Forget the AI doomer takes about a bubble forming; let’s look into the actual reasons.

* All of Groq’s intellectual property and patents

GroqCloud continues as an independent company under CFO Simon Edwards. This is Nvidia’s largest acquisition ever (the previous record was Mellanox at $7B in 2019), and they structured it to leave the actual operating business behind. That doesn’t happen by accident.

To understand why Nvidia paid anything for Groq, you need to understand the architectural bet Ross made when he left Google.

CPUs and GPUs are built around external DRAM/HBM (High Bandwidth Memory). Every compute operation requires shuttling data between the processor and off-chip memory. This works fine for general-purpose computing, but for inference workloads, that constant round trip creates latency and energy overhead. Since GPUs evolved from graphics rendering, they’re optimized for parallel training workloads and not SEQUENTIAL inference.

TPUs (Google’s Tensor Processing Units) reduce this overhead by passing data directly between processors like an assembly line. But they still rely on external memory (HBM) to store the model. This means the calculations are predictable, but the speed of fetching data from memory is not.

LPUs (Groq’s Language Processing Units) take a different approach: even larger on-chip SRAM instead of external DRAM/HBM. The entire model (for models that fit) lives in SRAM with 80 TB/s of bandwidth and 230 MB capacity per chip. No off-chip memory bottleneck or dynamic scheduling. The architecture is entirely deterministic from compilation to execution. You know exactly what happens at each cycle on each chip at each moment.

And 10x better energy efficiency, because you’re not constantly moving data across a memory bus.

Compare this to SOTA model tokens/sec throughput on GPU inference.

Serious trade-off, though: only 14 GB of SRAM per rack means you can’t run Llama 3.1 405B. And LPUs can’t train models at all. This is an inference-only architecture with limited model size support.

But here’s what makes this interesting: if DRAM/HBM prices continue climbing (DRAM has tripled in a year; I should’ve gone all in on DRAM at the start of the year, I’m done with indexes), and if inference becomes the dominant AI workload (which it is), SRAM-based architectures become economically compelling despite the size limitations. AND production AI applications seem to be moving from 405B-1T models to running 7B-70B models that need low latency and high throughput.

Groq raised $750 million in September 2025 at a post-money valuation of $6.9 billion. Three months later, on Xmas Eve, Nvidia paid $20 billion through a “non-exclusive licensing agreement” that acquired all IP and talent while explicitly NOT buying the company.

Just crunched the numbers: that’s a $13.1 billion premium (3x the September valuation) for a company valued at 40x target revenue (double Anthropic’s recent 20x multiple) with slashed projections (The Information reported Groq cut over $1B from 2025 revenue forecasts).

The structure is the story. Traditional M&A (mergers and acquisitions, for those blessed not to have dealt with this crowd) would trigger:

Non-exclusive licensing bypasses all of it. No acquisition means no CFIUS review. “Non-exclusive” means no monopoly concerns (anyone can license Groq’s tech). No shareholder votes, minimal disclosure.

But in practice: Nvidia gets the IP (can integrate before anyone else), the talent (Ross + team can’t work for competitors now), and the elimination of GroqCloud (which will likely die without IP or leadership). The “non-exclusive” label is legal fiction. When you acquire all the IP and hire everyone who knows how to use it, exclusivity doesn’t matter.

The question isn’t just why Nvidia paid $13.1B more than market rate for technology they could build themselves (they have the PDK, volume, talent, infrastructure, and cash). The question is why they structured it this way.

Where the premium was spent:

Regulatory arbitrage: Non-exclusive licensing avoids years of antitrust review. Structure the deal as IP licensing + talent acquisition, and regulators have no grounds to block it. This alone is worth billions in time and certainty.

Neutralizing Meta/Llama: The April 2025 partnership gave Groq distribution to millions of developers. If Llama + Groq became the default open-source inference stack, Nvidia’s ecosystem gets commoditized. Kill the partnership before it scales.

Eliminating GroqCloud without inheriting Saudi contracts: Nvidia has invested in other cloud providers (CoreWeave, brev.dev). GroqCloud was a competitor. A traditional acquisition would mean inheriting $1.5B worth of contracts to build AI infrastructure for Saudi Arabia, triggering CFIUS scrutiny. The carve-out kills GroqCloud while avoiding geopolitical entanglement.

Political access: Chamath makes ~$2B (Social Capital’s ~10% stake). Sacks looks good (a major AI deal under his watch as AI Czar). Nvidia gets favorable regulatory treatment from the Trump administration. Timing it for Christmas Eve ensures minimal media scrutiny of these connections.

Blocking Google/Amazon/Microsoft from partnering with Groq: Both are developing custom AI chips (Trainium, Maia). If either had hired Ross + team or licensed Groq’s tech, Nvidia’s inference dominance faces a real challenger. If Google had acquired Groq and brought Ross back, they’d have the original TPU inventor plus LPU IP.

Chiplet integration for future products: Nvidia might integrate the LPU as a chiplet alongside GPUs in Blackwell or future architectures. Having Ross’s team makes that possible. You can’t integrate IP you don’t own, and you can’t build it without the people who invented it.

That’s how business works when regulation hasn’t caught up to structural innovation. Nvidia paid $6.9B for technology and $13.1B to solve everything else, using a deal structure that traditional antitrust can’t touch.

In February 2025, Saudi Arabia committed $1.5 billion to expand Groq’s Dammam data center. The publicly stated goal was supporting SDAIA’s ALLaM, Saudi Arabia’s Arabic large language model. The actual goal was Vision 2030: positioning the Kingdom as an AI superpower. (Wait, so it’s not just about influencer boxing?)

Groq built the region’s largest inference cluster in eight days in December 2024. From that Dammam facility, GroqCloud serves nearly four billion people “regionally adjacent to the KSA.” This is critical AI infrastructure, funded by the Public Investment Fund, processing inference workloads at national scale.

According to Ross in the Series E announcement, Groq powers Humain’s services including the Humain chat product and supported OpenAI’s GPT-OSS model release in Saudi Arabia. Groq operates 13 facilities across the US, Canada, Europe, and the Middle East. Ross noted that capacity expanded more than 10% in the month before the funding announcement and all of that capacity was already in use. Customers were asking for more capacity than Groq could satisfy.

That creates a CFIUS (Committee on Foreign Investment in the United States) (I know, I know, I took a while to unabbreviate it) problem. A U.S. chip company, venture-backed by American investors, building sovereign AI capability for Saudi Arabia. If Nvidia had acquired GroqCloud outright, they would inherit those contracts and the regulatory scrutiny that comes with them. Foreign investment reviews, export control questions, congressional inquiries about why an American company is providing cutting-edge AI to a Middle Eastern monarchy.

By carving out GroqCloud, Nvidia gets the technology and the talent without the geopolitical mess. The Saudi contracts stay with Edwards and the independent GroqCloud entity. Clean separation. No CFIUS entanglement.

The Financial Times reported that “despite the loss of much of its leadership team, Groq said it will continue to operate as an independent company.” That’s corporate speak for: executives and VCs are cashing out while regular employees watch the company they built get hollowed out.

Here’s how the $20B probably breaks down (we’ll never know the exact numbers since Groq is private and this isn’t a traditional acquisition):

VCs (Chamath, BlackRock, Neuberger Berman, Deutsche Telekom, etc.): They own equity in Groq Inc. Depending on how the deal is structured, they get paid based on their ownership percentage. Social Capital’s ~10% stake (after dilution) is worth $1.6-2.4B. BlackRock, Neuberger Berman, and other Series E investors get their cut. They’re protected regardless of structure.

* Retention packages from Nvidia (likely massive given the $20B deal size)

* Their existing Groq equity gets paid out at the $20B valuation

Jensen Huang’s email to Nvidia staff (obtained by the FT) said they’re adding “talented employees to our ranks.” When your talent is important enough to be mentioned in a $20B deal, you’re getting paid.

Who might get paid (depending on deal structure):

Regular Groq employees with vested equity: This is where it gets murky. There are three possible scenarios:

Scenario 1: The IP licensing fee goes to Groq Inc.

If the $20B (or a significant portion) is structured as a licensing fee paid to Groq Inc. for the IP rights, that money gets distributed to all shareholders based on ownership percentage. Employees with vested stock options or RSUs get their pro-rata share. This is the best case for employees.

Example: An engineer with 0.01% fully vested equity gets $2M ($20B × 0.01%). Not bad for an engineer who’s been there since 2018-2020.

Scenario 2: Most of the $20B goes to retention packages

If the deal is structured so that the bulk of the money goes to retention/hiring packages for Ross, Madra, and the senior team joining Nvidia, with a smaller licensing fee to Groq Inc., employees get less. Maybe the split is $15B retention, $5B licensing fee. Now that same engineer with 0.01% gets $500K instead of $2M.

Scenario 3: The IP licensing is separate from the talent acquisition

Nvidia pays Groq Inc. for the IP (say $5-7B, roughly the September 2025 valuation), and separately pays Ross + team retention packages directly. Regular employees get their share of the IP licensing fee only. That same engineer might get $500-700K.

The critical question: Is the $20B figure the total cost to Nvidia (including retention packages), or is it just the IP licensing fee? If it’s the total cost and most goes to retention, regular employees get scammed.
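To make the pro-rata arithmetic behind these scenarios concrete, here is a small Python sketch (the fee splits are the hypothetical numbers from above, not disclosed figures):

# Hypothetical numbers only: an employee's pro-rata share of whatever
# portion of the deal is paid to Groq Inc. as a licensing fee.
def employee_payout(licensing_fee, ownership=0.0001):  # 0.01% vested equity
    return licensing_fee * ownership

for label, fee in [("Scenario 1: full $20B to Groq Inc.", 20e9),
                   ("Scenario 2: $5B licensing fee", 5e9),
                   ("Scenario 3: $5-7B fee (midpoint)", 6e9)]:
    print(f"{label}: ${employee_payout(fee):,.0f}")

# Scenario 1: full $20B to Groq Inc.: $2,000,000
# Scenario 2: $5B licensing fee: $500,000
# Scenario 3: $5-7B fee (midpoint): $600,000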

Who definitely gets done over:

Employees staying at GroqCloud: These are the people who:

* Weren’t important enough to be hired by Nvidia

* Have equity tied to GroqCloud’s future value

* Just watched their CEO, President, and entire engineering leadership leave

* Are now working for a company with no IP rights, no technical leadership, and no future

Their equity is worthless. GroqCloud will wind down over 12-18 months. They’ll either get laid off or jump ship to wherever they can land. They built the LPU architecture, contributed to the compiler stack, supported the infrastructure, and got nothing while Chamath made $2B.

This gets messier when you look at who was involved. Chamath Palihapitiya, through Social Capital, led Groq’s initial $10 million investment in 2017 at a $25 million pre-money valuation. Social Capital secured 28.57% of the company and a board seat for Chamath.

David Sacks, Chamath’s co-host on the All-In podcast, became Trump’s AI and Crypto Czar in late 2024. In July 2025, Sacks co-authored “America’s AI Action Plan,” a White House strategy document positioning AI as a matter of national security. The plan called for exporting the full AI technology stack to all countries “willing to join America’s AI alliance” while preventing adversarial nations from building independent AI capabilities.

Two months later at the All-In Summit in September 2025, Tareq Amin (CEO of HUMAIN, Saudi Arabia’s state-backed AI company) presented Groq as “the American AI stack in action.” This was seven months after the $1.5B Saudi deal.

Sunny Madra, Groq’s President and COO, was actively promoting the All-In narrative during this period. He appeared on the All-In podcast in March 2024 to provide a “Groq update” and joined Sacks on “This Week in Startups” in November 2023. When Anthropic raised AI safety regulation concerns in October 2025, Madra publicly sided with Sacks, suggesting “one company is causing chaos for the entire industry” and echoing Sacks’s accusation that Anthropic was engaged in “regulatory capture.”

So you have Sacks pushing an “America First” AI policy from the White House while Chamath’s portfolio company (where Madra is President) is building AI infrastructure for Saudi Arabia. Then Groq gets presented at the All-In Summit as an example of American AI leadership. Three months later, announced on Christmas Eve when media coverage is minimal, Nvidia pays $20 billion to clean up the geopolitical contradiction.

Chamath walks away with $1.6B to $2.4B. Sacks gets a major AI deal under his watch. Nvidia gets favorable regulatory treatment and eliminates multiple problems. The timing ensures minimal scrutiny of these connections.

After dilution from raising $1.7 billion across Series C, D, and E rounds, Social Capital’s stake in Groq was probably 8-12% by the time of the Nvidia deal. At a $20 billion exit, that’s $1.6 billion to $2.4 billion.

Chamath after using you as exit liquidity and bankrolling it into a 200x win for himself

Let’s look at the sh*t he dumped on retail with his abysmal SPAC track record:

* IPOB (Opendoor): -62.9% (was -95% before a brief spike)

Chamath personally dumped $213 million of Virgin Galactic stock before it crashed, using PIPE structures that let him exit while retail investors stayed locked up. In October 2025, when launching a new SPAC, he posted a warning telling retail investors not to buy it: “these vehicles are not ideal for most retail investors.”

The Groq bet was classic venture capital: a concentrated bet on an exceptional founder (Jonathan Ross, the engineer who invented Google’s TPU) building non-obvious technology. Social Capital’s 2017 internal memo projected a “High” exit scenario of $3.2 billion. They landed within range despite dilution.

But retail investors never got access to deals like Groq. They got Virgin Galactic. LOL.

Nvidia paid $20 billion for a company valued at $6.9 billion three months earlier, structured the deal to avoid traditional M&A oversight, killed the cloud business without inheriting Saudi contracts, and enriched the exact people (Chamath, Sacks) who spent the last year promoting “American AI leadership” while cutting deals with foreign governments. The employees who built the technology either got hired by Nvidia or have been utterly shafted.

This was fun to look into. If you have any questions or comments, shout me -> ossamachaib.cs@gmail.com.

...

Read the original on ossa-ma.github.io »

6 419 shares, 58 trendiness

Life in a Secret Chinese Nuclear City That Was Never on the Map


...

Read the original on substack.com »

7 271 shares, 5 trendiness

White House pushes to dismantle leading climate and weather research center

The Trump administration says it plans to dismantle the National Center for Atmospheric Research in Colorado, which is the nation’s premier atmospheric science center. The center was founded in 1960 and has facilitated generations of breakthroughs in climate and weather science. William Brangham discussed the move with climate scientist Kim Cobb and meteorologist Matthew Cappucci.

Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

The Trump administration says it plans to dismantle the National Center for Atmospheric Research in Colorado, which is the nation’s premier atmospheric science center.

In announcing the closing, Budget Director Russell Vought called the center — quote — “one of the largest sources of climate alarmism in the country.”

NCAR, as the center is known, was founded in 1960 and has facilitated generations of breakthroughs in climate and weather science. The announcement has drawn outcry from meteorologists and climate scientists across the country.

Earlier this week, I spoke with two of them who are very familiar with the center’s work. Brown University’s Kim Cobb is a climate scientist and director of the Institute at Brown for Environment and Society, and Matthew Cappucci is senior meteorologist at MyRadar.

Thank you both so much for being here.

Kim Cobb, to you first. What is NCAR, and why, as a climate scientist, is it so important and seemingly precious to this community?

Thanks for having me, William.

NCAR is a really historic institution in our field. It, of course, dates back decades now. And, over that time, it has really woven itself into the fabric of both weather and climate science across the country and around the world.

We’re talking about unique, one-of-a-kind facilities like supercomputers, tricked-out airplanes, and, most importantly, a staff of over 800 people who are at the top of their game in innovating in weather and climate science for public good, putting out data that is on every single climate scientist’s computer around the country, if not around the world, and a nexus of collaboration as well that is an important training ground for the next generation of leaders.

And, Matthew Cappucci, as a meteorologist, how important is NCAR to your profession?

I mean, NCAR is really the birthplace of all the tools we use, both technologically.

And, really, the discoveries that are made at NCAR are crucial to our understanding of how the atmosphere works. That’s where we first learned about the MJO, the Madden-Julian Oscillation, one of the biggest overturning circulations in the atmosphere that governs how so many things, for example, hurricanes behave.

It’s where we first created a special product used by airplanes when they’re landing to avoid wind shear, disruptive changing winds with height that could cause plane crashes. They invented a system there to prevent that. That’s where dropsondes were invented, those little probes that are dropped out of the belly of airplanes in the middle of hurricanes to figure out how strong the hurricanes are.

So, so many different tools and discoveries have come from NCAR. In addition, the modeling is incredibly important. And if we’re sort of putting the brakes on that, I worry about the implications for weather forecasting.

Kim Cobb, well, what more on that? Well, let’s say NCAR is broken up. What are the impacts, both for the scientific community and for Americans who benefit from its research?

I think what’s really important to remember is that NCAR focuses on the entire continuum, from weather that ranges over hours and days that’s designed to aid the forecasts that keep people safe and protect infrastructure and our economy.

But they also go all the way out to looking decades into the future and really understanding that most important intersection right now, how weather is responding to ongoing climate change. It’s these kinds of questions at the very forefront of our field that they’re focused on right now.

And these are innovations that are going to reap absolutely untold dividends through time. So, by breaking this — these up into its component parts, if you will, first of all, the administration has made clear that it’s the climate portion of the portfolio which they are taking squarely in aim.

And, of course, that is the portion that is right now so important to invest in as we seek to understand more about the coming threats and impacts of ongoing climate change, 2025, of course, wrapping up to be tied for the second warmest year ever.

And, Matthew, the White House has argued that NCAR and its undertakings and its work is somehow contaminated with woke ideology or climate alarmism.

Is there research or data that is coming out of NCAR that is politicized or ideological in any way?

Truth be told, no. The atmosphere does not possess a voter registration card. I think it’s important to remember that scientists do science. And, really, the only people doing the politicization are, for the most part, politicians and the general public and the media.

It worries me, though, that this fits into an overall pattern of the demonization of both science and academia. The idea that we’re trying to shut down science that produces results we don’t like, it’s a very worrisome trend. And we have seen this other times in history, and it never really ended well.

And I’m just very concerned about the tone that’s being taken simply because one political party or even one political person doesn’t like the fact that we’re learning about the atmosphere.

I mean, Kim, as Matthew is saying, we know that President Trump has called climate change a hoax. He believes it’s nothing that we need to focus on. He has directed so much of the administration to move away from climate research and from renewable energy, doubling down on fossil fuels.

I’m just curious as to what — are you worried similarly about the future of climate research if we continue to chip away at these foundational projects?

Well, obviously, as Matthew said, we turn our backs on science at great risk and peril.

And I think most Americans get it right now. In the headlines every year are these horrific climate-fueled disasters that have taken such a toll on communities and our national economy, to the tune of billions of dollars per year. So this is not a controversial subject in that regard. People do want the best available science information to protect themselves.

And that’s exactly what NCAR and its scientists are focused on. And I think the Trump administration is just not understanding how long an investment has been made to get us to this point and the many dividends that we have already reaped and that, once you break something like this, it’s really going to be hard to put it back together again. That’s my concern.

Matthew, the OMB director in his announcement said they will do a systematic review of the work that NCAR does, and critical weather information and research will just get sent somewhere else or done in another location.

How confident are you that a breakup of NCAR could still keep some of the critical elements intact?

I think so many times this administration has historically put the cart before the horse. And what I mean by that is, it seems like just a year ago, they were cutting a huge chunk of National Weather Service forecasters.

And then, after they were laid off, the government realized, oh, wait, we kind of need them, and brought them back. This seems like another really shortsighted decision made by the administration without realizing or fully researching the potential implications of what could happen.

And I also just sort of think this strikes the wrong tone for just about everybody. Weather and climate affect everybody. And, realistically, you’re much more likely to be hit by a tornado or hurricane in a red state. And so I’m surprised by sort of Trump and his administration doing something that could potentially have negative impacts on his own core audience and following.

All right, that is Matthew Cappucci and Kim Cobb.

Thank you both so much for being here. We really appreciate it.

...

Read the original on www.pbs.org »

8 250 shares, 12 trendiness

Rainbow Six Siege Hacked as Players Get Billions of Credits and Random Bans

Rainbow Six Siege is currently facing a major crisis, with mounting evidence pointing to a large-scale hack or exploit, even as Ubisoft continues to describe the situation as a server incident. The disruption has now escalated into full service outages across all platforms, according to Ubisoft’s own service status page.

Players across PC and console are being urged by the community to stay offline, as reports continue to surface of accounts receiving billions of in-game credits, rare and developer-only skins, and experiencing random bans.

BREAKING: Ubisoft Rainbow Six Siege servers have been breached.

Players are reporting massive amounts of R6 Credits, Renown, Alpha Packs, and exclusive items unexpectedly.

Numerous accounts even Ubisoft, including streamers’ and possibly official ones, have received random or… pic.twitter.com/9hGNbBCMAm

The official service status page for Rainbow Six Siege now shows critical issues across every major platform.

On PC, PS4, PS5, Xbox One, and Xbox Series X|S, core services including authentication, in-game store access, and matchmaking are all listed as being in outage, while overall connectivity is marked as degraded. Ubisoft states that the issues are being investigated, but has not provided a timeline for recovery.

The scale of the outages has further fuelled claims that this is far more serious than routine server instability.

Earlier, the official Rainbow Six account acknowledged that it was aware of an incident affecting the game and said teams were working on a resolution. No mention was made of a security breach, hack, or exploit.

We’re aware of an incident currently affecting Rainbow Six Siege. Our teams are working on a resolution.

We will share further updates once available.

That wording has been met with heavy backlash from players, many of whom believe Ubisoft is attempting to downplay the severity of the situation. Community responses have accused the publisher of calling it a server issue while core systems appear compromised.

Some players also questioned why the game remained online for hours while accounts were being altered in real time.

As the incident unfolded, players reported logging in to find their accounts flooded with billions of R6 Credits and Renown, thousands of Alpha Packs, and access to exclusive cosmetics such as developer skins and Glaciers.

Why is nobody talking about what is happening on Siege right now? Everyone that logs in gets 2 Billion R6 credits, Renown, Developer Skins and Glaciers 😭 pic.twitter.com/9SBiLdrirm

There were also widespread claims that the in-game ban feed had been taken over, displaying arbitrary messages. Thousands of accounts were reportedly banned and unbanned at random, including those belonging to streamers and high-profile players.

Prominent Siege creator KingGeorge described the situation as completely broken, warning players not to log in and strongly advising against spending any currency, as it could lead to bans or rollbacks once Ubisoft regains control.

Btw as a side note I would not login right now, definitely don’t spend any credits or renown could lead to a ban.

With authentication and account systems affected, many players now expect Ubisoft to roll accounts back to a previous state. However, fears remain that innocent players could still face penalties, especially those who unknowingly spent credits thinking the issue was a visual glitch.

Others argue that banning a significant portion of the active player base would be disastrous, adding further pressure on Ubisoft to handle the aftermath carefully.

The lack of clear communication has only intensified frustration, with players demanding transparency about what actually happened.

What Players Should Do Right Now

Until Ubisoft confirms the issue is fully resolved and explains the cause, players are strongly advised to stay offline, avoid logging in, and not spend any credits or Renown if access is restored.

With Rainbow Six Siege’s core services offline worldwide and player accounts potentially compromised, this incident could have lasting implications for the game if not handled properly. Ubisoft is expected to provide further updates as its investigation continues.

...

Read the original on www.shanethegamer.com »

9 244 shares, 15 trendiness

How Dad’s Fitness May Be Packaged and Passed Down in Sperm RNA

The standard sperm-meets-egg story posits that sperm cells are hardly more than bundles of shrink-wrapped DNA with tails. Their mission is simple: deliver a father’s genes into a mother’s egg for sexual reproduction. Just about all other aspects of a developing embryo, including its cellular and environmental components, have nothing to do with dad. Those all come from mom.

But nearly two decades of studies from multiple independent labs threaten to rewrite that story. They suggest that dad’s gametes shuttle more than DNA: Within a sperm’s minuscule head are stowaway molecules, which enter the egg and convey information about the father’s fitness, such as diet, exercise habits and stress levels, to his offspring. These non-DNA transfers may influence genomic activity that boots up during and after fertilization, exerting some control over the embryo’s development and influencing the adult they will become.

The findings, so far largely described in mouse models, could end up changing the way we think about heredity. “They suggest that what we do in this life affects the next generation,” said Qi Chen, a reproductive and developmental biologist at the University of Utah Medical School who is among the pioneers of this research. In other words: What a father eats, drinks, inhales, is stressed by or otherwise experiences in the weeks and months before he conceives a child might be encoded in molecules, packaged into his sperm cells and transmitted to his future kid. The researchers have largely zeroed in on RNA molecules, those short-lived copies of DNA that reflect genetic activity at a given time.

It’s a tantalizing notion. But the mechanistic details — how experience is encoded, how it’s transferred from sperm to egg, and whether and how it affects a developing embryo — are not easy to unpack, especially given the challenges of conducting research in human subjects. For this reason, and because of the potentially textbook-rewriting implications of the findings, researchers, including those spearheading the work, are cautious about overselling their results.

“It’s still very hand-wavy,” said the epigeneticist Colin Conine of the University of Pennsylvania Perelman School of Medicine and Children’s Hospital of Philadelphia, who has been trying to uncover the mechanics of how sperm RNA can contribute nongenetic information to progeny. Some elements of the story are clear, he said: Researchers have significant evidence that the environment can regulate sperm RNAs, that these molecules transmit traits to offspring and that they can regulate embryonic development after fertilization. “We just don’t have really any understanding of how RNAs can do this, and that’s the hand-wavy part,” Conine said.

But evidence keeps piling up. Most recently, in November 2025, a comprehensive paper published in Cell Metabolism traced the downstream molecular effects of a father mouse’s exercise regimen on sperm microRNAs that target genes “critical for mitochondrial function and metabolic control” in a developing embryo. The researchers found many of those same RNAs overexpressed in the sperm of well-exercised human men.

“This study shows that paternal exercise can confer benefits — enhanced endurance and metabolic health — to offspring,” said Chen, who was not involved in the study. “It’s a powerful reminder that many sperm-mediated epigenetic effects are deeply adaptive in nature.”

The possibility that a previously undocumented avenue of inheritance is at play is too important to ignore. That’s why the researchers are now hunkering down in their labs to trace out the molecular processes that would have to operate for a father’s here-and-now experience to be transferred as developmental instructions to his partner’s egg.

In most animals, a sperm cell is tiny compared to an egg cell. In humans, an egg contains 10 million times the volume of a sperm and contributes most cellular components — nutrition, cytoplasm, mitochondria and other organelles, the molecular machinery to make proteins, and more — to a zygote (a newly fertilized egg that hasn’t started dividing). Plus, a mother provides the environment within which an embryo and then fetus develops and grows. As a result, the effect of a mother’s health on her children has long been scrutinized, including at the molecular level. But over the past 15 years or so, the evidence for some kind of non-DNA inheritance of paternal experience has also been strengthening.

“There are many different labs that have done diet and stress studies, and typically the readouts of those in the next generation are either metabolism or behavioral changes,” Conine said. Feed a male mouse a high-fat or low-protein diet, or take him away from his mom when he is young, and his offspring will inherit traits, such as changes in mitochondrial function, related to those environmental conditions. These traits aren’t necessarily detrimental. For instance, mouse fathers exposed to nicotine sire male pups with livers that are good at disarming not just nicotine but cocaine and other toxins as well.

There is a survival logic here, said Oliver Rando, an epigeneticist at the University of Massachusetts Chan Medical School who led the nicotine study. It’s reasonable to expect that offspring will experience an environmental context similar to that of their parents. Biologically priming them for those conditions could therefore help them survive.

...

Read the original on www.quantamagazine.org »

10 205 shares, 8 trendiness

Clock Synchronization Is a Nightmare

Time seems simple. But we engineers lose sleep over something as basic as keeping clocks in sync. Here’s why…

The answer lies in this one simple statement: there is no global clock. When you have thousands of machines spread across data centers, continents, and time zones, each operating independently, the simple question of “what time is it?” becomes surprisingly complex.

Clock synchronization sits at the core of some of the most challenging problems in distributed systems, affecting everything from database consistency to debugging to financial transactions.

Every computer has an internal clock, typically driven by a quartz crystal oscillator. These oscillators work by vibrating at a specific frequency when voltage is applied. The standard frequency for most computer clocks is 32768 Hz, chosen because it is a power of two (2^15), so a simple binary counter overflows exactly once per second, making the countdown to one second straightforward.

The catch: quartz crystals are not perfect. Their oscillation frequency varies based on many factors; here are a few…

Temperature is the biggest culprit. Standard quartz crystals exhibit frequency drift in the tens of parts per million when temperature changes. A temperature deviation of ~10 degrees Celsius can cause drift equivalent to about 110 seconds per year. The crystal vibrates faster or slower depending on ambient temperature, and data center environments are not perfectly controlled.

Another culprit is manufacturing variation. No two crystals are identical. Even crystals from the same production batch will have slightly different characteristics. Aging compounds this problem as crystals change properties over time.

The result is that two computers started at exactly the same time, never communicating with each other, will inevitably drift apart. After just one day, they might differ by hundreds of milliseconds. After a month, they could be seconds apart.
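A quick back-of-the-envelope sketch of how a small frequency error accumulates (the 5 ppm figure is illustrative, within the ranges discussed above):

# Drift accumulated over an interval for a given frequency error
# expressed in parts per million (ppm).
def drift_seconds(interval_seconds, error_ppm):
    return interval_seconds * error_ppm * 1e-6

print(drift_seconds(86_400, 5))       # ~0.43 s after one day at 5 ppm
print(drift_seconds(30 * 86_400, 5))  # ~13 s after a month at 5 ppm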

Clock skew = the difference in time between two clocks at any given instant. Clock drift = the rate at which clocks diverge over time. Both cause serious problems in distributed systems.

Consider a sim­ple ex­am­ple with a dis­trib­uted make sys­tem. You edit a source file on your client ma­chine, which has a clock slightly be­hind the server where the com­piled ob­ject file lives. When make runs, it com­pares time­stamps. If the server clock is ahead, the ob­ject file ap­pears newer than the source file you just edited, and make does not re­com­pile. Your changes silently dis­ap­pear from the build.

Scenario: UNIX make with un­syn­chro­nized clocks

Client ma­chine clock: 10:00:00 (lagging)

Server ma­chine clock: 10:00:05 (ahead)

1. Edit util.c at client time 10:00:00

2. util.o on server has time­stamp 10:00:03

3. Make com­pares: util.o (10:00:03) vs util.c (10:00:00)

4. Conclusion: util.o is newer, skip re­com­pi­la­tion

5. Result: Your changes are ig­nored
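The failure comes down to comparing raw timestamps taken from two different clocks. A minimal sketch of that check (my own illustration; the helper name and numbers are hypothetical):

# mtimes in seconds since midnight; util.o's timestamp comes from
# the server's clock, which runs 5 s ahead of the client's.
source_mtime = 36000   # util.c saved at client time 10:00:00
object_mtime = 36003   # util.o written at server time 10:00:03

def needs_rebuild(source: float, target: float) -> bool:
    # make's rule: rebuild only when the source is newer than the target
    return source > target

print(needs_rebuild(source_mtime, object_mtime))  # False: the edit is silently skipped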

Database sys­tems face even more crit­i­cal time­stamp is­sues. When two trans­ac­tions hap­pen at nearly the same time on dif­fer­ent nodes, the data­base must de­ter­mine which hap­pened first. If clocks are out of sync, the data­base might or­der them in­cor­rectly, vi­o­lat­ing con­sis­tency guar­an­tees.

Imagine a bank­ing sys­tem where a cus­tomer de­posits money at one branch (Node A) and im­me­di­ately with­draws at an­other branch (Node B). If Node B clock is be­hind Node A, the with­drawal trans­ac­tion might get a time­stamp ear­lier than the de­posit. A snap­shot read at the wrong time could show the with­drawal but not the de­posit, mak­ing it ap­pear the cus­tomer with­drew money they did not have.

Logging and de­bug­ging be­come nearly im­pos­si­ble when clocks dis­agree. Distributed trac­ing re­lies on time­stamps to re­con­struct the se­quence of events across ser­vices. When clocks are skewed, the re­sult­ing traces show im­pos­si­ble se­quences where ef­fects ap­pear be­fore causes.

The sim­plest ap­proach to clock syn­chro­niza­tion is to pe­ri­od­i­cally query a trusted time server and ad­just lo­cal clocks ac­cord­ingly. Let’s look at dif­fer­ent al­go­rithms and ap­proaches based on this…

Cristian's algorithm, proposed in 1989, works with a centralized time server assumed to have accurate time. A client requests the time, the server responds with its current time, and the client adjusts.

The challenge is network delay. By the time the response arrives, the server's time is already stale. Cristian's algorithm estimates the one-way delay as half the round-trip time.

# Cristian's Algorithm
def synchronize_clock():
    t0 = local_time()  # Record time before request
    server_time = request_time_from_server()
    t1 = local_time()  # Record time after response

    round_trip = t1 - t0
    one_way_delay = round_trip / 2

    # Adjust local clock
    new_time = server_time + one_way_delay
    set_local_clock(new_time)

    # Error bound: +/- (t1 - t0) / 2

This works rea­son­ably well when net­work de­lays are sym­met­ric, mean­ing re­quest and re­sponse take the same time. In prac­tice, de­lays are of­ten asym­met­ric due to dif­fer­ent rout­ing paths, vary­ing net­work con­ges­tion, and pro­cess­ing de­lays.
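The size of the resulting error is easy to work out: if the request takes d_req and the response takes d_resp, the clock ends up off by (d_req - d_resp) / 2. A small sketch with hypothetical numbers:

# Error from asymmetric delays in Cristian's algorithm (hypothetical numbers).
d_req, d_resp = 0.080, 0.020               # 80 ms out, 20 ms back

estimated_one_way = (d_req + d_resp) / 2   # 50 ms assumed by the algorithm
error = estimated_one_way - d_resp         # vs. what the response actually took
print(error)  # ~0.03 -> the local clock is set 30 ms too far ahead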

The Berkeley al­go­rithm takes a dif­fer­ent ap­proach, as­sum­ing no sin­gle ma­chine has an ac­cu­rate time. Instead, it uses con­sen­sus among mul­ti­ple ma­chines.

A designated time daemon periodically polls all machines for their clock values. It computes the average, discards outliers, and tells each machine how much to adjust. Rather than sending absolute times, which would suffer from network delays, it sends relative adjustments (see the sketch after the steps below).

Berkeley Algorithm Steps:

1. Time dae­mon polls ma­chines: What time do you have?”

2. Responses: Machine A: 10:00:05, Machine B: 10:00:02, Machine C: 10:00:08

3. Time dae­mon clock: 10:00:04

4. Average: (5 + 2 + 8 + 4) / 4 = 4.75 → 10:00:05

5. Adjustments sent:

- Machine A: slow down by 0s (already at tar­get)

- Machine B: speed up by 3s

- Machine C: slow down by 3s

- Daemon: speed up by 1s
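A minimal sketch of that averaging step, assuming the daemon already has every machine's reported clock reading (the names and data layout are mine, not the article's):

# Clock readings as seconds past 10:00:00 (from the steps above).
reported = {"daemon": 4, "A": 5, "B": 2, "C": 8}

target = round(sum(reported.values()) / len(reported))  # 4.75 -> 5

# Each machine receives a *relative* adjustment, not an absolute time.
adjustments = {name: target - t for name, t in reported.items()}
print(adjustments)  # {'daemon': 1, 'A': 0, 'B': 3, 'C': -3}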

A crit­i­cal de­tail: com­put­ers should never jump their clocks back­ward. Doing so vi­o­lates the as­sump­tion of mo­not­o­nic time that many al­go­rithms de­pend on. Instead of rewind­ing, the Berkeley al­go­rithm slows clocks grad­u­ally to let them catch up.
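In practice this means slewing: the clock keeps moving forward but runs slightly slow (or fast) until the correction is absorbed. A toy sketch of the idea (illustrative only; real systems use adjtime-style kernel interfaces):

# Absorb a correction at most 0.5 ms per elapsed second,
# so the clock never moves backward.
MAX_SLEW = 0.0005

def slewed_tick(remaining: float) -> tuple[float, float]:
    step = max(-MAX_SLEW, min(MAX_SLEW, remaining))
    return 1.0 + step, remaining - step   # tick length, correction left

remaining = -0.003   # need to lose 3 ms
while remaining != 0:
    tick, remaining = slewed_tick(remaining)
    assert tick > 0   # time stays monotonic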

NTP uses a hi­er­ar­chi­cal sys­tem of time servers or­ga­nized into strata.

Stratum 0 devices are high precision time sources like atomic clocks and GPS receivers. Stratum 1 servers connect directly to stratum 0 sources. Stratum 2 servers synchronize with stratum 1, and so on down the hierarchy, with stratum numbers increasing up to 15.

NTP can typ­i­cally main­tain time within tens of mil­lisec­onds over the pub­lic in­ter­net and can achieve sub-mil­lisec­ond ac­cu­racy on lo­cal area net­works. However, sev­eral fac­tors limit its pre­ci­sion.

NTP Accuracy Limitations:

- Public in­ter­net: 10-100 ms typ­i­cal

- LAN with good con­di­tions: 100-500 µs

- Network asym­me­try: can cause 100+ ms er­rors

- Variable la­tency: in­tro­duces jit­ter

- Operating sys­tem de­lays: soft­ware time­stamps add mi­crosec­onds

Network asym­me­try is par­tic­u­larly prob­lem­atic. If the path from client to server dif­fers from server to client, the as­sump­tion that one way de­lay equals half the round-trip breaks down. Satellite links where up­link and down­link have dif­fer­ent la­ten­cies are a clas­sic ex­am­ple.
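NTP's actual exchange records four timestamps rather than two, which is what the standard offset and delay formulas are built on, but asymmetry still leaks into the offset. A sketch using those well-known formulas (the example numbers are mine):

# T1 = client send, T2 = server receive, T3 = server send, T4 = client receive.
def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

# Symmetric 50 ms paths, client clock 10 ms behind the server:
print(ntp_offset_delay(0.000, 0.060, 0.061, 0.101))  # ~ (0.010, 0.100)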

Operating sys­tem over­head adds un­cer­tainty. When an NTP packet ar­rives, it passes through the net­work stack, gets time­stamped by the ker­nel, and even­tu­ally reaches the NTP dae­mon. Each step in­tro­duces vari­able de­lays mea­sured in mi­crosec­onds.

For many ap­pli­ca­tions, NTP ac­cu­racy is suf­fi­cient. Web servers, file sys­tems, and most busi­ness ap­pli­ca­tions tol­er­ate clocks be­ing tens of mil­lisec­onds apart. But some do­mains de­mand much tighter syn­chro­niza­tion.

Financial trad­ing sys­tems mea­sure la­tency in mi­crosec­onds. A trade time­stamped in­cor­rectly by even a few mil­lisec­onds can have sig­nif­i­cant le­gal and fi­nan­cial im­pli­ca­tions. High-frequency trad­ing strate­gies de­pend on know­ing the pre­cise or­der of events.

Telecommunications sys­tems re­quire syn­chro­niza­tion for TDM (Time Division Multiplexing) where dif­fer­ent users share a chan­nel by tak­ing turns. If tim­ing drifts, trans­mis­sions from dif­fer­ent users col­lide.

Scientific ex­per­i­ments, par­tic­u­larly in physics, need nanosec­ond pre­ci­sion to cor­re­late mea­sure­ments across in­stru­ments.

PTP, de­fined by IEEE 1588, achieves sub-mi­crosec­ond ac­cu­racy by us­ing hard­ware time­stamp­ing. Instead of the op­er­at­ing sys­tem record­ing when a packet ar­rived, spe­cial­ized net­work in­ter­face cards time­stamp pack­ets as they cross the wire, elim­i­nat­ing soft­ware de­lays.

PTP vs NTP Precision:

- NTP: mil­lisec­onds (software time­stamp­ing)

- PTP: nanosec­onds (hardware time­stamp­ing)

Key PTP im­prove­ments:

- Hardware time­stamps at NIC level

- Boundary clocks at switches main­tain pre­ci­sion

- Two-way mes­sage ex­change cal­cu­lates asym­met­ric de­lays

PTP re­quires sup­port through­out the net­work path. Switches must be PTP aware, act­ing as bound­ary clocks that main­tain syn­chro­niza­tion hop by hop. This makes PTP ex­pen­sive to de­ploy but es­sen­tial for ap­pli­ca­tions re­quir­ing nanosec­ond pre­ci­sion.
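The message exchange itself mirrors the two-way logic above: the master's Sync gives the slave (t1, t2), and the slave's Delay_Req gives (t3, t4), all taken from hardware timestamps. A sketch of the standard IEEE 1588 arithmetic (variable names are mine):

# t1 = master sends Sync, t2 = slave receives it,
# t3 = slave sends Delay_Req, t4 = master receives it.
def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2            # slave clock error vs. master
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2   # assumes symmetric paths
    return offset, mean_path_delay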

Meta an­nounced in 2022 that they were mi­grat­ing from NTP to PTP across their data cen­ters. The in­vest­ment in PTP in­fra­struc­ture paid off in re­duced er­rors and bet­ter de­bug­ging ca­pa­bil­ity.

Lamport in­tro­duced the con­cept of log­i­cal clocks based on a sim­ple ob­ser­va­tion: if two events are causally re­lated, we should be able to or­der them. If event A sends a mes­sage that event B re­ceives, A hap­pened be­fore B. If both events hap­pen on the same process, the ear­lier one hap­pens be­fore the later one.

Events that are not con­nected by any chain of causal­ity are con­cur­rent. They could have hap­pened in ei­ther or­der, and from the sys­tem’s per­spec­tive, there is no mean­ing­ful way to dis­tin­guish.

Lamport time­stamps im­ple­ment this in­tu­ition with a sim­ple al­go­rithm. Each process main­tains a counter. Before any event, in­cre­ment the counter. When send­ing a mes­sage, in­clude the counter value. When re­ceiv­ing a mes­sage, set your counter to the max­i­mum of your cur­rent value and the re­ceived value, then in­cre­ment.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send_event(self):
        self.time += 1
        return self.time  # Include in message

    def receive_event(self, received_time):
        self.time = max(self.time, received_time) + 1
        return self.time

If event A has a lower Lamport time­stamp than event B, we know one of two things: ei­ther A hap­pened be­fore B, or they are con­cur­rent. The con­verse is guar­an­teed: if A hap­pened be­fore B, then A has a lower time­stamp than B.

Process P1: [0] --(1)--> send m --(2)--> local event
                           \
                            v
Process P2: [0] -----------> receive m --(2)--> local event --(3)

P1 events: (1, 2)
P2 events: (2, 3)

The receive on P2 happens after the send on P1 (causality preserved).
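Running the article's LamportClock through that exact exchange reproduces the numbers in the diagram:

p1, p2 = LamportClock(), LamportClock()

m = p1.send_event()             # P1 sends m      -> 1
p1_local = p1.local_event()     # P1 local event  -> 2

p2_recv = p2.receive_event(m)   # P2 receives m   -> max(0, 1) + 1 = 2
p2_local = p2.local_event()     # P2 local event  -> 3

print((m, p1_local), (p2_recv, p2_local))  # (1, 2) (2, 3)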

The lim­i­ta­tion is that Lamport time­stamps can­not tell you if two events are con­cur­rent. Events with time­stamps 5 and 7 might be causally re­lated or might have hap­pened in­de­pen­dently on dif­fer­ent processes with no com­mu­ni­ca­tion be­tween them.

Vector clocks ex­tend Lamport time­stamps to cap­ture full causal­ity in­for­ma­tion. Instead of a sin­gle counter, each process main­tains a vec­tor with an en­try for every process in the sys­tem.

class VectorClock:
    def __init__(self, process_id, num_processes):
        ...
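The article's implementation is truncated here, but the core idea can be sketched independently (this is my own minimal version, not the article's code): each slot counts events from one process, and an element-wise comparison tells ordered and concurrent events apart.

class VectorClockSketch:
    def __init__(self, process_id: int, num_processes: int):
        self.pid = process_id
        self.clock = [0] * num_processes

    def local_event(self):
        self.clock[self.pid] += 1
        return list(self.clock)

    def receive_event(self, received):
        # Merge: take the element-wise max, then count this event.
        self.clock = [max(a, b) for a, b in zip(self.clock, received)]
        self.clock[self.pid] += 1
        return list(self.clock)

def happened_before(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

# Neither vector dominates the other -> the events were concurrent.
print(happened_before([2, 0], [0, 1]), happened_before([0, 1], [2, 0]))  # False False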

Read the original on arpitbhayani.me »
