10 interesting stories served every morning and every evening.
Gemini 3 Pro represents a generational leap from simple recognition to true visual and spatial reasoning. It is our most capable multimodal model ever, delivering state-of-the-art performance across document, spatial, screen and video understanding. This model sets new highs on vision benchmarks such as MMMU Pro and Video MMMU for complex visual reasoning, as well as use-case-specific benchmarks across document, spatial, screen and long video understanding.
Real-world documents are messy, unstructured, and difficult to parse — often filled with interleaved images, illegible handwritten text, nested tables, complex mathematical notation and non-linear layouts. Gemini 3 Pro represents a major leap forward in this domain, excelling across the entire document processing pipeline — from highly accurate Optical Character Recognition (OCR) to complex visual reasoning.

To truly understand a document, a model must accurately detect and recognize text, tables, math formulas, figures and charts regardless of noise or format.

A fundamental capability is “derendering” — the ability to reverse-engineer a visual document back into structured code (HTML, LaTeX, Markdown) that would recreate it. As illustrated below, Gemini 3 demonstrates accurate perception across diverse modalities, including converting an 18th-century merchant log into a complex table or transforming a raw image with mathematical annotation into precise LaTeX code.
Example 2: Reconstructing equations from an image
Example 3: Reconstructing Florence Nightingale’s original Polar Area Diagram into an interactive chart (with a toggle!)
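As a concrete sketch of what a derendering request might look like through the API, here is a hypothetical call that asks the model to reconstruct a scanned page as a Markdown table plus LaTeX. It assumes the google-genai Python SDK; the model name and file names are placeholders, not the exact setup behind the examples above.

# Minimal sketch: "derendering" a scanned document into structured markup.
# Assumes the google-genai SDK; model and file names are placeholders.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Load the scanned page as an inline image part.
with open("merchant_log.jpg", "rb") as f:
    page = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder model name
    contents=[
        page,
        "Derender this page: reproduce the ledger as a Markdown table and "
        "render any mathematical annotations as LaTeX, preserving layout.",
    ],
)
print(response.text)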
Users can rely on Gemini 3 to perform complex, multi-step reasoning across tables and charts — even in long reports. In fact, the model notably outperforms the human baseline on the CharXiv Reasoning benchmark (80.5%).

To illustrate this, imagine a user analyzing the 62-page U.S. Census Bureau “Income in the United States: 2022” report with the following prompt: “Compare the 2021–2022 percent change in the Gini index for “Money Income” versus “Post-Tax Income”, and what caused the divergence in the post-tax measure, and in terms of “Money Income”, does it show the lowest quintile’s share rising or falling?”

The model’s step-by-step reasoning is summarized below.
Visual Extraction: To answer the Gini index comparison, Gemini located and cross-referenced the relevant figures: Figure 3 shows that “Money Income” decreased by 1.2 percent, while Table B-3 shows that “Post-Tax Income” increased by 3.2 percent.
Causal Logic: Crucially, Gemini 3 does not stop at the numbers; it correlates this gap with the report’s policy analysis, correctly identifying the lapse of ARPA policies and the end of stimulus payments as the main causes.
Numerical Comparison: To determine whether the lowest quintile’s share was rising or falling, Gemini 3 looked at Table A-3, compared the figures of 2.9 and 3.0, and concluded that “the share of aggregate household income held by the lowest quintile was rising.”
Gemini 3 Pro is our strongest spatial understanding model so far. Combined with its strong reasoning, this enables the model to make sense of the physical world.

Pointing capability: Gemini 3 has the ability to point at specific locations in images by outputting pixel-precise coordinates. Sequences of 2D points can be strung together to perform complex tasks, such as estimating human poses or reflecting trajectories over time.

Open vocabulary references: Gemini 3 identifies objects and their intent using an open vocabulary. The most direct application is robotics: the user can ask a robot to generate spatially grounded plans like, “Given this messy table, come up with a plan on how to sort the trash.” This also extends to AR/XR devices, where the user can request an AI assistant to “Point to the screw according to the user manual.”
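To make the pointing capability concrete, here is a minimal sketch of such a request. It assumes the google-genai Python SDK and the normalized [y, x] (0 to 1000) point convention Google has used in earlier spatial-understanding examples; treat the model name and output format as assumptions.

# Minimal sketch: asking the model to point at objects (assumed SDK surface).
import json

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("messy_table.jpg", "rb") as f:
    image = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder model name
    contents=[
        image,
        'Point to every piece of trash on the table. Answer with a JSON list '
        'of {"label": ..., "point": [y, x]}, coordinates normalized to 0-1000.',
    ],
    # Ask for raw JSON so the output parses cleanly.
    config=types.GenerateContentConfig(response_mime_type="application/json"),
)

# Map the normalized points back onto the original image as needed.
for item in json.loads(response.text):
    print(item["label"], item["point"])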
Gemini 3.0 Pro’s spatial understanding really shines in its handling of desktop and mobile OS screens. This reliability helps make computer use agents robust enough to automate repetitive tasks. UI understanding capabilities can also enable tasks like QA testing, user onboarding and UX analytics. The following computer use demo shows the model perceiving and clicking with high precision.
Gemini 3 Pro takes a massive leap forward in how AI understands video, the most complex data format we interact with. It is dense, dynamic, multimodal and rich with context.

1. High frame rate understanding: We have optimized the model to be much stronger at understanding fast-paced actions when sampling at >1 frame per second. Gemini 3 Pro can capture rapid details — vital for tasks like analyzing golf swing mechanics.
By processing video at 10 FPS—10x the default speed—Gemini 3 Pro catches every swing and shift in weight, unlocking deep insights into player mechanics.
2. Video reasoning with “thinking” mode: We upgraded “thinking” mode to go beyond object recognition toward true video reasoning. The model can now better trace complex cause-and-effect relationships over time. Instead of just identifying what is happening, it understands why it is happening.

3. Turning long videos into action: Gemini 3 Pro bridges the gap between video and code. It can extract knowledge from long-form content and immediately translate it into functioning apps or structured code.
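As a sketch of the high-frame-rate path, here is a hypothetical call that samples an uploaded clip at 10 FPS instead of the default. It assumes the google-genai Python SDK's video metadata options; the model name is a placeholder.

# Minimal sketch: analyzing fast motion at 10 FPS (assumed SDK surface).
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Upload the clip via the Files API, then reference it with custom sampling.
video = client.files.upload(file="golf_swing.mp4")

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder model name
    contents=types.Content(parts=[
        types.Part(
            file_data=types.FileData(file_uri=video.uri),
            video_metadata=types.VideoMetadata(fps=10),  # 10x default sampling
        ),
        types.Part(text=(
            "Break down the swing mechanics: grip, backswing, "
            "weight shift, and follow-through, with timestamps."
        )),
    ]),
)
print(response.text)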
Here are a few ways we think various fields will benefit from Gemini 3’s capabilities.

Gemini 3.0 Pro’s enhanced vision capabilities drive significant gains in the education field, particularly for the diagram-heavy questions central to math and science. It successfully tackles the full spectrum of multimodal reasoning problems found in curricula from middle school through post-secondary. This includes visual reasoning puzzles (like Math Kangaroo) and complex chemistry and physics diagrams.

Gemini 3’s visual intelligence also powers the generative capabilities of Nano Banana Pro. By combining advanced reasoning with precise generation, the model can, for example, help users identify exactly where they went wrong in a homework problem.
Prompt: “Here is a photo of my homework attempt. Please check my steps and tell me where I went wrong. Instead of explaining in text, show me visually on my image.” (Note: Student work is shown in blue; model corrections are shown in red). [See prompt in Google AI Studio]
Gemini 3 Pro stands as our most capable general model for medical and biomedical imagery understanding, achieving state-of-the-art performance across major public benchmarks: MedXpertQA-MM (a difficult expert-level medical reasoning exam), VQA-RAD (radiology imagery Q&A) and MicroVQA (a multimodal reasoning benchmark for microscopy-based biological research).
Gemini 3 Pro’s enhanced document understanding helps professionals in finance and law tackle highly complex workflows. Finance platforms can seamlessly analyze dense reports filled with charts and tables, while legal platforms benefit from the model’s sophisticated document reasoning.
Gemini 3 Pro improves the way it processes visual inputs by preserving the native aspect ratio of images. This drives significant quality improvements across the board.
Additionally, developers gain granular control over performance and cost via the new media_resolution parameter. This allows you to tune visual token usage to balance fidelity against consumption:

High resolution: Maximizes fidelity for tasks requiring fine detail, such as dense OCR or complex document understanding.

Low resolution: Optimizes for cost and latency on simpler tasks, such as general scene recognition or long-context tasks.

For specific recommendations, refer to our Gemini 3.0 Documentation Guide. We are excited to see what you build with these new capabilities. To get started, check out our developer documentation or play with the model in Google AI Studio today.
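As a sketch of that media_resolution control in practice, here is a hypothetical per-request configuration using the google-genai Python SDK; the enum names follow the SDK's published surface but should be treated as assumptions.

# Minimal sketch: trading visual-token cost against fidelity (assumed enums).
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("dense_invoice.png", "rb") as f:
    image = types.Part.from_bytes(data=f.read(), mime_type="image/png")

# High resolution: dense OCR / complex document understanding.
detailed = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder model name
    contents=[image, "Extract every line item into a Markdown table."],
    config=types.GenerateContentConfig(
        media_resolution=types.MediaResolution.MEDIA_RESOLUTION_HIGH,
    ),
)

# Low resolution: cheaper, faster scene-level gist.
gist = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents=[image, "In one sentence, what kind of document is this?"],
    config=types.GenerateContentConfig(
        media_resolution=types.MediaResolution.MEDIA_RESOLUTION_LOW,
    ),
)
print(detailed.text, gist.text, sep="\n---\n")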
...
Read the original on blog.google »
Skip to main content
I once worked at a company which had an enormous amount of technical debt - millions of lines of code, no unit tests, based on frameworks that were well over a decade out of date. On one specific project, we had a market need to get some Windows-only modules running on Linux, and rather than cross-compiling, another team had simply copied & pasted a few hundred thousand lines of code, swapping Windows-specific components for Linux-specific. For the non-technical reader, this is an enormous problem because now two versions of the code exist. So, all features & bug fixes must be solved in two separate codebases that will grow apart over time. When I heard about this, a young & naive version of me set out to fix the situation…

Tech debt projects are always a hard sell to management, because even if everything goes flawlessly, the code just does roughly what it did before. This project was no exception, and the optics weren’t great. I did as many engineers do and “ignored the politics”, put my head down, and got it done. But, the project went long, and I lost management’s trust in the process.

I realized I was essentially trying to solve a people problem with a technical solution. Most of the developers at this company were happy doing the same thing today that they did yesterday…and five years ago. As Andrew Harmel-Law points out, code tends to follow the personalities of the people that wrote it. Personality types who intensely dislike change tend not to design their code with future change in mind.

Most technical problems are really people problems. Think about it. Why does technical debt exist? Because requirements weren’t properly clarified before work began. Because a salesperson promised an unrealistic deadline to a customer. Because a developer chose an outdated technology because it was comfortable. Because management was too reactive and cancelled a project mid-flight. Because someone’s ego wouldn’t let them see a better way of doing things.

The core issue with the project was that admitting the need for refactoring was also to admit that the way the company was building software was broken and that individual skillsets were sorely out of date. My small team was trying to fix one module of many, while other developers were writing code as they had been for decades. I had one developer openly tell me, “I don’t want to learn anything new.” I realized that you’ll never clean up tech debt faster than others create it. It is like triage in an emergency room: you must stop the bleeding first, then you can fix whatever is broken.

The project also disabused me of the engineer’s ideal of a world in which engineering problems can be solved in a vacuum - staying out of “politics” and letting the work speak for itself - a world where deadlines don’t exist…and let’s be honest, neither do customers. This ideal world rarely exists. The vast majority of projects have non-technical stakeholders, and telling them “just trust me; we’re working on it” doesn’t cut it. I realized that the perception that your team is getting a lot done is just as important as getting a lot done.

Non-technical people do not intuitively understand the level of effort required or the need for tech debt cleanup; it must be communicated effectively by engineering - in both initial estimates & project updates. Unless leadership has an engineering background, the value of the technical debt work likely needs to be quantified and shown as business value.

Perhaps these are the lessons that prep one for more senior positions.
In my opinion, anyone above senior engineer level needs to know how to collaborate cross-functionally, regardless of whether they choose a technical or management track. Schools teach Computer Science, not navigating personalities, egos, and personal blindspots. I have worked with some incredible engineers, better than myself - the type that have deep technical knowledge on just about any technology you bring up. When I was younger, I wanted to be that engineer - the “engineer’s engineer”. But I realize now, that is not my personality. I’m too ADD to be completely heads down. :)

For all of their (considerable) strengths, more often than not, those engineers shy away from the interpersonal. They can be incredibly productive ICs, but may fail with bigger initiatives because they are only one person - a single processor core can only go so fast. Perhaps equally valuable is the “heads up coder” - the person who is deeply technical, but also able to pick their head up & see project risks coming (technical & otherwise) and steer the team around them.
You start out your day with a nice cup of coffee, and think, “Ah, greenfield project day…smooth sailing”. You fire up Visual Studio and create a new C# project. “First things first, I need library X,” you say. “Wait, what the?” The full error: Package ‘MyPackage 1.0.0.0’ was restored using ‘.NETFramework,Version=v4.6.1, .NETFramework,Version=v4.6.2, .NETFramework,Version=v4.7, .NETFramework,Version=v4.7.1, .NETFramework,Version=v4.7.2, .NETFramework,Version=v4.8, .NETFramework,Version=v4.8.1’ instead of the project target framework ‘net6.0’. This package may not be fully compatible with your project. “Ok,” you think, “That library is a bit older. I’ll go update the library project to .NET 6 to match my project. But, where is .NET 6?” “Ok, what about my new project? Just as a test, does the warning go away if I set it to an older .NET Framework? Wait, where are the .NET Framework versions?”…
Pointers are funny things. They are one of the make or break concepts for beginners, and even years later, they can cause grief to experienced developers. I am no exception. Here is one such story: I was faced with a class which I wanted to refactor. It can be simplified as follows: In the interest of breaking up the responsibilities, I added a couple of interfaces. The idea is that I can pass around smart pointers to these interfaces and begin to decouple portions of the code. For example, I can inject them into classes that need them: However, I made a mistake. I blame it on years of using boost::intrusive_ptr instead of std::shared_ptr, but enough excuses. Let’s see if you can spot it. Do you see it? If so, give yourself 5 points. If not, maybe after seeing the output: ‘second’ going out of scope… Destructor ‘first’ going out of scope… ‘root’ going out of scope… All done… Both shared pointers ( first & second…
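The excerpt cuts off before revealing the bug, but a classic pitfall in this area (one that boost::intrusive_ptr's intrusive refcount forgives) is constructing two independent shared_ptrs from the same raw pointer, giving the object two control blocks. A minimal illustration of that mistake, not necessarily the author's exact bug:

// Sketch of the classic two-control-block mistake (not necessarily the
// author's exact bug): each shared_ptr built from the raw pointer keeps
// its own reference count, so the object is deleted twice.
#include <iostream>
#include <memory>

struct Widget {
    ~Widget() { std::cout << "Destructor\n"; }
};

int main() {
    Widget* raw = new Widget;
    std::shared_ptr<Widget> first(raw);       // control block #1
    {
        std::shared_ptr<Widget> second(raw);  // control block #2: the bug
        std::cout << "'second' going out of scope...\n";
    }  // second's refcount hits zero and deletes the Widget here
    std::cout << "'first' going out of scope...\n";
    return 0;  // first deletes the already-freed Widget: undefined behavior
}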
...
Read the original on blog.joeschrag.com »
For every cloud service I use, I want to have a local copy of my data for backup purposes and independence. Unfortunately, the gphotos-sync tool stopped
working in March
2025 when Google restricted the OAuth scopes, so I needed an alternative for my existing Google Photos setup. In this post, I describe how I have set up
Immich, a self-hostable photo manager.
Here is the end result: a few (live) photos from NixCon
2025.
I am running Immich on my Ryzen 7 Mini PC (ASRock DeskMini
X600), which consumes less than 10 W of power in idle and has plenty of resources for VMs (64 GB RAM, 1 TB disk). You can read more about it in my blog post from July 2024:
I installed Proxmox, an Open Source virtualization platform, to divide this mini server into VMs, but you could of course also install Immich directly on any server.
I created a VM (named “photos”) with 500 GB of disk space, 4 CPU cores and 4 GB of RAM.
For the initial import, you could assign more CPU and RAM, but for normal usage, that’s enough.
I (declaratively) installed
NixOS on that VM as described in this blog post:
Afterwards, I enabled Immich in my NixOS configuration.
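A minimal sketch of such a configuration, assuming the nixpkgs Immich module's standard options (the exact settings used may differ):

# Minimal sketch, assuming the nixpkgs Immich module; exact settings may differ.
{
  services.immich = {
    enable = true;
    host = "127.0.0.1";  # listen on localhost only; Tailscale fronts it
    port = 2283;         # the default Immich port
    # Media and state live under /var/lib/immich by default.
  };
}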
At this point, Immich is available on localhost, but not over the network, because NixOS enables a firewall by default. I could enable the
services.immich.openFirewall option, but I actually want Immich to only be available via my Tailscale VPN, for which I don’t need to open firewall access — instead, I use tailscale serve to forward traffic to localhost:2283:
photos# tailscale serve --bg http://localhost:2283
Because I have Tailscale’s MagicDNS
and TLS certificate provisioning
enabled, that means I can now open https://photos.example.ts.net in my browser on my PC, laptop or phone.
At first, I tried importing my photos using the official Immich CLI:
% nix run nixpkgs#immich-cli -- login https://photos.example.ts.net secret
% nix run nixpkgs#immich-cli -- upload --recursive /home/michael/lib/photo/gphotos-takeout
Unfortunately, the upload was not running reliably and had to be restarted manually a few times after running into a timeout. Later I realized that this was because the Immich server runs background jobs like thumbnail creation, metadata extraction or face detection, and these background jobs slow down the upload to the extent that the upload can fail with a timeout.
The other issue was that even after the upload was done, I realized that Google Takeout archives for Google Photos contain metadata in separate JSON files next to the original image files:
Unfortunately, these files are not considered by immich-cli.
Luckily, there is a great third-party tool called
immich-go, which solves both of these issues! It pauses background tasks before uploading and restarts them afterwards, which works much better, and it does its best to understand Google Takeout archives.
I ran immich-go as follows and it worked beautifully:
% immich-go \
upload \
from-google-photos \
--server=https://photos.example.ts.net \
--api-key=secret \
~/Downloads/takeout-*.zip
My main source of new photos is my phone, so I installed the Immich app on my iPhone, logged into my Immich server via its Tailscale URL and enabled automatic backup of new photos via the icon at the top right.
I am not 100% sure whether these settings are correct, but it seems like camera photos generally go into Live Photos, and Recent should cover other files…?!
If anyone knows, please send an explanation (or a link!) and I will update the article.
I also strongly recommend disabling notifications for Immich, because otherwise you get notifications whenever it uploads images in the background. These notifications are not required for background upload to work, as an Immich
developer confirmed on
Reddit. Open
Settings → Apps → Immich → Notifications and un-tick the permission checkbox:
Immich’s documentation on
backups contains some good recommendations. The Immich developers recommend backing up the entire contents of UPLOAD_LOCATION, which is /var/lib/immich on NixOS. The
backups subdirectory contains SQL dumps, whereas the 3 directories upload,
library and profile contain all user-uploaded data.
Hence, I have set up a systemd timer that runs rsync to copy /var/lib/immich
onto my PC, which is enrolled in a 3-2-1 backup
scheme.
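A declarative sketch of such a timer on NixOS, assuming a push-style rsync from the VM (the actual setup may pull from the PC instead; host name and schedule are hypothetical):

# Sketch: periodic rsync of Immich state to another machine.
# Host name and schedule are hypothetical.
{ pkgs, ... }:
{
  systemd.services.immich-backup = {
    serviceConfig.Type = "oneshot";
    path = [ pkgs.rsync pkgs.openssh ];
    script = ''
      rsync -a /var/lib/immich/ backup-pc:/backup/immich/
    '';
  };
  systemd.timers.immich-backup = {
    wantedBy = [ "timers.target" ];
    timerConfig.OnCalendar = "daily";
  };
}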
Immich (currently?) does not contain photo editing features, so to rotate or crop an image, I download the image and use GIMP.
To share images, I still upload them to Google Photos (depending on who I share them with).
The two most promising options in the space of self-hosted image management tools seem to be Immich and Ente.
I got the impression that Immich is more popular in my bubble, and Ente struck me as having a far larger scope than what I am looking for:
Ente is a service that provides a fully open source, end-to-end encrypted platform for you to store your data in the cloud without needing to trust the service provider. On top of this platform, we have built two apps so far: Ente Photos (an alternative to Apple and Google Photos) and Ente Auth (a 2FA alternative to the deprecated Authy).
I don’t need an end-to-end encrypted platform. I already have encryption on the transit layer (Tailscale) and disk layer (LUKS), no need for more complexity.
Immich is a delightful app! It’s very fast and generally seems to work well.
The initial import is smooth, but only if you use the right tool. Ideally, the official immich-cli could be improved. Or maybe immich-go could be made the official one.
I think the auto backup is too hard to configure on an iPhone, so that could also be improved.
But aside from these initial stumbling blocks, I have no complaints.
...
Read the original on michael.stapelberg.ch »
...
Read the original on social.growyourown.services »
The controversy highlights a wider trend in which more of what people see online is pre-processed by AI before reaching them. Smartphone makers like Samsung and Google have long used AI to “enhance” images. Samsung previously admitted to using AI to sharpen moon photos, while Google’s Pixel “Best Take” feature stitches together facial expressions from multiple shots to create a single “perfect” group picture.
...
Read the original on www.ynetnews.com »
The Qualcomm Snapdragon X Plus and Snapdragon X Elite have proven that ARM processors have earned a place in the laptop market, as devices like the Lenovo IdeaPad Slim 5 stand out with their long battery life and an affordable price point.
MetaComputing is now offering an alternative to Intel, AMD and the Snapdragon X series. Specifically, the company has introduced a mainboard that can be installed in the Framework Laptop 13 or in a mini PC case. This mainboard is equipped with a CIX CP8180 ARM chipset, which is also found inside the Minisforum MS-R1. This processor has a total of eight ARM Cortex-A720 performance cores, the two fastest of which can hit boost clock speeds of up to 2.6 GHz. Moreover, there are four Cortex-A520 efficiency cores.
...
Read the original on www.notebookcheck.net »
Whenever I see the comment // this should never happen in code, I try to find out the exact conditions under which it could happen. And in 90% of cases, I find a way to do just that. More often than not, the developer just hasn’t considered all edge cases or future code changes.
In fact, the reason why I like this comment so much is that it often marks the exact spot where strong guarantees fall apart. Often, violating implicit invariants that aren’t enforced by the compiler are the root cause.
Yes, the compiler prevents memory safety issues, and the standard library is best-in-class. But even the standard library has its warts, and bugs in business logic can still happen.

All we can work with are hard-learned patterns for writing more defensive Rust code, picked up throughout years of shipping Rust to production. I’m not talking about design patterns here, but rather small idioms which are rarely documented, but make a big difference in the overall code quality.
if !matching_users.is_empty() {
    let existing_user = &matching_users[0];
    // ...
}
What if you refactor it and forget to keep the is_empty() check? The problem is that the vector indexing is decoupled from checking the length. So matching_users[0] can panic at runtime if the vector is empty.
Checking the length and indexing are two separate operations, which can be changed independently. That’s our first implicit invariant that’s not enforced by the compiler.
If we use slice pattern matching instead, we’ll only get access to the element if the correct match arm is executed.
match matching_users.as_slice() {
    [] => todo!("What to do if no users found!?"),
    [existing_user] => {
        // Safe! The compiler guarantees exactly one element.
        // No need to index into the vector;
        // we can directly use `existing_user` here.
    }
    _ => Err(RepositoryError::DuplicateUsers),
}
Note how this automatically uncovered one more edge case: what if the list is empty? We hadn’t explicitly considered this case before. The compiler-enforced pattern matching requires us to think about all possible states! This is a common pattern in all robust Rust code: putting the compiler in charge of enforcing invariants.
When initializing an object with many fields, it’s tempting to use ..Default::default() to fill in the rest. In practice, this is a common source of bugs. You might forget to explicitly set a new field later when you add it to the struct (thus using the default value instead, which might not be what you want), or you might not be aware of all the fields that are being set to default values.
Instead of this:
let foo = Foo {
field1: value1,
field2: value2,
..Default::default() // Implicitly sets all other fields
};

write this:

let foo = Foo {
field1: value1,
field2: value2,
field3: value3, // Explicitly set all fields
field4: value4,
};
Yes, it’s slightly more verbose, but what you gain is that the compiler will force you to handle all fields explicitly. Now when you add a new field to Foo, the compiler will remind you to set it here as well and reflect on which value makes sense.
If you still prefer to use Default but don’t want to lose compiler checks, you can also destructure the default instance:
let Foo { field1, field2, field3, field4 } = Foo::default();
This way, you get all the default values assigned to local variables and you can still override what you need:
let foo = Foo {
field1: value1, // Override what you need
field2: value2, // Override what you need
field3, // Use default value
field4, // Use default value
};
This pattern gives you the best of both worlds:
You get default values without duplicating default logic
The compiler will complain when new fields are added to the struct
It’s clear which fields use defaults and which have custom values
Completely destructuring a struct into its components can also be a defensive strategy for API adherence. For example, let’s say you’re building a pizza ordering system and have an order type like this:
struct PizzaOrder {
size: PizzaSize,
toppings: Vec<Topping>,
crust_type: CrustType,
ordered_at: DateTime<Utc>,
}
For your order tracking system, you want to compare orders based on what’s actually on the pizza - the size, toppings, and crust_type. The ordered_at timestamp shouldn’t affect whether two orders are considered the same.
Here’s the problem with the obvious approach:
impl PartialEq for PizzaOrder {
fn eq(&self, other: &Self) -> bool {
self.size == other.size
&& self.toppings == other.toppings
&& self.crust_type == other.crust_type
// Oops! What happens when we add extra_cheese or delivery_address later?
}
}
Now imagine your team adds a field for customization options:
struct PizzaOrder {
size: PizzaSize,
toppings: Vec<Topping>,
crust_type: CrustType,
extra_cheese: bool, // the new customization option
ordered_at: DateTime<Utc>,
}
Your PartialEq implementation still compiles, but is it correct? Should extra_cheese be part of the equality check? Probably yes - a pizza with extra cheese is a different order! But you’ll never know because the compiler won’t remind you to think about it.
impl PartialEq for PizzaOrder {
fn eq(&self, other: &Self) -> bool {
let Self {
size,
toppings,
crust_type,
ordered_at: _,
} = self;
let Self {
size: other_size,
toppings: other_toppings,
crust_type: other_crust,
ordered_at: _,
} = other;
size == other_size && toppings == other_toppings && crust_type == other_crust
}
}
Now when someone adds the extra_cheese field, this code won’t compile anymore. The compiler forces you to decide: should extra_cheese be included in the comparison or explicitly ignored with extra_cheese: _?
This pattern works for any trait implementation where you need to handle struct fields: Hash, Debug, Clone, etc. It’s especially valuable in codebases where structs evolve frequently as requirements change.
Code Smell: From Impls That Are Really TryFrom
Sometimes there’s no conversion that will work 100% of the time. That’s fine. When that’s the case, resist the temptation to offer a From implementation out of habit; use TryFrom instead.
Here’s an example of TryFrom in disguise:
impl From<&DetectorStartupErrorReport> for DetectorStartupErrorSubject {
fn from(report: &DetectorStartupErrorReport) -> Self {
let postfix = report
.get_identifier()
.or_else(get_binary_name)
.unwrap_or_else(|| UNKNOWN_DETECTOR_SUBJECT.to_string());
Self(StreamSubject::from(
format!("apps.errors.detectors.startup.{postfix}").as_str(),
))
}
}
The unwrap_or_else is a hint that this conversion can fail in some way. We set a default value instead, but is it really the right thing to do for all callers? This should be a TryFrom implementation instead, making the fallible nature explicit. We fail fast instead of continuing with a potentially flawed business logic.
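A sketch of the same conversion as TryFrom, surfacing the failure instead of papering over it (the error type here is hypothetical):

// Sketch: making the fallible conversion explicit; the error type is hypothetical.
impl TryFrom<&DetectorStartupErrorReport> for DetectorStartupErrorSubject {
    type Error = MissingIdentifierError;

    fn try_from(report: &DetectorStartupErrorReport) -> Result<Self, Self::Error> {
        // Fail fast instead of silently substituting a default subject.
        let postfix = report
            .get_identifier()
            .or_else(get_binary_name)
            .ok_or(MissingIdentifierError)?;
        Ok(Self(StreamSubject::from(
            format!("apps.errors.detectors.startup.{postfix}").as_str(),
        )))
    }
}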
It’s tempting to use match in combination with a catch-all pattern like _ => {}, but this can haunt you later. The problem is that you might forget to handle a new case that was added later.
match self {
Self::Variant1 => { /* … */ }
Self::Variant2 => { /* … */ }
_ => { /* catch-all */ }
}

Instead, spell out every variant:
match self {
Self::Variant1 => { /* … */ }
Self::Variant2 => { /* … */ }
Self::Variant3 => { /* … */ }
Self::Variant4 => { /* … */ }
}
By spelling out all variants explicitly, the compiler will warn you when a new variant is added, forcing you to handle it. Another case of putting the compiler to work.
If the code for two variants is the same, you can group them:
match self {
Self::Variant1 => { /* … */ }
...
Read the original on corrode.dev »
A quarter-century after its publication, one of the most influential research articles on the potential carcinogenicity of glyphosate has been retracted for “several critical issues that are considered to undermine the academic integrity of this article and its conclusions.” In a retraction notice dated Friday, November 28, the journal Regulatory Toxicology and Pharmacology announced that the study, published in April 2000 and concluding the herbicide was safe, has been removed from its archives. The disavowal comes 25 years after publication and eight years after thousands of internal Monsanto documents were made public during US court proceedings (the “Monsanto Papers”), revealing that the actual authors of the article were not the listed scientists — Gary M. Williams (New York Medical College), Robert Kroes (Ritox, Utrecht University, Netherlands), and Ian C. Munro (Intertek Cantox, Canada) — but rather Monsanto employees.
Known as “ghostwriting,” this practice is considered a form of scientific fraud. It involves companies paying researchers to sign their names to research articles they did not write. The motivation is clear: When a study supports the safety of a pesticide or drug, it appears far more credible if not authored by scientists employed by the company marketing the product.
...
Read the original on www.lemonde.fr »
Sam Altman’s Dirty DRAM Deal, or: How the AI Bubble, Panic, and Unpreparedness Stole Christmas

Written by Tom of Moore’s Law Is Dead

At the beginning of November, I ordered a 32GB DDR5 kit for pairing with a Minisforum BD790i X3D motherboard, and three weeks later those very same sticks of DDR5 are now listed for a staggering $330, a 156% increase in price from less than a month ago! At this rate, it seems likely that by Christmas, that DDR5 kit alone could be worth more than the entire Zen 4 X3D platform I planned to pair it with! How could this happen, and more specifically — how could this happen THIS quickly? Well, buckle up! I am about to tell you the story of Sam Altman’s Dirty DRAM Deal, or: How the AI bubble, panic, and unpreparedness stole Christmas…

But before I dive in, let me make it clear that my RAM kit’s 156% jump in price isn’t a fluke or some extreme example of what’s going on right now. Nope, and in fact, I’d like to provide two more examples of how impossible it is becoming to get ahold of RAM; these were provided by a couple of our sources within the industry:

One source, who works at a US retailer, stated that a RAM manufacturer called them to inquire whether the manufacturer might buy RAM from the retailer to stock up for its other customers. This would be like Corsair asking a Best Buy if they had any RAM around.

Another source, who works at a prebuilt PC company, was recently given an estimate for when they would receive RAM orders if they placed them now…and they were told December…of 2026.

So what happened? Well, it all comes down to three perfectly synergistic events:

1. Two unprecedented RAM deals that took everyone by surprise.

2. The secrecy and size of the deals triggered full-scale panic buying from everyone else.

3. The market had almost zero safety stock left due to tariffs, worry about RAM prices over the summer, and stalled equipment transfers.

Below, we’re going to walk through each of these factors — and then I’m going to warn you about which hardware categories will be hit the hardest, which products are already being cancelled, and what you should buy before the shelves turn into a repeat of 2021–2022…because this is doomed to turn into much more than just RAM scarcity…

Part I

On October 1st, OpenAI signed deals with Samsung and SK Hynix for 40% of the world’s DRAM supply. Now, did OpenAI’s competition suspect some big RAM deals could be signed in late 2025? Yes. Ok, but did they think it would be deals this huge and with multiple companies? NO! In fact, if you go back and read reporting on Sam Altman’s now infamous trip to South Korea on October 1st, even just mere hours before the massive deals with Samsung and SK Hynix were announced — most reporting simply mentioned vague reports about Sam talking to Samsung, SK Hynix, TSMC, and Foxconn. But the reporting at the time was soft, almost dismissive — “exploring ties,” “seeking cooperation,” “probing for partnerships.” Nobody hinted that OpenAI was about to swallow up to 40% of global DRAM output — even on the morning before it happened! Nobody saw this coming - this is clear in the lack of reporting about the deals before they were announced, and every MLID source who works in DRAM manufacturing and distribution insists this took everyone in the industry by surprise.

To be clear - the shock wasn’t that OpenAI made a big deal, no, it was that they made two massive deals this big, at the same time, with Samsung and SK Hynix simultaneously! In fact, according to our sources, both companies had no idea how big each other’s deal was, nor how close to simultaneous they were.
And this secrecy mattered. It mattered a lot. Had Samsung known SK Hynix was about to commit a similar chunk of supply — or vice-versa — the pricing and terms would have likely been different. It’s entirely conceivable they wouldn’t have both agreed to supply such a substantial part of global supply if they had known more…but at the end of the day, OpenAI did succeed in keeping the circles tight, locking down the NDAs, and leveraging the fact that each company assumed the other wasn’t giving up this much wafer volume simultaneously…in order to make a surgical strike on the global RAM supply chain…and it’s worked so far…

Part II — Instant Panic: How did we miss this?

Imagine you’re running a hyperscaler, or maybe you’re a major OEM, or perhaps pretend that you are simply one of OpenAI’s chief competitors: On October 1st of 2025, you would have woken up to the news that OpenAI had just cornered the memory market more aggressively than any company in the last decade, and you hadn’t heard even a murmur that this was coming beforehand! Well, you would probably make some follow-up calls to colleagues in the industry, and then also quickly hear rumors that it wasn’t just you - even the two largest suppliers didn’t see each other’s simultaneous cooperation with OpenAI coming! You wouldn’t go: “Well, that’s an interesting coincidence.” No, you would say: “WHAT ELSE IS GOING ON THAT WE DON’T KNOW ABOUT?”

Again — it’s not the size of the deals that’s solely the issue here, no, it’s also the secrecy of them. On October 1st, Silicon Valley executives and procurement managers panicked over concerns like these:

What other deals don’t we know about? Is this just the first of many?

None of our DRAM suppliers warned us ahead of time! We have to assume they also won’t in the future, and that it’s possible even more of the global DRAM supply could be bought up without us getting a single warning!

We know OpenAI’s competitors are already panic-buying! If we don’t move, we might be locked out of the market until 2028!

OpenAI’s competitors, OEMs, and cloud providers scrambled to secure whatever inventory remained out of self-defense, and self-defense in a world that was entirely without a cushion due to the accelerant I’ll now explain in Part III…

Normally, the DRAM market has buffers: warehouses of emergency stock, excess wafer starts, older DRAM manufacturing machinery being sold off to budget brands while the big brands upgrade their production lines…but not in 2025. In 2025, those would-be buffers were depleted for three separate reasons:

1. Tariff chaos. Companies had deliberately reduced how much DRAM they ordered for their safety stock over the summer of 2025 because tariffs were changing almost weekly. Every RAM purchase risked being made at the wrong moment — and so fewer purchases were made.

2. Prices had been falling all summer. Because of the hesitancy to purchase as much safety stock as usual, RAM prices were also genuinely falling over time. And obviously, when memory is getting cheaper month over month, the last thing you feel pressured to do is buy a commodity that could be cheaper the next month…so everyone waited.

3. Secondary RAM manufacturing had stalled. Budget brands normally buy older DRAM fabrication equipment from mega-producers like Samsung when Samsung upgrades its DRAM lines to the latest and greatest equipment. This allows the DRAM market to grow more than it would otherwise, because upgrades to the fanciest production lines remain additive change to the market.
However, Korean memory firms have been terrified that reselling old equipment to China-adjacent OEMs might trigger U.S. retaliation…and so those machines have been sitting idle in warehouses since early spring.

Yep, there was no cushion. OpenAI hit the market at the exact moment it was least prepared.

And now it’s time for the biggest twist of all, a twist that should, in this writer’s opinion, be getting discussed by far more people: OpenAI isn’t even bothering to buy finished memory modules! No, their deals are, unprecedentedly, only for raw wafers — uncut, unfinished, and not even allocated to a specific DRAM standard yet. It’s not even clear if they have decided yet on how or when they will finish them into RAM sticks or HBM! Right now it seems like these wafers will just be stockpiled in warehouses — like a kid who hides the toybox because they’re afraid nobody wants to play with them, and thus selfishly feels nobody but them should get the toys!

And let’s just say it: Here is the uncomfortable truth Sam Altman is always loath to admit in interviews: OpenAI is worried about losing its lead. The last 18 months have seen competitors catching up fast — Anthropic, Meta, xAI, and specifically Google’s Gemini 3 has gotten a ton of praise just in the past week. Everyone’s chasing training capacity. Everyone needs memory. DRAM is the lifeblood of scaling inference and training throughput. Cutting supply to your rivals is not a conspiracy theory. It’s a business tactic as old as business itself. And so, when you consider how secretive OpenAI was about their deals with Samsung and SK Hynix, but additionally how unready they were to immediately utilize their warehouses of DRAM wafers — it sure seems like a primary goal of these deals was to cut supply to rivals, and not just an attempt to protect OpenAI’s own supply…

Part V — What will be cancelled? What should you buy now?

Alright, now that we are done explaining the why, let’s get to the what — because even if the RAM shortage miraculously improves immediately behind the scenes — even if the AI bubble instantly popped or 10 companies started tooling up for more DRAM capacity this second (and many are, to be fair) — at a minimum, the next six to nine months are already screwed. See above: DRAM manufacturers are quoting 13-month lead times for DDR5! This is not a temporary blip. This could be a once-in-a-generation shock. So what gets hit first? What gets hit hardest? Well, below is an E-through-S-Tier ranking of which products are “the most screwed”:

S-Tier (Already Screwed — Too Late to Buy)

RAM itself, obviously. RAM prices have “exploded”. The detonation is in the past.

SSDs. These tend to follow DRAM pricing with a lag.

RADEON GPUs. AMD doesn’t bundle RAM in their BOM kits to AIBs the way Nvidia does. In fact, the RX 9070 GRE 16GB this channel leaked months ago is almost certainly cancelled, according to our sources.

XBOX. Microsoft didn’t plan. Prices may rise and/or supply may dwindle in 2026.

Nvidia GPUs. Nvidia maintains large memory inventories for its board partners, giving them a buffer. But high-capacity GPUs (like a hypothetical 24GB 5080 SUPER) are on ice for now because stores were never sufficiently built up. In fact, Nvidia is quietly telling partners that their SUPER refresh “might” launch Q3 2026 — although most partners think it’s just a placeholder for when Nvidia expects new capacity to come online, and thus SUPER may never launch.

C-Tier (Think about buying soon)

Laptops and phones.
These companies negotiate immense long-term contracts, so they’re not hit immediately. But once their stockpiles run dry, watch out!

D-Tier (Consider buying soon, but there’s no rush)

PlayStation. Sony planned better than almost anyone else. They bought aggressively during the summer price trough, which is why they can afford a Black Friday discount while everyone else is raising prices.

Anything without RAM. Specifically, CPUs that do not come with coolers could see price drops over time, since there could be a dip in demand for CPUs if nobody has the RAM to feed them in systems.

???-Tier

Steam Machine. Valve keeps things quiet, but the big unknown is whether they pre-bought RAM months ago before announcing their much-hyped Steam Machine. If they did already stockpile an ample supply of DDR5, then the Steam Machine should launch fine, but supply could dry up temporarily at some point while they wait for prices to drop. However, if they didn’t plan ahead, expect a high launch price and very little resupply…it might even need to be cancelled, or there might need to be a variant offered without RAM included (BYO RAM Edition!).

And that’s it! This last bit was the most important part of the article in this writer’s opinion — an attempt at helping you avoid getting burned. Well, actually, there is one other important reason for this article’s existence — a hope that other people start digging into what’s going on at OpenAI. I mean seriously — do we even have a single reliable audit of their financials to back up them outrageously spending this much money? Heck, I’ve even heard from numerous sources that OpenAI is “buying up the manufacturing equipment as well” — and without mountains of concrete proof, and/or more input from additional sources on what that really means…I don’t feel I can touch that hot potato without getting burned…but I hope someone else will…
...
Read the original on www.mooreslawisdead.com »
I’ve resigned from Intel and accepted a new opportunity. If you are an Intel employee, you might have seen my fairly long email that summarized what I did in my 3.5 years. Much of this is public:
It’s still early days for AI flame graphs. Right now when I browse CPU performance case studies on the Internet, I’ll often see a CPU flame graph as part of the analysis. We’re a long way from that kind of adoption for GPUs (and it doesn’t help that our open source version is Intel only), but I think as GPU code becomes more complex, with more layers, the need for AI flame graphs will keep increasing.
I also supported cloud computing, participating in 110 customer meetings, and created a company-wide strategy to win back the cloud with 33 specific recommendations, in collaboration with others across 6 organizations. It is some of my best work and features a visual map of interactions between all 19 relevant teams, described by Intel long-timers as the first time they have ever seen such a cross-company map. (This strategy, summarized in a slide deck, is internal only.)
I always wish I did more, in any job, but I’m glad to have contributed this much especially given the context: I overlapped with Intel’s toughest 3 years in history, and I had a hiring freeze for my first 15 months.
My fond memories from Intel include meeting Linus at an Intel event who said “everyone is using fleme graphs these days” (Finnish accent), meeting Pat Gelsinger who knew about my work and introduced me to everyone at an exec all hands, surfing lessons at an Intel Australia and HP offsite (mp4), and meeting Harshad Sane (Intel cloud support engineer) who helped me when I was at Netflix and now has joined Netflix himself — we’ve swapped ends of the meeting table. I also enjoyed meeting Intel’s hardware fellows and senior fellows who were happy to help me understand processor internals. (Unrelated to Intel, but if you’re a Who fan like me, I recently met some other people as well!)
My next few years at Intel would have focused on execution of those 33 recommendations, which Intel can continue to do in my absence. Most of my recommendations aren’t easy, however, and require accepting change, ELT/CEO approval, and multiple quarters of investment. I won’t be there to push them, but other employees can (my CloudTeams strategy is in the inbox of various ELT, and in a shared folder with all my presentations, code, and weekly status reports). This work will hopefully live on and keep making Intel stronger. Good luck.
...
Read the original on www.brendangregg.com »