10 interesting stories served every morning and every evening.
On Friday, the Department of Justice served the Federal Reserve with grand jury subpoenas, threatening a criminal indictment related to my testimony before the Senate Banking Committee last June. That testimony concerned in part a multi-year project to renovate historic Federal Reserve office buildings.
I have deep respect for the rule of law and for accountability in our democracy. No one—certainly not the chair of the Federal Reserve—is above the law. But this unprecedented action should be seen in the broader context of the administration’s threats and ongoing pressure.
This new threat is not about my testimony last June or about the renovation of the Federal Reserve buildings. It is not about Congress’s oversight role; the Fed through testimony and other public disclosures made every effort to keep Congress informed about the renovation project. Those are pretexts. The threat of criminal charges is a consequence of the Federal Reserve setting interest rates based on our best assessment of what will serve the public, rather than following the preferences of the President.
This is about whether the Fed will be able to continue to set interest rates based on evidence and economic conditions—or whether instead monetary policy will be directed by political pressure or intimidation.
I have served at the Federal Reserve under four administrations, Republicans and Democrats alike. In every case, I have carried out my duties without political fear or favor, focused solely on our mandate of price stability and maximum employment. Public service sometimes requires standing firm in the face of threats. I will continue to do the job the Senate confirmed me to do, with integrity and a commitment to serving the American people.
...
Read the original on www.federalreserve.gov »
Apple is joining forces with Google to power its artificial intelligence features, including a major Siri upgrade expected later this year.
The multiyear partnership will lean on Google’s Gemini and cloud technology for future Apple foundational models, according to a joint statement obtained by CNBC’s Jim Cramer.
“After careful evaluation, we determined that Google’s technology provides the most capable foundation for Apple Foundation Models and we’re excited about the innovative new experiences it will unlock for our users,” Apple said in a statement Monday.
The models will continue to run on Apple devices and the company’s private cloud compute, the companies added.
Apple declined to comment on the terms of the deal. Google referred CNBC to the joint statement.
In August, Bloomberg reported that Apple was in early talks with Google to use a custom Gemini model to power a new iteration of Siri. The news outlet later reported that Apple was planning to pay about $1 billion a year to utilize Google AI.
The deal is another major indicator of growing trust in Google’s accelerating AI agenda and comeback against OpenAI. In 2025, the search giant logged its best year since 2009 and surpassed Apple in market capitalization last week for the first time since 2019.
Google already pays Apple billions each year to be the default search engine on iPhones. But that lucrative partnership briefly came into question after Google was found to hold an illegal internet search monopoly.
In September, a judge ruled against a worst-case scenario outcome that could have forced Google to divest its Chrome browser business.
The decision also allowed Google to continue to make deals such as the one with Apple.
...
Read the original on www.cnbc.com »
When we released Claude Code, we expected developers to use it for coding. They did—and then quickly began using it for almost everything else. This prompted us to build Cowork: a simpler way for anyone—not just developers—to work with Claude in the very same way. Cowork is available today as a research preview for Claude Max subscribers on our macOS app, and we will improve it rapidly from here.
How is using Cowork different from a regular conversation? In Cowork, you give Claude access to a folder of your choosing on your computer. Claude can then read, edit, or create files in that folder. It can, for example, re-organize your downloads by sorting and renaming each file, create a new spreadsheet with a list of expenses from a pile of screenshots, or produce a first draft of a report from your scattered notes.
In Cowork, Claude completes work like this with much more agency than you’d see in a regular conversation. Once you’ve set it a task, Claude will make a plan and steadily complete it, while looping you in on what it’s up to. If you’ve used Claude Code, this will feel familiar—Cowork is built on the very same foundations. This means Cowork can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks.
When you’ve mastered the basics, you can make Cowork more powerful still. Claude can use your existing connectors, which link Claude to external information, and in Cowork we’ve added an initial set of skills that improve Claude’s ability to create documents, presentations, and other files. If you pair Cowork with Claude in Chrome, Claude can complete tasks that require browser access, too.
Cowork is designed to make using Claude for new work as simple as possible. You don’t need to keep manually providing context or converting Claude’s outputs into the right format. Nor do you have to wait for Claude to finish before offering further ideas or feedback: you can queue up tasks and let Claude work through them in parallel. It feels much less like a back-and-forth and much more like leaving messages for a coworker.
In Cowork, you can choose which folders and connectors Claude can see: Claude can’t read or edit anything you don’t give it explicit access to. Claude will also ask before taking any significant actions, so you can steer or course-correct it as you need.
That said, there are still things to be aware of before you give Claude control. The main thing to know is that, by default, Claude can take potentially destructive actions (such as deleting local files) if it’s instructed to. Since there’s always some chance that Claude might misinterpret your instructions, you should give Claude very clear guidance around things like this.
You should also be aware of the risk of “prompt injections”: attempts by attackers to alter Claude’s plans through content it might encounter on the internet. We’ve built sophisticated defenses against prompt injections, but agent safety—that is, the task of securing Claude’s real-world actions—is still an active area of development in the industry.
These risks aren’t new with Cowork, but it might be the first time you’re using a more advanced tool that moves beyond a simple conversation. We recommend taking precautions, particularly while you learn how it works. We provide more detail in our Help Center.
This is a research preview. We’re releasing Cowork early because we want to learn what people use it for, and how they think it could be better. We encourage you to experiment with what Cowork can do for you, and to try things you don’t expect to work: you might be surprised! As we learn more from this preview, we plan to make lots of improvements (including by adding cross-device sync and bringing it to Windows), and we’ll identify further ways to make it safer.
Claude Max subscribers can try Cowork now by downloading the macOS app, then clicking on “Cowork” in the sidebar. If you’re on another plan, you can join the waitlist for future access.
...
Read the original on claude.com »
Modern TVs are very poorly suited for kids. They require using complicated remotes or mobile phones, and navigating apps that continually try to lure you into watching something other than what you intended. The usual scenario ends with the kid feeling disempowered and asking an adult to put something on. That something ends up on auto-play, because then the adult is free to do other things, and the kid ends up stranded, powerless and comatose in front of the TV.
Instead I wanted to build something for my 3-year-old son that he could understand and use independently. It should empower him to make his own choices. It should be physical and tangible, i.e. something he could touch and feel. It should also maintain the illusion that the actual media content is stored physically, not incomprehensibly in “the cloud”, meaning it should e.g. be destroyable — if you break the media there should be consequences. And there should be no auto-play: interact once and get one video.
My first idea for data storage was to use the shell of a floppy disk and a floppy drive, and put in an RFID tag; this has been done a couple of times on the internet, such as RFIDisk, this Raspberry Pi-based RFID reader, or this video covering how to embed an RFID tag in a floppy disk. But getting the floppy disk apart to put in an RFID tag and back together again was kinda wonky.
The next problem to tackle was how to detect that a disk is inserted. The concept of AutoRun from Windows 95 was a beauty: insert a CD-ROM and it would automatically start whatever was on the media. Great for convenience, quite questionable for security. While in theory floppy disks are supported for AutoRun, it turns out that floppy drives basically don’t know whether a disk is inserted until the operating system tries to access it! There is a pin 34 “Disk Change” that is supposed to provide this information, but this is basically a lie: none of the drives in my possession had that pin connected to anything, and the internet mostly concurs. In the end I slightly modified the drive and added a simple rolling switch that engages when a disk is inserted.
However, the Arduino FDC floppy library is only compatible with the AVR-based Arduinos, not the ESP-based ones, because it needs to control the timing very precisely and therefore uses a healthy amount of inline assembler. This meant I would need one AVR-based Arduino to control the floppy drive and another, ESP-based one to do the WiFi communication. Such combined boards do exist, and I ended up using one, but I’m not sure I would recommend it: the usage is really finicky, as you need to set the jumpers differently for programming the ATmega, programming the ESP, or connecting the two boards’ serial ports together.
A remote control should be portable, and that means battery-powered. Driving a floppy drive off of lithium batteries was interesting. There is a large spike in current draw of several amperes when the disk needs to spin up, while the power draw afterwards is more modest, a couple of hundred milliamperes. I wanted the batteries to be 18650s, because I have those in abundance. That meant a battery voltage of 3.7V nominal, up to 4.2V for a fully charged cell; 5V is needed to spin the floppy, so a boost DC-DC converter was needed. I used an off-the-shelf XL6009 step-up converter board. At this point a lot of head-scratching occurred: the initial spin-up power draw would cause the microcontroller to reset. In the end a 1000uF capacitor on the microcontroller side seemed to help, but did not eliminate the problem.
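For a rough sense of why that spike is so punishing on the battery side, here is a back-of-the-envelope sketch (the 2.5 A spin-up current and 85% converter efficiency are assumed values; the post only says “several amperes”): the boost converter has to pull

\[ I_{\text{in}} \approx \frac{V_{\text{out}} \cdot I_{\text{out}}}{\eta \cdot V_{\text{in}}} = \frac{5\,\mathrm{V} \times 2.5\,\mathrm{A}}{0.85 \times 3.7\,\mathrm{V}} \approx 4\,\mathrm{A} \]

from the cell, so a brief few-ampere load at 5 V becomes roughly 4 A at the 18650, easily enough to momentarily sag the rail feeding the microcontroller.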
One crucial finding was that the ground side of the interface cable should absolutely not be connected to any grounds on the microcontroller side. I was using a relatively simple logic-level MOSFET, the IRLZ34N, to turn off the drive by disconnecting the ground side. If any ground is connected, the disk won’t turn off. But also: if any logic pin was being pulled to ground by the ATmega, that would also provide a path to ground. But since the ATmega cannot sink that much current this would lead to spurious resets! Obvious after the fact, but this took quite some headscratching. Setting all the logic pins to input, and thus high impedance, finally fixed the stability issues.
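To make that fix concrete, here is a minimal Arduino-style sketch of the power-down sequence described above; the pin numbers and the list of interface pins are assumptions for illustration, not the actual wiring from this build.

```cpp
const int DRIVE_GND_MOSFET = 9;   // gate of the IRLZ34N in the drive's ground path (assumed pin)
const int FLOPPY_LOGIC_PINS[] = {2, 3, 4, 5, 6, 7, 8};  // floppy interface signal pins (assumed)

void floppyPowerOff() {
  // Release every interface pin first: a pin actively driven LOW gives the drive
  // an alternate path to ground through the ATmega, which keeps the drive powered
  // and can sink enough current to cause spurious resets.
  for (int pin : FLOPPY_LOGIC_PINS) {
    pinMode(pin, INPUT);               // high impedance, no current path to ground
  }
  digitalWrite(DRIVE_GND_MOSFET, LOW); // now the MOSFET really disconnects the drive's ground
}

void setup() {
  pinMode(DRIVE_GND_MOSFET, OUTPUT);
  digitalWrite(DRIVE_GND_MOSFET, HIGH); // drive powered: ground path closed
}

void loop() {}
```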
After fixing the stability issues, the next challenge was how to make both microcontrollers sleep. Because the ATmega’s sleep modes are quite a lot easier to deal with, and because the initial trigger would be a floppy being inserted, I decided to put the ATmega in charge overall. The ESP then has a very simple job: when woken, read from serial; when a newline is found, send off that complete line via WiFi; and after 30 seconds, signal to the ATmega that it is going back to sleep, then sleep.
The ATmega sends a “diskin” message over serial to the ESP; the ESP transmits this over WiFi when available.
Common to both is that they should be idempotent actions, and the diskin shortcut lets the media resume without having to wait for the disk contents themselves to be read and processed. This means the “play/pause” disk only needs to contain an empty file to work.
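For illustration, here is a sketch of what the ESP side of that split could look like, assuming an ESP8266-class board and an HTTP endpoint somewhere on the local network; the WiFi credentials, endpoint URL, and handshake pin are all placeholders rather than details from the post.

```cpp
#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>

// All of these values are placeholders, not details from the post.
const char* WIFI_SSID = "home-network";
const char* WIFI_PASS = "secret";
const char* ENDPOINT  = "http://192.168.1.10:8123/floppy";  // hypothetical receiver
const int   SLEEP_SIGNAL_PIN = 4;       // tells the ATmega the ESP is going back to sleep
const unsigned long AWAKE_MS = 30000;   // stay awake 30 s after waking

void setup() {
  Serial.begin(115200);
  pinMode(SLEEP_SIGNAL_PIN, OUTPUT);
  digitalWrite(SLEEP_SIGNAL_PIN, LOW);

  WiFi.begin(WIFI_SSID, WIFI_PASS);
  unsigned long start = millis();
  while (WiFi.status() != WL_CONNECTED && millis() - start < 10000) {
    delay(100);                         // give WiFi up to 10 s to come up
  }
}

void loop() {
  static unsigned long wokeAt = millis();

  // Relay any complete line from the ATmega (e.g. "diskin" or a disk's contents) over WiFi.
  if (Serial.available()) {
    String line = Serial.readStringUntil('\n');
    line.trim();
    if (line.length() > 0 && WiFi.status() == WL_CONNECTED) {
      WiFiClient client;
      HTTPClient http;
      http.begin(client, String(ENDPOINT) + "?msg=" + line);
      http.GET();                       // fire-and-forget; the messages are idempotent
      http.end();
    }
  }

  // After 30 seconds, tell the ATmega we're done and go back to deep sleep.
  if (millis() - wokeAt > AWAKE_MS) {
    digitalWrite(SLEEP_SIGNAL_PIN, HIGH);
    delay(10);
    ESP.deepSleep(0);                   // sleep until the reset line is toggled
  }
}
```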
The little guy quickly caught on to the idea! Much fun was had just pausing and resuming music and his Fantus TV shows. He explored and prodded, and some disks were harmed in the process. One problem that I did solve was that the read head stayed on track 0 after having read everything: this means that when the remote is tumbled around with a disk inside it, the disk gets damaged at track 0. To compensate, I move the head to track 20 after reading has finished: any damage is then done there, where we don’t store any data. As a bonus it also adds a little more mechanical melody.
...
Read the original on blog.smartere.dk »
A language model trained from scratch exclusively on data from certain places and time periods to reduce modern bias and emulate the voice, vocabulary, and worldview of the era.
Imagine if an AI model didn’t just pretend to be historical but actually was.
v0 and v0.5 are built on nanoGPT by Andrej Karpathy; the core training scripts and model architecture are his work.
Early prompts show the model responding with 1800s language and behavior. Example: Prompt: “Who art Henry?” and it replied “I know that man, I have did not a black, the storm.”
* Sentences are mostly incoherent (expected for ~187MB training data)
* OCR noise (“Digitized by Google”) still present in outputs
First model to recall and connect a real historical event with an actual figure from the dataset.
Example: Prompt: “It was the year of our Lord 1834”
The output: “It was the year of our Lord 1834 and the streets of London were filled with protest and petition. The cause, as many re counted, was not bound in the way of private, but having taken up the same day in the day of Lord Palmerston, the public will receive a short statement of the difficulties under which the day of law has reached us. It is a matter of deep regret, that the present events in the history of the world are clear, and consequently will be’known. It is not true that the very men who first settled in the Gospel at Jerusalem should have so extensive and so interesting a record of the prosperity and prosperity”
At first I assumed that a protest might have coincidentally taken place the same year but take a look at this:
Trained using a 15GB sample from v2’s 90GB dataset. Model trained to 10K steps only.
There was an issue with tokenization that causes output to look like this:
W ho is Charles D ic ens ? D oes that work more of h ise x cell ent st ir ring , in his pl ays , int he G reat C omp any ’s f arm ? What I have y et to qu ote from J ack P ick ett ? D oy oun ot know th att hey were as pe cies of g all oping , ors aw ing oft heir bre asts , or what w ast ob ed one about the t im ew hen J acks on was looking ont hew ind ow ? What ist hes uccess of an English man , and which h isson has not be end is cover ing t ome , where by to acc omp lish such a we ight ? D idy ou ever m ake ap ass age int othe old road way , ort o an anch or - b ree ze att he foot of our boat , which you m ust le ave us ? The fact is , that whe ther the wind would rise up from the pl ain on S atur day night orn o ont ide , ort ill then orth , or other wise , wes hall be com pelled t od esc ribe af orm idable bar rier , with t hes ame e ffect s ast he p resent . In th iss itu ation , at le ast , itis not to omuch t os ay that we have le ft that room . I bel ieve there are three c op ies int he ” F ive H undred - fold ,” to bere fer red to , ast he first n umber of our read ers wh ow is ht
“Who is Charles Dickens? Does that work more of his excellent stirring, in his plays, in the Great Company’s farm? What I have yet to quote from Jack Pickett? Do you not know that they were a species of galloping, or sawing of their breasts, or what was to be done about the time when Jackson was looking on the window? What is the success of an Englishman, and which his son has not been discovering to me, whereby to accomplish such a weight? Did you ever make a passage into the old roadway, or to an anchor-breeze at the foot of our boat, which you must leave us? The fact is, that whether the wind would rise up from the plain on Saturday night or noontide, or till the north, or otherwise, we shall be compelled to describe a formidable barrier, with the same effects as the present. In this situation, at least, it is not too much to say that we have left that room. I believe there are three copies in the ‘Five Hundred-fold,’ to be referred to, as the first number of our readers who wish t[o]…”
Trained using the same 15GB sample used for training eval1. Model is again trained to 10K steps only.
Charles DarwinECCEMACY. Sir, — The following case is interesting to me : — I was in London a fortnight, and was much affected with an attack of rheumatism. The first attack of rheumatism was a week before I saw you, and the second when I saw you, and the third when I saw you, and the third in the same time. The second attack of gout, however, was not accompanied by any febrile symptoms, but was accompanied with an increased flow of urine, and with a more copious discharge of urine. The third attack was an hour after I saw you, and was succeeded by a return of a paroxysm of gout, and a more rapid return of the gout. The fourth attack was also accompanied by a fever, but was not always accompanied by any febrile symptoms. The third attack of gout was a fortnight after you had been ill, and the fourth was followed by a paroxysm of gout. The fourth attack was a fortnight after you were attacked, and was accompanied by a sense
* The full 90GB is not available yet as it hasn’t been tokenized, but you can find a 15GB sample here: https://huggingface.co/datasets/haykgrigorian/TimeCapsuleLLM-London-1800-1875-v2-15GB
Refer to v2 bias report for more info.
This project focuses mostly on curating historical data, preparing it for training and building a tokenizer. I am not going to cover the full LLM training process, for that refer to nanoGPT by Andrej Karpathy.
* Collect .txt files of public domain books, documents, etc from your chosen time period (e.g., London 1800-1850)
* Keep them within your chosen time/place window
* Clean the text files using a script or manually: remove headers/footers from Project Gutenberg, modern annotations, and things like OCR errors.
* Run train_tokenizer.py or train_tokenizer_hf.py on the cleaned data.
* This will give you vocab.json and merges.txt
* These files define the vocabulary and merge rules for your model
* Refer to nanoGPT by Andrej Karpathy for the training process or your chosen architecture’s docs.
Selective Temporal Training (STT) is a machine learning methodology where all training data is curated to fall within a specific historical time period. It’s done in order to model the language and knowledge of that era without influence from modern concepts. For example, the current model (v0.5) is trained on data exclusively from 1800-1875; it’s not fine-tuned but trained from scratch, resulting in output that reflects the linguistic style and historical context of that time period.
For this project I’m trying to create a language model that is unclouded by modern bias. If I fine-tune something like GPT-2, it’s already pre-trained and that information won’t go away. If I train from scratch, the language model won’t pretend to be old, it just will be. The goal for this project right now is to create something that can reason exclusively using knowledge from London books published between 1800 and 1875.
I’m using books, legal documents, newspapers, and other writings from 1800–1875 London. The list I linked (for v0) has around 200 documents, but for the first training I used just 50 files, about 187 MB. You can view a list of the documents:
...
Read the original on github.com »
When Americans begin taking appetite-suppressing drugs like Ozempic and Wegovy, the changes extend well beyond the bathroom scale. According to new research, the medications are associated with meaningful reductions in how much households spend on food, both at the grocery store and at restaurants.
The study, published Dec. 18 in the Journal of Marketing Research, links survey data on GLP-1 receptor agonist use — a class of drugs originally developed for diabetes and now widely prescribed for weight loss — with detailed transaction records from tens of thousands of U.S. households. The result is one of the most comprehensive looks yet at how GLP-1 adoption is associated with changes in everyday food purchasing in the real world.
The headline finding is striking: Within six months of starting a GLP-1 medication, households reduce grocery spending by an average of 5.3%. Among higher-income households, the drop is even steeper, at more than 8%. Spending at fast-food restaurants, coffee shops and other limited-service eateries falls by about 8%.
Among households who continue using the medication, lower food spending persists at least a year, though the magnitude of the reduction becomes smaller over time, say co-authors, assistant professor Sylvia Hristakeva and professor Jura Liaukonyte, both in the Charles H. Dyson School of Applied Economics and Management in the Cornell SC Johnson College of Business.
“The data show clear changes in food spending following adoption,” Hristakeva said. “After discontinuation, the effects become smaller and harder to distinguish from pre-adoption spending patterns.”
Unlike previous studies that relied on self-reported eating habits, the new analysis draws on purchase data collected by Numerator, a market research firm that tracks grocery and restaurant transactions for a nationally representative panel of about 150,000 households. The researchers matched those records with repeated surveys asking whether household members were taking GLP-1 drugs, when they started and why.
That combination allowed the team to compare adopters with similar households that did not use the drugs, isolating changes that occurred after medication began.
The reductions were not evenly distributed across the grocery store.
Ultra-processed, calorie-dense foods — the kinds most closely associated with cravings — saw the sharpest declines. Spending on savory snacks dropped by about 10%, with similarly large decreases in sweets, baked goods and cookies. Even staples like bread, meat and eggs declined.
Only a handful of categories showed increases. Yogurt rose the most, followed by fresh fruit, nutrition bars and meat snacks.
“The main pattern is a reduction in overall food purchases. Only a small number of categories show increases, and those increases are modest relative to the overall decline,” Hristakeva said.
The effects extended beyond the supermarket. Spending at limited-service restaurants such as fast-food chains and coffee shops fell sharply as well.
The study also sheds light on who is taking GLP-1 medications. The share of U.S. households reporting at least one user rose from about 11% in late 2023 to more than 16% by mid-2024. Weight-loss users skew younger and wealthier, while those taking the drugs for diabetes are older and more evenly distributed across income groups.
Notably, about one-third of users stopped taking the medication during the study period. When they did, their food spending reverted to pre-adoption levels — and their grocery baskets became slightly less healthy than before they started, driven in part by increased spending on categories such as candy and chocolate.
That movement underscores an important limitation, the authors caution. The study cannot fully separate the biological effects of the drugs from other lifestyle changes users may make at the same time. However, evidence from clinical trials, combined with the observed reversion in spending after discontinuation, suggests appetite suppression is likely a key mechanism behind the spending changes.
The findings carry implications far beyond individual households.
For food manufacturers, restaurants and retailers, widespread GLP-1 adoption could mean long-term shifts in demand, particularly for snack foods and fast food. Package sizes, product formulations and marketing strategies may need to change. For policymakers and public-health experts, the results add context to ongoing debates about the role of medical treatments in shaping dietary behavior — and whether biologically driven appetite changes succeed where taxes and labels have struggled.
“At current adoption rates, even relatively modest changes at the household level can have meaningful aggregate effects,” Hristakeva said. “Understanding these demand shifts is therefore important for assessing food markets and consumer spending.”
...
Read the original on news.cornell.edu »
Having not revisited The Hobbit in some time, I’ve felt the familiar pull—shared by many readers—to return to Tolkien’s fairy-tale novel itself. It was my first exposure to Tolkien, and the perfect book for a young reader ready to dive into moral complexity and a fully-realized fictional world.
And what better guide could there be through The Hobbit than Tolkien himself, reading (above) from the 1937 work? In this 1952 recording in two parts (part 2 is below), the venerable fantasist and scholar reads from his own work for the first time on tape.
Tolkien begins with a passage that first describes the creature Gollum; listening to this description again, I am struck by how differently I imagined him when I first read the book. The Gollum of The Hobbit seems somehow hoarier and more monstrous than many later visual interpretations. This is a minor point and not a criticism, but perhaps a comment on how necessary it is to return to the source of a mythic world as rich as Tolkien’s, even, or especially, when it’s been so well-realized in other media. No one, after all, knows Middle Earth better than its creator.
These readings were part of a much longer recording session, during which Tolkien also read (and sang!) extensively from The Lord of the Rings. A YouTube user has collected, in several parts, a radio broadcast of that full session, and it’s certainly worth your time to listen to it all the way through. It’s also worth knowing the neat context of the recording. Here’s the text that accompanies the video on YouTube:
When Tolkien visited a friend in August of 1952 to retrieve a manuscript of The Lord of the Rings, he was shown a “tape recorder”. Having never seen one before, he asked how it worked and was then delighted to have his voice recorded and hear himself played back for the first time. His friend then asked him to read from The Hobbit, and Tolkien did so in this one incredible take.
Note: An earlier version of this post appeared on our site in 2012.
Listen to J.R.R. Tolkien Read Poems from The Fellowship of the Ring, in Elvish and English (1952)
When the Nobel Prize Committee Rejected The Lord of the Rings: Tolkien “Has Not Measured Up to Storytelling of the Highest Quality” (1961)
J.R.R. Tolkien Snubs a German Publisher Asking for Proof of His “Aryan Descent” (1938)
Josh Jones is a writer and musician based in Durham, NC.
...
Read the original on www.openculture.com »
I have not been shy talking about my love of Xfce over the years here. The desktop environment has been a trusted friend ever since I first loved it on the late Cobind Desktop (still the high water mark of desktop Linux, as far as I’m concerned).
I’m glad to see I’m not the only one. David Gerard of Pivot to AI fame recently shared this post he wrote in 2012:
The question with minimal desktops is the fine line between as simple as possible and just a bit too simple. How much basic stuff do you have to add back? 4.8 took it slightly far, 4.10 is almost Just Right. XFCE is so far a case study in Not Fucking It Up; I hope they never go to version 5, and just update 4 forever.
This (a) longevity and (b) getting the balance right cannot be overstated. Here’s my current Xfce desktop, for example:
Except, no it isn’t. That’s a screenshot of my FreeBSD desktop from 2008, with the bright and clear Tango Iconset (speaking of high-water marks). Remember when iconography was discernible at a glance? Aka, functional as icons? But I digress.
Xfce in 2025 (no, 2026, damn it!) is just as easy to understand, light, and fast as it first was booting Cobind on my HP Brio when I was in school, or when building it from source in FreeBSD ports. Though unlike a barebones window manager or other “light” DEs, Xfce feels usable, feature complete, and designed by someone who understands why people use desktop computers (cough GNOME).
I do use KDE on my primary desktop. Version 4 was a mess, but they’ve made massive improvements, especially within the last year. I’m not sure how much this had to do with the Steam Deck, and a new generation of people realising that… wait… I can run stuff on this box other than games? There’s a desktop here!? But my laptops all run Xfce, and I’m half-tempted to move back to it on the desktop.
I’m with David here. I hope they never feel the need to “innovate” with “disruption” for “UX”. The switch to the Thunar file manager was the last major user-facing change I can remember, and it was great.
I’m not suggesting we reached peak UI with Xfce, but no desktop since has made a compelling case (for me) for its replacement. I love, love, love that Xfce is maintained this way in spite of all the industry pressures to turn it into something else.
I stopped writing posts like this for years, out of fear of how people from specific desktop environments would respond. If you’re about to write me an angry screed, know that I will immediately delete it and block you, just as I did last time. Both yours and my time are better spent.
I also know (sigh) this disclaimer will be ignored, so I’m questioning why I’m even bothering. Maybe I’m a sucker for punishment.
...
Read the original on rubenerd.com »
Disney Animation’s movie for 2025 is Zootopia 2, which is the studio’s 64th animated feature film. Zootopia 2 picks up where the first film left off, taking us deeper into the wonderful and wild animal world of the city. One of the really fun things about Zootopia projects is that each one expands the world further. The first film introduced the setting, the Zootopia+ series on Disney+ offered fun character vignettes to expand that world, and Zootopia 2 now takes us deep into the city’s history and to places both familiar and brand new. I’ve had a great time working on Zootopia 2 for the past two years!
From a technology perspective, sequels are always interesting to work on because they give us the ability to evaluate where our filmmaking capabilities presently stand compared against a known past benchmark; we know roughly what it takes to make a Zootopia movie already, and so we can see how much better we have gotten at it in the intervening years. I think Zootopia 2 is an especially interesting case because of how important the first Zootopia (2016) was in the history of Disney Animation’s technology development. For a bit of context: the decade of Disney Animation films leading up to Zootopia (2016) was a time when the studio was rapidly climbing a steep learning curve for making CG movies. Every film had technical challenges that called for the studio to overcome unprecedented obstacles. Zootopia (2016) similarly presented an enormous list of challenges, but upon completing the film I felt there was a stronger sense of confidence in what the studio could achieve together. A small anecdote about Zootopia (2016) that I am very proud of is that at SIGGRAPH 2017, I heard from a friend at a major peer feature animation studio that they were blown away and had absolutely no idea how we had made Zootopia.
Ever since then, the sense in the studio has always been “this movie will be hard to make, but we know how to make it.” This isn’t to say that we don’t have interesting and difficult challenges to overcome in each movie we make; we always do! But, ever since Zootopia (2016)’s completion, I think we’ve been able to approach the challenges in each movie with greater confidence that we will be able to find solutions.
The major technology challenges on Zootopia 2 ultimately were pretty similar to the challenges on Zootopia (2016): everything is about detail and scale [Burkhard et al. 2016]. The world of Zootopia is incredibly detailed and visually rich, and that detail has to hold up at scales ranging from a tiny shrew to the tallest giraffe. Most characters are covered in detailed fur and hair, and because the setting is a modern city, shots can have hundreds or even thousands of characters on screen all at the same time, surrounded by all of the vehicles and lights and zillions of other props and details one expects in a city. Almost every shot in the movie has some form of complex simulation or FX work, and the nature of the story takes us through every environment and lighting scenario imaginable, all of which we have to be able to render cohesively and efficiently. Going back and rewatching Zootopia (2016), I still notice how much incredible geometry and shading detail is packed into every frame, and in the nine years since, our artists have only pushed things even further.
To give an example of the amazing amount of detail in Zootopia 2: at one point during production, our rendering team noticed some shots that had incredibly detailed snow with tons of tiny glints, so out of curiosity we opened up the shots to see how the artists had shaded the snow, and we found that they had constructed the snow out of zillions upon zillions of individual ice crystals. We were completely blown away; constructing snow this way was an idea that Disney Research had explored shortly after the first Frozen movie was made [Müller et al. 2016], but at the time it was purely a theoretical research idea, and a decade later our artists were just going ahead and actually doing it. The result in the final film looks absolutely amazing, and on top of that, instead of needing a specialized technology solution to make this approach feasible, in the past decade both our renderer and computers in general have gotten so much faster and our artists have improved their workflows so much that a brute-force solution was good enough to achieve this effect without much trouble at all.
One of the largest rendering advancements we made on Zootopia (2016) was the development of the Chiang hair shading model, which has since become the de-facto industry standard for fur/hair shading and is implemented in most major production renderers. For Zootopia 2, we kept the Chiang hair shading model [Chiang et al. 2016] as-is, but instead put a lot of effort into improving the accuracy and performance of our hair ray-geometry intersection algorithms. Making improvements to our ray-curve intersector actually took a large amount of close iteration with our Look Development artists. This may sound surprising since we didn’t change the fur shader at all, but the final look of our fur is an effect that arises from extensive multiple-scattering between fur strands, for which small energy differences that arise from inaccuracies in ray-curve intersection can multiply over many bounces into pretty significant overall look differences. In an original film, if the look of a character’s hair drifts slightly during early preproduction due to underlying renderer changes, generally these small visual changes can be tolerated and factored in as the look of the film evolves, but in a sequel with established characters that have a known target look that we must meet, we have to be a lot more careful.
I’ve been lucky enough to have gotten to work on a wide variety of types and scales of projects over the past decade at Disney Animation, and for Zootopia 2 I got to work on two of my absolute favorite types of projects. The first type of favorite project is the ones where we get to work on a custom solution for a very specific visual need in the film; these are the projects where I can point out a specific thing in final frames that is there because I wrote the code for it. My second type of favorite project is ones where we get to take something super bleeding edge from pure research and take it all the way through to practical, wide production usage. Getting to do both of these types of projects on the same film was a real treat! On Zootopia 2, working on the water tubes sequence was the first project type, and working closely with Disney Research Studios to widely deploy our next-generation path guiding system was the second project type. Hopefully we’ll have a lot more to present on both of these at SIGGRAPH/DigiPro 2026, but in the meantime here’s a quick summary.
One of the big projects I worked on for Moana 2 was a total, from-scratch rethink of our entire approach to rendering water. For the most part the same system we used on Moana 2 proved to be equally successful on Zootopia 2, but for the sequence where Nick, Judy, and Gary De’Snake zoom across the city in a water tube transport system, we had to extend the water rendering system from Moana 2 a little bit further. During this sequence, our characters are inside of glass tubes filled with water moving at something like a hundred miles per hour, with the surrounding environment visible through the tubes and whizzing by. In order to achieve the desired art direction, the tubes had to be modeled with actual water geometry inside since things like bubbles and sloshes and murk and such had to be visible, so going from inside to outside the geometry we had to render was characters inside of water inside of double-sided glass tubes set in huge complex forest and city environments. To both give artists the ability to efficiently model this setup and efficiently render these shots, we wound up building out a customized version of the standard nested dielectrics solution [Schmidt and Budge 2002]. Normally nested dielectrics is pretty straightforward to implement in a simple academic renderer (I’ve written about implementing nested dielectrics in my hobby renderer before), but implementing nested dielectrics to work correctly with the myriad of other advanced features in a production renderer while also remaining performant and robust within the context of a wavefront path tracing architecture proved to require a bit more work compared with in a toy renderer.
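The bookkeeping at the heart of the standard nested-dielectrics approach is compact enough to sketch. Below is a minimal, generic illustration of the priority rule from Schmidt and Budge (2002), not Hyperion’s implementation; the struct names and the “higher priority wins” convention are choices made for this example.

```cpp
#include <algorithm>
#include <vector>

// Minimal sketch of priority-based nested dielectrics (Schmidt & Budge 2002).
// Where artist-modeled dielectrics overlap, the higher-priority medium wins
// (e.g. the water is given a higher priority than the tube glass).
struct Dielectric {
  int   id;
  int   priority;
  float ior;
};

struct InteriorList {
  std::vector<Dielectric> media;  // dielectrics the ray is currently inside

  // Highest-priority medium enclosing the ray, ignoring 'skipId'.
  const Dielectric* highestExcluding(int skipId) const {
    const Dielectric* best = nullptr;
    for (const Dielectric& d : media)
      if (d.id != skipId && (!best || d.priority > best->priority)) best = &d;
    return best;
  }

  // Called at every boundary hit of dielectric 'm'. Returns true for a *true*
  // intersection (shade it, refract with etaI -> etaT); false means the hit
  // should be skipped and the ray simply continued.
  bool handleBoundary(const Dielectric& m, bool entering, float& etaI, float& etaT) {
    const Dielectric* other = highestExcluding(m.id);
    bool trueHit = (other == nullptr) || (m.priority >= other->priority);

    // Track the nesting regardless of whether the hit is shaded.
    if (entering) {
      media.push_back(m);
    } else {
      media.erase(std::remove_if(media.begin(), media.end(),
                  [&](const Dielectric& d) { return d.id == m.id; }),
                  media.end());
    }
    if (!trueHit) return false;

    float otherIor = other ? other->ior : 1.0f;  // 1.0 = air/vacuum
    etaI = entering ? otherIor : m.ior;          // medium the ray is leaving
    etaT = entering ? m.ior    : otherIor;       // medium the ray is entering
    return true;
  }
};
```

With the water given a higher priority than the glass, a ray that crosses the tube’s inner wall while already inside the water sees a false intersection and simply continues, which is what lets the overlapping water and glass geometry render as the intended water-filled tube.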
During Moana 2’s production, we started work with Disney Research|Studios on a next-generation path guiding system in Hyperion that supports both volumes and surfaces (unlike our previous path guiding system, which only supported surfaces); this new system is built on top of the excellent and state-of-the-art Open Path Guiding (OpenPGL) library [Herholz and Dittebrandt 2022]. Zootopia 2 is the first film where we’ve been able to deploy our next-generation path guiding on a wide scale, rendering about 12% of the entire movie using this system. We presented a lot of the technical details of this new system in our course on path guiding [Reichardt et al. 2025] at SIGGRAPH 2025, but a lot more work beyond what we presented in that course had to go into making path guiding a really production scalable renderer feature. This effort required deep collaboration between a handful of developers on the Hyperion team and a bunch of folks at Disney Research|Studios, to the point where over the past few years Disney Research|Studios has been using Hyperion essentially as one of their primary in-house research renderer platforms and Disney Research staff have been working directly with us on the same codebase. Having come from a more academic rendering background, I think this is one of the coolest things that being part of the larger Walt Disney Company enables our team and studio to do. Our next-generation path guiding system proved to be a really valuable tool on Zootopia 2; in several parts of the movie, entire sequences that we had anticipated to be extraordinarily difficult to render saw enormous efficiency and workflow improvements thanks to path guiding and wound up going through with relative ease!
One particularly fun thing about working on Zootopia 2 was that my wife, Harmony Li, was one of the movie’s Associate Technical Supervisors; this title means she was one of the leads for Zootopia 2’s TD department. Harmony being a supervisor on the show meant I got to work closely with her on a few things! She oversaw character look, simulation, technical animation, crowds, and something that Disney Animation calls “Tactics”, which is essentially optimization across the entire show ranging from pipeline and workflows all the way to render efficiency. As part of Zootopia 2’s Tactics strategy, the rendering team was folded more closely into the asset building process than in previous shows. Having huge crowds of thousands of characters on screen meant that every single individual character needed to be as optimized as possible, and to that end the rendering team helped provide guidance and best practices early in the character modeling and look development process to try to keep everything optimized while not compromising on final look. However, render optimization was only a small part of making the huge crowds in Zootopia 2 possible; various production technology teams and the TD department put enormous groundbreaking work into developing new ways to efficiently author and represent crowd rigs in USD and to interactively visualize huge crowds covered in fur inside of our 3D software packages. All of this also had to be done while, for the first time on a feature film project, Disney Animation switched from Maya to Presto for animation, and all on a movie which by necessity contains by far the greatest variety of different rig types and characters in any of our films (possibly in any animated film, period). Again, more on all of this at SIGGRAPH 2026, hopefully.
I think all of the things I’ve written about in this post are just a few great examples of why I think having a dedicated in-house technology development team is so valuable to the way we make films: Disney Animation’s charter is to always be making animated films that push the limits of the art form, and making sure our films are the best-looking films we can possibly make is a huge part of that goal. As an example, while Hyperion has a lot of cool features and unique technologies that are custom tailored to support Disney Animation’s needs and workflows, in my opinion the real value Hyperion brings at the end of the day is that our rendering team partners extremely closely with our artists and TDs to build exactly the tools that are needed for each of our movies, with maximum flexibility and customization since we know and develop the renderer from top to bottom. This is true of every technology team at Disney Animation, and it’s a big part of why I love working on our movies. I’ve written only about the projects I worked directly on in this post, which is a tiny subset of the whole of what went into making this movie. Making Zootopia 2 took dozens and dozens of these types of projects to achieve, and I’m so glad to have gotten to be a part of it!
On another small personal note, my wife and I had our first kid during the production of Zootopia 2, and our baby’s name is in the credits in the production babies section. What a cool tradition, and what a cool thing that our baby will forever be a part of!
Below are some beautiful frames from Zootopia 2. Every last detail in this movie was hand-crafted by hundreds of artists and TDs and engineers out of a dedication to and love for animation as an art form, and I promise this movie is worth seeing on the biggest theater screen you can find!
All images in this post are courtesy of and the property of Walt Disney Animation Studios.
Nicholas Burkard, Hans Keim, Brian Leach, Sean Palmer, Ernest J. Petti, and Michelle Robinson. 2016. From Armadillo to Zebra: Creating the Diverse Characters and World of Zootopia. In ACM SIGGRAPH 2016 Production Sessions. Article 24.
Matt Jen-Yuan Chiang, Benedikt Bitterli, Chuck Tappan, and Brent Burley. 2016. A Practical and Controllable Hair and Fur Model for Production Path Tracing. Computer Graphics Forum (Proc. of Eurographics) 35, 2 (May 2016), 275-283.
Thomas Müller, Marios Papas, Markus Gross, Wojciech Jarosz, and Jan Novák. 2016. Efficient Rendering of Heterogeneous Polydisperse Granular Media. ACM Transactions on Graphics (Proc. of SIGGRAPH Asia) 35, 6 (Nov. 2016), Article 168.
Lea Reichardt, Brian Green, Yining Karl Li, and Marco Manzi. 2025. Path Guiding Surfaces and Volumes in Disney’s Hyperion Renderer- A Case Study. In ACM SIGGRAPH 2025 Course Notes: Path Guiding in Production and Recent Advancements. 30-66.
Charles M. Schmidt and Brian Budge. 2002. Simple Nested Dielectrics in Ray Traced Images. Journal of Graphics Tools 7, 2 (Jan. 2002), 1–8.
...
Read the original on blog.yiningkarlli.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.