10 interesting stories served every morning and every evening.




1 845 shares, 33 trendiness

Statement from Federal Reserve Chair Jerome H. Powell


On Friday, the Department of Justice served the Federal Reserve with grand jury sub­poe­nas, threat­en­ing a crim­i­nal in­dict­ment re­lated to my tes­ti­mony be­fore the Senate Banking Committee last June. That tes­ti­mony con­cerned in part a multi-year pro­ject to ren­o­vate his­toric Federal Reserve of­fice build­ings.

I have deep re­spect for the rule of law and for ac­count­abil­ity in our democ­racy. No one—cer­tainly not the chair of the Federal Reserve—is above the law. But this un­prece­dented ac­tion should be seen in the broader con­text of the ad­min­is­tra­tion’s threats and on­go­ing pres­sure.

This new threat is not about my tes­ti­mony last June or about the ren­o­va­tion of the Federal Reserve build­ings. It is not about Congress’s over­sight role; the Fed through tes­ti­mony and other pub­lic dis­clo­sures made every ef­fort to keep Congress in­formed about the ren­o­va­tion pro­ject. Those are pre­texts. The threat of crim­i­nal charges is a con­se­quence of the Federal Reserve set­ting in­ter­est rates based on our best as­sess­ment of what will serve the pub­lic, rather than fol­low­ing the pref­er­ences of the President.

This is about whether the Fed will be able to con­tinue to set in­ter­est rates based on ev­i­dence and eco­nomic con­di­tions—or whether in­stead mon­e­tary pol­icy will be di­rected by po­lit­i­cal pres­sure or in­tim­i­da­tion.

I have served at the Federal Reserve un­der four ad­min­is­tra­tions, Republicans and Democrats alike. In every case, I have car­ried out my du­ties with­out po­lit­i­cal fear or fa­vor, fo­cused solely on our man­date of price sta­bil­ity and max­i­mum em­ploy­ment. Public ser­vice some­times re­quires stand­ing firm in the face of threats. I will con­tinue to do the job the Senate con­firmed me to do, with in­tegrity and a com­mit­ment to serv­ing the American peo­ple.

...

Read the original on www.federalreserve.gov »

2 591 shares, 68 trendiness

Apple picks Google's Gemini to run AI-powered Siri coming this year

Apple is join­ing forces with Google to power its ar­ti­fi­cial in­tel­li­gence fea­tures, in­clud­ing a ma­jor Siri up­grade ex­pected later this year.

The multiyear partnership will lean on Google's Gemini and cloud technology for future Apple foundational models, according to a joint statement obtained by CNBC's Jim Cramer.

“After careful evaluation, we determined that Google's technology provides the most capable foundation for Apple Foundation Models and we're excited about the innovative new experiences it will unlock for our users,” Apple said in a statement Monday.

The mod­els will con­tinue to run on Apple de­vices and the com­pa­ny’s pri­vate cloud com­pute, the com­pa­nies added.

Apple de­clined to com­ment on the terms of the deal. Google re­ferred CNBC to the joint state­ment.

In August, Bloomberg re­ported that Apple was in early talks with Google to use a cus­tom Gemini model to power a new it­er­a­tion of Siri. The news out­let later re­ported that Apple was plan­ning to pay about $1 bil­lion a year to uti­lize Google AI.

The deal is an­other ma­jor in­di­ca­tor of grow­ing trust in Google’s ac­cel­er­at­ing AI agenda and come­back against OpenAI. In 2025, the search gi­ant logged its best year since 2009 and sur­passed Apple in mar­ket cap­i­tal­iza­tion last week for the first time since 2019.

Google al­ready pays Apple bil­lions each year to be the de­fault search en­gine on iPhones. But that lu­cra­tive part­ner­ship briefly came into ques­tion af­ter Google was found to hold an il­le­gal in­ter­net search mo­nop­oly.

In September, a judge ruled against a worst-case sce­nario out­come that could have forced Google to di­vest its Chrome browser busi­ness.

The de­ci­sion also al­lowed Google to con­tinue to make deals such as the one with Apple.

...

Read the original on www.cnbc.com »

3 492 shares, 111 trendiness

Introducing Cowork

When we re­leased Claude Code, we ex­pected de­vel­op­ers to use it for cod­ing. They did—and then quickly be­gan us­ing it for al­most every­thing else. This prompted us to build Cowork: a sim­pler way for any­one—not just de­vel­op­ers—to work with Claude in the very same way. Cowork is avail­able to­day as a re­search pre­view for Claude Max sub­scribers on our ma­cOS app, and we will im­prove it rapidly from here.

How is us­ing Cowork dif­fer­ent from a reg­u­lar con­ver­sa­tion? In Cowork, you give Claude ac­cess to a folder of your choos­ing on your com­puter. Claude can then read, edit, or cre­ate files in that folder. It can, for ex­am­ple, re-or­ga­nize your down­loads by sort­ing and re­nam­ing each file, cre­ate a new spread­sheet with a list of ex­penses from a pile of screen­shots, or pro­duce a first draft of a re­port from your scat­tered notes.

In Cowork, Claude com­pletes work like this with much more agency than you’d see in a reg­u­lar con­ver­sa­tion. Once you’ve set it a task, Claude will make a plan and steadily com­plete it, while loop­ing you in on what it’s up to. If you’ve used Claude Code, this will feel fa­mil­iar—Cowork is built on the very same foun­da­tions. This means Cowork can take on many of the same tasks that Claude Code can han­dle, but in a more ap­proach­able form for non-cod­ing tasks.

When you’ve mas­tered the ba­sics, you can make Cowork more pow­er­ful still. Claude can use your ex­ist­ing con­nec­tors, which link Claude to ex­ter­nal in­for­ma­tion, and in Cowork we’ve added an ini­tial set of skills that im­prove Claude’s abil­ity to cre­ate doc­u­ments, pre­sen­ta­tions, and other files. If you pair Cowork with Claude in Chrome, Claude can com­plete tasks that re­quire browser ac­cess, too.

Cowork is de­signed to make us­ing Claude for new work as sim­ple as pos­si­ble. You don’t need to keep man­u­ally pro­vid­ing con­text or con­vert­ing Claude’s out­puts into the right for­mat. Nor do you have to wait for Claude to fin­ish be­fore of­fer­ing fur­ther ideas or feed­back: you can queue up tasks and let Claude work through them in par­al­lel. It feels much less like a back-and-forth and much more like leav­ing mes­sages for a coworker.

In Cowork, you can choose which fold­ers and con­nec­tors Claude can see: Claude can’t read or edit any­thing you don’t give it ex­plicit ac­cess to. Claude will also ask be­fore tak­ing any sig­nif­i­cant ac­tions, so you can steer or course-cor­rect it as you need.

That said, there are still things to be aware of before you give Claude control. The main thing to know is that, by default, Claude can take potentially destructive actions (such as deleting local files) if it's instructed to. Since there's always some chance that Claude might misinterpret your instructions, you should give Claude very clear guidance around things like this.

You should also be aware of the risk of “prompt injections”: attempts by attackers to alter Claude's plans through content it might encounter on the internet. We've built sophisticated defenses against prompt injections, but agent safety—that is, the task of securing Claude's real-world actions—is still an active area of development in the industry.

These risks aren’t new with Cowork, but it might be the first time you’re us­ing a more ad­vanced tool that moves be­yond a sim­ple con­ver­sa­tion. We rec­om­mend tak­ing pre­cau­tions, par­tic­u­larly while you learn how it works. We pro­vide more de­tail in our Help Center.

This is a re­search pre­view. We’re re­leas­ing Cowork early be­cause we want to learn what peo­ple use it for, and how they think it could be bet­ter. We en­cour­age you to ex­per­i­ment with what Cowork can do for you, and to try things you don’t ex­pect to work: you might be sur­prised! As we learn more from this pre­view, we plan to make lots of im­prove­ments (including by adding cross-de­vice sync and bring­ing it to Windows), and we’ll iden­tify fur­ther ways to make it safer.

Claude Max subscribers can try Cowork now by downloading the macOS app, then clicking on “Cowork” in the sidebar. If you're on another plan, you can join the waitlist for future access.

...

Read the original on claude.com »

4 467 shares, 42 trendiness

Floppy Disks: the best TV remote for kids

Modern TVs are very poorly suited for kids. They require using complicated remotes or mobile phones, and navigating apps that continually try to lure you into watching something other than what you intended. The usual scenario ends with the kid feeling disempowered and asking an adult to put something on. That something ends up on auto-play, because then the adult is free to do other things, and the kid ends up stranded, powerless and comatose, in front of the TV.

Instead I wanted to build something for my 3-year-old son that he could understand and use independently. It should empower him to make his own choices. It should be physical and tangible, i.e. something he could touch and feel. It should also maintain some illusion that the actual media content was stored physically and not un-understandably in “the cloud”, meaning it should e.g. be destroyable — if you break the media, there should be consequences. And there should be no auto-play: interact once and get one video.

My first idea for data storage was to use the shell of a floppy disk and floppy drive, and put in an RFID tag; this has been done a couple of times on the internet, such as RFIDisk, or this RaspberryPi-based RFID reader, or this video covering how to embed an RFID tag in a floppy disk. But getting the floppy disk apart to put in an RFID tag and getting it back together was kinda wonky.

The next problem to tackle was how to detect that a disk is inserted. The concept of AutoRun from Windows 95 was a beauty: insert a CD-ROM and it would automatically start whatever was on the media. Great for convenience, quite questionable for security. While in theory floppy disks are supported for AutoRun, it turns out that floppy drives basically don't know if a disk is inserted until the operating system tries to access it! There is a pin 34 “Disk Change” signal that is supposed to provide this information, but it is basically a lie. None of the drives in my possession had that pin connected to anything, and the internet mostly concurs. In the end I slightly modified the drive and added a simple rolling switch that engages when a disk is inserted.

However, the Arduino FDC floppy library is only compatible with the AVR-based Arduinos, not the ESP-based ones, because it needs to control the timing very precisely and therefore uses a healthy amount of inline assembler. This meant that I would need one AVR-based Arduino to control the floppy drive, and another ESP-based one to do the WiFi communication. Such combined boards do exist, and I ended up using one, but I'm not sure I would recommend it: the usage is really fiddly, as you need to set the jumpers differently for programming the ATmega, programming the ESP, or connecting the two boards' serial ports together.

A remote control should be portable, and this means battery-powered. Driving a floppy drive off of lithium batteries was interesting. There is a large spike in current draw of several amperes when the disk needs to spin up, while the power draw afterwards is more modest, a couple of hundred milliamperes. I wanted the batteries to be 18650s, because I have those in abundance. This meant a battery voltage of 3.7V nominally, up to 4.2V for a fully charged battery; 5V is needed to spin the floppy around, so a boost DC-DC converter was needed. I used an off-the-shelf XL6009 step-up converter board. At this point a lot of head-scratching occurred: that initial spin-up power draw would cause the microcontroller to reset. In the end a 1000uF capacitor on the microcontroller side seemed to help but not eliminate the problem.

One cru­cial find­ing was that the ground side of the in­ter­face ca­ble should ab­solutely not be con­nected to any grounds on the mi­cro­con­troller side. I was us­ing a rel­a­tively sim­ple logic-level MOSFET, the IRLZ34N, to turn off the drive by dis­con­nect­ing the ground side. If any ground is con­nected, the disk won’t turn off. But also: if any logic pin was be­ing pulled to ground by the ATmega, that would also pro­vide a path to ground. But since the ATmega can­not sink that much cur­rent this would lead to spu­ri­ous re­sets! Obvious af­ter the fact, but this took quite some head­scratch­ing. Setting all the logic pins to in­put, and thus high im­ped­ance, fi­nally fixed the sta­bil­ity is­sues.
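That sequencing is worth spelling out. Below is a minimal sketch of the power-down logic; the project's actual firmware is Arduino C++ on the ATmega, so this MicroPython-flavored illustration, the pin numbers, and the helper name are all assumptions rather than the author's code.

from machine import Pin

FLOPPY_LOGIC_PINS = (5, 12, 13, 14)  # interface lines to the drive (assumed)
mosfet_gate = Pin(15, Pin.OUT)       # IRLZ34N gate on the drive's ground side

def floppy_power_off():
    # Tristate every logic pin first: a pin still driven low gives the drive
    # a path to ground through the microcontroller, which cannot sink that
    # much current and resets.
    for n in FLOPPY_LOGIC_PINS:
        Pin(n, Pin.IN)
    mosfet_gate.value(0)             # only now disconnect the drive's ground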

After fixing the stability, the next challenge was how to make both of the microcontrollers sleep. Because the ATmega's sleep modes are quite a lot easier to deal with, and because the initial trigger would be the floppy insertion, I decided to put the ATmega in charge overall. The ESP then has a very simple job: when awoken, read serial input; when a newline is found, send off that complete line via WiFi; and after 30 seconds, signal to the ATmega that we're sleeping and go back to sleep.

The ATmega sends a “diskin” message over serial to the ESP; the ESP transmits this over WiFi when available.

Common to both is that they should be idempotent actions, and the “diskin” shortcut makes the media resume without having to wait for the disk contents themselves to be read and processed. This means that the “play/pause” disk just needs to contain an empty file to work.
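The ESP's role is small enough to sketch in full. The author's firmware is Arduino C++, and the post doesn't say how messages travel over WiFi, so this MicroPython-flavored sketch is an illustration only: the SSID, pins, UDP transport, and target address are all assumptions.

import network, socket, time
from machine import UART, Pin, deepsleep

uart = UART(0, 9600)                 # serial link to the ATmega (pins assumed)
sleeping = Pin(4, Pin.OUT, value=0)  # "we're going to sleep" signal line

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("my-ssid", "my-password")
while not wlan.isconnected():
    time.sleep_ms(100)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
deadline = time.ticks_add(time.ticks_ms(), 30000)   # give up after 30 seconds
buf = b""
while time.ticks_diff(deadline, time.ticks_ms()) > 0:
    chunk = uart.read()
    if chunk:
        buf += chunk
        while b"\n" in buf:                   # a newline completes one message
            line, buf = buf.split(b"\n", 1)
            sock.sendto(line, ("192.168.1.50", 9000))  # media player (assumed)

sleeping.value(1)  # tell the ATmega we're done
deepsleep()        # sleep until the ATmega wakes us on the next disk insert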

The little guy quickly caught on to the idea! Much fun was had just pausing and resuming music and his Fantus TV shows. He explored and prodded, and some disks were harmed in the process. One problem that I did solve was that the read head stayed on track 0 after having read everything: this meant that when the remote was tumbled around with a disk inside, the disk got damaged at track 0. To compensate, I move the head to track 20 after reading has finished: any damage is then done there, where we don't store any data. As a bonus, it also plays a little extra mechanical melody.

...

Read the original on blog.smartere.dk »

5 437 shares, 57 trendiness

haykgrigo3/TimeCapsuleLLM: A LLM trained only on data from certain time periods to reduce modern bias

A lan­guage model trained from scratch ex­clu­sively on data from cer­tain places and time pe­ri­ods to re­duce mod­ern bias and em­u­late the voice, vo­cab­u­lary, and world­view of the era.

Imagine if an AI model didn't just pretend to be historical but actually was.

v0 and v0.5 are built on nanoGPT by Andrej Karpathy; the core training scripts and model architecture are his work.

Early prompts show the model responding with 1800s language and behavior. Example: Prompt: “Who art Henry?” and it replied “I know that man, I have did not a black, the storm.”

* Sentences are mostly in­co­her­ent (expected for ~187MB train­ing data)

* OCR noise (“Digitized by Google”) still pre­sent in out­puts

First model to re­call and con­nect a real his­tor­i­cal event with an ac­tual fig­ure from the dataset.

Example: Prompt: “It was the year of our Lord 1834”

The output: “It was the year of our Lord 1834 and the streets of London were filled with protest and petition. The cause, as many re counted, was not bound in the way of private, but having taken up the same day in the day of Lord Palmerston, the public will receive a short statement of the difficulties under which the day of law has reached us. It is a matter of deep regret, that the present events in the history of the world are clear, and consequently will be’known. It is not true that the very men who first settled in the Gospel at Jerusalem should have so extensive and so interesting a record of the prosperity and prosperity”

At first I assumed that a protest might have coincidentally taken place the same year, but take a look at this:

Trained using a 15GB sample from v2's 90GB dataset. Model trained to 10K steps only.

There was an issue with tokenization that caused output to look like this:

W ho is Charles D ic ens ? D oes that work more of h ise x cell ent st ir ring , in his pl ays , int he G reat C omp any s f arm ? What I have y et to qu ote from J ack P ick ett ? D oy oun ot know th att hey were as pe cies of g all op­ing , ors aw ing oft heir bre asts , or what w ast ob ed one about the t im ew hen J acks on was look­ing ont hew ind ow ? What ist hes uc­cess of an English man , and which h is­son has not be end is cover ing t ome , where by to acc omp lish such a we ight ? D idy ou ever m ake ap ass age int othe old road way , ort o an anch or - b ree ze att he foot of our boat , which you m ust le ave us ? The fact is , that whe ther the wind would rise up from the pl ain on S atur day night orn o ont ide , ort ill then orth , or other wise , wes hall be com pelled t od esc ribe af orm id­able bar rier , with t hes ame e ffect s ast he p re­sent . In th iss itu ation , at le ast , itis not to omuch t os ay that we have le ft that room . I bel ieve there are three c op ies int he F ive H un­dred - fold ,” to bere fer red to , ast he first n um­ber of our read ers wh ow is ht

“Who is Charles Dickens? Does that work more of his excellent stirring, in his plays, in the Great Company’s farm? What I have yet to quote from Jack Pickett? Do you not know that they were a species of galloping, or sawing of their breasts, or what was to be done about the time when Jackson was looking on the window? What is the success of an Englishman, and which his son has not been discovering to me, whereby to accomplish such a weight? Did you ever make a passage into the old roadway, or to an anchor-breeze at the foot of our boat, which you must leave us? The fact is, that whether the wind would rise up from the plain on Saturday night or noontide, or till the north, or otherwise, we shall be compelled to describe a formidable barrier, with the same effects as the present. In this situation, at least, it is not too much to say that we have left that room. I believe there are three copies in the ‘Five Hundred-fold,’ to be referred to, as the first number of our readers who wish t[o]…”

Trained us­ing the same 15GB sam­ple used for train­ing eval1. Model is again trained to 10K steps only.

Charles DarwinECCEMACY. Sir, — The fol­low­ing case is in­ter­est­ing to me : — I was in London a fort­night, and was much af­fected with an at­tack of rheuma­tism. The first at­tack of rheuma­tism was a week be­fore I saw you, and the sec­ond when I saw you, and the third when I saw you, and the third in the same time. The sec­ond at­tack of gout, how­ever, was not ac­com­pa­nied by any febrile symp­toms, but was ac­com­pa­nied with an in­creased flow of urine, and with a more co­pi­ous dis­charge of urine. The third at­tack was an hour af­ter I saw you, and was suc­ceeded by a re­turn of a parox­ysm of gout, and a more rapid re­turn of the gout. The fourth at­tack was also ac­com­pa­nied by a fever, but was not al­ways ac­com­pa­nied by any febrile symp­toms. The third at­tack of gout was a fort­night af­ter you had been ill, and the fourth was fol­lowed by a parox­ysm of gout. The fourth at­tack was a fort­night af­ter you were at­tacked, and was ac­com­pa­nied by a sense

* The full 90GB is not available yet as it hasn't been tokenized, but you can find a 15GB sample here: https://huggingface.co/datasets/haykgrigorian/TimeCapsuleLLM-London-1800-1875-v2-15GB

Refer to v2 bias re­port for more info.

This project focuses mostly on curating historical data, preparing it for training, and building a tokenizer. I am not going to cover the full LLM training process; for that, refer to nanoGPT by Andrej Karpathy.

* Collect .txt files of pub­lic do­main books, doc­u­ments, etc from your cho­sen time pe­riod (e.g., London 1800-1850)

* Keep them within your cho­sen time/​place win­dow

* Clean the text files using a script or manually: remove Project Gutenberg headers/footers, modern annotations, and OCR errors (see the sketch after this list)

* Run train_­to­k­enizer.py or train_­to­k­eniz­er_hf.py on the cleaned data.

* This will give you vo­cab.json and merges.txt

* These files define the vocabulary and merge rules for your model

* Refer to nanoGPT by Andrej Karpathy for the train­ing process or your cho­sen ar­chi­tec­ture’s docs.
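As a rough illustration of the cleaning and tokenizer steps above, here is what a minimal version might look like with the Hugging Face tokenizers library (which train_tokenizer_hf.py presumably wraps). The paths, vocabulary size, and the exact Gutenberg-marker regex are assumptions for illustration, not the repo's actual code.

import re
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer

RAW, CLEAN = Path("raw_texts"), Path("clean_texts")
CLEAN.mkdir(exist_ok=True)

# Everything between Gutenberg's *** START ... *** and *** END ... *** markers
# is the actual book text; drop the boilerplate around it.
marker = re.compile(r"\*\*\* ?START OF.*?\*\*\*(.*?)\*\*\* ?END OF", re.S | re.I)

for f in RAW.glob("*.txt"):
    text = f.read_text(encoding="utf-8", errors="ignore")
    m = marker.search(text)
    body = m.group(1) if m else text                 # keep unmarked files as-is
    body = body.replace("Digitized by Google", "")   # common OCR noise
    (CLEAN / f.name).write_text(body, encoding="utf-8")

# Train a byte-level BPE tokenizer on the cleaned corpus; save_model() writes
# the vocab.json and merges.txt files mentioned above.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=[str(p) for p in CLEAN.glob("*.txt")],
                vocab_size=32000, min_frequency=2)
tokenizer.save_model("tokenizer_out")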

Selective Temporal Training (STT) is a machine learning methodology where all training data is curated to fall within a specific historical time period, in order to model the language and knowledge of that era without influence from modern concepts. For example, the current model I have now (v0.5) is trained on data exclusively from 1800-1875; it's not fine-tuned but trained from scratch, resulting in output that reflects the linguistic style and historical context of that time period.

For this project I'm trying to create a language model that is unclouded by modern bias. If I fine-tune something like GPT-2, it's already pre-trained and that information won't go away. If I train from scratch, the language model won't pretend to be old, it just will be. The goal for this project right now is to create something that can reason exclusively using knowledge from London books published between 1800 and 1875.

I'm using books, legal documents, newspapers, and other writings from 1800–1875 London. The list I linked (for v0) has around 200 documents, but for the first training I just used 50 files, about 187 MB. You can view a list of the documents:

https://github.com/haykgrigo3/TimeCapsuleLLM/blob/main/Copy%20of%20London%20Documents%20for%20Time%20Capsule%20LLM.txt

...

Read the original on github.com »

6 362 shares, 4 trendiness

- YouTube

...

Read the original on www.youtube.com »

7 316 shares, 25 trendiness

Ozempic is changing the foods Americans buy

When Americans be­gin tak­ing ap­petite-sup­press­ing drugs like Ozempic and Wegovy, the changes ex­tend well be­yond the bath­room scale. According to new re­search, the med­ica­tions are as­so­ci­ated with mean­ing­ful re­duc­tions in how much house­holds spend on food, both at the gro­cery store and at restau­rants.

The study, published Dec. 18 in the Journal of Marketing Research, links survey data on GLP-1 receptor agonist use — a class of drugs originally developed for diabetes and now widely prescribed for weight loss — with detailed transaction records from tens of thousands of U.S. households. The result is one of the most comprehensive looks yet at how GLP-1 adoption is associated with changes in everyday food purchasing in the real world.

The head­line find­ing is strik­ing: Within six months of start­ing a GLP-1 med­ica­tion, house­holds re­duce gro­cery spend­ing by an av­er­age of 5.3%. Among higher-in­come house­holds, the drop is even steeper, at more than 8%. Spending at fast-food restau­rants, cof­fee shops and other lim­ited-ser­vice eater­ies falls by about 8%.

Among households that continue using the medication, lower food spending persists for at least a year, though the magnitude of the reduction becomes smaller over time, say co-authors, assistant professor Sylvia Hristakeva and professor Jura Liaukonyte, both in the Charles H. Dyson School of Applied Economics and Management in the Cornell SC Johnson College of Business.

“The data show clear changes in food spending following adoption,” Hristakeva said. “After discontinuation, the effects become smaller and harder to distinguish from pre-adoption spending patterns.”

Unlike pre­vi­ous stud­ies that re­lied on self-re­ported eat­ing habits, the new analy­sis draws on pur­chase data col­lected by Numerator, a mar­ket re­search firm that tracks gro­cery and restau­rant trans­ac­tions for a na­tion­ally rep­re­sen­ta­tive panel of about 150,000 house­holds. The re­searchers matched those records with re­peated sur­veys ask­ing whether house­hold mem­bers were tak­ing GLP-1 drugs, when they started and why.

That com­bi­na­tion al­lowed the team to com­pare adopters with sim­i­lar house­holds that did not use the drugs, iso­lat­ing changes that oc­curred af­ter med­ica­tion be­gan.

The re­duc­tions were not evenly dis­trib­uted across the gro­cery store.

Ultra-processed, calo­rie-dense foods — the kinds most closely as­so­ci­ated with crav­ings — saw the sharpest de­clines. Spending on sa­vory snacks dropped by about 10%, with sim­i­larly large de­creases in sweets, baked goods and cook­ies. Even sta­ples like bread, meat and eggs de­clined.

Only a hand­ful of cat­e­gories showed in­creases. Yogurt rose the most, fol­lowed by fresh fruit, nu­tri­tion bars and meat snacks.

“The main pattern is a reduction in overall food purchases. Only a small number of categories show increases, and those increases are modest relative to the overall decline,” Hristakeva said.

The ef­fects ex­tended be­yond the su­per­mar­ket. Spending at lim­ited-ser­vice restau­rants such as fast-food chains and cof­fee shops fell sharply as well.

The study also sheds light on who is taking GLP-1 medications. The share of U.S. households reporting at least one user rose from about 11% in late 2023 to more than 16% by mid-2024. Weight-loss users skew younger and wealthier, while those taking the drugs for diabetes are older and more evenly distributed across income groups.

Notably, about one-third of users stopped tak­ing the med­ica­tion dur­ing the study pe­riod. When they did, their food spend­ing re­verted to pre-adop­tion lev­els — and their gro­cery bas­kets be­came slightly less healthy than be­fore they started, dri­ven in part by in­creased spend­ing on cat­e­gories such as candy and choco­late.

That move­ment un­der­scores an im­por­tant lim­i­ta­tion, the au­thors cau­tion. The study can­not fully sep­a­rate the bi­o­log­i­cal ef­fects of the drugs from other lifestyle changes users may make at the same time. However, ev­i­dence from clin­i­cal tri­als, com­bined with the ob­served re­ver­sion in spend­ing af­ter dis­con­tin­u­a­tion, sug­gests ap­petite sup­pres­sion is likely a key mech­a­nism be­hind the spend­ing changes.

The find­ings carry im­pli­ca­tions far be­yond in­di­vid­ual house­holds.

For food man­u­fac­tur­ers, restau­rants and re­tail­ers, wide­spread GLP-1 adop­tion could mean long-term shifts in de­mand, par­tic­u­larly for snack foods and fast food. Package sizes, prod­uct for­mu­la­tions and mar­ket­ing strate­gies may need to change. For pol­i­cy­mak­ers and pub­lic-health ex­perts, the re­sults add con­text to on­go­ing de­bates about the role of med­ical treat­ments in shap­ing di­etary be­hav­ior — and whether bi­o­log­i­cally dri­ven ap­petite changes suc­ceed where taxes and la­bels have strug­gled.

“At current adoption rates, even relatively modest changes at the household level can have meaningful aggregate effects,” Hristakeva said. “Understanding these demand shifts is therefore important for assessing food markets and consumer spending.”

...

Read the original on news.cornell.edu »

8 308 shares, 17 trendiness

J.R.R. Tolkien, Using a Tape Recorder for the First Time, Reads from The Hobbit for 30 Minutes (1952)

Hav­ing not revis­it­ed The Hob­bit in some time, I’ve felt the famil­iar pull—shared by many read­ers—to re­turn to Tolkien’s fairy-tale nov­el it­self. It was my first expo­sure to Tolkien, and the per­fect book for a young read­er ready to dive into moral com­plex­i­ty and a ful­ly-real­ized fic­tion­al world.

And what bet­ter guide could there be through The Hob­bit than Tolkien him­self, read­ing (above) from the 1937 work? In this 1952 record­ing in two parts (part 2 is be­low), the ven­er­a­ble fan­ta­sist and schol­ar reads from his own work for the first time on tape.

Tolkien begins with a passage that first describes the creature Gollum; listening to this description again, I am struck by how differently I imagined him when I first read the book. The Gollum of The Hobbit seems somehow hoarier and more monstrous than many later visual interpretations. This is a minor point and not a criticism, but perhaps a comment on how necessary it is to return to the source of a mythic world as rich as Tolkien's, even, or especially, when it's been so well-realized in other media. No one, after all, knows Middle Earth better than its creator.

These read­ings were part of a much longer record­ing ses­sion, dur­ing which Tolkien also read (and sang!) exten­sive­ly from The Lord of the Rings. A YouTube user has col­lect­ed, in sev­er­al parts, a ra­dio broad­cast of that full ses­sion, and it’s cer­tain­ly worth your time to lis­ten to it all the way through. It’s also worth know­ing the neat con­text of the record­ing. Here’s the text that accom­pa­nies the video on YouTube:

When Tolkien visited a friend in August of 1952 to retrieve a manuscript of The Lord of the Rings, he was shown a “tape recorder”. Having never seen one before, he asked how it worked and was then delighted to have his voice recorded and hear himself played back for the first time. His friend then asked him to read from The Hobbit, and Tolkien did so in this one incredible take.

Note: An ear­li­er ver­sion of this post ap­peared on our site in 2012.

Listen to J.R.R. Tolkien Read Poems from The Fellowship of the Ring, in Elvish and English (1952)

When the Nobel Prize Committee Rejected The Lord of the Rings: Tolkien “Has Not Measured Up to Storytelling of the Highest Quality” (1961)

J.R.R. Tolkien Snubs a German Publisher Asking for Proof of His “Aryan Descent” (1938)

Josh Jones is a writer and musi­cian based in Durham, NC.

...

Read the original on www.openculture.com »

9 294 shares, 13 trendiness

Xfce is great

I have not been shy talk­ing about my love of Xfce over the years here. The desk­top en­vi­ron­ment has been a trusted friend ever since I first loved it on the late Cobind Desktop (still the high wa­ter mark of desk­top Linux, as far as I’m con­cerned).

I’m glad to see I’m not the only one. David Gerard of Pivot to AI fame re­cently shared this post he wrote in 2012:

The ques­tion with min­i­mal desk­tops is the fine line be­tween as sim­ple as pos­si­ble and just a bit too sim­ple. How much ba­sic stuff do you have to add back? 4.8 took it slightly far, 4.10 is al­most Just Right. XFCE is so far a case study in Not Fucking It Up; I hope they never go to ver­sion 5, and just up­date 4 for­ever.

This (1) longevity and (2) getting the balance right cannot be overstated. Here's my current Xfce desktop, for example:

Except, no it isn't. That's a screenshot of my FreeBSD desktop from 2008, with the bright and clear Tango Iconset (speaking of high-water marks). Remember when iconography was discernible at a glance? Aka, functional as icons? But I digress.

Xfce in 2025 (no, 2026, damn it!) is just as easy to understand, light, and fast as it first was booting Cobind on my HP Brio when I was in school, or when building it from source in FreeBSD ports. Though unlike a barebones window manager or other “light” DEs, Xfce feels usable, feature complete, and designed by someone who understands why people use desktop computers (cough GNOME).

I do use KDE on my pri­mary desk­top. Version 4 was a mess, but they’ve made mas­sive im­prove­ments, es­pe­cially within the last year. I’m not sure how much this had to do with the Steam Deck, and a new gen­er­a­tion of peo­ple re­al­is­ing that… wait… I can run stuff on this box other than games? There’s a desk­top here!? But my lap­tops all run Xfce, and I’m half-tempted to move back to it on the desk­top.

I'm with David here. I hope they never feel the need to “innovate” with “disruption” for UX. The switch to the Thunar file manager was the last major user-facing change I can remember, and it was great.

I’m not sug­gest­ing we reached peak UI with Xfce, but no desk­top since has made a com­pelling case (for me) for its re­place­ment. I love, love, love that Xfce is main­tained this way in spite of all the in­dus­try pres­sures to turn it into some­thing else.

I stopped writ­ing posts like this for years, out of fear of how peo­ple from spe­cific desk­top en­vi­ron­ments would re­spond. If you’re about to write me an an­gry screed, know that I will im­me­di­ately delete it and block you, just as I did last time. Both yours and my time are bet­ter spent.

I also know (sigh) this dis­claimer will be ig­nored, so I’m ques­tion­ing why I’m even both­er­ing. Maybe I’m a sucker for pun­ish­ment.

...

Read the original on rubenerd.com »

10 287 shares, 25 trendiness

Code & Visuals

Disney Animation’s movie for 2025 is Zootopia 2, which is the stu­dio’s 64th an­i­mated fea­ture film. Zootopia 2 picks up where the first film left off, tak­ing us deeper into the won­der­ful and wild an­i­mal world of the city. One of the re­ally fun things about Zootopia pro­jects is that each one ex­pands the world fur­ther. The first film in­tro­duced the set­ting, the Zootopia+ se­ries on Disney+ of­fered fun char­ac­ter vi­gnettes to ex­pand that world, and Zootopia 2 now takes us deep into the city’s his­tory and to places both fa­mil­iar and brand new. I’ve had a great time work­ing on Zootopia 2 for the past two years!

From a tech­nol­ogy per­spec­tive, se­quels are al­ways in­ter­est­ing to work on be­cause they give us the abil­ity to eval­u­ate where our film­mak­ing ca­pa­bil­i­ties presently stand com­pared against a known past bench­mark; we know roughly what it takes to make a Zootopia movie al­ready, and so we can see how much bet­ter we have got­ten at it in the in­ter­ven­ing years. I think Zootopia 2 is an es­pe­cially in­ter­est­ing case be­cause of how im­por­tant the first Zootopia (2016) was in the his­tory of Disney Animation’s tech­nol­ogy de­vel­op­ment. For a bit of con­text: the decade of Disney Animation films lead­ing up to Zootopia (2016) was a time when the stu­dio was rapidly climb­ing a steep learn­ing curve for mak­ing CG movies. Every film had tech­ni­cal chal­lenges that called for the stu­dio to over­come un­prece­dented ob­sta­cles. Zootopia (2016) sim­i­larly pre­sented an enor­mous list of chal­lenges, but upon com­plet­ing the film I felt there was a stronger sense of con­fi­dence in what the stu­dio could achieve to­gether. A small anec­dote about Zootopia (2016) that I am very proud of is that at SIGGRAPH 2017, I heard from a friend at a ma­jor peer fea­ture an­i­ma­tion stu­dio that they were blown away and had ab­solutely no idea how we had made Zootopia.

Ever since then, the sense in the studio has always been “this movie will be hard to make, but we know how to make it.” This isn't to say that we don't have interesting and difficult challenges to overcome in each movie we make; we always do! But, ever since Zootopia (2016)'s completion, I think we've been able to approach the challenges in each movie with greater confidence that we will be able to find solutions.

The ma­jor tech­nol­ogy chal­lenges on Zootopia 2 ul­ti­mately were pretty sim­i­lar to the chal­lenges on Zootopia (2016): every­thing is about de­tail and scale [Burkhard et al. 2016]. The world of Zootopia is in­cred­i­bly de­tailed and vi­su­ally rich, and that de­tail has to hold up at scales rang­ing from a tiny shrew to the tallest gi­raffe. Most char­ac­ters are cov­ered in de­tailed fur and hair, and be­cause the set­ting is a mod­ern city, shots can have hun­dreds or even thou­sands of char­ac­ters on screen all at the same time, sur­rounded by all of the ve­hi­cles and lights and zil­lions of other props and de­tails one ex­pects in a city. Almost every shot in the movie has some form of com­plex sim­u­la­tion or FX work, and the na­ture of the story takes us through every en­vi­ron­ment and light­ing sce­nario imag­in­able, all of which we have to be able to ren­der co­he­sively and ef­fi­ciently. Going back and re­watch­ing Zootopia (2016), I still no­tice how much in­cred­i­ble geom­e­try and shad­ing de­tail is packed into every frame, and in the nine years since, our artists have only pushed things even fur­ther.

To give an ex­am­ple of the amaz­ing amount of de­tail in Zootopia 2: at one point dur­ing pro­duc­tion, our ren­der­ing team no­ticed some shots that had in­cred­i­bly de­tailed snow with tons of tiny glints, so out of cu­rios­ity we opened up the shots to see how the artists had shaded the snow, and we found that they had con­structed the snow out of zil­lions upon zil­lions of in­di­vid­ual ice crys­tals. We were com­pletely blown away; con­struct­ing snow this way was an idea that Disney Research had ex­plored shortly af­ter the first Frozen movie was made [Müller et al. 2016], but at the time it was purely a the­o­ret­i­cal re­search idea, and a decade later our artists were just go­ing ahead and ac­tu­ally do­ing it. The re­sult in the fi­nal film looks ab­solutely amaz­ing, and on top of that, in­stead of need­ing a spe­cial­ized tech­nol­ogy so­lu­tion to make this ap­proach fea­si­ble, in the past decade both our ren­derer and com­put­ers in gen­eral have got­ten so much faster and our artists have im­proved their work­flows so much that a brute-force so­lu­tion was good enough to achieve this ef­fect with­out much trou­ble at all.

One of the largest ren­der­ing ad­vance­ments we made on Zootopia (2016) was the de­vel­op­ment of the Chiang hair shad­ing model, which has since be­come the de-facto in­dus­try stan­dard for fur/​hair shad­ing and is im­ple­mented in most ma­jor pro­duc­tion ren­der­ers. For Zootopia 2, we kept the Chiang hair shad­ing model [Chiang et al. 2016] as-is, but in­stead put a lot of ef­fort into im­prov­ing the ac­cu­racy and per­for­mance of our hair ray-geom­e­try in­ter­sec­tion al­go­rithms. Making im­prove­ments to our ray-curve in­ter­sec­tor ac­tu­ally took a large amount of close it­er­a­tion with our Look Development artists. This may sound sur­pris­ing since we did­n’t change the fur shader at all, but the fi­nal look of our fur is an ef­fect that arises from ex­ten­sive mul­ti­ple-scat­ter­ing be­tween fur strands, for which small en­ergy dif­fer­ences that arise from in­ac­cu­ra­cies in ray-curve in­ter­sec­tion can mul­ti­ply over many bounces into pretty sig­nif­i­cant over­all look dif­fer­ences. In an orig­i­nal film, if the look of a char­ac­ter’s hair drifts slightly dur­ing early pre­pro­duc­tion due to un­der­ly­ing ren­derer changes, gen­er­ally these small vi­sual changes can be tol­er­ated and fac­tored in as the look of the film evolves, but in a se­quel with es­tab­lished char­ac­ters that have a known tar­get look that we must meet, we have to be a lot more care­ful.

I’ve been lucky enough to have got­ten to work on a wide va­ri­ety of types and scales of pro­jects over the past decade at Disney Animation, and for Zootopia 2 I got to work on two of my ab­solute fa­vorite types of pro­jects. The first type of fa­vorite pro­ject is the ones where we get to work on a cus­tom so­lu­tion for a very spe­cific vi­sual need in the film; these are the pro­jects where I can point out a spe­cific thing in fi­nal frames that is there be­cause I wrote the code for it. My sec­ond type of fa­vorite pro­ject is ones where we get to take some­thing su­per bleed­ing edge from pure re­search and take it all the way through to prac­ti­cal, wide pro­duc­tion us­age. Getting to do both of these types of pro­jects on the same film was a real treat! On Zootopia 2, work­ing on the wa­ter tubes se­quence was the first pro­ject type, and work­ing closely with Disney Research Studios to widely de­ploy our next-gen­er­a­tion path guid­ing sys­tem was the sec­ond pro­ject type. Hopefully we’ll have a lot more to pre­sent on both of these at SIGGRAPH/DigiPro 2026, but in the mean­time here’s a quick sum­mary.

One of the big pro­jects I worked on for Moana 2 was a to­tal, from-scratch re­think of our en­tire ap­proach to ren­der­ing wa­ter. For the most part the same sys­tem we used on Moana 2 proved to be equally suc­cess­ful on Zootopia 2, but for the se­quence where Nick, Judy, and Gary De’Snake zoom across the city in a wa­ter tube trans­port sys­tem, we had to ex­tend the wa­ter ren­der­ing sys­tem from Moana 2 a lit­tle bit fur­ther. During this se­quence, our char­ac­ters are in­side of glass tubes filled with wa­ter mov­ing at some­thing like a hun­dred miles per hour, with the sur­round­ing en­vi­ron­ment vis­i­ble through the tubes and whizzing by. In or­der to achieve the de­sired art di­rec­tion, the tubes had to be mod­eled with ac­tual wa­ter geom­e­try in­side since things like bub­bles and sloshes and murk and such had to be vis­i­ble, so go­ing from in­side to out­side the geom­e­try we had to ren­der was char­ac­ters in­side of wa­ter in­side of dou­ble-sided glass tubes set in huge com­plex for­est and city en­vi­ron­ments. To both give artists the abil­ity to ef­fi­ciently model this setup and ef­fi­ciently ren­der these shots, we wound up build­ing out a cus­tomized ver­sion of the stan­dard nested di­electrics so­lu­tion [Schmidt and Budge 2002]. Normally nested di­electrics is pretty straight­for­ward to im­ple­ment in a sim­ple aca­d­e­mic ren­derer (I’ve writ­ten about im­ple­ment­ing nested di­electrics in my hobby ren­derer be­fore), but im­ple­ment­ing nested di­electrics to work cor­rectly with the myr­iad of other ad­vanced fea­tures in a pro­duc­tion ren­derer while also re­main­ing per­for­mant and ro­bust within the con­text of a wave­front path trac­ing ar­chi­tec­ture proved to re­quire a bit more work com­pared with in a toy ren­derer.
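For readers unfamiliar with the technique, here is a toy sketch of the core bookkeeping in the standard nested-dielectrics approach of Schmidt and Budge [2002] (the textbook starting point the paragraph above describes customizing, not Hyperion's implementation): each dielectric carries a priority, the integrator tracks which media currently enclose the ray, and boundary hits belonging to anything other than the highest-priority medium are skipped as false interfaces.

from dataclasses import dataclass

@dataclass(frozen=True)
class Medium:
    name: str
    priority: int   # higher wins where modeled solids overlap
    ior: float

class MediumStack:
    def __init__(self):
        self.inside = []                   # media currently enclosing the ray

    def current(self):
        return max(self.inside, key=lambda m: m.priority, default=None)

    def enter(self, m):
        # A real interface only if the new medium outranks what we're in.
        real = self.current() is None or m.priority > self.current().priority
        self.inside.append(m)
        return real

    def leave(self, m):
        # A real interface only if we're leaving the topmost medium.
        real = self.current() is m
        self.inside.remove(m)
        return real

# Water-filled tube: model the water volume slightly overlapping the glass
# shell and give water the higher priority, so the redundant inner glass wall
# is never shaded.
glass, water = Medium("glass", 1, 1.5), Medium("water", 2, 1.33)
s = MediumStack()
print(s.enter(glass))   # True: air-to-glass interface, shade it
print(s.enter(water))   # True: water outranks glass, shade glass-to-water
print(s.leave(glass))   # False: inner glass wall seen from inside the water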

During Moana 2’s pro­duc­tion, we started work with Disney Research|Studios on a next-gen­er­a­tion path guid­ing sys­tem in Hyperion that sup­ports both vol­umes and sur­faces (unlike our pre­vi­ous path guid­ing sys­tem, which only sup­ported sur­faces); this new sys­tem is built on top of the ex­cel­lent and state-of-the-art Open Path Guiding (OpenPGL) li­brary [Herholz and Dittebrandt 2022]. Zootopia 2 is the first film where we’ve been able to de­ploy our next-gen­er­a­tion path guid­ing on a wide scale, ren­der­ing about 12% of the en­tire movie us­ing this sys­tem. We pre­sented a lot of the tech­ni­cal de­tails of this new sys­tem in our course on path guid­ing [Reichardt et al. 2025] at SIGGRAPH 2025, but a lot more work be­yond what we pre­sented in that course had to go into mak­ing path guid­ing a re­ally pro­duc­tion scal­able ren­derer fea­ture. This ef­fort re­quired deep col­lab­o­ra­tion be­tween a hand­ful of de­vel­op­ers on the Hyperion team and a bunch of folks at Disney Research|Studios, to the point where over the past few years Disney Research|Studios has been us­ing Hyperion es­sen­tially as one of their pri­mary in-house re­search ren­derer plat­forms and Disney Research staff have been work­ing di­rectly with us on the same code­base. Having come from a more aca­d­e­mic ren­der­ing back­ground, I think this is one of the coolest things that be­ing part of the larger Walt Disney Company en­ables our team and stu­dio to do. Our next-gen­er­a­tion path guid­ing sys­tem proved to be a re­ally valu­able tool on Zootopia 2; in sev­eral parts of the movie, en­tire se­quences that we had an­tic­i­pated to be ex­tra­or­di­nar­ily dif­fi­cult to ren­der saw enor­mous ef­fi­ciency and work­flow im­prove­ments thanks to path guid­ing and wound up go­ing through with rel­a­tive ease!
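To give a flavor of why guiding helps, here is a self-contained toy of the core idea: mix a distribution fitted to where the integrand is large with the default sampler, and divide by the mixture pdf (one-sample multiple importance sampling) so the estimate stays unbiased. This sketches the general technique only; Hyperion's system fits directional distributions per region of the scene via OpenPGL, whose actual API is not shown here.

import random

def f(x):                        # stand-in for radiance along "direction" x
    return 10.0 if 0.7 <= x <= 0.9 else 0.1

def guide_sample():              # the fitted distribution: aims at the peak
    return random.uniform(0.7, 0.9)

def guide_pdf(x):
    return 5.0 if 0.7 <= x <= 0.9 else 0.0

def estimate(n=100000, alpha=0.5):
    total = 0.0
    for _ in range(n):
        if random.random() < alpha:
            x = guide_sample()
        else:
            x = random.random()                         # default sampler, pdf = 1
        pdf = alpha * guide_pdf(x) + (1 - alpha) * 1.0  # mixture pdf for MIS
        total += f(x) / pdf
    return total / n

print(estimate())  # converges to 10*0.2 + 0.1*0.8 = 2.08, with low variance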

One particularly fun thing about working on Zootopia 2 was that my wife, Harmony Li, was one of the movie's Associate Technical Supervisors; this title means she was one of the leads for Zootopia 2's TD department. Harmony being a supervisor on the show meant I got to work closely with her on a few things! She oversaw character look, simulation, technical animation, crowds, and something that Disney Animation calls “Tactics”, which is essentially optimization across the entire show, ranging from pipeline and workflows all the way to render efficiency. As part of Zootopia 2's Tactics strategy, the rendering team was folded more closely into the asset building process than in previous shows. Having huge crowds of thousands of characters on screen meant that every single individual character needed to be as optimized as possible, and to that end the rendering team helped provide guidance and best practices early in the character modeling and look development process to try to keep everything optimized while not compromising on final look. However, render optimization was only a small part of making the huge crowds in Zootopia 2 possible; various production technology teams and the TD department put enormous groundbreaking work into developing new ways to efficiently author and represent crowd rigs in USD and to interactively visualize huge crowds covered in fur inside of our 3D software packages. All of this also had to be done while, for the first time on a feature film project, Disney Animation switched from Maya to Presto for animation, and all on a movie which by necessity contains by far the greatest variety of different rig types and characters in any of our films (possibly in any animated film, period). Again, more on all of this at SIGGRAPH 2026, hopefully.

I think all of the things I've written about in this post are just a few great examples of why I think having a dedicated in-house technology development team is so valuable to the way we make films: Disney Animation's charter is to always be making animated films that push the limits of the art form, and making sure our films are the best looking films we can possibly make is a huge part of that goal. As an example, while Hyperion has a lot of cool features and unique technologies that are custom tailored to support Disney Animation's needs and workflows, in my opinion the real value Hyperion brings at the end of the day is that our rendering team partners extremely closely with our artists and TDs to build exactly the tools that are needed for each of our movies, with maximum flexibility and customization since we know and develop the renderer from top to bottom. This is true of every technology team at Disney Animation, and it's a big part of why I love working on our movies. I've written only about the projects I worked directly on in this post, which is a tiny subset of the whole of what went into making this movie. Making Zootopia 2 took dozens and dozens of these types of projects to achieve, and I'm so glad to have gotten to be a part of it!

On an­other small per­sonal note, my wife and I had our first kid dur­ing the pro­duc­tion of Zootopia 2, and our baby’s name is in the cred­its in the pro­duc­tion ba­bies sec­tion. What a cool tra­di­tion, and what a cool thing that our baby will for­ever be a part of!

Below are some beau­ti­ful frames from Zootopia 2. Every last de­tail in this movie was hand-crafted by hun­dreds of artists and TDs and en­gi­neers out of a ded­i­ca­tion to and love for an­i­ma­tion as an art form, and I promise this movie is worth see­ing on the biggest the­ater screen you can find!

All im­ages in this post are cour­tesy of and the prop­erty of Walt Disney Animation Studios.

Nicholas Burkard, Hans Keim, Brian Leach, Sean Palmer, Ernest J. Petti, and Michelle Robinson. 2016. From Armadillo to Zebra: Creating the Diverse Characters and World of Zootopia. In ACM SIGGRAPH 2016 Production Sessions. Article 24.

Matt Jen-Yuan Chiang, Benedikt Bitterli, Chuck Tappan, and Brent Burley. 2016. A Practical and Controllable Hair and Fur Model for Production Path Tracing. Computer Graphics Forum (Proc. of Eurographics) 35, 2 (May 2016), 275-283.

Thomas Müller, Marios Papas, Markus Gross, Wojciech Jarosz, and Jan Novák. 2016. Efficient Rendering of Heterogeneous Polydisperse Granular Media. ACM Transactions on Graphics (Proc. of SIGGRAPH Asia) 35, 6 (Nov. 2016), Article 168.

Lea Reichardt, Brian Green, Yining Karl Li, and Marco Manzi. 2025. Path Guiding Surfaces and Volumes in Disney’s Hyperion Renderer- A Case Study. In ACM SIGGRAPH 2025 Course Notes: Path Guiding in Production and Recent Advancements. 30-66.

Charles M. Schmidt and Brian Budge. 2002. Simple Nested Dielectrics in Ray Traced Images. Journal of Graphics Tools 7, 2 (Jan. 2002), 1–8.

...

Read the original on blog.yiningkarlli.com »
