10 interesting stories served every morning and every evening.




1 403 shares, 23 trendiness

Last Week on My Mac

Cast your mind back to when you learned to drive, ride a bike, speak a foreign language, perform a tracheostomy, or acquire any other skill. Wasn't confidence the key to your success? Whatever we do in life, confidence is always critical. If you run a business, one of the metrics likely to be collected is confidence in your business, as that's such an important economic indicator. Confidence is every bit as important in computing.

Over the last few weeks I've been discovering problems that have been eroding confidence in macOS. From text files that simply won't show up in Spotlight search, to Clock timers that are blank and don't function, there's one common feature: macOS encounters an error or fault, but doesn't report that to the user, instead just burying it deep in the log.

When you can spare the time, the next step is to contact Apple Support, who seem equally puzzled. You're eventually advised to reinstall macOS or, in the worst case, to wipe a fairly new Apple silicon Mac and restore it in DFU mode, but have no reason to believe that will stop the problem from recurring. You know that Apple Support doesn't understand what's going wrong, and despite the involvement of support engineers, they seem as perplexed as you.

One reason for this is that macOS so seldom reports errors, and when it does, its reports are uninformative if not downright misleading. Here's a small gallery of examples I've encountered over the last few years, to bring back unhappy memories.

Maybe you saved an important webpage in Safari 26.1 using its Web Archive format, then a couple of days later discovered you couldn't open it. There's no error message, just a blank window, so you try again with the same result. Another site shows the same problem, forcing you to conclude that it's a bug in Safari. Are you now going to devote your time to obtaining sufficient information to report that to Apple using Feedback? Or to contact Apple Support and pursue its escalation to an engineer who might fortuitously discover the cause?

Silent failures like these are the least likely to be reported to Apple. In most cases, we find ourselves a workaround, here abandoning Web Archives and switching to saving webpages as PDF instead. When someone else mentions they have the same problem, we advise them that Web Archives are broken, and our loss of confidence spreads by contagion.

Honest and understandable error reporting is essential to confidence. It enables us to tackle problems rather than just giving up in frustration, assuming that it's yet another feature we used to rely on that has succumbed in the rush to get the next version of macOS out of the door.

Eroding confidence is also a problem that the vendors of AI appear to have overlooked, or at least seriously underestimated. It's all very well using the euphemism of hallucination to play down the severity of errors generated by LLMs. But those can only cause users to lose confidence, no matter how 'intelligent' you might think your AI is becoming. Go ask the lawyers who have been caught out by courts after submitting AI fabrications whether they still have full confidence in your product.

...

Read the original on eclecticlight.co »

2 376 shares, 30 trendiness

What Will Enter the Public Domain in 2026?

At the start of each year, on January 1st, a new crop of works enters the public domain and becomes free to enjoy, share, and reuse for any purpose. Due to differing copyright laws around the world, there is no one single public domain — and here we focus on three of the most prominent. Newly entering the public domain in 2026 will be:

* works by people who died in 1955, for countries with a copyright term of "life plus 70 years" (e.g. the UK, Russia, and most of the EU and South America);

* works by people who died in 1975, for countries with a term of "life plus 50 years" (e.g. New Zealand, and most of Africa and Asia);

* films and books (incl. artworks featured) published in 1930, for the United States.

In our advent-style calendar below, find our top picks of what lies in store for 2026. Each day, as we move through December, we'll open a new window to reveal our highlights! By Public Domain Day on January 1st they will all be unveiled — look out for a special blogpost from us on that day. (And, of course, if you want to dive straight in and explore the vast swathe of new entrants for yourself, just visit the links above.)

...

Read the original on publicdomainreview.org »

3 371 shares, 17 trendiness

coder/ghostty-web: Ghostty for the web with xterm.js API compatibility

Ghostty for the web with xterm.js API compatibility — giving you a proper VT100 implementation in the browser.

* Migrate from xterm by changing your import: @xterm/xterm → ghostty-web

* WASM-compiled parser from Ghostty—the same code that runs the native app

Originally created for Mux (a desktop app for isolated, parallel agentic development), but designed to be used anywhere.

Live Demo on an ephemeral VM (thank you to Greg from disco.cloud for hosting).

npx @ghostty-web/demo@next

This starts a local HTTP server with a real shell on http://localhost:8080. Works best on Linux and macOS.

xterm.js is everywhere—VS Code, Hyper, countless web terminals. But it has fundamental issues:

xterm.js reimplements terminal emulation in JavaScript. Every escape sequence, every edge case, every Unicode quirk—all hand-coded. Ghostty's emulator is the same battle-tested code that runs the native Ghostty app.

npm install ghostty-web

ghostty-web aims to be compatible with the xterm.js API.

import { init, Terminal } from 'ghostty-web';

await init();

const term = new Terminal({
  fontSize: 14,
  theme: {
    background: '#1a1b26',
    foreground: '#a9b1d6',
  },
});

term.open(document.getElementById('terminal'));

// Wire the terminal to a transport; `websocket` is assumed to be an
// already-connected WebSocket to your PTY backend.
term.onData((data) => websocket.send(data));
websocket.onmessage = (e) => term.write(e.data);

For a comprehensive client-server example, refer to the demo.
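To complete the picture, here is a minimal sketch of what the backend side of such a setup could look like, assuming the ws and node-pty packages; the actual demo's server may be structured quite differently:

import { WebSocketServer } from 'ws';
import * as pty from 'node-pty';

// Accept browser connections and attach each one to its own shell.
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (ws) => {
  // Spawn a PTY running the user's shell (placeholder defaults).
  const shell = pty.spawn(process.env.SHELL ?? 'bash', [], {
    name: 'xterm-256color',
    cols: 80,
    rows: 24,
  });

  // PTY output goes to the browser terminal...
  shell.onData((data) => ws.send(data));
  // ...and keystrokes from the browser go back into the PTY.
  ws.on('message', (data) => shell.write(data.toString()));
  ws.on('close', () => shell.kill());
});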

ghostty-web builds from Ghostty's source with a patch to expose additional functionality.

bun run build

Mitchell Hashimoto (author of Ghostty) has been working on libghostty, which makes this all possible. The patches are very minimal thanks to the work the Ghostty team has done, and we expect them to get smaller.

This library will eventually consume a native Ghostty WASM distribution once available, and will continue to provide an xterm.js-compatible API.

At Coder we're big fans of Ghostty, so kudos to that team for all the amazing work.

...

Read the original on github.com »

4 344 shares, 11 trendiness

Google *unkills* JPEG XL?

I've written about JPEG XL in the past. First, I noted Google's move to kill the format in Chromium in favor of the homegrown and inferior AVIF. Then, I had a deeper look at the format, and visually compared JPEG XL with AVIF on a handful of images.

The latter post started with a quick support test:

"If you are browsing this page around 2023, chances are that your browser supports AVIF but does not support JPEG XL."

Well, here we are at the end of 2025, and this very sentence still holds true. Unless you are one of the 17% of users using Safari, or are adventurous enough to use a niche browser like Thorium, LibreWolf or the newer Zen Browser, chances are you see the AVIF banner in green and the JPEG XL image in black/red.
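A support test like the one the post describes can be reproduced with a tiny probe image per format; here is a minimal sketch (the probe URL is a placeholder, not taken from the post):

function supportsImageFormat(probeUrl: string): Promise<boolean> {
  return new Promise((resolve) => {
    const img = new Image();
    // onload with nonzero dimensions means the browser decoded the format.
    img.onload = () => resolve(img.naturalWidth > 0);
    img.onerror = () => resolve(false);
    img.src = probeUrl;
  });
}

// Usage: serve one-pixel probe files next to the page.
supportsImageFormat('/probes/pixel.jxl').then((ok) =>
  console.log(ok ? 'JPEG XL: supported' : 'JPEG XL: not supported'),
);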

The good news is, this will change soon. In a dramatic turn of events, the Chromium team has reversed its Obsolete tag, and has decided to support the format in Blink (the engine behind Chrome/Chromium/Edge). Given Chrome's position in the browser market share, I predict the format will become a de facto standard for images in the near future.

I've been following JPEG XL since its experimental support in Blink. What started as a promising feature was quickly axed by the team in a bizarre and ridiculous manner. First, they asked the community for feedback on the format. Then, the community responded very positively. And I don't only mean a couple of guys in their basement. Meta, Intel, Cloudinary, Adobe, ffmpeg, libvips, Krita, and many more. After that came the infamous comment:

Thank you everyone for your comments and feedback regarding JPEG XL. We will be removing the JPEG XL code and flag from Chromium for the following reasons:

* Experimental flags and code should not remain indefinitely

* There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL

* The new image format does not bring sufficient incremental benefits over existing formats to warrant enabling it by default

* By removing the flag and the code in M110, it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome

Yes, right, "not enough interest from the entire ecosystem". Sure.

Anyway, following this comment, a steady stream of messages pointed out how wrong that was, from all the organizations mentioned above and many more. People took notice in blog posts, videos, and social media interactions.

Strangely, the following few years were pretty calm for JPEG XL. However, a few notable events did take place. First, the Firefox team showed interest in a JPEG XL Rust decoder, after describing their stance on the matter as "neutral". They were concerned about the increased attack surface resulting from including the current 100K+ line C++ libjxl reference decoder, even though most of those lines are testing code. In any case, they kind of requested a "memory-safe" decoder. This seems to have kick-started the Rust implementation, jxl-rs, from Google Research.

To top it off, a couple of weeks ago, the PDF Association announced their intent to adopt JPEG XL as a preferred image format in their PDF specification. The CTO of the PDF Association, Peter Wyatt, expressed their desire to include JPEG XL as the preferred format for HDR content in PDF files.

All of this pressure, exerted steadily over time, made the Chromium team reconsider the format. They tried to kill it in favor of AVIF, but that hasn't worked out. Rick Byers, on behalf of Chromium, made a comment in the Blink developers Google group about the team welcoming a performant and memory-safe JPEG XL decoder in Chromium. He stated that the change of stance was in light of the positive signs from the community outlined above (Safari support, Firefox updating their position, PDF, etc.). Quickly after that, the Chromium issue state was changed from Obsolete to Assigned.

This is great news for the format, and I believe it will give it the final push for mass adoption. The format is excellent for all kinds of purposes, and I'll be adopting it pretty much instantly for this and the Gaia Sky website when support is shipped. Some of the features that make it superior to the competition are:

* Lossless re-compression of JPEG images. This means you can re-compress your current JPEG library without losing information and benefit from a ~30% reduction in file size for free. This is a killer feature that no other format has.

* Support for image sizes of up to 1,073,741,823x1,073,741,824. You won't run out of image space anytime soon. AVIF is ridiculous in this aspect, capping at 8,193x4,320. WebP goes up to 16K², while the original 1992 JPEG supports 64K².

* Maximum of 32 bits per channel. No other format (except for the defunct JPEG 2000) offers this.

* Maximum of 4,099 channels. Most other formats support 4 or 5, with the exception of JPEG 2000, which supports 16,384.

* JXL supports progressive decoding, which is essential for web delivery, IMO. WebP and HEIC have no such feature. Progressive decoding in AVIF was added a few years back.

For a full codec feature breakdown, see Battle of the Codecs.

JPEG XL is the future of image formats. It checks all the right boxes, and it checks them well. Support in the overwhelmingly most popular browser engine is probably going to be a crucial stepping stone on the format's path to stardom. I'm happy that the Chromium team reconsidered its inclusion, but I am sad that it took so long and so much pressure from the community to achieve it.

...

Read the original on tonisagrista.com »

5 330 shares, 16 trendiness

How to Attend Meetings


...

Read the original on docs.google.com »

6 318 shares, 15 trendiness

On 10 Years of Writing a Blog Nobody Reads

In November 2015, I started a blog on Blogger. My first post was a book review of The Martian by Andy Weir. 10 years and a couple of blog migrations later, I'm still writing. I wanted to share some thoughts and learnings I picked up throughout this time. Some of it is specific to writing a blog, but some is generally applicable to writing in any format.

One of the main reasons I maintain this blog is to become a better writer. I really appreciate when someone's writing feels effortless. Whether it's in a book, an article, or even a technical document—communicating effectively is a fine art. I'm not there yet, but I enjoy the process of improving.

My style has certainly improved since my early days of writing. Reading my old stuff is painful. I would use too many qualifiers and verbose phrases. It was a direct translation of the way I spoke, which, it turns out, is a bad strategy for how you should write. If your goal is to have other people read—and hopefully enjoy—your writing, you should make an effort to edit your thoughts.

Here's a sample of the useless phrases I would add to the start or end of almost every sentence:

This was my worst habit when I started. It's just fluff that makes it exhausting to read. It's redundant to say "I think" at any point in an opinion piece.

keep all that pondering to yourself buddy

Using this "careful" language just softens your ideas to the point of being inarguable. If you start a sentence with "I feel…" then no one can dispute anything that follows, since it's just your feeling. This is boring to read.

Writing a blog, or anything really, is your contribution to public discourse. Sure, this blog only averages 10 page views a week (9 are bots and 1 is me) but I'm still throwing my ideas out there into the digital ether. If you're publishing something on the internet, you might as well stand tall behind your words and wait for someone to call bullshit.

Using multiple adjectives is another bad habit I struggled with in the past. Phrases like:

These are unnecessarily descriptive and, more often than not, redundant. Just use one punctilious adjective instead. Open a thesaurus if you need to.

My goal now is to use fewer words to convey an idea. Everyone's interpretation of words is different, so using more precise language will just muddle your ideas. To use a metaphor from electronic communication—there's so much noise in the channel that modulating your signal doesn't provide any extra information.

The writing process should be highly iterative—many drafts are needed before you arrive at something you're happy with. Taking time between drafts can help too, so you come back to it with a different perspective on what you wrote. If we're talking about a blog, there's really no strict timeline for getting a piece of content out, so when you choose to publish is up to you. Even after publishing, there's nothing that stops you from updating the content afterwards.

You should write down ideas when you have them. Literally, I wrote the genesis of this paragraph while in bed at 5am in January. You never know when inspiration will strike, so I find it best to get the thought down quickly and then expand on it later.

It really helps to make the ability to write as accessible to you as possible. For example, I use Obsidian for all my drafts now. It has cross-device support with cloud syncing, so writing "from anywhere" (mostly my phone) is easy now.

I can now publish my smart toaster review directly from my smart toaster

There's a lot of talk about the value of "manual" writing in the age of generative AI. GenAI, specifically Large Language Models, can be thought of as calculators for writing; they can generate coherent written ideas instantly from any input. So just like how nobody does long division by hand anymore, maybe people won't do much writing by hand one day.

The introduction of GenAI has increased the surplus of written content to infinity, essentially. So from an economics standpoint, without any resource scarcity the value of written words has been reduced to zero. But is there still value in human-produced writing? Subjectively, yes. Objectively? I'm not sure. I think there's a lot of personal value in writing though.

Book reviews, for example, are essential for gaining a better understanding of what you read. They help crystallize the knowledge in some way and integrate it into your mental map of the world. The reviews I post vary in content—sometimes it's a critique, or a summary, or an extrapolation of a concept from the book I'll do additional research on. Either way, this process helps me remember something about the book long-term.

I think of it like breathing but for ideas. We do so much reading all day—there should be a natural balance with producing words too. Inhale, exhale, inhale, exhale…

And I'm still not a great writer by any means. There are a lot of ways to improve, which is kind of motivating and excites me to keep writing.

I often write "too much" and struggle to really condense my thoughts into a sharpened essay. Most of my posts are 2,000+ words… nowadays I'm trying to restrict myself to 1,000 words. The limit forces me to really think about the core idea I want to share.

...

Read the original on flowtwo.io »

7 309 shares, 34 trendiness

End-to-End Video Generative Modeling with Normalizing Flows

STARFlow-V is the first normalizing flow-based causal video generator demonstrating that normalizing flows can match video diffusion models in visual quality while offering end-to-end training, exact likelihood estimation, and native multi-task support across T2V/I2V/V2V generation.

Normalizing flows (NFs) are end-to-end likelihood-based generative models for continuous data, and have recently regained attention with encouraging progress on image generation. Yet in the video generation domain, where spatiotemporal complexity and computational cost are substantially higher, state-of-the-art systems almost exclusively rely on diffusion-based models. In this work, we revisit this design space by presenting STARFlow-V, a normalizing flow-based video generator with substantial benefits such as end-to-end learning, robust causal prediction, and native likelihood estimation. Building upon the recently proposed STARFlow, STARFlow-V operates in the spatiotemporal latent space with a global-local architecture which restricts causal dependencies to a global latent space while preserving rich local within-frame interactions. This eases error accumulation over time, a common pitfall of standard autoregressive diffusion model generation. Additionally, we propose flow-score matching, which equips the model with a lightweight causal denoiser to improve video generation consistency in an autoregressive fashion. To improve sampling efficiency, STARFlow-V employs a video-aware Jacobi iteration scheme that recasts inner updates as parallelizable iterations without breaking causality. Thanks to the invertible structure, the same model can natively support text-to-video, image-to-video as well as video-to-video generation tasks. Empirically, STARFlow-V achieves strong visual fidelity and temporal consistency with practical sampling throughput relative to diffusion-based baselines. These results present the first evidence, to our knowledge, that NFs are capable of high-quality autoregressive video generation, establishing them as a promising research direction for building world models.
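For background, the "exact likelihood" claim rests on the standard change-of-variables identity that all normalizing flows train against (textbook material, not specific to this paper): an invertible map f_θ from data x to latent z gives

\log p_\theta(x) = \log p_Z\big(f_\theta(x)\big) + \log \left| \det \frac{\partial f_\theta(x)}{\partial x} \right|

so maximum-likelihood training is end-to-end by construction, with no variational bound.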

Figure: STARFlow-V pipeline. The model processes text prompts and noise through a Deep Autoregressive Block (global temporal reasoning) to produce intermediate latents, which are then refined by Shallow Flow Blocks (local within-frame details). A Learnable Causal Denoiser (trained via Flow-Score Matching) cleans the output. The model is trained end-to-end with two objectives: Maximum Likelihood for the flow and Flow-Score Matching for the denoiser.

A novel two-level architecture that separates global temporal reasoning from local within-frame details. A deep causal Transformer block processes the video autoregressively in compressed latent space to capture long-range spatiotemporal dependencies, while shallow flow blocks operate independently on each frame to model rich local structures. This design mitigates compounding errors common in pixel-space autoregressive models.

A unified training framework that combines normalizing flow maximum likelihood with flow-score matching for denoising. Instead of using imperfect or non-causal denoisers, we train a lightweight causal neural denoiser alongside the main flow model. This denoiser learns to predict the score (gradient of log-probability) of the model's own distribution, enabling high-quality single-step refinement while preserving causality.
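As a point of reference, classical denoising score matching (Vincent, 2011) trains a score network on noised samples; the paper's flow-score matching presumably adapts something of this shape to the flow's own distribution under causal masking:

\mathcal{L}_{\mathrm{DSM}}(\theta) = \mathbb{E}_{x \sim p,\; \epsilon \sim \mathcal{N}(0, I)} \left[ \left\| s_\theta(x + \sigma \epsilon) + \frac{\epsilon}{\sigma} \right\|^2 \right]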

Generation (flow inversion) is recast as solving a nonlinear system, enabling block-wise parallel updates of multiple latents simultaneously instead of one-by-one generation. Combined with video-aware initialization that uses temporal information from adjacent frames and pipelined execution between deep and shallow blocks, this achieves significant speedup while maintaining generation quality.
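In generic terms (the paper's exact scheme is not spelled out here), a Jacobi iteration solves a fixed-point system x = g(x) by updating every coordinate in parallel from the previous iterate, which is what makes the inner updates parallelizable:

x_i^{(k+1)} = g_i\big(x_1^{(k)}, \ldots, x_n^{(k)}\big), \qquad i = 1, \ldots, n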

STARFlow-V is trained on 70M text-video pairs and 400M text-image pairs, with a final 7B-parameter model that can generate 480p video at 16fps. The model operates in a compressed latent space and leverages the invertible nature of normalizing flows to natively support multiple generation tasks without any architectural changes or retraining.

Navigate through the tabs above to see our model's capabilities across different generation tasks. Each category demonstrates specific aspects of STARFlow-V, from standard text-to-video generation to long-form video creation and comparisons with diffusion-based baselines.

If you find STARFlow-V useful in your research, please consider citing our work:

@article{gu2025starflowv,
  title={STARFlow-V: End-to-End Video Generative Modeling with Scalable Normalizing Flows},
  author={Gu, Jiatao and Shen, Ying and Chen, Tianrong and Dinh, Laurent and Wang, Yuyang and Bautista, Miguel {\'A}ngel and Berthelot, David and Susskind, Josh and Zhai, Shuangfei},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}

Generate videos from input images while maintaining temporal consistency. Due to the autoregressive nature of our model, we don't need to change the architecture at all—one model handles all tasks seamlessly.

Our model can extend and transform existing videos while maintaining temporal consistency. Due to the autoregressive nature of our model, we don't need to change the architecture at all—one model handles all tasks seamlessly.

Extended video generation (10s, 15s, 30s) using autoregressive segment-by-segment generation. The tail of each 5s segment is re-encoded as the prefix for the next segment, leveraging the invertibility of normalizing flows.

Side-by-side comparisons with baseline autoregressive diffusion models. All prompts are sampled from VBench (Huang, 2023). Each video shows three methods from left to right: NOVA (https://github.com/baaivision/NOVA), WAN-Causal (finetuned from WAN, provided by https://huggingface.co/gdhe17/Self-Forcing/blob/main/checkpoints/ode_init.pt), and STARFlow-V (Ours).

Examples where our model struggles or produces suboptimal results, particularly on complex motion and physical interactions. These limitations stem from: (1) insufficient training due to resource constraints, (2) low-quality training data, and (3) the absence of post-training refinement—we perform only pretraining without supervised fine-tuning (SFT) or reinforcement learning (RL).

...

Read the original on starflow-v.github.io »

8 268 shares, 12 trendiness

Read Instagram chief Adam Mosseri's memo ordering staff to the office five days a week in 2026

Instagram chief Adam Mosseri is ordering most US staff in his organization back to the office five days a week starting February 2, according to an internal memo obtained by Business Insider.

The memo, titled "Building a Winning Culture in 2026," says the change applies to employees in US offices with assigned desks and is part of a broader push to make Instagram more "nimble and creative" as competition intensifies.

"I believe that we are more creative and collaborative when we are together in-person," Mosseri wrote. "I felt this pre-COVID and I feel it any time I go to our New York office where the in-person culture is strong."

Earlier this year, Amazon told many corporate employees to return to the office five days a week. Other tech giants such as Alphabet, Apple, and Microsoft have taken a slightly softer approach, generally requiring staff to be in the office at least three days a week.

The memo, first reported by Alex Heath's Sources newsletter, also announced a slew of other changes. Recurring meetings will be canceled every six months and only re-added if "absolutely necessary." Employees are encouraged to decline meetings that interfere with focus time.

"I want most of your time focused on building great products, not preparing for meetings," Mosseri wrote.

The Instagram chief also called for more product prototypes and fewer slide decks.

"Prototypes allow us to establish a proof of concept and get a real sense for social dynamics, and we use them far too infrequently," Mosseri wrote.

"2026 is going to be tough, as was 2025, but I'm excited about our momentum and our plans for next year," Mosseri wrote. "These changes are going to meaningfully help us move Instagram forward in a way we can all be proud of — with creativity, boldness, and craft."

We've made good progress this year on Instagram standing for creativity and Threads standing for perspectives, but we still need to do more if we want to lead in both of these areas. A big part of this will come down to strategy, and I feel good about the plan we've put together for next half. Equally important is how well we work. I've been thinking a lot about how we can be more nimble and creative in order to stay competitive. It's clear we have to evolve, so we're going to make a series of changes next year:

1. Back to the office: I believe that we are more creative and collaborative when we are together in-person. I felt this pre-COVID and I feel it any time I go to our New York office where the in-person culture is strong.

Starting February 2, I'm asking everyone in my rollup based in a US office with assigned desks to come back full time (five days a week). The specifics:

* You'll still have the flexibility to work from home when you need to, since I recognize there will be times you won't be able to come into the office. I trust you all to use your best judgment in figuring out how to adapt to this schedule.

* In the NY office, we won't expect you to come back full time until we've alleviated the space constraints. We'll share more once we have a better sense of timeline.

* In MPK, we'll move from MPK21 to MPK22 on January 26 so everyone has an assigned desk. We're also offering the option to transfer from the MPK to SF office for those people whose commute would be the same or better with that change. We'll reach out directly to those people with more info.

* XFN partners will continue to follow their own org norms.

* There is no change for employees who are currently remote.

2. Fewer meetings: We all spend too much time in meetings that are not effective, and it's slowing us down. Every six months, we'll cancel all recurring meetings and only re-add the ones that are absolutely necessary. I also support everyone in making recurring 1:1s biweekly by default and declining meetings if they fall during your focus blocks.

3. More demos, less decks: Most product overviews should be prototypes instead of decks. Prototypes allow us to establish a proof of concept and get a real sense for social dynamics, and we use them far too infrequently. If a strategy doc is appropriate, it should be three pages, max, and follow this template. If a deck is necessary, it should be as tight as possible. For all reviews, make it very clear up front what the goal of the meeting is and what the key points are that you need to discuss. I want most of your time focused on building great products, not preparing for meetings.

4. Faster decision-making: We're going to have a more formalized unblocking process with DRIs, and I'll be at the priorities progress unblocking meeting every week. (On weeks where I'm not able to attend, I'll delegate decision-making to one of my directs.) This way open decisions don't sit for more than a few days, max.

At next week's All Hands, I'll talk more about these changes, and you'll hear from people around the team about our priorities for next year. 2026 is going to be tough, as was 2025, but I'm excited about our momentum and our plans for next year. These changes are going to meaningfully help us move Instagram forward in a way we can all be proud of — with creativity, boldness, and craft.

Have a tip? Contact Pranav Dixit via email at pranavdixit@protonmail.com or Signal at 1-408-905-9124. Use a personal email address, a nonwork WiFi network, and a nonwork device; here's our guide to sharing information securely.

...

Read the original on www.businessinsider.com »

9 267 shares, 11 trendiness

High-income job losses are cooling housing demand


Most metros are adding jobs more slowly than normal. Charlotte leads in job growth among major metros, while Austin and Denver fall far short of their historically strong pace.

High-income sectors are contracting, while Education and Healthcare are expanding faster than normal across most metros.

Employment composition matters as much as total growth for local housing market strength. Metros reliant on lower-wage job growth are likely to face softer for-sale demand.

The national labor market is softening, with implications for local housing markets. Most major metros are adding jobs more slowly than normal. We analyzed employment performance by metro and industry, comparing today's growth to long-term trends since 2010. Red represents job losses, yellow shows slower-than-normal growth, and green represents faster-than-normal growth.

The job market drives housing demand, but the type of jobs created or lost impacts the type of housing. High-income sectors—Information, Professional Services, and Financial Activities—are shrinking across most major metros. Workers in these industries drive for-sale housing demand more than rental demand. Nationally, high-income sector employment remained flat YOY in August, well below its long-term compound annual growth of +1.6%.

The Education and Healthcare sectors account for the bulk of new jobs added in most metros and are growing faster than normal in almost every market. Many of these jobs pay lower wages on average and often generate rental demand more than homebuying activity. Nationally, education and healthcare employment rose +3.3% YOY in August, well above its long-term compound annual growth of +2.1%.

Philadelphia (+1.8% YOY) and New York (+1.7% YOY) show stronger job growth than their historical trends (+1.1% and +1.6%, respectively). However, this improvement reflects recovery from weak post-Great Financial Crisis baselines rather than genuine outperformance.

Charlotte (+2.6% YOY) is a standout performer, maintaining robust job growth supported by Professional Services expansion (+4.5% YOY)—a rare bright spot for for-sale demand.

Austin (+0.8% YOY) and Denver (+0.0% YOY) are growing much more slowly than their historically strong employment trends (+3.8% and +2.3%, respectively). Tech and Professional Services jobs are declining in both markets, and even healthcare—which is expanding faster than normal in most metros—shows weak growth here. This reduction in high-paying jobs is weakening demand for both home purchases and rentals.

The Bay Area continues to lose jobs across high-income sectors (-0.4% YOY), driving modest overall employment declines. These job losses have slowed compared to a year ago but remain negative YOY. Despite generating substantial spending and wealth, the AI-driven tech boom hasn't added meaningful employment to the region.

What this means for your busi­ness

Whether you build, invest, or advise in housing markets, these employment shifts will impact your growth opportunities in 2026 and beyond:

* Rental operators: Prepare for sustained demand from renters employed in healthcare and education.


...

Read the original on jbrec.com »

10 265 shares, 13 trendiness

Codex, Opus, Gemini try to build Counter Strike

In the last week we've had three major model updates: Gemini 3 Pro, Codex Max 5.1, Claude Opus 4.5. We thought we'd give them a challenge:

Build a basic version of Counter Strike. The game had to have a 3D UI and it had to be multiplayer.

If you're curious, pop open an (ideally large) computer screen and you can try out each model's handiwork yourself:

We have a full video of us going through the build here, but for those who prefer text, you get this post.

We'll go over some of our high-level impressions of each model, then dive deeper into the performance of specific prompts.

We signed up for the highest-tier plan on each model provider and used the defaults set for their CLI. For Codex, that's 5.1 codex-max on the medium setting. For Claude it's Opus 4.5. And with Gemini it's 3 Pro.

We then gave each model about 7 consecutive prompts. Prompts were divided into two categories:

Frontend: At first, agents only had to worry about the game mechanics: design the scene, the enemies, the logic for shooting, and some sound effects.

Backend: Once that was done, agents would make the game multiplayer. They would need to build a selection of rooms. Users could join them and start shooting.

So, how’d each model do?

In a familiar tune with the other Anthropic models, Opus 4.5 won out on the frontend. It made nicer maps, nicer characters, nicer guns, and generally had the right scene from the get-go.

Once the design was done, Gemini 3 Pro started to win on the backend. It hit fewer errors adding multiplayer and persistence. In general, Gemini did the best with making logical rather than visual changes.

Codex Max felt like an "in-between" model on both frontend and backend. It got a lot of "2nd place" points in our book. It did reasonably well on the frontend and reasonably well on the backend, but felt less spiky than the other models.

Okay, now let's get deeper into each prompt.

Goal number 1 was to set up the physics for the game. Models needed to design a map with a first-person viewpoint, and the ability to shoot enemies.

I want you to create a browser-based version of counter strike, using three js.

For now, just make this local: don't worry about backends, Instant, or

anything like that.

For the first version, just make the main character a first-person view with

a cross hair. Put enemies at random places. Enemies have HP. You can

shoot them, and kill them. When an enemy is killed, they respawn.

Here's a side-by-side comparison of the visuals each model came up with:

Visually, Claude came up with the most interesting map. There were obstacles, a nice floor, and you could see everything well.

Gemini got something nice working too.

Codex had an error on its first run [1] (it called a function without importing it), but it fixed it real quick. Once bugs were fixed, its map was the least visually pleasing. Things were darker, there were no obstacles, and it was hard to make out the floor.

Now that we had a map and some polygons, we asked the models to style up the characters. This was our prompt:

I want you to make the enemies look more like people. Use a bunch of square polygons to represent a person, and maybe a little gun

Here's the result of their work:

Again, it feels like Claude did the best job here. The characters look quite human—almost at the level of design in Minecraft. Gemini did well too. Codex made its characters better, but everything was a single color, which really diminished it compared to the others.

We then asked each model to add a gun to our first-person view. When we shoot, we wanted a recoil animation.

I want you to make it so I also have a gun in my field of view. When I shoot, the gun moves a bit.

Here's the side-by-side of how the recoil felt for each model:

Here, both Claude and Codex got the gun working in one shot. Claude's gun looks like a real darn pistol though.

Gemini had an issue trying to stick the gun to the camera. This got us into quite a back and forth, until we realized that the gun was transparent.

We were almost done with the frontend: the final step was sound. Here's what we asked:

I want you to use chiptunes to animate the sound of shots. I also want to animate deaths.

All models added sounds pretty easily. The ending part of our prompt ("I also want to animate deaths.") was added on the spur of the moment in the video. Our intention was to add sound to deaths. But that's not what happened.

All 3 models misunderstood the sentence in the same way: they thought we wanted to animate how the characters died. Fair enough; re-reading the sentence again, we would understand it that way too.

Here are the results they came up with:

All the models got the sound done easily. They all got animations, but we thought Claude's animation felt the most fun.

Now that all models had a real frontend, we asked them to make it multiplayer.

We didn't want the models to worry about shots just yet: goal 1 was to share the movement positions. Here's what we asked it to do:

I want you to use Instant presence.

Don't save anything in the database, just use presence and topics. You can

look up the docs.

There should just be one single room.

You no longer need to have the enemies that are randomly placed. All the players are what get placed.

For now, don't worry about shots. Let's just make it so the positions of the players are what get set in presence.

Gemini got this right in one shot. Both Codex and Claude needed some more prodding.

It was interesting to see how each model tried to solve problems:

Codex used lots of introspection. It would constantly look at the TypeScript library and look at the functions that were available. It didn't seem to look at the docs as much.

Claude looked at the docs a bunch. It read and re-read our docs on presence, but rarely introspected the library like Codex did.

Gemini seemed to do both. It looked at the docs, but then, I think because it constantly ran the build step, it found any TypeScript errors it had and fixed them up.

Gemini made the fastest progress here, though all of them got through, as long as we pasted the errors back.
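For reference, the shape of the presence code the models were being asked to write looks roughly like this; a sketch from memory of Instant's React API, so treat the exact hook names and signatures as assumptions (the app ID, room name, and position shape are placeholders):

import { useEffect } from 'react';
import { init } from '@instantdb/react';

const db = init({ appId: 'YOUR_APP_ID' }); // placeholder app ID

function useSharedPositions(myPosition: { x: number; y: number; z: number }) {
  // One shared room; nothing is persisted to the database.
  const room = db.room('game', 'main');
  const { peers, publishPresence } = db.rooms.usePresence(room);

  // Broadcast our own position whenever it changes.
  useEffect(() => {
    publishPresence({ position: myPosition });
  }, [myPosition]);

  // peers maps peer IDs to their last-published presence data.
  return peers;
}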

Then we moved on to getting shots to work. Here was the prompt:

Now let's make shots work. When I shoot, send the shot as a topic, and

make it affect the target's HP. When the target's HP goes to zero, they should die and respawn.

Claude got this right in one shot. Gemini and Codex had a few issues to fix, but just pasting the errors got them through.

Now that all models had a single room working, it was time to get them supporting multiple rooms.

The reason we added this challenge was to see (a) how they would deal with a new API (persistence), and (b) how they would deal with the refactor necessary for multiple rooms.

So, now I want you to make it so the front page is actually a list of

maps. Since our UI is using lots of polygons, make the style kind of

polygonish

Make the UI look like the old counter strike map selection screen.

I want you to save these maps in the database. Each map has a name.

Use a script to generate 5 random maps with cool names.

Then, push up some permissions so that anyone can view maps, but they cannot

create or edit them.

When you join a map, you can just use the map id as the room id for

presence.

All models did great with the UI. Here's how each looked:

We kind of like Gemini's UI the most, but they were all pretty cool.

And the persistence worked well too. They all dutifully created a schema for maps, pushed a migration, and seeded 5 maps.

But things got complicated in the refactor.

Gemini got things done in one shot. It also chose to keep the map id in the URL, which made it much handier to use. Codex took one back-and-forth with a query error.

But Claude really got stuck. The culprit was hooks. Because useEffect can run multiple times, it ended up having a few very subtle bugs. For example, it made 2 canvas objects instead of 1. It also had multiple animation refs running at once.

It was hard to get it to fix things by itself. We had to put our engineer hats on and actually look at the code to unblock Claude here.
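For illustration, here is a minimal sketch (not Claude's actual code) of the effect-cleanup pattern that avoids the double-canvas and duplicate-animation-loop bugs when an effect runs twice, as under React StrictMode:

import { useEffect, useRef } from 'react';

function GameCanvas() {
  const containerRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const container = containerRef.current;
    if (!container) return;

    // Create exactly one canvas and one animation loop per effect run...
    const canvas = document.createElement('canvas');
    container.appendChild(canvas);

    let frame = requestAnimationFrame(function loop() {
      // ... render the scene here ...
      frame = requestAnimationFrame(loop);
    });

    // ...and tear both down on cleanup, so a re-run (e.g. StrictMode's
    // double-invoke) never leaves a second canvas or loop behind.
    return () => {
      cancelAnimationFrame(frame);
      canvas.remove();
    };
  }, []);

  return <div ref={containerRef} />;
}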

This did give us a few ideas though:

Claude's issues were human-like. How many of us get tripped up by useEffect running twice, or by getting dependency arrays wrong? I think improving the React DX on these two issues could really push humans and agents further.

And what would have happened if a non-programmer was building this? They would have gotten really stuck. We think there needs to be more tools to go from strictly "vibe coding" to real "programming". Right now the jump feels too steep.

At the end, all models built a real multiplayer FPS, with zero code written by hand! That's pretty darn cool.

Well, models have definitely improved. They can take much higher-level feedback, and much higher-level documentation. What really strikes us, though, is how much they can iterate on their own work thanks to the CLI.

There's still lots to go though. The promise that you never have to look at the code doesn't quite feel real yet.

...

Read the original on www.instantdb.com »
