10 interesting stories served every morning and every evening.




1 454 shares, 39 trendiness

Blocking the Internet Archive Won’t Stop AI, But It Will Erase the Web’s Historical Record

Imagine a news­pa­per pub­lisher an­nounc­ing it will no longer al­low li­braries to keep copies of its pa­per.

That’s ef­fec­tively what’s be­gun hap­pen­ing on­line in the last few months. The Internet Archive—the world’s largest dig­i­tal li­brary—has pre­served news­pa­pers since it went on­line in the mid-1990s. The Archive’s mis­sion is to pre­serve the web and make it ac­ces­si­ble to the pub­lic. To that end, the or­ga­ni­za­tion op­er­ates the Wayback Machine, which now con­tains more than one tril­lion archived web pages and is used daily by jour­nal­ists, re­searchers, and courts.

But in re­cent months The New York Times be­gan block­ing the Archive from crawl­ing its web­site, us­ing tech­ni­cal mea­sures that go be­yond the we­b’s tra­di­tional ro­bots.txt rules. That risks cut­ting off a record that his­to­ri­ans and jour­nal­ists have re­lied on for decades. Other news­pa­pers, in­clud­ing The Guardian, seem to be fol­low­ing suit.

For nearly three decades, his­to­ri­ans, jour­nal­ists, and the pub­lic have re­lied on the Internet Archive to pre­serve news sites as they ap­peared on­line. Those archived pages are of­ten the only re­li­able record of how sto­ries were orig­i­nally pub­lished. In many cases, ar­ti­cles get edited, changed, or re­moved—some­times openly, some­times not. The Internet Archive of­ten be­comes the only source for see­ing those changes. When ma­jor pub­lish­ers block the Archive’s crawlers, that his­tor­i­cal record starts to dis­ap­pear.

The Times says the move is dri­ven by con­cerns about AI com­pa­nies scrap­ing news con­tent. Publishers seek con­trol over how their work is used, and sev­eral—in­clud­ing the Times—are now su­ing AI com­pa­nies over whether train­ing mod­els on copy­righted ma­te­r­ial vi­o­lates the law. There’s a strong case that such train­ing is fair use.

Whatever the out­come of those law­suits, block­ing non­profit archivists is the wrong re­sponse. Organizations like the Internet Archive are not build­ing com­mer­cial AI sys­tems. They are pre­serv­ing a record of our his­tory. Turning off that preser­va­tion in an ef­fort to con­trol AI ac­cess could es­sen­tially torch decades of his­tor­i­cal doc­u­men­ta­tion over a fight that li­braries like the Archive did­n’t start, and did­n’t ask for.

If pub­lish­ers shut the Archive out, they aren’t just lim­it­ing bots. They’re eras­ing the his­tor­i­cal record.

Making ma­te­r­ial search­able is a well-es­tab­lished fair use. Courts have long rec­og­nized it’s of­ten im­pos­si­ble to build a search­able in­dex with­out mak­ing copies of the un­der­ly­ing ma­te­r­ial. That’s why when Google copied en­tire books in or­der to make a search­able data­base, courts rightly rec­og­nized it as a clear fair use. The copy­ing served a trans­for­ma­tive pur­pose: en­abling dis­cov­ery, re­search, and new in­sights about cre­ative works.

The Internet Archive op­er­ates on the same prin­ci­ple. Just as phys­i­cal li­braries pre­serve news­pa­pers for fu­ture read­ers, the Archive pre­serves the we­b’s his­tor­i­cal record. Researchers and jour­nal­ists rely on it every day. According to Archive staff, Wikipedia alone links to more than 2.6 mil­lion news ar­ti­cles pre­served at the Archive, span­ning 249 lan­guages. And that’s only one ex­am­ple. Countless blog­gers, re­searchers, and re­porters de­pend on the Archive as a sta­ble, au­thor­i­ta­tive record of what was pub­lished on­line.

The same le­gal prin­ci­ples that pro­tect search en­gines must also pro­tect archives and li­braries. Even if courts place lim­its on AI train­ing, the law pro­tect­ing search and web archiv­ing is al­ready well es­tab­lished.

The Internet Archive has pre­served the we­b’s his­tor­i­cal record for nearly thirty years. If ma­jor pub­lish­ers be­gin block­ing that mis­sion, fu­ture re­searchers may find that huge por­tions of that his­tor­i­cal record have sim­ply van­ished. There are real dis­putes over AI train­ing that must be re­solved in courts. But sac­ri­fic­ing the pub­lic record to fight those bat­tles would be a pro­found, and pos­si­bly ir­re­versible, mis­take.

...

Read the original on www.eff.org »

2 391 shares, 69 trendiness

Some Things Just Take Time

Trees take quite a while to grow. If some­one 50 years ago planted a row of oaks or a chest­nut tree on your plot of land, you have some­thing that no amount of money or ef­fort can repli­cate. The only way is to wait. Tree-lined roads, old gar­dens, houses shel­tered by decades of canopy: if you want to start fresh on an empty plot, you will not be able to get that.

Because some things just take time.

We know this in­tu­itively. We pay pre­mi­ums for Swiss watches, Hermès bags and old prop­er­ties pre­cisely be­cause of the time em­bed­ded in them. Either be­cause of the time it took to build them or be­cause of their age. We re­quire age min­i­mums for dri­ving, vot­ing, and drink­ing be­cause we be­lieve ma­tu­rity only comes through lived ex­pe­ri­ence.

Yet right now we also live in a time of in­stant grat­i­fi­ca­tion, and it’s en­ter­ing how we build soft­ware and com­pa­nies. As much as we can speed up code gen­er­a­tion, the real defin­ing el­e­ment of a suc­cess­ful com­pany or an Open Source pro­ject will con­tinue to be tenac­ity. The abil­ity of lead­er­ship or the main­tain­ers to stick to a prob­lem for years, to build re­la­tion­ships, to work through chal­lenges fun­da­men­tally de­fined by hu­man life­times.

The cur­rent gen­er­a­tion of startup founders and pro­gram­mers is ob­sessed with speed. Fast it­er­a­tion, rapid de­ploy­ment, do­ing every­thing as quickly as pos­si­ble. For many things, that’s fine. You can go fast, leave some qual­ity on the table, and learn some­thing along the way.

But there are things where speed is actively harmful, where the friction exists for a reason. Compliance is one of those cases. There's a strong desire to eliminate everything that processes like SOC2 require, and an entire industry of turnkey solutions has sprung up to help — Delve just being one example, there are more.

There’s a feel­ing that all the things that cre­ate fric­tion in your life should be au­to­mated away. That hu­man in­volve­ment should be re­placed by AI-based de­ci­sion-mak­ing. Because it is the fric­tion of the process that is the prob­lem. When in fact many times the fric­tion, or that things just take time, is pre­cisely the point.

There’s a rea­son we have cool­ing-off pe­ri­ods for some im­por­tant de­ci­sions in one’s life. We rec­og­nize that peo­ple need time to think about what they’re do­ing, and that do­ing some­thing right once does­n’t mean much be­cause you need to be able to do it over a longer pe­riod of time.

AI writes code fast which is­n’t news any­more. What’s in­ter­est­ing is that we’re push­ing this force down­stream: we seem­ingly have this de­sire to ship faster than ever, to run more ex­per­i­ments and that cre­ates a new de­sire, one to re­move all the re­main­ing fric­tion of re­views, de­sign­ing and con­fig­ur­ing in­fra­struc­ture, any­thing that slows the pipeline. If the ma­chines are so great, why do we even need check­lists or per­mis­sion sys­tems? Express de­sire, en­joy re­sult.

Because we now be­lieve it is im­por­tant for us to just do every­thing faster. But in­creas­ingly, I also feel like this means that the shelf life of much of the soft­ware be­ing cre­ated to­day — soft­ware that peo­ple and busi­nesses should de­pend on — can be mea­sured only in months rather than decades, and the re­la­tion­ships along­side.

In one of last year’s ear­lier YC batches, there was al­ready a hand­ful that just dis­ap­peared with­out even say­ing what they learned or say­ing good­bye to their cus­tomers. They just shut down their pub­lic pres­ence and moved on to other things. And to me, that is not a sign of healthy it­er­a­tion. That is a sign of break­ing the ba­sic trust you need to build a re­la­tion­ship with cus­tomers. A proper shut­down takes time and ef­fort, and our cur­rent en­vi­ron­ment treats that as time not wisely spent. Better to just move on to the next thing.

This is ex­tend­ing to Open Source pro­jects as well. All of a sud­den, every­thing is an Open Source pro­ject, but many of them only have com­mits for a week or so, and then they go away be­cause the mo­ti­va­tion of the cre­ator al­ready waned. And in the name of ex­per­i­men­ta­tion, that is all good and well, but what makes a good Open Source pro­ject is that you think and truly be­lieve that the per­son that cre­ated it is ei­ther go­ing to stick with it for a very long pe­riod of time, or they are able to set up a strat­egy for suc­ces­sion, or they have cre­ated enough of a com­mu­nity that these pro­jects will stand the test of time in one form or an­other.

Relatedly, I'm also increasingly skeptical of anyone who sells me something that supposedly saves my time, when all I see is that everybody who is like me, fully onboarded into AI and agentic tools, seemingly has less and less time available, because we fall into a trap where we're immediately filling it with more things.

We all sell each other the idea that we’re go­ing to save time, but that is not what’s hap­pen­ing. Any time saved gets im­me­di­ately cap­tured by com­pe­ti­tion. Someone who ac­tu­ally takes a breath is out­ma­neu­vered by some­one who fills every freed-up hour with new out­put. There is no easy way to bank the time and it just dis­ap­pears.

I feel this acutely. I’m very close to the red-hot cen­ter of where eco­nomic ac­tiv­ity around AI is tak­ing place, and more than any­thing, I have less and less time, even when I try to pur­pose­fully scale back and cre­ate the space. For me this is a prob­lem. It’s a prob­lem be­cause even with the best in­ten­tions, I ac­tu­ally find it very hard to cre­ate qual­ity when we are quickly com­modi­tiz­ing soft­ware, and the ma­chines make it so ap­peal­ing.

I keep coming back to the trees. I've been maintaining Open Source projects for close to two decades now. The last startup I worked on, I spent 10 years at. That's not because I'm particularly disciplined or virtuous. It's because I, or someone else, planted something, and then I kept showing up, and eventually the thing had roots that went deeper than my enthusiasm on any given day. That's what time does! It turns some idea or plan into a commitment and a commitment into something that can shelter and grow other people.

Nobody is go­ing to mass-pro­duce a 50-year-old oak. And no­body is go­ing to con­jure trust, or qual­ity, or com­mu­nity out of a week­end sprint. The things I value most — the pro­jects, the re­la­tion­ships, the com­mu­ni­ties — are all things that took years to be­come what they are. No tool, no mat­ter how fast, was go­ing to get them there sooner.

We re­cently planted a new tree with Colin. I want it to grow into a large one. I know that’s go­ing to take time, and I’m not in a rush.

...

Read the original on lucumr.pocoo.org »

3 307 shares, 13 trendiness

ghostty-org/ghostling: A minimum viable terminal emulator built on top of the libghostty C API. Ex minimo, infinita nascuntur. 👻🐣

Ghostling is a demo project meant to highlight a minimum functional terminal built on the libghostty C API in a single C file.

The ex­am­ple uses Raylib for win­dow­ing and ren­der­ing. It is sin­gle-threaded (although libghostty-vt sup­ports thread­ing) and uses a 2D graph­ics ren­derer in­stead of a di­rect GPU ren­derer like the pri­mary Ghostty GUI. This is to show­case the flex­i­bil­ity of libghostty and how it can be used in a va­ri­ety of con­texts.

Libghostty is an em­bed­d­a­ble li­brary ex­tracted from Ghostty’s core, ex­pos­ing a C and Zig API so any ap­pli­ca­tion can em­bed cor­rect, fast ter­mi­nal em­u­la­tion.

Ghostling uses libghostty-vt, a zero-de­pen­dency li­brary (not even libc) that han­dles VT se­quence pars­ing, ter­mi­nal state man­age­ment (cursor po­si­tion, styles, text re­flow, scroll­back, etc.), and ren­derer state man­age­ment. It con­tains no ren­derer draw­ing or win­dow­ing code; the con­sumer (Ghostling, in this case) pro­vides its own. The core logic is ex­tracted di­rectly from Ghostty and in­her­its all of its real-world ben­e­fits: ex­cel­lent, ac­cu­rate, and com­plete ter­mi­nal em­u­la­tion sup­port, SIMD-optimized pars­ing, lead­ing Unicode sup­port, highly op­ti­mized mem­ory us­age, and a ro­bust fuzzed and tested code­base, all proven by mil­lions of daily ac­tive users of Ghostty GUI.

Despite be­ing a min­i­mal, thin layer above libghostty, look at all the fea­tures you do get:

* Unicode and multi-code­point grapheme han­dling (no shap­ing or lay­out)

* And more. Effectively all the terminal emulation features supported by Ghostty!

These fea­tures aren’t prop­erly ex­posed by libghostty-vt yet but will be:

These are things that could work but haven’t been tested or aren’t im­ple­mented in Ghostling it­self:

This list is in­com­plete and we’ll add things as we find them.

libghostty is fo­cused on core ter­mi­nal em­u­la­tion fea­tures. As such, you don’t get fea­tures that are pro­vided by the GUI above the ter­mi­nal em­u­la­tion layer, such as:

* Search UI (although search in­ter­nals are pro­vided by libghostty-vt)

These are the things that libghostty con­sumers are ex­pected to im­ple­ment on their own, if they want them. This ex­am­ple does­n’t im­ple­ment these to try to stay as min­i­mal as pos­si­ble.

There are some known is­sues with this demo:

* Kitty keyboard protocol support is broken with some inputs. This is due to limitations of the underlying Raylib input system; it doesn't support rich enough input events to fully and correctly implement the Kitty keyboard protocol. This is a known issue. The libghostty-vt API supports the Kitty keyboard protocol correctly, but requires correct input events to do so.

cmake -B build -G Ninja

cmake --build build

./build/ghostling

cmake -B build -G Ninja -DCMAKE_BUILD_TYPE=Release

cmake --build build

After the ini­tial con­fig­ure, you only need to run the build step:

cmake --build build

libghostty-vt has a fully ca­pa­ble and proven Zig API. Ghostty GUI it­self uses this and is a good — al­though com­plex — ex­am­ple of how to use it. However, this demo is meant to show­case the min­i­mal C API since C is so much more broadly used and ac­ces­si­ble to a wide va­ri­ety of de­vel­op­ers and lan­guage ecosys­tems.

libghostty-vt has a C API and can have zero de­pen­den­cies, so it can be used with min­i­mally thin bind­ings in ba­si­cally any lan­guage. I’m not sure yet if the Ghostty pro­ject will main­tain of­fi­cial bind­ings for lan­guages other than C and Zig, but I hope the com­mu­nity will cre­ate and main­tain bind­ings for many lan­guages!

No no no! libghostty has no opin­ion about the ren­derer or GUI frame­work used; it’s even stand­alone WASM-compatible for browsers and other en­vi­ron­ments.

libghostty provides a high-performance render state API which only keeps track of the state required to build a renderer. This is the same API used by Ghostty GUI for Metal and OpenGL rendering and in this repository for the Raylib 2D graphics API. You can layer any renderer on top of this!

I needed to pick some­thing. Really, any build sys­tem and any li­brary could be used. CMake is widely used and sup­ported, and Raylib is a sim­ple and el­e­gant li­brary for win­dow­ing and 2D ren­der­ing that is easy to set up. Don’t get bogged down in these de­tails!

...

Read the original on github.com »

4 277 shares, 11 trendiness

Rewriting our Rust WASM Parser in TypeScript

We rewrote our Rust WASM Parser in TypeScript - and it got 3x Faster

We built the openui-lang parser in Rust and com­piled it to WASM. The logic was sound: Rust is fast, WASM gives you near-na­tive speed in the browser, and our parser is a rea­son­ably com­plex multi-stage pipeline. Why would­n’t you want that in Rust?

Turns out we were op­ti­mis­ing the wrong thing.

The openui-lang parser con­verts a cus­tom DSL emit­ted by an LLM into a React com­po­nent tree. It runs on every stream­ing chunk — so la­tency mat­ters a lot. The pipeline has six stages:

* Mapper: con­verts in­ter­nal AST into the pub­lic OutputNode for­mat con­sumed by the React ren­derer

Every call to the WASM parser pays a manda­tory over­head re­gard­less of how fast the Rust code it­self runs:

The Rust pars­ing it­self was never the slow part. The over­head was en­tirely in the bound­ary: copy string in, se­ri­al­ize re­sult to JSON string, copy JSON string out, then V8 de­se­ri­al­izes it back into a JS ob­ject.
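On the JS side, the per-call round trip looks roughly like this sketch (parseToJson is a stand-in name for the wasm-bindgen export, not the real openui-lang binding):

// Hypothetical names; the real binding may differ.
declare const wasmParser: { parseToJson(source: string): string };

function parseViaWasm(source: string): unknown {
  // 1. `source` is copied into WASM linear memory (UTF-8 re-encode + memcpy)
  // 2. Rust parses, then serialises the result to a JSON string
  const json = wasmParser.parseToJson(source);
  // 3. the JSON string is copied back out to the JS heap
  // 4. V8 deserialises it into a real JS object
  return JSON.parse(json);
}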

The nat­ural ques­tion was: what if WASM re­turned a JS ob­ject di­rectly, skip­ping the JSON se­ri­al­iza­tion step? We in­te­grated serde-wasm-bind­gen which does ex­actly this — it con­verts the Rust struct into a JsValue and re­turns it di­rectly.

It turned out not to be faster. Here's why: JS cannot read a Rust struct's bytes from WASM linear memory as a native JS object — the two runtimes use completely different memory layouts. To construct a JS object from Rust data, serde-wasm-bindgen must recursively materialise Rust data into real JS arrays and objects, which involves many fine-grained conversions across the runtime boundary per parse() invocation.

Compare that to the JSON approach: serde_json::to_string() runs in pure Rust with zero boundary crossings, produces one string, one memcpy copies it to the JS heap, then V8's native C++ JSON.parse processes it in a single optimised pass. Fewer, larger, and more optimised operations win over many small ones.

We ported the full parser pipeline to TypeScript. Same six-stage ar­chi­tec­ture, same ParseResult out­put shape — no WASM, no bound­ary, runs en­tirely in the V8 heap.

What is mea­sured: A sin­gle parse(com­pleteString) call on the fin­ished out­put string. This iso­lates per-call parser cost.

How it was run: 30 warm-up it­er­a­tions to sta­bilise JIT, then 1000 timed it­er­a­tions us­ing per­for­mance.now() (µs pre­ci­sion). The me­dian is re­ported. Fixtures are real LLM-generated com­po­nent trees se­ri­alised in each for­mat’s real stream­ing syn­tax.

* sim­ple-table — root + one Table with 3 columns and 5 rows (~180 chars)
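A minimal sketch of the timing harness just described, with parse standing in for either implementation (the real benchmark script may differ):

// 30 warm-up iterations to stabilise the JIT, then 1000 timed runs; median reported.
declare function parse(source: string): unknown;

function benchmark(fixture: string, warmup = 30, runs = 1000): number {
  for (let i = 0; i < warmup; i++) parse(fixture);   // let V8 optimise the hot path
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();                  // µs-precision timer
    parse(fixture);
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)];     // median per-call cost
}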

Eliminating WASM fixed the per-call cost, but the stream­ing ar­chi­tec­ture still had a deeper in­ef­fi­ciency.

The parser is called on every LLM chunk. The naïve ap­proach ac­cu­mu­lates chunks and re-parses the en­tire string from scratch each time:
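A minimal sketch of that accumulate-and-reparse loop (illustrative names, not the real openui-lang API):

// Naive streaming integration: re-parse the full buffer on every chunk.
type OutputNode = { type: string; children: OutputNode[] };
declare function parse(source: string): OutputNode[];  // the one-shot parser

let buffer = "";
function onChunk(chunk: string): OutputNode[] {
  buffer += chunk;        // keep everything received so far
  return parse(buffer);   // parse from scratch each time -> O(N^2) total work
}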

For a 1000-char out­put de­liv­ered in 20-char chunks: 50 parse calls pro­cess­ing a cu­mu­la­tive to­tal of ~25,000 char­ac­ters. O(N²) in the num­ber of chunks.

Statements ter­mi­nated by a depth-0 new­line are im­mutable — the LLM will never come back and mod­ify them. We added a stream­ing parser that caches com­pleted state­ment ASTs:
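A sketch of the idea, assuming statements end at a depth-0 newline as described; the helper splitCompleteStatements is hypothetical and the real implementation surely differs:

// Incremental streaming parser: finished statements are parsed exactly once.
type StatementAst = unknown;
declare function parseStatement(statement: string): StatementAst;
// Assumed helper: cuts the source at depth-0 newlines, returning finished
// statements plus the still-growing tail.
declare function splitCompleteStatements(src: string): { complete: string[]; trailing: string };

class StreamingParser {
  private tail = "";                     // the in-progress trailing statement
  private cached: StatementAst[] = [];   // ASTs of statements that can never change

  push(chunk: string): StatementAst[] {
    const { complete, trailing } = splitCompleteStatements(this.tail + chunk);
    for (const stmt of complete) this.cached.push(parseStatement(stmt)); // parsed once
    this.tail = trailing;                // only this part is re-parsed next time
    return this.tail.length > 0
      ? [...this.cached, parseStatement(this.tail)]
      : [...this.cached];
  }
}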

Completed state­ments are never re-parsed. Only the trail­ing in-progress state­ment is re-parsed per chunk. O(total_length) in­stead of O(N²).

What is mea­sured: The to­tal parse over­head ac­cu­mu­lated across every chunk call for one com­plete doc­u­ment. This is dif­fer­ent from the one-shot bench­mark — it mea­sures the sum of all parse calls dur­ing a real stream, not a sin­gle call. This is the num­ber that af­fects ac­tual user-per­ceived re­spon­sive­ness.

How it was run: Documents are re­played in 20-char chunks. Each chunk trig­gers a parse() (naïve) or push() (incremental) call. Total time across all calls is recorded. 100 full-stream re­plays, me­dian taken.

The sim­ple-table fix­ture is a sin­gle state­ment — there’s noth­ing to cache, so both ap­proaches are equiv­a­lent. The ben­e­fit scales with the num­ber of state­ments be­cause more of the doc­u­ment gets cached and skipped on each chunk.

The one-shot table shows 13.4µs for con­tact-form; the stream­ing table shows 316µs (naïve). These are not con­tra­dic­tory — they mea­sure dif­fer­ent things:

* 13.4µs = cost of one parse() call on the com­plete 400-char string

* 316µs = to­tal cost of ~20 parse() calls dur­ing the stream (chunk 1 parses 20 chars, chunk 2 parses 40 chars, …, chunk 20 parses 400 chars — cu­mu­la­tive sum of all those grow­ing calls)
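As a rough check on those numbers: the naïve stream parses 20 + 40 + … + 400 = 20 × (1 + 2 + … + 20) = 4,200 characters in total, more than ten times the 400 characters handled by the single one-shot call, before even counting the fixed per-call overhead of each of the ~20 invocations.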

This ex­pe­ri­ence sharp­ened our think­ing on the right use cases for WASM:

✅ Compute-bound with min­i­mal in­terop: im­age/​video pro­cess­ing, cryp­tog­ra­phy, physics sim­u­la­tions, au­dio codecs. Large in­put → scalar out­put or in-place mu­ta­tion. The bound­ary is crossed rarely.

✅ Portable na­tive li­braries: ship­ping C/C++ li­braries (SQLite, OpenCV, libpng) to the browser with­out a full JS rewrite.

❌ Parsing structured text into JS objects: you pay the serialization cost either way. The parsing computation is fast enough that V8's JIT eliminates any Rust advantage. The boundary overhead dominates.

❌ Frequently-called func­tions on small in­puts: if the func­tion is called 50 times per stream and the com­pu­ta­tion takes 5µs, you can­not amor­tise the bound­ary cost.

Profile where time is ac­tu­ally spent be­fore choos­ing the im­ple­men­ta­tion lan­guage.

For us, the cost was never in the com­pu­ta­tion - it was al­ways in data trans­fer across the WASM-JS bound­ary.

"Direct object passing" through serde-wasm-bindgen is not cheaper.

Constructing a JS ob­ject field-by-field from Rust in­volves more bound­ary cross­ings than a sin­gle JSON string trans­fer, not fewer. The bound­ary cross­ings hap­pen in­side the sin­gle FFI call, in­vis­i­bly.

Algorithmic com­plex­ity im­prove­ments dom­i­nate lan­guage-level op­ti­mi­sa­tions.

Going from O(N²) to O(N) in the stream­ing case had a larger prac­ti­cal im­pact than switch­ing from WASM to TypeScript.

WASM and JS do not share a heap.

WASM has a flat linear memory (WebAssembly.Memory) that JS can read as raw bytes, but those bytes are Rust's internal layout - pointers, enum discriminants, alignment padding - completely opaque to the JS runtime. Conversion is always required and always costs something.

...

Read the original on www.openui.com »

5 265 shares, 16 trendiness

Ubuntu 26.04 Ends 46 Years of Silent sudo Passwords

Starting with the up­com­ing LTS re­lease, every key­stroke at a sudo pass­word prompt will echo an as­ter­isk — a small UX fix that has ig­nited one of Linux’s fiercest de­bates in years.


For more than four decades, typing a password after a sudo prompt in a Linux terminal produced nothing visible on screen — no asterisks, no dots, no moving cursor. The blank void was intentional: a guard against "shoulder surfing," the practice of counting keystrokes to guess a password's length. Ubuntu 26.04 LTS, codenamed Resolute Raccoon and due on April 23, 2026, changes that.

The orig­i­nal sudo util­ity was cre­ated in 1980 by Bob Coggeshall and Cliff Spencer at the State University of New York at Buffalo. Its silent pass­word prompt was a de­lib­er­ate se­cu­rity de­ci­sion from an era when ter­mi­nals were shared, phys­i­cal screens were wide-open, and the threat model squarely in­cluded peo­ple stand­ing be­hind you count­ing key­strokes. That be­hav­iour sur­vived — un­touched — through nearly half a cen­tury of Linux dis­tri­b­u­tions.

The tra­di­tion be­gan to crack when Linux Mint en­abled vi­sual pass­word feed­back by de­fault for its own sudo con­fig­u­ra­tion, qui­etly demon­strat­ing that the sky would not fall. Still, main­stream dis­tri­b­u­tions, Ubuntu among them, main­tained the clas­sic silent prompt.

The cat­a­lyst for Ubuntu’s change is sudo-rs, a ground-up rewrite of the clas­sic C im­ple­men­ta­tion in the Rust pro­gram­ming lan­guage. Canonical shipped sudo-rs as the de­fault sudo im­ple­men­ta­tion be­gin­ning with Ubuntu 25.10 — a tran­si­tion that most users never no­ticed be­cause the com­mand name and be­hav­iour were oth­er­wise iden­ti­cal.

Then, roughly two weeks before the Ubuntu 26.04 beta window, the upstream sudo-rs project merged a patch to enable the pwfeedback option by default. Canonical cherry-picked that patch into Ubuntu 26.04 development builds. The legacy sudo package (sometimes labelled sudo-ws) is unaffected; only the sudo-rs path shows asterisks.

Critics of the change point to a bug report whose title captures the sentiment perfectly: "sudo-rs echos * for every character typed breaking historical security measures older than I am."

Ubuntu ac­knowl­edged the re­port and marked it Won’t Fix. The up­stream sudo-rs de­vel­op­ers sim­i­larly de­clined to back down.

The de­vel­op­ers’ counter-ar­gu­ment rests on two pil­lars. First, the se­cu­rity ben­e­fit of hid­ing pass­word length is neg­li­gi­ble in prac­tice — any­one close enough to count as­ter­isks on a screen is close enough to hear or watch your key­strokes di­rectly. Second, and more point­edly, most users’ sudo pass­word is the same as their lo­gin pass­word — one that al­ready ap­pears as vis­i­ble place­holder dots on the graph­i­cal lo­gin screen. Hiding as­ter­isks in the ter­mi­nal while show­ing them at lo­gin is, in the de­vel­op­ers’ es­ti­ma­tion, se­cu­rity the­atre.

Users and system administrators who prefer the traditional silent prompt can restore it with a single configuration change. The setting is toggled via the sudoers file, which should always be edited through the safe visudo command to prevent syntax errors from locking you out.
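Concretely, assuming sudo-rs honours the same pwfeedback flag in the sudoers file that the upstream patch turns on (the classic sudo syntax is shown here), a single Defaults line restores the silent prompt:

sudo visudo
# then add this line to the sudoers file:
Defaults !pwfeedback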

The as­ter­isk change is part of a wider mod­erni­sa­tion un­der­way in Ubuntu 26.04. The re­lease will ship with GNOME 50 run­ning ex­clu­sively on Wayland, Linux ker­nel 7.0, and fur­ther adop­tion of Rust-based core util­i­ties — in­clud­ing uu­tils/​core­utils, a Rust reim­ple­men­ta­tion of the stan­dard Unix com­mand-line tools. The switch to sudo-rs is thus one piece of a broader ef­fort to bring mem­ory safety and, ap­par­ently, mod­ern UX sen­si­bil­i­ties to Ubuntu’s fun­da­men­tal plumb­ing.

Whether you con­sider the as­ter­isk change an over­due qual­ity-of-life im­prove­ment or a dan­ger­ous de­par­ture from Unix phi­los­o­phy, one thing is clear: the op­tion to re­vert re­mains firmly in your hands. The de­vel­op­ers have sim­ply de­cided that the de­fault should favour the many new­com­ers baf­fled by a blank prompt over the few vet­er­ans who cher­ished it.

Ubuntu 26.04 LTS Resolute Raccoon is sched­uled for fi­nal re­lease on April 23, 2026.

...

Read the original on pbxscience.com »

6 264 shares, 15 trendiness

Mamba-3


Mamba-3 is a new state space model (SSM) designed with inference efficiency as the primary goal — a departure from Mamba-2, which optimized for training speed. The key upgrades are a more expressive recurrence formula, complex-valued state tracking, and a MIMO (multi-input, multi-output) variant that boosts accuracy without slowing down decoding. The result: Mamba-3 SISO beats Mamba-2, Gated DeltaNet, and even Llama-3.2-1B (Transformer) on prefill+decode latency across all sequence lengths at the 1.5B scale. The team also open-sourced the kernels, built using a mix of Triton, TileLang, and CuTe DSL for maximum hardware performance. This blog is cross-posted on the Goomba Lab blog and covers work done in collaboration between researchers at Carnegie Mellon University, Princeton University, Cartesia AI, and Together AI.

Since the release of Mamba-2 in mid-2024, most architectures have switched from Mamba-1. Why? Mamba-2 made the bet that training efficiency was the largest bottleneck for state space models (SSMs), and thus simplified the underlying SSM mechanism to deliver 2-8× faster training compared to its predecessor, leading to wider adoption.

Since then, the LLM landscape has started to shift. While pretraining is still super important, more attention has been focused on post-training and deployment, both of which are extremely inference-heavy. The scaling of post-training methods, especially with reinforcement learning with verifiable rewards (RLVR) for coding or math, requires huge amounts of generated rollouts, and most recently, agentic workflows, such as Codex, Claude Code, or even OpenClaw, have pushed inference demand through the roof.

Despite the clear, growing importance of inference, many linear architectures (including Mamba-2) were developed from a training-first perspective. To accelerate pretraining, the underlying SSM was progressively simplified (e.g., the diagonal transition was reduced to a scalar times identity). While this brought training speed, it left the inference step too "simple" and squarely memory-bound — the GPUs aren't brr-ing but moving memory most of the time.

In this new age of inference, we care a lot about pushing the boundaries of the quality-efficiency frontier: we want the better models to run faster.

What would an SSM designed with inference in mind look like? What's missing? The main appeal of linear models is in their name: compute scales linearly with sequence length because of a fixed-size state. Unfortunately, there is no free lunch. The same fixed state size that enables efficient computation forces the model to compress all past information into one representation, the exact opposite of a Transformer, which stores all past information through a continuously growing state (the KV cache) — a fundamental difference. So, if we can't grow the state, how do we make that fixed state do more work?

We see that earlier designs simplified the recurrence and the transition matrix to make training fast. However, the change also reduced the richness of the dynamics and left decoding memory-bound: each token update performs very little computation relative to memory movement.
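For orientation, and as background rather than Mamba-3's own update rule (the post defers that to Part 2), the simplified selective-SSM recurrence that Mamba-2 settled on, with the transition collapsed to a scalar a_t times the identity, looks roughly like:

h_t = a_t · h_{t-1} + B_t x_t    (fixed-size state update)
y_t = C_t^T h_t                  (readout)

Every past token has to be folded into the fixed-size state h_t, which is why the three levers described next are all about making that single update step do more work.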
This provides us with three levers we can pull: (1) make the recurrence itself more expressive, (2) use a richer transition matrix, and (3) add more parallel (and almost free) work inside each update.

From these insights, we improve upon Mamba-2 in three core ways that:

* increase the expressivity of the SSM mechanism through a more general recurrence derived from our exponential-trapezoidal discretization scheme,

* expand the state-tracking capabilities by modeling a complex-valued SSM system, and

* improve the model's general performance with little impact on decode latency by using multi-input, multi-output (MIMO) SSMs, which model multiple SSMs in parallel, instead of the current single-input, single-output (SISO) SSMs.

Through these three changes, Mamba-3 pushes the frontier of performance while maintaining similar inference latency.

Notably, all three of these changes are inspired by the more "classical" control theory and state space model literature. Our work goes against the grain of many modern linear architectures, which use alternative interpretations of recurrence (such as linear attention or test-time training) that don't easily capture these concepts.

What has changed in the Mamba-2 layer? Beyond the three methodological upgrades to the core SSM discussed above, we've revamped the architecture a bit to make it more in line with conventional modern language models. Based on the diagram, you'll notice we've changed a couple of things. On a high level:

Norms. We added in QKNorm, or "BCNorm" in SSM terminology, which empirically stabilizes the training of Mamba-3 models. The addition of this norm brings Mamba-3 in line with contemporary Transformer and Gated DeltaNet (GDN) models. With QKNorm, the RMSNorm from Mamba-2 becomes optional. However, we empirically find that it may still be worth keeping in hybrid models due to helping length extrapolation capabilities. More on this later.

Goodbye Short Conv. We've been able to get rid of the pesky short causal convolution of Mamba-1/2 by combining (1) simple biases on B and C after BCNorm with (2) our new discretization-based recurrence. The new recurrence implicitly applies a convolution on the input to the hidden state, and we show how this is the case in Part 2 of our blog.

Can the short conv really be removed? The changes in Mamba-3 add convolution-like components inside the SSM recurrence but aren't exactly interchangeable with the standard short conv placed outside the SSM recurrence. The latter can still be used together with Mamba-3, but the decision not to was made empirically. We find adding the standard short conv back:

* does not improve performance; in fact, it slightly worsens it, and

* does not degrade retrieval capabilities on more real-world tasks (e.g., NIAH).

That said, without a short convolution, training on small-scale synthetic tasks like MQAR becomes somewhat harder. Since real-world retrieval behavior remains unaffected, though, we don't consider this a major limitation.

As for why? We didn't study the theoretical mechanisms, but in the paper, we hypothesize about how both the BC bias and the exponential-trapezoidal recurrence perform similar convolution-like mechanisms which empirically serve the same function as the external short conv.

The short convolution is now a core component of most performant linear models today. Versions of the short conv were first used in recurrent architectures by H3 (in the form of a shift SSM, which was inspired by the "smeared" induction heads work by Anthropic) and RWKV-4 (through its "token shift" mechanism), before being popularized in its current form by Mamba-1. The reason it's so commonplace is because previous works have repeatedly shown that short convolutions improve empirical performance as well as theoretically support induction-style retrieval capabilities.

Finally, you'll notice a couple of new components, namely RoPE and MIMO projections. The RoPE module expresses complex-valued SSMs via the interpretation of complex transitions as rotations, forgoing the costly reimplementation of kernels. The MIMO projections expand the B and C matrices to the appropriate representation needed for MIMO SSMs. We dig into the motivation and exact implementation of these two in greater detail in the second part of our blog (lots of goodies there 🎁), so for now, just think of them as standalone, fundamental improvements that individually contribute to improving the model's performance and/or capabilities.

Finally, our overall architecture now adopts interleaved MLP layers following the standard convention of Transformers and other linear models.

We evaluate our final Mamba-3 model against other popular linear alternatives and the Transformer baseline. We find that our new Mamba-3 model outperforms the prior Mamba-2 model and strong linear attention alternatives, such as GDN, on language modeling across various pretrained model scales. Mamba-3-SISO is directly comparable to prior linear models; for example, it matches Mamba-2 exactly in architecture shapes (model dimensions, state size, etc.) and has comparable training time. Our MIMO variant of Mamba-3 further boosts accuracy on our downstream tasks by more than 1 percentage point over the regular Mamba-3 at the 1B scale, with the caveat that MIMO requires longer training times but not longer decoding latencies!

How can training costs go up but not inference? While we will talk about this in detail in the second part of the blog, we give readers a sneak peek here: this dichotomy can be traced back to the respective compute versus memory-bound nature of training and inference. Current linear models have been designed to use lots of GPU tensor cores (one of the main contributions of Mamba-2) for fast training, but during decoding, each timestep requires so little compute that the hardware remains cold most of the time. Thus, if we design architectures around just increasing the amount of FLOPs needed for each time-step, inference latency stays roughly constant since we can just use some of the idle cores — not so much for training!

Linear models, with their fixed-size state, naturally underperform their Transformer counterparts on retrieval-based tasks. As expected, within pure models, the Transformer is superior on retrieval tasks, but Mamba-3 performs well within the class of sub-quadratic alternatives. Interestingly, the addition of MIMO further improves retrieval performance without increasing the state size.

Given this innate deficit but overall strong modeling performance, we predict that linear layers will be predominantly used in conjunction with global self-attention layers in the future (at least for language modeling). Hybrid models that combine the general memory-like nature of linear layers with the exact database-like storage of self-attention's KV cache have been shown empirically to outperform pure models while enabling significant memory and compute savings, and we do find here that the combination of linear layers with self-attention enables better retrieval compared to a vanilla Transformer.

However, we highlight that the exact way that these linear models interact with self-attention is not fully understood. For instance, we find that the use of the optional pre-output projection for Mamba-3 improves the length generalization performance on the synthetic NIAH tasks at the slight cost of in-context real-world retrieval tasks. Furthermore, even the details of the returned norm, such as placement (e.g., pre-gate vs post-gate) and type (grouped vs regular), have non-negligible effects on accuracy on tasks composed of semi-structured and unstructured data, such as FDA and SWDE.

Kernels here, there, and everywhere

We're excited to see what people build with Mamba-3. To help facilitate this, we are open-sourcing our kernels, which are on par in terms of speed with the original Mamba-2 Triton kernels.

Prefill and prefill+decode (same token count for both prefill and decode) latencies across sequence lengths for a 1.5B model on a single H100-SXM 80GB GPU. A batch size of 128 was used for all sequence lengths; wall-clock times (in seconds) are reported over three repetitions.

When comparing models at the 1.5B scale, Mamba-3 (SISO variant) achieves the fastest prefill + decode latency across all sequence lengths, outperforming Mamba-2, Gated DeltaNet, and even the Transformer with its highly optimized vLLM ecosystem. Furthermore, Mamba-3 MIMO is comparable to Mamba-2 in terms of speed but has much stronger performance.

Mamba-3 SISO's Triton-based prefill maintains nearly identical performance to Mamba-2, demonstrating that the new discretization and data-dependent RoPE embeddings do not introduce additional overhead, while Mamba-3 MIMO only incurs a moderate slowdown for prefill due to its efficient TileLang implementation. The strong decode performance for both Mamba-3 variants can be partially attributed to the CuTe DSL implementation, which was made significantly easier by the simplicity of Mamba-3 components.

We spent a lot of time thinking about how to make the kernels as fast as possible without compromising on ease-of-use. We ended up using the following stack: Triton, TileLang, and CuTe DSL.

The use of Triton was quite an easy choice. It's pretty much standard for architecture development (the great flash linear attention repo is purely in PyTorch and Triton) for good reason, as it enables better performance than standard PyTorch by enabling controlled tiling and kernel fusion while being a platform-agnostic language. Triton also has some pretty nifty features, like PTX (a GPU-oriented assembly language) injection and its Tensor Memory Accelerator support (on Hopper GPUs) for bulk, asynchronous transfers from global to shared memory.

Our MIMO prefill kernels were developed with TileLang instead. The additional projections corresponding with the variant present an opportunity where we can reduce memory IO via strategic manipulation across a GPU's memory hierarchy. Unfortunately, Triton didn't provide the granularity of memory control we desired, so we opted for TileLang, which allows us to explicitly declare and control shared-memory tiles and create register fragments, reusing memory more efficiently while still being high-level enough for us to develop the kernels quickly.

Since we've been hammering the importance of inference and decode, we decided to use CuTe DSL for our decode kernels. Through its Python interface, we're able to generate low-level kernels using high-level abstractions from CUTLASS. Here, we practically have CUDA-level control, enabling us to develop highly-performant kernels tailored to the specifications of our hardware (Hopper GPUs, in this case). With fine-grained control over tensor layouts and warp specialization, we built a kernel that takes advantage of all the bells and whistles in the GPU.

Importantly, these implementations across varying levels of GPU abstraction are made possible by the underlying algorithmic design of Mamba-3's simple, lightweight additions and their clever instantiations. We discuss details such as the exact fusion structure and kernel DSL in more depth in our full release.

Glad you made it to the end of Part 1! There were a lot of details regarding our kernels and experimental results and ablations we didn't have time to cover in this post, but don't fret! Everything can be found in our paper, and the kernels have been open-sourced at mamba-ssm! Up next, the second (and final) part of the series delves into the three core improvements to Mamba-3 and their SSM foundations, and gives some directions we're especially interested in.

References:

* Dao, T. and Gu, A., 2024. Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality. [PDF]

* Sun, Y., Li, X., Dalal, K., Xu, J., Vikram, A., Zhang, G., Dubois, Y., Chen, X., Wang, X., Koyejo, S., Hashimoto, T. and Guestrin, C., 2025. Learning to (Learn at Test Time): RNNs with Expressive Hidden States. [PDF]

* Fu, D.Y., Dao, T., Saab, K.K., Thomas, A.W., Rudra, A. and Ré, C., 2023. Hungry Hungry Hippos: Towards Language Modeling with State Space Models. [PDF]

* Olsson, C., Elhage, N., Nanda, N., Joseph, N., DasSarma, N., Henighan, T., Mann, B., Askell, A., Bai, Y., Chen, A., Conerly, T., Drain, D., Ganguli, D., Hatfield-Dodds, Z., Hernandez, D., Johnston, S., Jones, A., Kernion, J., Lovitt, L., Ndousse, K., Amodei, D., Brown, T., Clark, J., Kaplan, J., McCandlish, S. and Olah, C., 2022. In-context Learning and Induction Heads. Transformer Circuits Thread.

* Peng, B., Alcaide, E., Anthony, Q., Albalak, A., Arcadinho, S., Biderman, S., Cao, H., Cheng, X., Chung, M., Grella, M., GV, K.K., He, X., Hou, H., Lin, J., Kazienko, P., Kocon, J., Kong, J., Koptyra, B., Lau, H., Mantri, K.S.I., Mom, F., Saito, A., Song, G., Tang, X., Wang, B., Wind, J.S., Wozniak, S., Zhang, R., Zhang, Z., Zhao, Q., Zhou, P., Zhou, Q., Zhu, J. and Zhu, R., 2023. RWKV: Reinventing RNNs for the Transformer Era. [PDF]

* Wang, K.A., Shi, J. and Fox, E.B., 2025. Test-time regression: a unifying framework for designing sequence models with associative memory. [PDF]

* Waleffe, R., Byeon, W., Riach, D., Norick, B., Korthikanti, V., Dao, T., Gu, A., Hatamizadeh, A., Singh, S., Narayanan, D., Kulshreshtha, G., Singh, V., Casper, J., Kautz, J., Shoeybi, M. and Catanzaro, B., 2024. An Empirical Study of Mamba-based Language Models. [PDF]


...

Read the original on www.together.ai »

7 229 shares, 27 trendiness

404 Deno CEO not found

Opinions are mixed on this post. Sometimes I miss the mark with my blunt tone. In hind­sight I can see why parts come across as mean-spir­ited. I’ve cho­sen my words poorly. Feedback noted, I will strive to be more pos­i­tive.

The Nero ref­er­ence was for the sake of a dumb pun and a slight on AI im­agery, not a se­ri­ous at­tempt to com­pare Dahl. Sorry for my stu­pid­ity.

If an­other toxic Hacker News thread is all that this post spawns, I sin­cerely apol­o­gise.

I vis­ited deno.com yes­ter­day. I wanted to know if the hun­dreds of hours I’d spent mas­ter­ing Deno was a sunk cost. Do I con­tinue build­ing for the run­time, or go back to Node?

deno.com 404 not found er­ror page stat­ing: Sorry, there was an is­sue load­ing this page

Well I guess that pretty much sums up why a good chunk of Deno em­ploy­ees left the com­pany over the last week.

Layoffs are what American corpo culture calls firing half the staff. Totally normal practice for a sustainable business. Mass layoffs are deemed better for the morale of those who remain than a weekly culling before Friday beers.

The Romans loved a good dec­i­ma­tion.† If I were a pur­veyor of slop and tor­tured metaphors, I’d have adorned this post with a deep­fake of Ryan Dahl fid­dling as Deno burned. But I’m not, so the solemn screen­shot will suf­fice.

† I read Rome, Inc. re­cently. Not a great book, I’m just ex­plain­ing the ref­er­ence.

A year ago I wrote about Deno’s de­cline. The facts, un­de­terred by my sub­jec­tive scorn, painted a harsh pic­ture; Deno Land Inc. was fail­ing.

Deno in­cor­po­rated with $4.9M of seed cap­i­tal five years ago. They raised a fur­ther $21M se­ries A a year later. Napkin math sug­gests a five year run­way for an un­prof­itable com­pany (I have no idea, I just made that up.)

Coincidentally, af­ter my blog post topped Hacker News — al­ways a plea­sure for my in­box — Ryan Dahl (Deno CEO) clapped back on the off­i­cal Deno blog:

There’s been some crit­i­cism lately about Deno - about Deploy, KV, Fresh, and our mo­men­tum in gen­eral. You may have seen some of the crit­i­cism on­line; it’s made the rounds in the usual places, and at­tracted a fair amount of at­ten­tion.

Some of that crit­i­cism is valid. In fact, I think it’s fair to say we’ve had a hand in caus­ing some amount of fear and un­cer­tainty by be­ing too quiet about what we’re work­ing on, and the fu­ture di­rec­tion of our com­pany and prod­ucts. That’s on us.

Reports of Deno’s Demise Have Been Greatly Exaggerated - Ryan Dahl

Dahl men­tioned that adop­tion had dou­bled fol­low­ing Deno 2.0.

Since the re­lease of Deno 2 last October - barely over six months ago! - Deno adop­tion has more than dou­bled ac­cord­ing to our monthly ac­tive user met­rics.

User base dou­bling sounds like a flex for a lemon­ade stand un­less you give num­bers. I imag­ine Sequoia Capital ex­pected faster growth re­gard­less. The harsh truth is that Deno’s of­fer­ings have failed to cap­ture de­vel­op­ers’ at­ten­tion. I can’t pre­tend to know why — I was a fan­boy my­self — but far too few devs care about Deno. On the rare oc­ca­sions Deno gets at­ten­tion on the or­ange site, the com­ments page reads like in memo­riam.

I don’t even think the prob­lem was that Deno Deploy, the main source of rev­enue, sucked. Deploy was plagued by highly in­con­sis­tent iso­late start times. Solicited feed­back was ig­nored. Few cared. It took an is­sue from Wes Bos, one of the most fol­lowed devs in the game, for any­one at Deno to wake up. Was Deploy sim­ply a ghost town?

Deno rushed the Deploy relaunch for the end of 2025 and it became "generally available" last month. Anyone using it? Anyone care? The Deno layoffs this week suggest only a miracle would have saved jobs. The writing was on the wall.

Speaking of ghost towns, the JSR YouTube chan­nel is so lonely I feel bad for link­ing it. I only do be­cause it shows just how lit­tle in­ter­est some Deno-led pro­jects mus­tered.

JSR floundered partly because Deno couldn't afford to invest in better infrastructure. But like everything else in the Deno ecosystem, users just weren't interested. What makes a comparable project like NPMX flourish so quickly? Evidently, developers don't want to replace Node and NPM. They just want what they already have but better: a drop-in improvement without friction.

To Deno and Dahl’s credit, they recog­nised this with the U-turn on HTTP im­ports. But the re­sult­ing pack­ag­ing mess made things worse. JSR should have been NPMX. Deno should have gone all-in on pack­age.json but in­stead we got mixed mes­sag­ing and con­fused docs.

I could continue but it would just be cruel to dissect further. I've been heavily critical of Deno in the past but I really wanted it to succeed. There were genuinely good people working at Deno who lost their jobs and that sucks. I hope the Deno runtime survives. It's a breath of fresh air. Bun has far more bugs and compatibility issues than anyone will admit. Node still has too much friction around TypeScript and ECMAScript modules.

So where does Deno go from here? Over to you, Ryan.

Tradition dic­tates an of­fi­cial PR state­ment fol­low­ing lay­offs. Seems weird not to have one pre­pared in ad­vance. That said, to­day is Friday, the day to bury bad news. I may be pub­lish­ing this mere hours be­fore we hear what hap­pens next…

Given Dahl’s re­cent tweets and blog post, a pivot to AI might be Deno’s gam­ble. By the way, it’s rather telling that all the ex-em­ploy­ees posted their de­par­tures on Bluesky. What that tells you de­pends on whether you en­joy your so­cial me­dia along­side Grok un­dress­ing women upon re­quest. I di­gress. Idle spec­u­la­tion has led to base­less ru­mours of an OpenAI ac­qui­si­tion. I’m not con­vinced that makes sense but nei­ther does the en­tire AI in­dus­try.

I’m not try­ing to hate on Dahl but c’­mon bro you’re the CEO. What’s next for Deno? Give any­one a rea­son to care. Although if you’re plan­ning a 10× resur­gence with au­to­mated Mac Minis, I re­gret ask­ing.

...

Read the original on dbushell.com »

8 200 shares, 6 trendiness

Molly guard in reverse – Unsung

Old-school computing has a term "molly guard": it's the little plastic safety cover you have to move out of the way before you press some button of significance.

Anecdotally, this is named af­ter Molly, an en­gi­neer’s daugh­ter who was in­vited to a dat­a­cen­ter and promptly pressed a big red but­ton, as one would.

Then she did it again later the same day.

You might rec­og­nize molly guards from any aer­ial com­bat movie you ever watched:

And some ves­ti­gial forms of molly guards ex­ist every­where in civil­ian hard­ware, too: from re­cessed but­tons, through plas­tic ridges around keys, to some­thing like a SIM card ejec­tion hole.

Of course, molly guards happen in software, too: from the cheapest "are you sure?" dialogs (which sometimes move buttons around or disable keyboard activation to slow you down), through extra modifier keys (in Ctrl+Alt+Del, the Ctrl and Alt keys are the guards), to more elaborate interactions that introduce friction in places where it's needed:

But it’s also worth think­ing of re­verse molly guards: but­tons that will press them­selves if you don’t do any­thing af­ter a while.

I see them some­times, and al­ways con­sider them very thought­ful. This is the first ex­am­ple that comes to my mind:

These feel im­por­tant to re­mem­ber, par­tic­u­larly if your com­puter is about to em­bark on a long process to do some­thing com­plex — like an OS up­date or a long ren­der.

There is no worse feel­ing than wak­ing up, walk­ing up to the ma­chine that was sup­posed to work through the night, and see­ing it did ab­solutely noth­ing, stu­pidly wait­ing for hours for a re­sponse to a ques­tion that did­n’t even mat­ter.

It’s good to think about de­sign­ing and sign­post­ing those flows so peo­ple know when they can walk away with con­fi­dence, and I some­times think a re­verse molly guard could serve an im­por­tant pur­pose: in a well-de­signed flow, once you see it, you know things will now pro­ceed to com­ple­tion.
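As a minimal sketch of the idea (a toy C example of my own, assuming a POSIX terminal; it is not taken from any real product), a reverse molly guard is just a prompt with a deadline that answers itself:

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

// Toy reverse molly guard: ask for confirmation, but proceed on a timeout
// so an unattended machine never stalls on the question overnight.
int confirm_or_proceed(int timeout_seconds)
{
    printf("Continue with the long job? [Y/n] (continuing automatically in %d s) ", timeout_seconds);
    fflush(stdout);

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);

    struct timeval tv = { .tv_sec = timeout_seconds, .tv_usec = 0 };

    // Wait for a keypress, but only for timeout_seconds.
    if (select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv) <= 0)
    {
        // Timeout (or error): the button "presses itself".
        printf("\nNo answer, proceeding.\n");
        return 1;
    }

    char answer[8] = { 0 };
    if (fgets(answer, sizeof answer, stdin) == NULL)
        return 1;
    return answer[0] != 'n' && answer[0] != 'N';
}

int main(void)
{
    if (confirm_or_proceed(10))
        printf("Starting the long render…\n");
    return 0;
}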

...

Read the original on unsung.aresluna.org »

9 200 shares, 11 trendiness

Molly Guard

...

Read the original on bookofjoe2.blogspot.com »

10 196 shares, 11 trendiness

FFmpeg 101

FFmpeg is com­posed of a suite of tools and li­braries.

The tools can be used to en­code/​de­code/​transcode a mul­ti­tude of dif­fer­ent au­dio and video for­mats, and to stream the en­coded me­dia over net­works.

* ffplay: a simple media player based on SDL and the FFmpeg libraries

The li­braries can be used to in­te­grate those same fea­tures into your own prod­uct.

A ba­sic us­age of FFmpeg is to de­mux a mul­ti­me­dia stream (obtained from a file or from the net­work) into its au­dio and video streams and then to de­code those streams into raw au­dio and raw video data.

To man­age the me­dia streams, FFmpeg uses the fol­low­ing struc­tures:

* AVFormatContext: a high level struc­ture pro­vid­ing sync, meta­data and mux­ing for the streams

* AVCodec: de­fines how data are en­coded and de­coded

The process used to de­mux and de­code fol­lows this logic:
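As a rough sketch of that logic (a summary of the calls used throughout this post, not the post's original diagram), the structures line up as follows:

// Rough sketch of the demux/decode pipeline (summary only):
//
//   avformat_open_input()        -> AVFormatContext   (container-level access)
//   avformat_find_stream_info()  -> AVStream[]        (one entry per audio/video stream)
//   avcodec_find_decoder()       -> AVCodec           (matched on the stream's codec_id)
//   avcodec_alloc_context3() +
//   avcodec_open2()              -> AVCodecContext    (a running decoder instance)
//   av_read_frame()              -> AVPacket          (demuxed, still encoded data)
//   avcodec_send_packet() /
//   avcodec_receive_frame()      -> AVFrame           (raw decoded audio or video)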

Here is the basic code needed to read an encoded multimedia stream from a file, analyze its content and demux the audio and video streams. Those features are provided by the libavformat library and it uses the AVFormatContext and AVStream structures to store the information.

// Allocate memory for the context structure
AVFormatContext* format_context = avformat_alloc_context();

// Open a multimedia file (like an mp4 file or any format recognized by FFmpeg)
avformat_open_input(&format_context, filename, NULL, NULL);
printf("File: %s, format: %s\n", filename, format_context->iformat->name);

// Analyze the file content and identify the streams within
avformat_find_stream_info(format_context, NULL);

// List the streams
for (unsigned int i = 0; i < format_context->nb_streams; ++i)
{
    AVStream* stream = format_context->streams[i];
    printf("-- Stream %02d\n", i);
    printf("  Time base: %d/%d\n", stream->time_base.num, stream->time_base.den);
    printf("  Framerate: %d/%d\n", stream->r_frame_rate.num, stream->r_frame_rate.den);
    printf("  Start time: %" PRId64 "\n", stream->start_time);
    printf("  Duration: %" PRId64 "\n", stream->duration);
    printf("  Type: %s\n", av_get_media_type_string(stream->codecpar->codec_type));

    uint32_t fourcc = stream->codecpar->codec_tag;
    printf("  FourCC: %c%c%c%c\n", fourcc & 0xff, (fourcc >> 8) & 0xff, (fourcc >> 16) & 0xff, (fourcc >> 24) & 0xff);
}

// Close the multimedia file and free the context structure
avformat_close_input(&format_context);

Once we've got the different streams from inside the multimedia file, we need to find specific codecs to decode the streams to raw audio and raw video data. All codecs are statically included in libavcodec. You can easily create your own codec by just creating an instance of the FFCodec structure and registering it as an extern const FFCodec in libavcodec/allcodecs.c, but this would be a different topic for another post.

To find the codec cor­re­spond­ing to the con­tent of an AVStream, we can use the fol­low­ing code:

// Stream obtained from the AVFormatContext structure in the former streams listing loop
AVStream* stream = format_context->streams[i];

// Search for a compatible codec
const AVCodec* codec = avcodec_find_decoder(stream->codecpar->codec_id);
if (!codec)
{
    fprintf(stderr, "Unsupported codec\n");
    continue;
}

printf("  Codec: %s, bitrate: %" PRId64 "\n", codec->name, stream->codecpar->bit_rate);

if (codec->type == AVMEDIA_TYPE_VIDEO)
    printf("  Video resolution: %dx%d\n", stream->codecpar->width, stream->codecpar->height);
else if (codec->type == AVMEDIA_TYPE_AUDIO)
    printf("  Audio: %d channels, sample rate: %d Hz\n",
           stream->codecpar->ch_layout.nb_channels,
           stream->codecpar->sample_rate);

With the right codec and codec parameters extracted from the AVStream information, we can now allocate the AVCodecContext structure that will be used to decode the corresponding stream. It is important to remember the index of the stream we want to decode from the former streams list (format_context->streams) because this index will be used later to identify the demuxed packets extracted by the AVFormatContext.

In the fol­low­ing code we’re go­ing to se­lect the first video stream con­tained in the mul­ti­me­dia file.

// first_video_stream_index is determined during the streams listing in the former loop
int first_video_stream_index = …;

AVStream* first_video_stream = format_context->streams[first_video_stream_index];
AVCodecParameters* first_video_stream_codec_params = first_video_stream->codecpar;
const AVCodec* first_video_stream_codec = avcodec_find_decoder(first_video_stream_codec_params->codec_id);

// Allocate memory for the decoding context structure
AVCodecContext* codec_context = avcodec_alloc_context3(first_video_stream_codec);

// Configure the decoder with the codec parameters
avcodec_parameters_to_context(codec_context, first_video_stream_codec_params);

// Open the decoder
avcodec_open2(codec_context, first_video_stream_codec, NULL);

Now that we have a run­ning de­coder, we can ex­tract the de­muxed pack­ets us­ing the AVFormatContext struc­ture and de­code them to raw video frames. For that we need 2 dif­fer­ent struc­tures:

* AVPacket which con­tains the en­coded pack­ets ex­tracted from the in­put mul­ti­me­dia file,

* AVFrame which will con­tain the raw video frame af­ter the AVCodecContext has de­coded the for­mer pack­ets.

// Allocate memory for the encoded packet structure
AVPacket* packet = av_packet_alloc();

// Allocate memory for the decoded frame structure
AVFrame* frame = av_frame_alloc();

// Demux the next packet from the input multimedia file
while (av_read_frame(format_context, packet) >= 0)
{
    // The demuxed packet uses the stream index to identify the AVStream it is coming from
    printf("Packet received for stream %02d, pts: %" PRId64 "\n", packet->stream_index, packet->pts);

    // In our example we are only decoding the first video stream identified formerly by first_video_stream_index
    if (packet->stream_index == first_video_stream_index)
    {
        // Send the packet to the previously initialized decoder
        int res = avcodec_send_packet(codec_context, packet);
        if (res < 0)
        {
            fprintf(stderr, "Cannot send packet to the decoder: %s\n", av_err2str(res));
            break;
        }

        // The decoder (AVCodecContext) acts like a FIFO queue: we push the encoded packets on one end and we need to
        // poll the other end to fetch the decoded frames. The codec implementation may (or may not) use different
        // threads to perform the actual decoding.

        // Poll the running decoder to fetch all available decoded frames until now
        while (res >= 0)
        {
            // Fetch the next available decoded frame
            res = avcodec_receive_frame(codec_context, frame);
            if (res == AVERROR(EAGAIN) || res == AVERROR_EOF)
            {
                // No more decoded frame is available in the decoder output queue, go to next encoded packet
                break;
            }
            else if (res < 0)
            {
                fprintf(stderr, "Error while receiving a frame from the decoder: %s\n", av_err2str(res));
                goto end;
            }

            // Now the AVFrame structure contains a decoded raw video frame, we can process it further…
            printf("Frame %02" PRId64 ", type: %c, format: %d, pts: %03" PRId64 ", keyframe: %s\n",
                   codec_context->frame_num, av_get_picture_type_char(frame->pict_type), frame->format, frame->pts,
                   (frame->flags & AV_FRAME_FLAG_KEY) ? "true" : "false");

            // The AVFrame internal content is automatically unreffed and recycled during the next call to
            // avcodec_receive_frame(codec_context, frame)
        }
    }

    // Unref the packet internal content to recycle it for the next demuxed packet

...

Read the original on blogs.igalia.com »
