10 interesting stories served every morning and every evening.




1 1,261 shares, 48 trendiness

Ghostty Is Now Non-Profit

Ghostty is now fiscally sponsored by Hack Club, a registered 501(c)(3) non-profit.

Fiscal sponsorship is a legal and financial arrangement in which a recognized non-profit extends its tax-exempt status to a project that aligns with its mission. This allows Ghostty to operate as a charitable initiative while Hack Club manages compliance, donations, accounting, and governance oversight.

Being non-profit clearly demonstrates our commitment to keeping Ghostty free and open source for everyone. It paves the way for a model for sustainable development beyond my personal involvement. And it also provides important legal protections and assurances to the people and communities that adopt and use Ghostty.

Since the beginning of the project in 2023 and the private beta days of Ghostty, I’ve repeatedly expressed my intention that Ghostty legally become a non-profit. This intention stems from several core beliefs I have.

First, I want to lay bricks for a sustainable future for Ghostty that doesn’t depend on my personal involvement technically or financially. Financially, I am still the largest donor to the project, and I intend to remain so, but a non-profit structure allows others to contribute financially without fear of misappropriation or misuse of funds (as protected by legal requirements and oversight from the fiscal sponsor).

Second, I want to squelch any possible concerns about a “rug pull”. A non-profit structure provides enforceable assurances: the mission cannot be quietly changed, funds cannot be diverted to private benefit, and the project cannot be sold off or repurposed for commercial gain. The structure legally binds Ghostty to the public-benefit purpose it was created to serve.

Finally, despite being decades-old technology, terminals and terminal-related technologies remain foundational to modern computing and software infrastructure. They’re often out of the limelight, but they’re ever present on developer machines, embedded in IDEs, visible as read-only consoles for continuous integration and cloud services, and still one of the primary ways remote access is done on servers around the world.

I believe infrastructure of this kind should be stewarded by a mission-driven, non-commercial entity that prioritizes public benefit over private profit. That structure increases trust, encourages adoption, and creates the conditions for Ghostty to grow into a widely used and impactful piece of open-source infrastructure.

From a technical perspective, nothing changes for Ghostty. Our technical goals for the project remain the same, the license (MIT) remains the same, and we continue our work towards better Ghostty GUI releases and libghostty.

Financially, Ghostty can now accept tax-deductible donations in the United States. This opens up new avenues for funding the project and sustaining development over the long term. Most immediately, I’m excited to begin compensating contributors, but I also intend to support upstream dependencies, fund community events, and pay for boring operational costs.

All our financial transactions will be transparent down to individual transactions for both inflows and outflows. You can view our public ledger at Ghostty’s page on Hack Club Bank. At the time of writing, this is empty, but you’ll soon see some initial funding from me and the beginning of paying for some of our operational costs.

All applicable names, marks, and intellectual property associated with Ghostty have been transferred to Hack Club and are now owned under the non-profit umbrella. Copyright continues to be held by individual contributors under the continued and existing license structure.

From a leadership perspective, I remain the project lead and final authority on all decisions, but as stated earlier, the creation of a non-profit structure lays the groundwork for an eventual future beyond this model.

As our fiscal sponsor, Hack Club provides essential services to Ghostty, including accounting, legal compliance, and governance oversight. To support this, 7% of all donations to Ghostty go to Hack Club to cover these costs in addition to supporting their broader mission of empowering young people around the world interested in technology and coding.

In the words of Zach Latta, Hack Club’s founder and executive director, this is a “good-for-good” trade. Instead of donor fees going to a for-profit management company or covering pure overhead of a single project, the fees go to another non-profit doing important work in the tech community and the overhead is amortized across many projects.

In addition to the 7% fees, my family is personally donating $150,000 directly to the Hack Club project (not to Ghostty within it). Hack Club does amazing work and I would’ve supported them regardless of their fiscal sponsorship of Ghostty, but I wanted to pair these two things together to amplify the impact of both.

Please consider donating to support Ghostty’s continued development.

I recognize that Ghostty is already in an abnormally fortunate position to have myself as a backer, but I do envision a future where Ghostty is more equally supported by a broader community. And with our new structure, you can be assured about the usage of your funds towards public-benefit goals.

This post isn’t meant to directly be a fundraising pitch, so it is purposely lacking critical details about our funding goals, budget, project goals, project metrics, etc. I’ll work on those in the future. In the meantime, if you’re interested in talking more about supporting Ghostty, please email me at m@mitchellh.com.

I’m thankful for Hack Club and their team for working with us to make this happen. I’m also thankful for the Ghostty community, which has supported this project and has trusted me and continues to trust me to steward it responsibly.

For more information about Ghostty’s non-profit structure, see the dedicated page on Ghostty’s website.

...

Read the original on mitchellh.com »

2 1,040 shares, 42 trendiness

Advent of Code

Hi! I’m Eric Wastl. I make Advent of Code. I hope you like it! I also make lots of other things. I’m on Bluesky, Mastodon, and GitHub.

Advent of Code is an Advent calendar of small programming puzzles for a variety of skill levels that can be solved in any programming language you like. People use them as interview prep, company training, university coursework, practice problems, a speed contest, or to challenge each other.

You don’t need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far. Nor do you need a fancy computer; every problem has a solution that completes in at most 15 seconds on ten-year-old hardware.

If you’d like to support Advent of Code, you can do so indirectly by helping to [Share] it with others or directly via AoC++.

If you get stuck, try your solution against the examples given in the puzzle; you should get the same answers. If not, re-read the description. Did you misunderstand something? Is your program doing something you don’t expect? After the examples work, if your answer still isn’t correct, build some test cases for which you can verify the answer by hand and see if those work with your program. Make sure you have the entire puzzle input. If you’re still stuck, maybe ask a friend for help, or come back to the puzzle later. You can also ask for hints in the subreddit.

Is there an easy way to select entire code blocks? You should be able to triple-click code blocks to select them. You’ll need JavaScript enabled.

#!/usr/bin/env perl
use warnings;
use strict;
print "You can test it out by ";
print "triple-clicking this code.\n";

How does authentication work? Advent of Code uses OAuth to confirm your identity through other services. When you log in, you only ever give your credentials to that service - never to Advent of Code. Then, the service you use tells the Advent of Code servers that you’re really you. In general, this reveals no information about you beyond what is already public; here are examples from Reddit and GitHub. Advent of Code will remember your unique ID, names, URL, and image from the service you use to authenticate.
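
For readers unfamiliar with the flow being described, here is a minimal, hypothetical Python sketch of an OAuth authorization-code exchange, using GitHub's public OAuth endpoints as the identity provider. The client ID and secret are placeholders, and this is not Advent of Code's actual implementation; it only illustrates how a site can learn who you are without ever seeing your password.

# Hypothetical sketch of the OAuth authorization-code flow described above.
# GitHub's public OAuth endpoints serve as the example identity provider;
# CLIENT_ID and CLIENT_SECRET are placeholders, not real credentials.
import requests

CLIENT_ID = "your-oauth-app-client-id"          # placeholder
CLIENT_SECRET = "your-oauth-app-client-secret"  # placeholder

def exchange_code_for_token(code: str) -> str:
    # The user signed in on github.com and was redirected back with a one-time
    # code; the site never saw their password, only this code.
    resp = requests.post(
        "https://github.com/login/oauth/access_token",
        data={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET, "code": code},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def fetch_public_identity(token: str) -> dict:
    # The token lets the site ask the provider who the user is; only public
    # profile fields (unique ID, login name, avatar URL) come back.
    resp = requests.get(
        "https://api.github.com/user",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()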

Why was this puzzle so easy / hard? The difficulty and subject matter varies throughout each event. Very generally, the puzzles get more difficult over time, but your specific skillset will make each puzzle significantly easier or harder for you than someone else. Making puzzles is tricky.

Why do the puzzles unlock at midnight EST/UTC-5? Because that’s when I can consistently be available to make sure everything is working. I also have a family, a day job, and even need sleep occasionally. If you can’t participate at midnight, that’s not a problem; if you want to race, many people use private leaderboards to compete with people in their area.

I find the text on the site hard to read. Is there a high contrast mode? There is a high contrast alternate stylesheet. Firefox supports these by default (View -> Page Style -> High Contrast).

I have a puzzle idea! Can I send it to you? Please don’t. Because of legal issues like copyright and attribution, I don’t accept puzzle ideas, and I won’t even read your email if it looks like one just in case I use parts of it by accident.

Did I find a bug with a puzzle? Once a puzzle has been out for even an hour, many people have already solved it; after that point, bugs are very unlikely. Start by asking on the subreddit.

Should I try to get a fast solution time? Maybe. Solving puzzles is hard enough on its own, but trying for a fast time also requires many additional skills and a lot of practice; speed-solves often look nothing like code that would pass a code review. If that sounds interesting, go for it! However, you should do Advent of Code in a way that is useful to you, and so it is completely fine to choose an approach that meets your goals and ignore speed entirely.

Why did the number of days per event change? It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).

What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn’t compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I’ve made it so you can share a read-only view of your private leaderboard. Please don’t use this feature or data to create a “new” global leaderboard.)

While trying to get a fast time on a private leaderboard, may I use AI / watch streamers / check the solution threads / ask a friend for help / etc? If you are a member of any private leaderboards, you should ask the people that run them what their expectations are of their members. If you don’t agree with those expectations, you should find a new private leaderboard or start your own! Private leaderboards might have rules like maximum runtime, allowed programming language, what time you can first open the puzzle, what tools you can use, or whether you have to wear a silly hat while working.

Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.

Can I copy/redistribute part of Advent of Code? Please don’t. Advent of Code is free to use, not free to copy. If you’re posting a code repository somewhere, please don’t include parts of Advent of Code like the puzzle text or your inputs. If you’re making a website, please don’t make it look like Advent of Code or name it something similar.

...

Read the original on adventofcode.com »

3 984 shares, 38 trendiness

Zig quits GitHub, gripes about Microsoft's AI obsession

The Foundation that promotes the Zig programming language has quit GitHub due to what its leadership perceives as the code sharing site’s decline.

The drama began in April 2025 when GitHub user AlekseiNikiforovIBM started a thread titled “safe_sleep.sh rarely hangs indefinitely.” GitHub addressed the problem in August, but didn’t reveal that in the thread, which remained open until Monday.

The code uses 100 percent CPU all the time, and will run forever

That timing appears notable. Last week, Andrew Kelley, president and lead developer of the Zig Software Foundation, announced that the Zig project is moving to Codeberg, a non-profit git hosting service, because GitHub no longer demonstrates commitment to engineering excellence.

One piece of evidence he offered for that assessment was the “safe_sleep.sh rarely hangs indefinitely” thread.

“Most importantly, Actions has inexcusable bugs while being completely neglected,” Kelley wrote. “After the CEO of GitHub said to ‘embrace AI or get out’, it seems the lackeys at Microsoft took the hint, because GitHub Actions started ‘vibe-scheduling’ — choosing jobs to run seemingly at random. Combined with other bugs and inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked.”

Kelley’s gripe seems justified, as the bug discussed in the thread appears to have popped up following a code change in February 2022 that users flagged in prior bug reports.

The code change replaced instances of the posix “sleep” command with a “safe_sleep” script that failed to work as advertised. It was supposed to allow the GitHub Actions runner — the application that runs a job from a GitHub Actions workflow — to pause execution safely.

“The bug in this ‘safe sleep’ script is obvious from looking at it: if the process is not scheduled for the one-second interval in which the loop would return (due to $SECONDS having the correct value), then it simply spins forever,” wrote Zig core developer Matthew Lugg in a comment appended to the April bug thread.

“That can easily happen on a CI machine under extreme load. When this happens, it’s pretty bad: it completely breaks a runner until manual intervention. On Zig’s CI runner machines, we observed multiple of these processes which had been running for hundreds of hours, silently taking down two runner services for weeks.”
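
To make the failure mode concrete, here is a minimal Python sketch of the pattern Lugg describes (not the actual GitHub Actions shell script), showing why an equality check against the clock can spin forever at full CPU, and what the straightforward fix looks like.

# Minimal sketch (not the real GitHub Actions runner code) of the busy-wait
# pattern described above: the loop only exits if it happens to observe the
# exact target second, so a missed check means it spins forever at 100% CPU.
import time

def unsafe_sleep(seconds: int) -> None:
    target = int(time.monotonic()) + seconds
    while True:
        # Bug: equality instead of ">=". If this process isn't scheduled during
        # the one-second window where the values match, the condition is never
        # true again and the loop busy-spins indefinitely.
        if int(time.monotonic()) == target:
            return

def safe_sleep(seconds: float) -> None:
    # The fix: compare against a deadline (or just call time.sleep directly),
    # so waking up late still terminates the loop.
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        time.sleep(0.1)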

The fix was merged on August 20, 2025, from a separate issue opened back in February 2024. The related bug report from April 2025 remained open until Monday, December 1, 2025. A separate CPU usage bug remains unresolved.

Jeremy Howard, co-founder of Answer.AI and Fast.AI, said in a series of social media posts that users’ claims about GitHub Actions being in a poor state of repair appear to be justified.

“The bug,” he wrote, “was implemented in a way that, very obviously to nearly anyone at first glance, uses 100 percent CPU all the time, and will run forever unless the task happens to check the time during the correct second.”

I can’t see how such an extraordinary collection of outright face-palming events could be made

He added that the platform-independent fix for the CPU issue proposed last February lingered for a year without review and was closed by the GitHub bot in March 2025 before being revived and merged.

“Whilst one could say that this is just one isolated incident, I can’t see how such an extraordinary collection of outright face-palming events could be made in any reasonably functioning organization,” Howard concluded.

GitHub did not immediately respond to a request for comment.

While Kelley has gone on to apologize for the incendiary nature of his post, Zig is not the only software project publicly parting ways with GitHub.

Over the weekend, Rodrigo Arias Mallo, creator of the Dillo browser project, said he’s planning to move away from GitHub owing to concerns about over-reliance on JavaScript, GitHub’s ability to deny service, declining usability, inadequate moderation tools, and over-focusing on LLMs and generative AI, “which are destroying the open web (or what remains of it),” among other problems.

Codeberg, for its part, has doubled its supporting membership since January, going from more than 600 members to over 1,200 as of last week.

GitHub has not disclosed how many of its users pay for its services presently. The code hosting biz had “over 1.3 million paid GitHub Copilot subscribers, up 30 percent quarter-over-quarter,” Microsoft CEO Satya Nadella said on the company’s Q2 2024 earnings call.

In Q4 2024, when GitHub reported an annual revenue run rate of $2 billion, GitHub Copilot subscriptions accounted for about 40 percent of the company’s annual revenue growth.

Nadella offered a different figure during Microsoft’s Q3 2025 earnings call: “we now have over 15 million GitHub Copilot users, up over 4X year-over-year.” It’s not clear how many GitHub users pay for Copilot, or for runner scripts that burned CPU cycles when they should have been sleeping. ®

...

Read the original on www.theregister.com »

4 897 shares, 36 trendiness

Steam Machine today, Steam Phones tomorrow

The game itself is a Windows executable, right? At a core level, the Linux operating system does not even know how to load the program, and so, instead of invoking it through the OS, you invoke it through Proton, which is going to do the first step of setting up the address space, loading the segments of code into memory. The code coming from the app is all x86, and so Proton is a facilitator. It puts the existing code of the app in a format and a layout that the Linux OS can understand and then starts executing that code.

...

Read the original on www.theverge.com »

5 892 shares, 34 trendiness

Everyone in Seattle Hates AI — Jonathon Ready

I grabbed lunch with a former Microsoft coworker I’ve always admired—one of those engineers who can take any idea, even a mediocre one, and immediately find the gold in it. I wanted her take on Wanderfugl 🐦, the AI-powered map I’ve been building full-time. I expected encouragement. At worst, overly generous feedback because she knows what I’ve sacrificed.

Instead, she reacted to it with a level of negativity I’d never seen her direct at me before.

When I finally got her to explain what was wrong, none of it had anything to do with what I built. She talked about Copilot 365. And Microsoft AI. And every miserable AI tool she’s forced to use at work. My product barely featured. Her reaction wasn’t about me at all. It was about her entire environment.

Her PM had been laid off months earlier. The team asked why. Their director told them it was because “the PM org wasn’t effective enough at using Copilot 365.”

I nervously laughed. This director got up in a group meeting and said that someone lost their job over this?

After a pause I tried to share how much better I’ve been feeling—how AI tools helped me learn faster, how much they accelerated my work on Wanderfugl. I didn’t fully grok how tone deaf I was being though. She’s drowning in resentment.

I left the lunch deflated and weirdly guilty, like building an AI product made me part of the problem.

But then I realized this was bigger than one conversation. Every time I shared Wanderfugl with a Seattle engineer, I got the same reflexive, critical, negative response. This wasn’t true in Bali, Tokyo, Paris, or San Francisco—people were curious, engaged, wanted to understand what I was building. But in Seattle? Instant hostility the moment they heard “AI.”

The people at big tech in Seattle are not ok

When I joined Microsoft, there was still a sense of possibility. Satya was pushing “growth mindset” everywhere. Leaders talked about empowerment and breaking down silos. And even though there was always a gap between the slogans and reality, there was room to try things.

I leaned into it. I pushed into areas nobody wanted to touch, like Windows update compression, because it lived awkwardly across three teams. Somehow, a 40% improvement made it out alive. Leadership backed it. The people trying to kill it shrank back into their fiefdoms. It felt like the culture wanted change.

That world is gone.

When the layoff directive hit, every org braced for impact. Anything not strictly inside the org’s charter was axed. I went from shipping a major improvement in Windows 11 to having zero projects overnight. I quit shortly after. In hindsight, getting laid off with severance might’ve been better than watching the culture collapse in slow motion.

Then came the AI panic.

If you could classify your project as “AI,” you were safe and prestigious. If you couldn’t, you were nobody. Overnight, most engineers got rebranded as “not AI talent.” And then came the final insult: everyone was forced to use Microsoft’s AI tools whether they worked or not.

Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors’ tools. Sometimes worse than doing the work manually.

But you weren’t allowed to fix them—that was the AI org’s turf. You were supposed to use them, fail to see productivity gains, and keep quiet.

Meanwhile, AI teams became a protected class. Everyone else saw comp stagnate, stock refreshers evaporate, and performance reviews tank. And if your team failed to meet expectations? Clearly you weren’t “embracing AI.”

Bring up AI in a Seattle coffee shop now and people react like you’re advocating asbestos.

Amazon folks are slightly more insulated, but not by much. The old Seattle deal—Amazon treats you poorly but pays you more—only masks the rot.

This belief system—that AI is useless and that you’re not good enough to work on it anyway—hurts three groups:

1. The companies.

They’ve taught their best engineers that innovation isn’t their job.

2. The engineers.

They’re stuck in resentment and self-doubt while their careers stall.

3. Anyone trying to build anything new in Seattle.

Say “AI” and people treat you like a threat or an idiot.

And the loop feeds itself:

Engineers don’t try because they think they can’t.

Companies don’t empower them because they assume they shouldn’t.

Bad products reinforce the belief that AI is doomed.

The spiral locks in.

My former coworker—the composite of three people for anonymity—now believes she’s both unqualified for AI work and that AI isn’t worth doing anyway. She’s wrong on both counts, but the culture made sure she’d land there.

Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.

...

Read the original on jonready.com »

6 841 shares, 34 trendiness

Slop Evader — Tega Brain

A browser extension for avoiding AI slop.

Download it for Chrome or Firefox.

This is a search tool that will only return content created before ChatGPT’s first public release on November 30, 2022.

Since the public release of ChatGPT and other large language models, the internet is being increasingly polluted by AI-generated text, images and video. This browser extension uses the Google search API to only return content published before Nov 30th, 2022, so you can be sure that it was written or produced by the human hand.
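
As a rough illustration of how such a cutoff can be enforced, here is a hypothetical Python sketch using Google's Programmable Search JSON API with a date-range restriction ending just before November 30, 2022. The API key and search-engine ID are placeholders, and the actual extension may implement its filtering differently.

# Hypothetical sketch of a pre-ChatGPT-cutoff search; not the extension's
# actual code. API_KEY and ENGINE_ID are placeholders you would supply.
import requests

API_KEY = "your-google-api-key"      # placeholder
ENGINE_ID = "your-search-engine-id"  # placeholder

def pre_chatgpt_search(query: str) -> list[dict]:
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={
            "key": API_KEY,
            "cx": ENGINE_ID,
            "q": query,
            # Restrict results to pages dated before Nov 30, 2022.
            "sort": "date:r:19900101:20221129",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])

if __name__ == "__main__":
    for item in pre_chatgpt_search("terminal emulators"):
        print(item["title"], item["link"])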

...

Read the original on tegabrain.com »

7 828 shares, 23 trendiness

Accepting US car standards would risk European lives, warn cities and civil society

EU officials must revisit the hastily agreed trade deal with the US, in which the EU stated that it “intends to accept” lower US vehicle standards, say cities — including Paris, Brussels and Amsterdam, and more than 75 civil society organisations. In a letter to European lawmakers, the signatories warn that aligning European standards with laxer rules in the US would undermine the EU’s global leadership in road safety, public health, climate policy and competitiveness.

The deal agreed over summer states that “with respect to automobiles, the United States and the European Union intend to accept and provide mutual recognition to each other’s standards.” Yet, EU vehicle safety regulations have supported a 36% reduction in European road deaths since 2010. By contrast, road deaths in the US over the same period increased 30%, with pedestrian deaths up 80% and cyclist deaths up 50%.

Europe currently has mandatory requirements for life-saving technologies, such as pedestrian protection, automated emergency braking and lane-keeping assistance. Some of the most basic pedestrian protection requirements which have long been in place in the EU, such as deformation zones in the front of vehicles to reduce crash severity and the prohibition of sharp edges, have made cars like the Tesla Cybertruck illegal to sell in Europe.

“Europe built its reputation on pioneering robust vehicle standards. To accept lower US standards would undo decades of EU progress,” say the signatories. According to the letter, the consequences of such a move for European road safety “would be profound.”

The EU is set to apply limits to harmful pollution from brake and tyre wear from 2026 onwards, while at the same time the US is moving to weaken air pollution rules for vehicles. Accepting weaker US standards would increase European exposure to pollutants linked to asthma, cancer and numerous cardiovascular and neurological conditions, warn the signatories.

Major EU brands such as BMW, Mercedes and Stellantis already build large numbers of vehicles in US automotive plants to EU standards — particularly larger SUVs. However, if the lower US vehicle standards are accepted in Europe, these production lines could build vehicles to the lower US standards before shipping them to the EU. Overall, vehicle production would shift from the EU to the US. To accept lower US car standards would risk large-scale job losses in EU car plants and across Europe’s automotive supply chain.

The European Commission is already working to tighten Individual Vehicle Approval (IVA), which is being abused to put thousands of oversized US pick-up trucks on EU streets without complying with core EU safety, air pollution and climate standards. To now accept lower US vehicle standards across the board would open the floodgates to US pick-ups and large SUVs.

The signatories urge EU lawmakers to oppose the intention to accept lower US vehicle standards in the EU–US Joint Statement and affirm publicly that EU vehicle standards are non-negotiable.

2025 10 20 Civil society + city letter on risk of EU accepting lower US car standards (FINAL) [Download]

...

Read the original on etsc.eu »

8 804 shares, 30 trendiness

"Captain Gains" on Capitol Hill

We thank Sumit Agarwal, Ron Kaniel, Roni Michaely, Lyndon Moore, Antoinette Schoar, and seminar/conference participants at the Chinese University of Hong Kong, Columbia Business School, Deakin University, Macquarie University, Peking University (HSBC and Guanghua), Shanghai Lixin University of Accounting and Finance, Tsinghua University, University of Sydney, University of Technology Sydney, 2023 Australasian Finance and Banking Conference, 2023 Finance Down Under, and 2023 Five Star Workshop in Finance for their helpful comments. We thank Lei Chen, Jingru Pan, Yiyun Yan, Zitong Zeng, and Tianyue Zheng for their excellent research assistance. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.

...

Read the original on www.nber.org »

9 779 shares, 27 trendiness

Introducing Mistral 3

Today, we announce Mistral 3, the next generation of Mistral models. Mistral 3 includes three state-of-the-art small, dense models (14B, 8B, and 3B) and Mistral Large 3 — our most capable model to date — a sparse mixture-of-experts model trained with 41B active and 675B total parameters. All models are released under the Apache 2.0 license. Open-sourcing our models in a variety of compressed formats empowers the developer community and puts AI in people’s hands through distributed intelligence.

The Ministral models represent the best performance-to-cost ratio in their category. At the same time, Mistral Large 3 joins the ranks of frontier instruction-fine-tuned open-source models.

Mistral Large 3 is one of the best permissive open-weight models in the world, trained from scratch on 3,000 of NVIDIA’s H200 GPUs. Mistral Large 3 is Mistral’s first mixture-of-experts model since the seminal Mixtral series, and represents a substantial step forward in pretraining at Mistral. After post-training, the model achieves parity with the best instruction-tuned open-weight models on the market on general prompts, while also demonstrating image understanding and best-in-class performance on multilingual conversations (i.e., non-English/Chinese).

Mistral Large 3 debuts at #2 in the OSS non-reasoning models category (#6 amongst OSS models overall) on the LMArena leaderboard.

We release both the base and instruction fine-tuned versions of Mistral Large 3 under the Apache 2.0 license, providing a strong foundation for further customization across the enterprise and developer communities. A reasoning version is coming soon!

Working in conjunction with vLLM and Red Hat, we have made Mistral Large 3 very accessible to the open-source community. We’re releasing a checkpoint in NVFP4 format, built with llm-compressor. This optimized checkpoint lets you run Mistral Large 3 efficiently on Blackwell NVL72 systems and on a single 8×A100 or 8×H100 node using vLLM.
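
For orientation, here is a minimal, hypothetical sketch of loading a large open-weight checkpoint with vLLM's Python API on a single 8-GPU node; the model identifier below is a placeholder rather than the official Mistral Large 3 repository name, and a real deployment would use the quantized checkpoint and hardware described above.

# Hypothetical sketch of single-node serving with vLLM's offline API.
# The model ID is a placeholder, not the official Mistral Large 3 repo name.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-Large-3",  # placeholder identifier
    tensor_parallel_size=8,             # shard the model across the 8 GPUs in the node
)

params = SamplingParams(max_tokens=256, temperature=0.7)
outputs = llm.generate(
    ["Summarize what a sparse mixture-of-experts model is in two sentences."],
    params,
)
print(outputs[0].outputs[0].text)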

Delivering advanced open-source AI models requires broad optimization, achieved through a partnership with NVIDIA. All our new Mistral 3 models, from Large 3 to Ministral 3, were trained on NVIDIA Hopper GPUs to tap high-bandwidth HBM3e memory for frontier-scale workloads. NVIDIA’s extreme co-design approach brings hardware, software, and models together. NVIDIA engineers enabled inference support in TensorRT-LLM and SGLang for the complete Mistral 3 family, for efficient low-precision execution.

For Large 3’s sparse MoE architecture, NVIDIA integrated state-of-the-art Blackwell attention and MoE kernels, added support for prefill/decode disaggregated serving, and collaborated with Mistral on speculative decoding, enabling developers to efficiently serve long-context, high-throughput workloads on GB200 NVL72 and beyond. On the edge, NVIDIA delivers optimized deployments of the Ministral models on DGX Spark, RTX PCs and laptops, and Jetson devices, giving developers a consistent, high-performance path to run these open models from data center to robot.

We are very thankful for the collaboration and want to thank vLLM, Red Hat, and NVIDIA in particular.

For edge and local use cases, we release the Ministral 3 series, available in three model sizes: 3B, 8B, and 14B parameters. Furthermore, for each model size, we release base, instruct, and reasoning variants to the community, each with image understanding capabilities, all under the Apache 2.0 license. When married with the models’ native multimodal and multilingual capabilities, the Ministral 3 family offers a model for all enterprise or developer needs.

Furthermore, Ministral 3 achieves the best cost-to-performance ratio of any OSS model. In real-world use cases, both the number of generated tokens and model size matter equally. The Ministral instruct models match or exceed the performance of comparable models while often producing an order of magnitude fewer tokens.

For settings where accuracy is the only concern, the Ministral reasoning variants can think longer to produce state-of-the-art accuracy amongst their weight class - for instance, 85% on AIME 25 with our 14B variant.

Mistral 3 is available today on Mistral AI Studio, Amazon Bedrock, Azure Foundry, Hugging Face (Large 3 & Ministral), Modal, IBM WatsonX, OpenRouter, Fireworks, Unsloth AI, and Together AI. In addition, it is coming soon to NVIDIA NIM and AWS SageMaker.

For organizations seeking tailored AI solutions, Mistral AI offers custom model training services to fine-tune or fully adapt our models to your specific needs. Whether optimizing for domain-specific tasks, enhancing performance on proprietary datasets, or deploying models in unique environments, our team collaborates with you to build AI systems that align with your goals. For enterprise-grade deployments, custom training ensures your AI solution delivers maximum impact securely, efficiently, and at scale.

The future of AI is open. Mistral 3 redefines what’s possible with a family of models built for frontier intelligence, multimodal flexibility, and unmatched customization. Whether you’re deploying edge-optimized solutions with Ministral 3 or pushing the boundaries of reasoning with Mistral Large 3, this release puts state-of-the-art AI directly into your hands.

Frontier performance, open access: Achieve closed-source-level results with the transparency and control of open-source models.

Multimodal and multilingual: Build applications that understand text, images, and complex logic across 40+ native languages.

Scalable efficiency: From 3B to 675B parameters, choose the model that fits your needs, from edge devices to enterprise workflows.

Agentic and adaptable: Deploy for coding, creative collaboration, document analysis, or tool-use workflows with precision.

We believe that the future of AI should be built on transparency, accessibility, and collective progress. With this release, we invite the world to explore, build, and innovate with us, unlocking new possibilities in reasoning, efficiency, and real-world applications.

...

Read the original on mistral.ai »

10 765 shares, 28 trendiness

How I Reverse Engineered a Billion-Dollar Legal AI Tool and Found 100k+ Confidential Files

Update: This post received a large amount of attention on Hacker News — see the discussion thread.

Initial Contact: Upon discovering this vulnerability on October 27, 2025, I immediately reached out to Filevine’s security team via email.

November 4, 2025: Filevine’s security team thanked me for the writeup and confirmed they would review the vulnerability and fix it quickly.

November 20, 2025: I followed up to confirm the patch was in place from my end, and informed them of my intention to write a technical blog post.

November 21, 2025: Filevine confirmed the issue was resolved and thanked me for responsibly reporting it.

The Filevine team was responsive, professional, and took the findings seriously throughout the disclosure process. They acknowledged the severity, worked to remediate the issues, allowed responsible disclosure, and maintained clear communication. Following conversations I’ve had with the Filevine team, it is clear that this incident is only related to a single law firm; no other Filevine clients were impacted — this was a non-production instance and there was no internal Filevine data exposed. Filevine was appreciative of my efforts to find and alert them to this issue. This is another great example of how organizations should handle security disclosures.

AI legal-tech companies are exploding in value, and Filevine, now valued at over a billion dollars, is one of the fastest-growing platforms in the space. Law firms feed tools like this enormous amounts of highly confidential information.

Because I’d recently been working with Yale Law School on a related project, I decided to take a closer look at how Filevine handles data security. What I discovered should concern every legal professional using AI systems today.

When I first navigated to the site to see how it worked, it seemed that I needed to be part of a law firm to actually play around with the tooling, or request an official demo. However, I know that companies often have a demo environment that is open, so I used a technique called subdomain enumeration (which I had first heard about in Gal Nagli’s article last year) to see if there was a demo environment. I found something much more interesting instead.
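
For readers who haven't seen the technique before, here is a minimal Python sketch of subdomain enumeration: try a wordlist of candidate names and keep whatever resolves in DNS. The wordlist and domain are illustrative, not the ones used in this research, and real tooling typically uses much larger lists plus certificate-transparency logs.

# Minimal illustrative sketch of subdomain enumeration: resolve candidate
# names and keep the ones that exist. Wordlist and domain are placeholders.
import socket

CANDIDATES = ["demo", "staging", "sandbox", "dev", "test"]

def enumerate_subdomains(domain: str) -> list[str]:
    found = []
    for name in CANDIDATES:
        host = f"{name}.{domain}"
        try:
            socket.gethostbyname(host)  # raises socket.gaierror if it doesn't resolve
            found.append(host)
        except socket.gaierror:
            pass
    return found

if __name__ == "__main__":
    print(enumerate_subdomains("example.com"))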

I saw a subdomain called margolis.filevine.com. When I navigated to that site, I was greeted with a loading page that never resolved:

I wanted to see what was actually loading, so I opened Chrome’s developer tools, but saw no Fetch/XHR requests (the requests you often expect to see if a page is loading data). Then, I decided to dig through some of the Javascript files to see if I could figure out what was supposed to be happening. I saw a snippet in a JS file like POST await fetch(${BOX_SERVICE}/recommend). This piqued my interest — recommend what? And what is the BOX_SERVICE? That variable was not defined in the JS file the fetch would be called from, but (after looking through minified code, which SUCKS to do) I found it in another one: dxxxxxx9.execute-api.us-west-2.amazonaws.com/prod. Now I had a new endpoint to test; I just had to figure out the correct payload structure for it. After looking at more minified JS to determine the correct structure for this endpoint, I was able to construct a working payload to /prod/recommend:

(the name could be anything, of course). No authorization tokens needed, and I was greeted with the response:

At first I didn’t entirely understand the impact of what I saw. No matter the name of the project I passed in, I was recommended the same boxFolders and couldn’t seem to access any files. Then, not realizing I had stumbled upon something massive, I turned my attention to the boxToken in the response.

After reading some documentation on the Box API, I realized this was a live, maximum-access, fully scoped admin token to the current, entire Box filesystem (like an internal shared Google Drive) of this law firm. This includes all confidential files, logs, user information, etc. Once I was able to prove this had an impact (by searching for “confidential” and getting nearly 100k results back), I immediately stopped testing and responsibly disclosed this to Filevine. They responded quickly and professionally and remediated this issue.
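
As an illustration of that impact check, a leaked Box token can be probed against Box's standard search endpoint; the snippet below is a hypothetical Python sketch with a placeholder token, not the exact request used in the research.

# Hypothetical sketch: ask Box's search API how many items match a query,
# scoped to whatever the token can see. The token value is a placeholder.
import requests

def count_matches(box_token: str, query: str = "confidential") -> int:
    resp = requests.get(
        "https://api.box.com/2.0/search",
        params={"query": query},
        headers={"Authorization": f"Bearer {box_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["total_count"]  # nearly 100k in the case described above

# Example usage (placeholder token):
# print(count_matches("LEAKED_BOX_TOKEN"))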

If someone had malicious intent, they would have been able to extract every single file used by Margolis lawyers — countless records protected by HIPAA and other legal standards, internal memos/payrolls, literally millions of the most sensitive documents this law firm has in its possession. Documents protected by court orders! This could have been a real nightmare for both the law firm and the clients whose data would have been exposed.

To companies who feel pressure to rush into the AI craze in their industry — be careful! Always ensure the companies you are giving your most sensitive information to actually secure that data.

Note: After publishing this article, I was contacted by someone from the law firm Margolis PLLC asking me to confirm that the affected law firm was not theirs. I can confirm it was not.

...

Read the original on alexschapiro.com »

10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN, please leave feedback and share

Visit pancik.com for more.