10 interesting stories served every morning and every evening.

Wikipedia File Explorer — Browse Wikipedia on a Windows XP desktop

explorer.samismith.com


Bill to block publishers from killing online games advances in California

arstechnica.com

In a formal statement of support for the bill sent to the California legislature, SKG wrote that "there is no other medium in which a product can be marketed and sold to a consumer and then ripped away without notice… As live service games rise in popularity for game developers and gamers alike, end-of-life procedures are essential tools to ensure prolonged access to the games consumers pay to enjoy."

The Entertainment Software Association, which helps represent the interests of major game publishers, publicly told the California Assembly last month that the bill misrepresents how modern game distribution actually works. "Consumers receive a license to access and use a game, not an unrestricted ownership interest in the underlying work," the ESA wrote. "The eventual shutdown of outdated or obsolete games is a natural feature of modern software," the group added, especially when that software requires online infrastructure maintenance.

The ESA also said the bill would impose unreasonable expectations on publishers regarding licensing rights for music or IP rights, which are often negotiated on a time-limited basis. "A legal requirement to keep games playable indefinitely could place publishers in an impossible position, forcing them to renegotiate licenses indefinitely or alter games in ways that may not be legally or technically feasible," they wrote.

Last month, the Protect Our Games Act also re­ceived pos­i­tive votes from the California Assembly’s Privacy and Consumer Protection and Judiciary com­mit­tees. But the bill still faces sig­nif­i­cant hur­dles in get­ting ma­jor­ity pas­sage in the full California Assembly and the California Senate be­fore be­ing sent to California Governor Gavin Newsom for sig­na­ture.

Still, the cur­rent leg­isla­tive progress in California has to be heart­en­ing for the Stop Killing Games move­ment, which has seen its mo­men­tum in the UK stall a bit af­ter a UK Parliament de­bate on game preser­va­tion last November.

U.S. DOJ demands Apple and Google unmask over 100,000 users of popular car-tinkering app in emissions crackdown

macdailynews.com

The U.S. Department of Justice is seek­ing per­sonal data on po­ten­tially hun­dreds of thou­sands of dri­vers who down­loaded EZ Lynk’s Auto Agent app, es­ca­lat­ing a years-long le­gal bat­tle over ve­hi­cle emis­sions con­trols. Subpoenas is­sued to Apple, Google, Amazon, and Walmart re­quest names, ad­dresses, phone num­bers, and pur­chase his­to­ries tied to the app and its ac­com­pa­ny­ing hard­ware.

Background on the Case

The DOJ first sued EZ Lynk in 2021, accusing the Cayman Islands-based company of violating the Clean Air Act by marketing and selling "defeat devices." These tools allegedly allow users to bypass factory emissions controls on diesel vehicles, primarily through the EZ Lynk Auto Agent app paired with an onboard diagnostic (OBD) hardware dongle.

EZ Lynk strongly de­nies the al­le­ga­tions, em­pha­siz­ing that its prod­ucts serve le­git­i­mate pur­poses: mon­i­tor­ing ve­hi­cle per­for­mance, ap­ply­ing soft­ware up­dates, and en­abling le­git­i­mate mod­i­fi­ca­tions and di­ag­nos­tics. The com­pany ar­gues that any emis­sions-re­lated use is not its pri­mary pur­pose and falls un­der user re­spon­si­bil­ity.

Scope of the Subpoenas

According to a joint court fil­ing ear­lier this month, the DOJ sub­poe­naed Apple and Google in March and April 2026 for down­load and ac­count data on any­one who in­stalled the Auto Agent app. Additional re­quests went to Amazon and Walmart for buyer in­for­ma­tion on the phys­i­cal EZ Lynk hard­ware. Estimates sug­gest the to­tal could ex­ceed 100,000 users, Gizmodo re­ports.

The gov­ern­ment says it needs this in­for­ma­tion to iden­tify and in­ter­view wit­nesses who can tes­tify about how the tools were ac­tu­ally used. It has al­ready sub­mit­ted fo­rum posts and so­cial me­dia ev­i­dence show­ing some users em­ploy­ing the sys­tem to dis­able emis­sions con­trols.

Privacy Concerns and Pushback

EZ Lynk's lawyers call the requests "overreach," arguing they go far beyond what's necessary for the case and raise serious Fourth Amendment issues. "Investigating this claim does not require identifying each person who has used the product," they wrote. Apple and Google are reportedly preparing to challenge the subpoenas.

Privacy ad­vo­cates echo these con­cerns. The Electronic Frontier Foundation (EFF) and Electronic Privacy Information Center (EPIC) have crit­i­cized the broad de­mand for per­son­ally iden­ti­fi­able in­for­ma­tion, not­ing that most users never read terms of ser­vice and may face un­in­tended le­gal ex­po­sure sim­ply for down­load­ing a tool mar­keted for car di­ag­nos­tics and tun­ing.

Car enthusiasts and right-to-repair advocates view the case as part of a broader tension: drivers' desire to modify their vehicles versus federal environmental regulations. As one expert noted, "People want to modify their cars and always will."

What Happens Next

The case has al­ready sur­vived an at­tempt by EZ Lynk to in­voke Section 230 im­mu­nity (typically used to shield tech plat­forms from li­a­bil­ity for user ac­tions). A judge re­jected that de­fense in 2025, al­low­ing the lit­i­ga­tion to con­tinue.

This episode highlights growing government interest in app store data to pursue enforcement actions. Similar but smaller-scale requests have occurred before, such as a 2019 demand for data on users of a gun-scope app. The current scale (potentially 10 times larger) makes it particularly notable.

Apple, Google, and the other com­pa­nies have not pub­licly com­mented. The DOJ also de­clined to elab­o­rate be­yond its court fil­ings. The out­come of any chal­lenges to the sub­poe­nas could set im­por­tant prece­dents for dig­i­tal pri­vacy in reg­u­la­tory en­force­ment cases. For car own­ers us­ing tun­ing tools, the mes­sage is clear: gov­ern­ments are in­creas­ingly will­ing to trace app down­loads straight back to in­di­vid­ual users.

MacDailyNews Take: The DOJ is over­reach­ing as this would sweep up peo­ple who sim­ply used the app to read their ve­hi­cle’s trou­ble codes or for other mun­dane rea­sons.


all of rust codebase: This codebase fails even the most basic miri checks, allows for UB in safe rust

github.com

error: Undefined Behavior: constructing invalid value of type &[u8]: encountered a dangling reference (0x20933[noalloc] has no provenance)
  --> src/main.rs:97:18
   |
97 |     unsafe { core::slice::from_raw_parts(ptr as *const u8, self.len()) }
   |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Undefined Behavior occurred here
   |
   = help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
   = help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
   = note: stack backtrace:
      0: PathString::slice at src/main.rs:97:18: 97:75
      1: main at src/main.rs:130:22: 130:34

code:

fn main() {
    let test = Box::new(*b"Hello World");
    let init = PathString::init(&*test);
    drop(test);

    println!("{:?}", init.slice());
}

Please consider not vibe coding Rust, as AIs are not good at writing Rust. Also, hire a real Rust dev.

A 0-click exploit chain for the Pixel 10: When a Door Closes, a Window Opens

projectzero.google

We re­cently pub­lished an ex­ploit chain for the Google Pixel 9 that demon­strated it was pos­si­ble to go from a zero-click con­text to root on Android in just two ex­ploits. The Dolby 0-click vul­ner­a­bil­ity ex­isted across all of Android, un­til it was patched in January 2026. While we had an ex­ploit chain for the Pixel 9, we wanted to see if it was pos­si­ble to write a sim­i­lar ex­ploit chain for Pixel 10.

Updating the Dolby Exploit

Altering our exploit for CVE-2025-54957 was fairly straightforward. The majority of needed changes involved updating offsets calculated for the specific version of the library we targeted on the Pixel 9 to similar offsets in the library for the Pixel 10. The only challenge (outside of wishing we'd better documented which syncframes contained offsets) was that the Pixel 10 uses RET PAC in place of -fstack-protector, which meant that __stack_chk_fail wasn't available to be overwritten by code. After a bit of trial and error, we used dap_cpdp_init, initialization code that can be overwritten without causing functional problems, as it is called once when the decoder is initialized and never again.

The up­dated Dolby UDC ex­ploit is avail­able here. This ex­ploit will only work on un­patched de­vices (SPL December 2025 or ear­lier).

Removal of BigWave, Addition of VPU

Porting the lo­cal priv­i­lege es­ca­la­tion link of the chain to Pixel 10 was not fea­si­ble as the BigWave dri­ver does not ship on this de­vice. However, a new dri­ver is vis­i­ble in the me­di­a­codec SELinux con­text at /dev/vpu. This dri­ver is used for in­ter­act­ing with the Chips&Media Wave677DV sil­i­con on the Tensor G5 chip meant for ac­cel­er­at­ing video de­cod­ing. Based on the com­ments within the open-source C files, this dri­ver is de­vel­oped and main­tained by the same set of de­vel­op­ers who built the BigWave dri­ver. Working in col­lab­o­ra­tion with Jann Horn, we spent 2 hours au­dit­ing this VPU dri­ver and dis­cov­ered an ex­cep­tional vul­ner­a­bil­ity.

Unlike the up­stream Linux dri­ver for WAVE521C (which is an older Chips&Media chip), the Pixel dri­ver for WAVE677DV does not in­te­grate with V4L2 (the Video for Linux API); in­stead, it di­rectly ex­poses the chip’s hard­ware in­ter­face to user­space, in­clud­ing let­ting user­space map the chip’s MMIO reg­is­ter in­ter­face. The dri­ver mainly es­tab­lishes de­vice mem­ory map­pings, does power man­age­ment, and al­lows user­space to wait for in­ter­rupts from the chip.

The Holy Grail of Kernel Vulnerabilities

This bug in par­tic­u­lar caught our at­ten­tion as ex­cep­tion­ally sim­ple to ex­ploit:

static int vpu_mmap(struct file *fp, struct vm_area_struct *vm)
{
	unsigned long pfn;
	struct vpu_core *core = container_of(fp->f_inode->i_cdev, struct vpu_core, cdev);

	vm_flags_set(vm, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
	/* This is a CSRs mapping, use pgprot_device */
	vm->vm_page_prot = pgprot_device(vm->vm_page_prot);
	pfn = core->paddr >> PAGE_SHIFT;

	return remap_pfn_range(vm, vm->vm_start, pfn, vm->vm_end - vm->vm_start, vm->vm_page_prot) ? -EAGAIN : 0;
}

This mmap handler is intended to map the MMIO register region of the VPU hardware into the userland virtual address space, a region contained within a certain physical memory address range. In doing so, it makes a call to remap_pfn_range based purely on the size of the VMA, not bounded at all by the size of this register region. This means that, by specifying a size larger than the register region in an mmap syscall, the caller can map as much physical memory as they want into userland, starting at the physical address of the VPU register region. The entirety of the kernel image (including the .text and .data regions) is located at a higher physical address than the VPU register region, and can therefore be accessed and modified by userspace with this bug.

At this point, one can simply overwrite any kernel function to gain kernel code execution, or indeed construct any primitive one might desire. This is rendered even easier by the fact that the kernel is always at the same physical address on Pixel, so the offset between the VPU memory region and the kernel is always a known value. Thus it is not even necessary to scan for the kernel in the mapped physical memory: you simply know exactly where it is relative to the address returned by mmap, presuming you make the VMA length large enough.
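To make the fixed-offset observation concrete, here is a small sketch of the arithmetic. All addresses below are invented placeholders, not real Pixel 10 values; only the shape of the computation matters.

```python
# Hypothetical addresses for illustration only, not real Pixel 10 values.
VPU_REG_PHYS = 0x1A00_0000  # assumed physical base of the VPU MMIO window
KERNEL_PHYS = 0x8000_0000   # assumed fixed physical load address of the kernel

def kernel_addr_in_mapping(mmap_base: int) -> int:
    """The oversized mapping starts at the VPU register region's physical
    base, so the kernel image appears at a constant, precomputable offset
    from whatever userspace address mmap() returned."""
    return mmap_base + (KERNEL_PHYS - VPU_REG_PHYS)
```

Because both physical addresses are constants on the device, no scanning is needed: the exploit adds one fixed delta to the pointer mmap handed back.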

Achieving ar­bi­trary read-write on the ker­nel with this vul­ner­a­bil­ity re­quired 5 lines of code and writ­ing a full ex­ploit for this is­sue re­quired less than a day of ef­fort.

Patch Process

I reported this bug on November 24, 2025, and Android VRP rated the issue High severity. This is an improvement, given that the BigWave bug we used for privilege escalation on the Pixel 9 (which had identical security impact) was initially rated as Moderate severity. This represents a meaningful and positive change in posture regarding how these types of bugs are triaged and patched. The vulnerability was patched 71 days after its initial report, in the February Pixel security bulletin. This is notably fast: it is the first time that an Android driver bug I reported was patched within 90 days of the vendor first learning about the vulnerability.

Conclusion

There are both pos­i­tives and neg­a­tives to take from this re­search. A key goal of Project Zero is to drive sys­temic im­prove­ments that go be­yond in­di­vid­ual bug fixes, in­flu­enc­ing bet­ter de­vel­op­ment processes and more re­silient code­bases that lead to im­prove­ments in se­cu­rity for end-users. The han­dling of this VPU vul­ner­a­bil­ity demon­strates clear progress in Android’s triage pipeline, as this bug had an ini­tial re­me­di­a­tion in a much shorter pe­riod of time than the pre­vi­ous BigWave is­sues. Android’s ef­fort to en­sure that se­ri­ous vul­ner­a­bil­i­ties are patched ef­fi­ciently will help pro­tect many Android de­vices.

At the same time, this case un­der­scores the on­go­ing need for more ex­haus­tively ro­bust and se­cu­rity-aware code in Android dri­vers. When I re­ported the bugs in BigWave, I hoped to spur its de­vel­op­ers to eval­u­ate their other dri­vers for ob­vi­ous se­cu­rity is­sues, but 5 months later we nev­er­the­less found a se­ri­ous and ex­tremely shal­low vul­ner­a­bil­ity in their VPU dri­ver that was in­stantly no­tice­able with even a cur­sory au­dit of the code­base. Strengthening dri­ver se­cu­rity re­mains a cru­cial pri­or­ity for en­sur­ing a safe Android ecosys­tem, and we con­tinue to strongly en­cour­age ven­dors to im­prove soft­ware de­vel­op­ment prac­tices in a proac­tive ef­fort to pre­vent these sorts of vul­ner­a­bil­i­ties from ever reach­ing end-users.

Security reports often uncover complex issues missed by product teams, but it is important that software vendors take the necessary steps to ensure software products, especially security-critical ones, launch in a reasonably vulnerability-free state, and that software teams take a proactive approach to software security, code auditing, and vulnerability patching.

The Wonders of AI: We Are Retiring Our Bug Bounty Program

turso.tech

For al­most a year now, Turso has had a pro­gram that pays $1,000 for any bug that can be demon­strated to lead to data cor­rup­tion. Today, with im­mense sad­ness, we are re­tir­ing this pro­gram.

The rea­son is sim­ple: every­body is be­ing in­un­dated by the slop ma­chine. We are not unique in this re­gard. However, a pro­gram that of­fers money in ex­change for a spe­cific class of bugs is just too juicy of a tar­get for the slop mak­ers. For days, our main­tain­ers have done lit­tle else other than close slop PRs claim­ing to have found bugs that led to data cor­rup­tion in Turso. In a time where many OSS pro­jects are clos­ing their doors to con­tri­bu­tions, we want to make every ef­fort pos­si­ble to keep the doors of Turso open. Being an Open Contribution pro­ject is part of our DNA. It is how Turso was born. But un­for­tu­nately, the fi­nan­cial re­ward is mak­ing this close to im­pos­si­ble and it has to go.

We are shar­ing this pub­licly and loudly be­cause we be­lieve that we will all have to find new ways to es­tab­lish good gov­er­nance in this new era, and should learn from each other. This is our con­tri­bu­tion to that con­ver­sa­tion.

Why did we start this program

We started this pro­gram be­cause we are rewrit­ing SQLite, known to be one of the most re­li­able pieces of soft­ware in the world. The com­mu­nity ex­pects a high bar from a pro­ject with such am­bi­tion, and we in­vest tremen­dous ef­fort into mak­ing sure that we can match or even sur­pass SQLite’s leg­endary re­li­a­bil­ity. Turso ships with a na­tive Deterministic Simulator, a col­lec­tion of fuzzers, an or­a­cle-based dif­fer­en­tial test­ing en­gine against SQLite, a con­cur­rency sim­u­la­tor, and on top of that, we have ex­ten­sive runs on Antithesis.

We take our testing discipline seriously, and we wanted to communicate our confidence. On the other hand, all of that testing infrastructure is, at the end of the day, just software, and it is not perfect. You can write all the fuzzers and simulators in the world, but they will only catch bugs in the combinations that are actually generated. For example, if your fuzzer never generates indexes, you will by definition not find any bugs related to indexes, regardless of how well you stress the rest of the system. As a real example, we found bugs that escaped our simulator because they would only appear in databases that were larger than 1GB; because we injected faults aggressively into every run, databases would never get big enough to trigger them.
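The blind spot can be illustrated with a toy grammar-based fuzzer (hypothetical code, not Turso's actual simulator): since CREATE INDEX is simply absent from the grammar below, no run of this fuzzer, however long, can exercise index-related code paths.

```python
import random

# Toy SQL statement generator. This is an illustrative sketch, not Turso's
# simulator. Note that "CREATE INDEX" never appears in the grammar, so the
# fuzzer is structurally incapable of finding index-related bugs.
STATEMENTS = [
    lambda: f"INSERT INTO t VALUES ({random.randint(0, 99)})",
    lambda: "SELECT * FROM t",
    lambda: f"DELETE FROM t WHERE a = {random.randint(0, 99)}",
]

def generate(n, seed=0):
    random.seed(seed)
    return [random.choice(STATEMENTS)() for _ in range(n)]

# Ten thousand runs, zero coverage of indexes:
assert not any("INDEX" in stmt for stmt in generate(10_000))
```

Improving the generator (adding index statements to the grammar) is exactly the "kill an entire class of bugs at once" step described below.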

The main advantage of automated testing is that when a bug escapes your validation, you can improve the test generators and an entire class of bugs goes away. So we envisioned this program as a great way to do both things: it helped us establish the confidence we had in the methodology, but at the same time, if someone did find areas that our simulators didn't cover well, we'd be more than happy to pay for it! We started the program with a $1,000 reward for bugs that would lead to data corruption, to run until we could release a 1.0 version of Turso. Our plan was that once we reached 1.0, we would progressively increase both the size of the reward, to substantial levels, and the scope of the issues we'd reward people for.

And before the "singularity", this worked great

We were delighted by this program. We paid a total of 5 individuals, and all of the people who were awarded were incredibly special. It is worth highlighting the work of Alperen, who was actually one of the core contributors to our simulator itself (so it is little surprise that he knew of a couple of places where it could be improved). Then Mikael, who used LLMs in very creative ways to identify places the simulator was not reaching (we later hired Mikael), and Pavan Nambi, who paired the simulator with formal methods and ended up not only finding bugs in Turso, but in fact more than TEN bugs in SQLite itself through his methodology.

But after the "singularity", we got drowned

In our ex­pe­ri­ence, any­body who was skilled enough to find crit­i­cal is­sues was some­one we wanted around in our com­mu­nity. We did have the oc­ca­sional per­son that tried to sub­mit bad PRs in the hopes of col­lect­ing the bounty, but it was a rare oc­cur­rence: the re­quire­ment that the sim­u­la­tor had to be ex­tended to demon­strate the bug (just point­ing out the bug was not enough) helped keep the bar high, and most im­por­tantly, there just aren’t that many bugs.

But then an army of slop was released overnight. The reward became too juicy: just point an LLM at Turso and try to find a bug. And as you all know, if you instruct an LLM to go find a bug and collect a bounty, it will produce some output. Whether or not that output makes sense is a completely different story. I want to share some examples with you.

Some examples

In this PR, the author just injected garbage bytes manually into the database header, and then argued that this corrupted the database (duh!). After our maintainer pointed out that, well, no shit Sherlock, the author (or his bot) kept arguing with the usual LLM-induced walls of text for quite a while.

You might find that unbelievable, but it is actually less incredible than modifying the source code to manually add an out-of-bounds array access to corrupt the database.

In this other PR which is full of ta­bles, green check marks and em dashes, the au­thor claims to have found a crit­i­cal vul­ner­a­bil­ity that al­lows for the ex­e­cu­tion of ar­bi­trary SQL state­ments. Imagine that? A SQL data­base that al­lows the ex­e­cu­tion of SQL state­ments. How can we ever re­cover from this.

This other masterpiece enables concurrent writes on Turso, one of the features that set us apart from SQLite, and then demonstrates that SQLite cannot open the file until the journal mode is set back to WAL, disabling concurrent writes (which is how the system is designed to operate).

For this other one, I wish I could write a nice de­scrip­tion, but I have no idea what they are try­ing to do. As our main­tainer Mikael (the same who won the award in the past!) pointed out, it is very clear that the per­son just saw the prize an­nounce­ment, started sali­vat­ing, and pointed the slop ma­chine at us.

The last attempt

In our last attempt to establish some order, we designed and implemented a vouching system: if we suspect that a submission is coming from a bot, we just auto-close it. This worked okay for some time, until the bots started opening issues questioning the closing of their PRs and requesting a manual inspection. They all look the same:

We also had many instances in which we would close a PR, and the same or a very similar PR would be opened by a different user moments later.

It's sad, but here we are

The main problem, of course, is that it costs the slopmaker perhaps a minute to generate their submission, but it costs us hours to read, understand, and engage with it. And submissions can be generated at a semi-infinite pace. It is possible to set up automated systems to gatekeep this, but with a non-negligible dollar value attached, the incentive is just too great for the AIs to keep arguing, reopening the same PR, and so on.

We value our Open Source com­mu­nity of con­trib­u­tors a lot, and we will con­tinue to strengthen our com­mu­nity. But at this point, we just don’t be­lieve that a fi­nan­cial in­cen­tive of any kind works well with an open sys­tem. We have to ei­ther close the sys­tem, or get rid of the in­cen­tive. For now, we are choos­ing the lat­ter.


‘No Way To Prevent This,’ Says Only Package Manager Where This Regularly Happens

kevinpatel.xyz

SAN FRANCISCO, CA - In the wake of a dev­as­tat­ing sup­ply chain at­tack in the npm reg­istry that left mil­lions of en­ter­prise ap­pli­ca­tions com­pro­mised and bil­lions of user records ex­posed, de­vel­op­ers across the JavaScript ecosys­tem ex­pressed deep sor­row to­day, lament­ing that such a cri­sis was com­pletely un­avoid­able.

"It's a shame, but what can you do? This is just the price of building modern web apps," said Senior Frontend Engineer Mark Vance, echoing the sentiments of a community that completely relies on a 40-level-deep nested tree of unvetted packages maintained by pseudonymous strangers to capitalize a single string. "There's absolutely no way to foresee or prevent someone from taking over a long-abandoned utility package and injecting a crypto-miner into every production build in the world. It's just an act of nature."

At press time, res­i­dents of the Node.js ecosys­tem stood uni­fied in their be­lief that the ma­li­cious re­mote-code ex­e­cu­tion was a com­pletely un­pre­dictable tragedy, of­fer­ing their thoughts and prayers to the DevOps teams cur­rently scram­bling to ro­tate their cor­po­rate AWS keys.

Interestingly, de­vel­op­ers in ecosys­tems like Go, Rust, and those uti­liz­ing na­tive Web APIs—where ro­bust stan­dard li­braries dras­ti­cally re­duce re­liance on third-party code and strict cryp­to­graphic ver­i­fi­ca­tion is built into the core tool­chain—re­ported zero in­stances of a col­lege dropout’s week­end pro­ject wip­ing out global lo­gis­tics in­fra­struc­ture to­day.

"It's devastating, but we have to accept that we live in a world where bad actors exist. There are no registry policies or build-sandbox guardrails we could possibly enforce to stop it," said an npm spokesperson, standing in front of an open-source registry that happily executes arbitrary installation scripts on local machines by default. "Our hearts go out to the victims. Until the next inevitable breach tomorrow morning, we must simply remain resilient."

GitHub - Andyyyy64/whichllm: Find the local LLM that actually runs and performs best on your hardware. Ranked by real, recency-aware benchmarks, not parameter count. One command, run it instantly.

github.com

Find the best lo­cal LLM that ac­tu­ally runs on your hard­ware.

Auto-detects your GPU/CPU/RAM and ranks the top mod­els from HuggingFace that fit your sys­tem.

Japanese version here

See it

$ whichllm --gpu "RTX 4090"

#1 Qwen/Qwen3.6-27B    27.8B  Q5_K_M  score 92.8   27 t/s
#2 Qwen/Qwen3-32B      32.0B  Q4_K_M  score 83.0   31 t/s
#3 Qwen/Qwen3-30B-A3B  30.0B  Q5_K_M  score 82.7  102 t/s

The 32B model fits your card fine; whichllm still ranks the 27B #1, because it scores higher on real benchmarks and is a newer generation. A size-only "what fits?" tool would hand you the bigger one. That gap is the whole point of whichllm. (Note #3: a MoE model at 102 t/s; speed is ranked on active params, quality on total.)
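That difference is easy to state in code. Here is a hypothetical comparator sketch, reusing the scores from the sample output above (the selection logic is invented for illustration, not whichllm's actual implementation):

```python
# Scores taken from the sample ranking above; comparator is illustrative only.
models = [
    {"id": "Qwen/Qwen3.6-27B", "params_b": 27.8, "score": 92.8},
    {"id": "Qwen/Qwen3-32B",   "params_b": 32.0, "score": 83.0},
]

by_size = max(models, key=lambda m: m["params_b"])   # a "what fits?" tool's pick
by_score = max(models, key=lambda m: m["score"])     # an evidence-based pick

assert by_size["id"] == "Qwen/Qwen3-32B"
assert by_score["id"] == "Qwen/Qwen3.6-27B"
```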

What can I run?

Real top picks (snapshot 2026-05; your results track live HuggingFace data, this is not a static list):

Run whichllm --gpu "<your card>" to simulate any of these before you buy.

Useful? A GitHub star helps other people find it, and I'd genuinely like to know what it picked for your rig: drop it in Issues.

Star History

Why which­llm?

Fitting a model into your VRAM is the easy part. The hard part is know­ing which of the mod­els that fit is ac­tu­ally the best — and that is what which­llm is built to get right.

Evidence-based ranking, not a size heuristic — The top pick is chosen from merged real benchmarks (LiveBench, Artificial Analysis, Aider, multimodal/vision, Chatbot Arena ELO, Open LLM Leaderboard), never just "the biggest model that happens to fit."

Recency-aware — Stale leader­boards are de­moted along each mod­el’s lin­eage, so a 2024 model can’t out­rank a cur­rent-gen­er­a­tion one on an out­dated score. The bench­mark snap­shot date is printed un­der every rank­ing, so a stale rec­om­men­da­tion is self-ev­i­dent in­stead of silently trusted.

Evidence-graded and guarded — Every score is tagged di­rect / vari­ant / base / in­ter­po­lated / self-re­ported and dis­counted by con­fi­dence. Fabricated up­loader claims and cross-fam­ily in­her­i­tance (a small fork bor­row­ing its much larger base’s score) are ac­tively re­jected.

Architecture-aware es­ti­mates — VRAM = weights + GQA KV cache + ac­ti­va­tion + over­head; speed is band­width-bound with per-quant ef­fi­ciency, per-back­end fac­tors, MoE ac­tive-vs-to­tal split, and uni­fied-mem­ory vs dis­crete-PCIe par­tial-of­fload mod­el­ing.

One command, scriptable — whichllm prints the answer; add --json | jq for pipelines. No TUI, no keybindings to memorize.

Live data — Models fetched di­rectly from the HuggingFace API, with cu­rated frozen fall­backs for of­fline or rate-lim­ited use.
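The VRAM accounting above (weights + GQA KV cache + activation + overhead) can be sketched roughly as follows. This is a simplified illustration with invented names and constants; whichllm's real model additionally covers activation memory, per-quant and per-backend factors, MoE active/total splits, and partial offload.

```python
def estimate_vram_gb(params_b, bytes_per_weight, n_layers, n_kv_heads,
                     head_dim, ctx_len, overhead_gb=1.0):
    """Rough VRAM estimate in GB: weights + GQA KV cache + fixed overhead.
    Illustrative sketch only, not whichllm's actual accounting."""
    weights = params_b * 1e9 * bytes_per_weight
    # KV cache: 2 tensors (K and V) x layers x KV heads x head dim x context,
    # stored in fp16 (2 bytes). With GQA, n_kv_heads is smaller than the
    # number of attention heads, which is what shrinks the cache.
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * ctx_len * 2
    return (weights + kv_cache) / 1e9 + overhead_gb

# E.g. a 7B model at ~0.6 bytes/weight (roughly a 4-bit quant), 32 layers,
# 8 KV heads, head dim 128, 8k context:
print(round(estimate_vram_gb(7, 0.6, 32, 8, 128, 8192), 1))
```

The point of modeling the KV cache separately is visible immediately: at long context lengths it can rival the weights themselves, so a weights-only estimate would overstate what fits.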

Features

Auto-detect hard­ware — NVIDIA, AMD, Apple Silicon, CPU-only

Smart rank­ing — Scores mod­els by VRAM fit, speed, and bench­mark qual­ity

One-command chat — which­llm run down­loads and starts a chat ses­sion in­stantly

Code snip­pets — which­llm snip­pet prints ready-to-run Python for any model

Live data — Fetches mod­els di­rectly from HuggingFace (cached for per­for­mance)

Benchmark-aware — Integrates real eval scores with con­fi­dence-based damp­en­ing

Task pro­files — Filter by gen­eral, cod­ing, vi­sion, or math use cases

GPU simulation — Test with any GPU: whichllm --gpu "RTX 4090"

Hardware planning — Reverse lookup: whichllm plan "llama 3 70b"

JSON output — Pipe-friendly: whichllm --json

Run & Snippet

Try any model with a sin­gle com­mand. No man­ual in­stalls needed — which­llm cre­ates an iso­lated en­vi­ron­ment via uv, in­stalls de­pen­den­cies, down­loads the model, and starts an in­ter­ac­tive chat.

# Chat with a model (auto-picks the best GGUF variant)
whichllm run "qwen 2.5 1.5b gguf"

# Auto-pick the best model for your hardware and chat
whichllm run

# CPU-only mode
whichllm run "phi 3 mini gguf" --cpu-only

Works with all model for­mats:

GGUF — via llama-cpp-python (lightweight, fast)

AWQ / GPTQ — via trans­form­ers + au­toawq / auto-gptq

FP16 / BF16 — via trans­form­ers

Get a copy-paste Python snip­pet in­stead:

whichllm snippet "qwen 7b"

from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Qwen/Qwen2.5-7B-Instruct-GGUF",
    filename="qwen2.5-7b-instruct-q4_k_m.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,
    verbose=False,
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
)
print(output["choices"][0]["message"]["content"])

Install

uv (recommended)

uvx whichllm

To in­stall per­ma­nently:

uv tool install whichllm

Homebrew

brew install andyyyy64/whichllm/whichllm

pip

pip install whichllm

Development

git clone https://github.com/Andyyyy64/whichllm.git
cd whichllm
uv sync --dev
uv run whichllm
uv run pytest

Usage

# Auto-detect hardware and show best models
whichllm

# Simulate a GPU (e.g. planning a purchase)
whichllm --gpu "RTX 4090"
whichllm --gpu "RTX 5090"
# Specify variant
whichllm --gpu "RTX 5060 16"

# CPU-only mode
whichllm --cpu-only

# More results / filters
whichllm --top 20
whichllm --quant Q4_K_M
whichllm --min-speed 30
whichllm --evidence base    # allow id/base-model matches
whichllm --evidence strict  # id-exact only (same as --direct)
whichllm --direct

# JSON output
whichllm --json

# Force refresh (ignore cache)
whichllm --refresh

# Show hardware info only
whichllm hardware

# Plan: what GPU do I need for a specific model?
whichllm plan "llama 3 70b"
whichllm plan "Qwen2.5-72B" --quant Q8_0
whichllm plan "mistral 7b" --context-length 32768

# Run: download and chat with a model instantly
whichllm run "qwen 2.5 1.5b gguf"
whichllm run   # auto-pick best for your hardware

# Snippet: print ready-to-run Python code
whichllm snippet "qwen 7b"
whichllm snippet "llama 3 8b gguf" --quant Q5_K_M

Integrations

Ollama

Find the best model and run it directly:

# Pick the top model and run it with Ollama
whichllm --top 1 --json | jq -r '.models[0].model_id' | xargs ollama run

# Find the best coding model
whichllm --profile coding --top 1 --json | jq -r '.models[0].model_id' | xargs ollama run

Shell alias

Add to your .bashrc / .zshrc:

alias bestllm='whichllm --top 1 --json | jq -r ".models[0].model_id"'
# Usage: ollama run $(bestllm)

Scoring

Each model gets a 0–100 score. Benchmark quality and size form the core; evidence confidence and runtime fit then scale it, with speed, source trust, and popularity as adjustments.
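As a rough illustration, the composition described above can be sketched in Python. The weights, the bonus scale, and the function name below are invented for illustration; they are assumptions, not whichllm's actual formula:

```python
def score_model(bench_quality, size_fit, evidence_conf, runtime_fit,
                speed_bonus=0.0, trust_bonus=0.0, popularity_bonus=0.0):
    """Illustrative scoring sketch. All inputs are in [0, 1]:
    benchmark quality and size form the core, evidence confidence
    and runtime fit scale it, and speed, source trust, and
    popularity are small additive adjustments."""
    core = 70.0 * bench_quality + 30.0 * size_fit       # 0-100 core score
    scaled = core * evidence_conf * runtime_fit         # discount weak evidence / poor fit
    adjusted = scaled + 10.0 * (speed_bonus + trust_bonus + popularity_bonus)
    return max(0.0, min(100.0, adjusted))               # clamp to the 0-100 range
```

The key property this shape captures is that weak evidence or a poor runtime fit multiplies the whole score down, while the adjustments can only nudge it.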

Score markers:

~ (yellow) — No direct benchmark; score inherited/interpolated from the model family

? (yellow) — No benchmark data available

How it works

Data pipeline

Model fetching — Fetches popular models from the HuggingFace API:

Text-generation (downloads + recently updated)

GGUF-filtered (separate query for coverage)

Vision models (image-text-to-text) when --profile vision or any
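The fetch passes listed above can be sketched against the public HuggingFace Hub model-listing endpoint. The endpoint itself is real, but the exact query parameters whichllm sends (including using a "library" filter for GGUF) are assumptions for this sketch:

```python
import json
import urllib.parse
import urllib.request

API = "https://huggingface.co/api/models"

def build_query(pipeline_tag, sort="downloads", limit=20, library=None):
    """Build a HuggingFace Hub model-listing URL."""
    params = {"pipeline_tag": pipeline_tag, "sort": sort,
              "direction": "-1", "limit": str(limit)}
    if library:
        params["library"] = library   # e.g. "gguf" (assumed filter)
    return API + "?" + urllib.parse.urlencode(params)

def fetch_models(url):
    """Fetch and decode one page of model metadata."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

# Three queries mirroring the pipeline above
urls = [
    build_query("text-generation"),                   # popular text-generation models
    build_query("text-generation", library="gguf"),   # GGUF-filtered pass
    build_query("image-text-to-text"),                # vision models
]
```

Each URL returns a JSON list of model records (id, downloads, tags, and so on) that a tool like this can score locally.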

Benchmark sources — Current tier (LiveBench, Artificial Analysis Index, Aider) merged live when reachable, plus a curated multimodal / vision index; frozen tier (Open LLM Leaderboard v2, Chatbot Arena ELO). Tiers have separate caps and lineage-aware recency demotion so stale leaderboards stop over-rewarding older generations.
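A minimal sketch of how tiered merging with per-tier caps and recency demotion could work (the cap values, the demotion rate, and the averaging scheme are illustrative assumptions, not whichllm's actual numbers):

```python
def merge_tiers(current, frozen, current_cap=1.0, frozen_cap=0.7,
                demotion_per_gen=0.15):
    """Average per-source benchmark scores across two tiers.
    Each entry is (score in [0, 1], generations behind the current
    lineage). Frozen-tier sources get a lower cap, and every source
    is demoted the further its model generation lags."""
    def capped(entries, cap):
        out = []
        for score, gens_behind in entries:
            demoted = score * max(0.0, 1.0 - demotion_per_gen * gens_behind)
            out.append(min(cap, demoted))   # stale leaderboards can't dominate
        return out
    scores = capped(current, current_cap) + capped(frozen, frozen_cap)
    return sum(scores) / len(scores) if scores else None
```

Under this sketch, a perfect score on a frozen leaderboard for a two-generations-old model contributes far less than a mediocre score from a live source.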

Benchmark evidence — Five resolution levels, increasingly discounted:

direct — Exact model ID match

variant — Suffix-stripped or -Instruct variant

base_model — Base model from cardData

line_interp — Size-aware interpolation within model family

self_reported — Uploader-claimed eval (heavily discounted)

Inheritance is rejected when a model’s params diverge too far from its family’s dominant member, catching draft / MTP / abliterated forks that share a family_id with a much larger base.
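A minimal sketch of such a divergence check (the 2x ratio threshold and the function name are illustrative assumptions; the text above does not state whichllm's actual cutoff):

```python
def can_inherit(model_params, dominant_params, max_ratio=2.0):
    """Reject benchmark-score inheritance when a model's parameter
    count diverges too far from its family's dominant member.
    max_ratio is an assumed threshold for illustration."""
    if not model_params or not dominant_params:
        return False   # unknown sizes: never inherit
    ratio = max(model_params, dominant_params) / min(model_params, dominant_params)
    return ratio <= max_ratio
```

With a check like this, a 0.5B draft model sharing a family_id with a 32B base fails the ratio test and gets no inherited score, while a 7B fine-tune of an 8B dominant member still inherits.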

Announcing the Zulip Foundation

blog.zulip.com

Today marks a ma­jor tran­si­tion for the Zulip open-source pro­ject and for Kandra Labs, the com­pany be­hind it: I’m step­ping back from full-time Zulip lead­er­ship to join Anthropic, along­side three se­nior team mem­bers, and we’re do­nat­ing the com­pany to a newly cre­ated, in­de­pen­dent, non­profit Zulip Foundation. The new struc­ture pro­vides sta­bil­ity, a re­newed com­mit­ment to our val­ues, and op­por­tu­ni­ties for char­i­ta­ble fundrais­ing to sup­port our mis­sion. This blog post ex­plains these changes and why they set Zulip up for greater long-term suc­cess.

Zulip is a beloved or­ga­nized team chat prod­uct, used by thou­sands of com­pa­nies, open-source pro­jects, and re­search com­mu­ni­ties. Zulip is known for its unique topic-based thread­ing model, which makes it easy to have many con­ver­sa­tions in par­al­lel with­out chaos, in­ter­rup­tions, or stress. April’s Zulip 12.0 re­lease in­cluded al­most 5,500 com­mits con­tributed by 160 peo­ple from all around the world.

Zulip’s new own­er­ship and gov­er­nance struc­ture

The Zulip Foundation will be the for­mal stew­ard of the Zulip pro­ject, with a mis­sion of de­vel­op­ing the best pos­si­ble team chat ex­pe­ri­ence, with a par­tic­u­lar fo­cus on pub­lic-in­ter­est or­ga­ni­za­tions and com­mu­ni­ties.

Kandra Labs, the com­pany that has stew­arded Zulip for the last decade, will now be fully and in­de­pen­dently owned by the Zulip Foundation, with no other stock­hold­ers or debt oblig­a­tions. Kandra Labs will con­tinue host­ing, sup­port­ing, and im­prov­ing Zulip for use across all in­dus­tries, of­fer­ing an ex­cel­lent ex­pe­ri­ence for busi­ness cus­tomers. We’re com­mit­ted to be­ing a trust­wor­thy, trans­par­ent ven­dor for our cus­tomers, and an­tic­i­pate no ma­jor changes in how we con­duct busi­ness.

I’m ex­cited that this new struc­ture — sim­i­lar to gov­er­nance struc­tures for Mozilla, Signal, and Wikipedia — for­mal­izes our long­time com­mit­ment to Zulip’s sus­tain­abil­ity and in­de­pen­dence.

The foun­da­tion’s ini­tial board of di­rec­tors will be:

Tim Abbott, Zulip’s founder (me).

Greg Price, who has helped me lead Zulip in a co­founder-like role for the last 9 years.

Alya Abbott, Zulip’s prod­uct lead, who has also held a co­founder-like role for the last 5 years.

Josh Triplett, a leader in the Rust pro­gram­ming lan­guage, ex­pe­ri­enced in open source, and a ma­jor ad­vo­cate for Zulip.

We also have five in­cred­i­ble peo­ple signed on to share their ex­per­tise as mem­bers of an ad­vi­sory board:

Andrew Sutherland, math­e­mati­cian and se­nior re­searcher at the Massachusetts Institute of Technology, and President of the Number Theory Foundation. He is a lead­ing ad­vo­cate of Zulip for re­search col­lab­o­ra­tions, in­clud­ing the L-functions and Modular Forms Database.

Hazel Weakly, a for­mer Director of the Haskell Foundation Board, open source and com­mu­nity ad­vo­cate, and a Fellow of the Nivenly Foundation.

Jeremy Avigad, a Professor of Philosophy and Mathematical Sciences at Carnegie Mellon University and Director of the NSF Institute for Computer-Aided Reasoning in Mathematics. He is a found­ing mem­ber of the Lean Community or­ga­ni­za­tion, for which Zulip has hosted more than two mil­lion mes­sages to date.

Nick Bergson-Shilcock, the CEO and co­founder of the Recurse Center, a pro­gram­ming re­treat based in New York whose com­mu­nity of 3,000+ alums has run on Zulip since 2013.

Puneeth Chaganti, an OCaml de­vel­oper work­ing on core ecosys­tem tool­ing, and a men­tor for Zulip’s Google Summer of Code pro­gram since 2018.

I’m in­cred­i­bly grate­ful to every­one who has vol­un­teered to help launch the Zulip Foundation. We’re look­ing to re­cruit one ad­di­tional di­rec­tor, and to fill out a larger ad­vi­sory board. If you or some­one you know may be a good fit, please reach out to foun­da­tion-jobs@zulip.com to let us know!

If you’d like to fol­low along, please sign up for oc­ca­sional email up­dates from the Zulip Foundation.

Stability dur­ing the lead­er­ship tran­si­tion

Zulip’s op­er­a­tions will con­tinue with­out in­ter­rup­tion, in­clud­ing Zulip Cloud; the Mobile Push Notifications Service and sup­port con­tracts for self-hosted or­ga­ni­za­tions; our Google Summer of Code men­tor­ship pro­gram, with 11 par­tic­i­pants this sum­mer; and our spon­sor­ships for the thou­sands of open-source pro­jects and other pub­lic-in­ter­est or­ga­ni­za­tions that Zulip Cloud hosts free of charge.

Kim Vandiver, an ex­pe­ri­enced leader and op­er­a­tor, has joined Kandra Labs as Interim President to help en­sure a smooth tran­si­tion. This is not the first time Kim has raised her hand to help a val­ues-fo­cused or­ga­ni­za­tion in a time of change: at VaccinateCA, a rapidly evolv­ing COVID-era ef­fort to spread in­for­ma­tion about vac­cines, Kim jumped in to re­vamp a va­ri­ety of processes — first as a vol­un­teer, and then as the Director of Operations. I’m ex­tremely grate­ful to have her here to man­age op­er­a­tions and help run a global search for the best pos­si­ble lead­er­ship for Zulip go­ing for­ward.

Operationally, both Zulip Cloud and the self-hosted ex­pe­ri­ence are the most sta­ble that they have ever been. We’ve al­ways had a re­lent­less fo­cus on elim­i­nat­ing bugs and work­flow warts, and have made an es­pe­cially strong push on this in the past year. I ex­pect there will be a re­duc­tion in de­vel­op­ment ve­loc­ity over the next quar­ter as the or­ga­ni­za­tion adapts to the lead­er­ship change, but it will feel like a small blip when we look back.

A for­mal com­mit­ment to our val­ues, and a new av­enue for sus­tain­able fund­ing

There are two main rea­sons why I’m ex­cited about this change: it al­lows us to make a per­ma­nent, pub­lic com­mit­ment to the val­ues we’ve long op­er­ated by, and it of­fers a new av­enue for Zulip to raise funds with­out ced­ing con­trol.

Kandra Labs has al­ways been a mis­sion- and val­ues-fo­cused com­pany. We have a long-run­ning spon­sor­ship pro­gram, and have al­ways pri­or­i­tized fea­tures pri­mar­ily use­ful for com­mu­ni­ties along­side fea­tures for busi­ness users. Kandra Labs has been pub­lic about its val­ues for years — in­clud­ing our com­mit­ments to pro­tect­ing cus­tomer data pri­vacy, and to keep­ing our fo­cus on the prod­uct, not on what­ev­er’s com­mer­cially fash­ion­able. The Zulip Foundation for­mal­izes and makes per­ma­nent our val­ues be­yond my tenure as CEO.

It’s hard these days to feel con­fi­dent that a com­pany whose prod­uct you love won’t yield to com­mer­cial pres­sure and start sell­ing your data, putting in ads, or oth­er­wise vi­o­lat­ing your trust. It’s been a chal­lenge to con­vinc­ingly make the case that this won’t hap­pen to Zulip, es­pe­cially to folks who might not have time to in­ves­ti­gate deeply. The Zulip Foundation, which has the goal of serv­ing the pub­lic good, makes this so much eas­ier to com­mu­ni­cate clearly.

The new foun­da­tion also puts Zulip in a much stronger fundrais­ing po­si­tion. Over the years, I’ve been re­luc­tant to ac­cept ex­ter­nal fund­ing for Zulip, even from an­gel in­vestors I trust, be­cause fidu­ciary du­ties to those in­vestors could even­tu­ally gen­er­ate pres­sure for us to com­pro­mise our val­ues. As a re­sult, the com­pa­ny’s fund­ing has been dri­ven by how much I’m able to per­son­ally in­vest in Zulip above and be­yond its sub­scrip­tion rev­enue.

With the foun­da­tion in place, we’ll be able to ap­ply for grants we were pre­vi­ously in­el­i­gi­ble for, and re­ceive tax-de­ductible do­na­tions from in­di­vid­ual donors. The foun­da­tion can also run fundrais­ing cam­paigns that would not have felt ap­pro­pri­ate for an open-source pro­ject with a pri­vately owned com­pany be­hind it.

Why I’m step­ping back from full-time Zulip lead­er­ship

I’m step­ping back from Zulip to join Anthropic be­cause of its re­mark­able com­mit­ment to the re­spon­si­ble de­vel­op­ment of AI for the long-term ben­e­fit of hu­man­ity. Three ad­di­tional mem­bers of Zulip’s long­time lead­er­ship team are also join­ing me at Anthropic: Alya Abbott, Greg Price, and Alex Vandiver.

My ca­reer choices have al­ways been mo­ti­vated by a sense of re­spon­si­bil­ity to use my tal­ents for the pub­lic ben­e­fit. This mo­ti­va­tion is what led me to found Zulip and lead it for a decade with our un­usual val­ues-fo­cused ap­proach. I re­main com­mit­ted to Zulip and its mis­sion, and had imag­ined spend­ing the rest of my ca­reer work­ing on it. So what changed?

Over the last few months, I’ve been re­flect­ing deeply on the myr­iad ways in which AI is chang­ing the world, and how it might change the world in the fu­ture. And I came to the con­clu­sion that it’s vi­tally im­por­tant that we nav­i­gate this strange ado­les­cence of tech­nol­ogy well, and that I should con­tribute to this cause more di­rectly than I ever could as the CEO of Kandra Labs.

My non-ne­go­tiable re­quire­ment for mov­ing on from Zulip has al­ways been en­sur­ing that Zulip can con­tinue its mis­sion ef­fec­tively with­out me. I’m deeply grate­ful to be in a po­si­tion to do ex­actly that by cre­at­ing the non­profit Zulip Foundation.

Zulip’s team of pro­fes­sional main­tain­ers

All Kandra Labs team mem­bers who are not join­ing Anthropic will con­tinue work­ing on Zulip. These 12 amaz­ing peo­ple have an av­er­age of over 4 years of pro­fes­sional ex­pe­ri­ence work­ing on Zulip, and al­most 25,000 Zulip com­mits be­tween them. They have shipped ma­jor im­prove­ments end-to-end across every facet of the prod­uct, and I have full con­fi­dence in their abil­ity to move the pro­ject for­ward.

Ultimately, Zulip’s strength is its cul­ture and in­cred­i­bly dis­ci­plined de­vel­op­ment process. The team has demon­strated the abil­ity to op­er­ate and de­velop Zulip with­out me dur­ing the six months of my parental leave (spread across my three kids). I’ve never shared this so pub­licly be­fore, but in 2018 I de­vel­oped a chronic ill­ness that was ini­tially highly de­bil­i­tat­ing, and con­tin­ued to im­pact my work un­til last year. Yet our won­der­ful team and com­mu­nity made steady progress even through the worst of it.

Over the com­ing months, the team will be hir­ing to fill roles opened by the de­par­tures. If you or some­one you know may be in­ter­ested in a lead­er­ship or in­fra­struc­ture role, learn more and reach out!

I per­son­ally ex­pect to re­main in­volved with Zulip as a con­trib­u­tor, pro­vid­ing con­text, his­tory, re­views, and ad­vice as time per­mits.

Reach out!

While I’m ex­cited for Zulip’s fu­ture, I know folks will have lots of ques­tions about what it all means. Our team would love to an­swer them as trans­par­ently as we can. We in­vite every­one to join us for a live chat Q&A in the Zulip de­vel­op­ment com­mu­nity on Tuesday, May 19 at 4 PM UTC (9 AM US Pacific / 12 PM US Eastern / 9:30 PM IST).

If you have any ques­tions or con­cerns as a Zulip cus­tomer, please con­tact sup­port@zulip.com. As al­ways, all are wel­come to drop by the Zulip de­vel­op­ment com­mu­nity — the #general chan­nel is a great place to ask about this tran­si­tion.
