10 interesting stories served every morning and every evening.

Introducing talkie: a 13B vintage language model from 1930

talkie-lm.com

April 2026

This is a 24/7 live feed of Claude Sonnet 4.6 prompting talkie-1930-13b-it in order to explore its knowledge, capabilities, and inclinations. talkie’s outputs reflect the culture and values of the texts it was trained on, not the views of its authors.

Why vin­tage lan­guage mod­els?

Have you ever daydreamed about talking to someone from the past? What would you ask someone with no knowledge of the modern world? What would they ask you? While we don’t have time machines yet, we can simulate this experience by training, in Owain Evans’s phrase, “vintage” language models: LMs trained only on historical text.

These mod­els are fas­ci­nat­ing con­ver­sa­tion part­ners (watch Claude prompt talkie, our 13B 1930 LM, in the wid­get above). But we are also ex­cited by the pos­si­bil­ity that the care­ful study of the be­hav­iors and ca­pa­bil­i­ties of vin­tage LMs will ad­vance our un­der­stand­ing of AI in gen­eral.

Figure 1. In an early attempt to understand a vintage model’s anticipation of the future, we took nearly 5,000 historical event descriptions from the New York Times’s “On This Day” feature, calculated their surprisingness (measured as bits per byte of text) to our 13B model trained exclusively on pre-1931 text, and binned by decade.

For ex­am­ple, we can eval­u­ate LMs’ abil­ity to pre­dict the fu­ture. Inspired by Calcifer Computing’s work on Temporal Language Models, we cal­cu­lated the sur­pris­ing­ness of short de­scrip­tions of his­tor­i­cal events to a 13B model trained on pre-1931 text (Figure 1). We can see an in­crease af­ter the knowl­edge cut­off, par­tic­u­larly pro­nounced in the 1950s and 1960s, fol­lowed by a plateau. We will con­tinue to de­velop evals to mea­sure with greater con­fi­dence how fore­cast­ing per­for­mance im­proves with model size and de­cays at longer hori­zons. Training larger vin­tage lan­guage mod­els will al­low us to un­cover these scal­ing trends.
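The bits-per-byte metric from Figure 1 can be sketched as follows. This is an illustrative reconstruction, not the authors’ actual code: it assumes the model’s summed negative log-likelihood over a text (in nats, as a cross-entropy loss would report it) is already available, and normalizes by UTF-8 bytes so scores are comparable across tokenizers.

```python
import math

def bits_per_byte(total_nll_nats: float, text: str) -> float:
    """Convert a model's summed negative log-likelihood over `text`
    (in nats) into bits per byte of the UTF-8 encoding."""
    n_bytes = len(text.encode("utf-8"))
    return total_nll_nats / (math.log(2) * n_bytes)

def bin_by_decade(events):
    """Average (year, bits_per_byte) pairs by decade, as in Figure 1."""
    bins = {}
    for year, bpb in events:
        bins.setdefault(year // 10 * 10, []).append(bpb)
    return {decade: sum(v) / len(v) for decade, v in sorted(bins.items())}
```

A post-cutoff event whose description the model finds surprising will score more bits per byte than a pre-cutoff one, which is the gap the figure visualizes.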

Figure 2. Patents and a pa­per pub­lished af­ter talkie’s knowl­edge cut­off. Left to right: he­li­copter patent (Sikorsky, 1935), Turing ma­chines pa­per (Turing, 1936), xe­rog­ra­phy patent (Carlson, 1942).

Similarly, we can test LMs’ abil­i­ties to come up with new ideas by see­ing if they can ar­rive at in­ven­tions or sci­en­tific dis­cov­er­ies we know would arise af­ter their knowl­edge cut­offs, such as those pic­tured in Figure 2. As Demis Hassabis has asked, could a model trained up to 1911 in­de­pen­dently dis­cover General Relativity, as Einstein did in 1915?

Figure 3. We gave a Python pro­gram­ming test (HumanEval) to a se­ries of pairs of vin­tage mod­els (trained on pre-1931 text) and mod­ern mod­els (trained on the web), which have the same ar­chi­tec­ture. Left: This chart shows what per­cent­age of prob­lems each model would get right at least once, given 100 chances and ran­domly cho­sen Python func­tions as ex­am­ples to learn from in-con­text. Right: An ex­am­ple of a suc­cess­ful so­lu­tion to a Python cod­ing prob­lem pro­duced by a vin­tage lan­guage model. The model had ac­cess to sev­eral other in-con­text ex­am­ples to learn from.

Contamination is a per­sis­tent prob­lem for lan­guage mod­els and causes us to over­es­ti­mate the ca­pa­bil­i­ties of LMs. Vintage LMs are con­t­a­m­i­na­tion-free by con­struc­tion, en­abling unique gen­er­al­iza­tion ex­per­i­ments, like ex­am­in­ing whether a model with no knowl­edge of dig­i­tal com­put­ers can learn to code in a mod­ern pro­gram­ming lan­guage. Figure 3 (left-hand side) shows an early ex­am­ple of such a test, mea­sur­ing how well mod­els trained on pre-1931 text can, when given a few demon­stra­tion ex­am­ples of Python pro­grams, write new cor­rect pro­grams. While vin­tage mod­els dra­mat­i­cally un­der­per­form mod­els trained on web data (which in­cludes code), we’ve found that they are slowly but steadily im­prov­ing at this task with scale.
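The “percentage of problems each model would get right at least once, given 100 chances” in Figure 3 is the pass@k metric; the standard unbiased estimator introduced with HumanEval computes it from n generated samples of which c pass:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: the probability that at least one of k
    samples drawn from n generations is correct, given c of n passed.
    Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With n = k = 100 this reduces to asking whether any of the 100 samples passed, matching the left-hand chart’s definition.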

There is still a long way to go be­fore this ca­pa­bil­ity is no­table, how­ever. All cor­rect so­lu­tions gen­er­ated by the vin­tage mod­els are sim­ple one-line pro­grams (such as adding two in­puts), or small mod­i­fi­ca­tions to in-con­text ex­am­ple pro­grams. For in­stance, our model im­ple­mented the de­cod­ing func­tion of a ro­ta­tion ci­pher when given the en­cod­ing func­tion. Although the so­lu­tion (Figure 3, right-hand side) is only a sin­gle char­ac­ter edit (swapping an ad­di­tion for a sub­trac­tion), this suc­cess sug­gests an un­der­stand­ing of in­verse func­tions. We hope LMs with early knowl­edge cut­offs help the re­search com­mu­nity un­der­stand how well LMs can gen­er­al­ize be­yond their pre-train­ing data.
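The rotation-cipher pair described above has the shape sketched below. This is our illustration of the task, not the model’s verbatim output: given the encoder as an in-context example, the correct decoder differs by a single character, the addition becoming a subtraction.

```python
def encode_shift(s: str) -> str:
    """Encode a lowercase string by rotating each letter forward by 5."""
    return "".join(chr(((ord(ch) - ord("a") + 5) % 26) + ord("a")) for ch in s)

def decode_shift(s: str) -> str:
    """The inverse: identical to the encoder except `+ 5` becomes `- 5`."""
    return "".join(chr(((ord(ch) - ord("a") - 5) % 26) + ord("a")) for ch in s)
```

Producing the second function from the first requires recognizing that undoing a forward rotation means rotating backward, which is why the authors read it as evidence of an understanding of inverse functions.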

Vintage lan­guage mod­els could also teach us about the im­pact of data di­ver­sity in AI de­vel­op­ment. While mod­ern mod­els vary in dis­po­si­tion, ca­pa­bil­ity, and be­hav­ior, they are all closely re­lated to one an­other by hav­ing been trained, whether di­rectly or in­di­rectly (via dis­til­la­tion and syn­thetic data), on the web. How does this shape and con­strain what they are? How much of what we think we know about LMs is about hu­man lan­guage and cul­ture in gen­eral, or about this one dataset—the web—in par­tic­u­lar? Training on dif­fer­ent sources may lead to very dif­fer­ent kinds of mod­els be­ing cre­ated. Studying the ways in which they are sim­i­lar and dif­fer­ent could im­prove our un­der­stand­ing of lan­guage model per­sonas, be­hav­iors, and dis­po­si­tions.

Introducing talkie

We have been ex­cited to see a pro­lif­er­a­tion of vin­tage LM pro­jects, in­clud­ing Ranke-4B, Mr. Chatterbox, and Machina Mirabilis.

Alongside these efforts, we introduce talkie-1930-13b-base, a 13B language model trained on 260B tokens of historical pre-1931 English text. Additionally, we present a post-trained checkpoint turning our base model into a conversation partner without relying on modern chat transcripts or instruction-tuning data.

talkie is the largest vin­tage lan­guage model we are aware of, and we plan to con­tinue scal­ing sig­nif­i­cantly. As a next step, we are train­ing a GPT-3-level model, which we hope to re­lease this sum­mer. A pre­lim­i­nary es­ti­mate also sug­gests we can grow our cor­pus to well over a tril­lion to­kens of his­tor­i­cal text, which should be suf­fi­cient to cre­ate a GPT-3.5 level model—sim­i­lar in ca­pa­bil­ity to the orig­i­nal ChatGPT.

Benchmarking an LM from 1930

Figure 4. Evaluation ac­cu­racy vs. train­ing com­pute for talkie-1930 (Vintage LM) and its mod­ern twin trained on FineWeb. The vin­tage model un­der­per­forms the mod­ern model on knowl­edge evals. Filtering out ques­tions anachro­nis­tic from the per­spec­tive of 1930 roughly halves the per­for­mance gap be­tween the vin­tage and mod­ern mod­els.

To contextualize talkie’s capabilities, we built a “modern twin” that is architecturally identical but trained on modern web data (FineWeb) instead of pre-1931 text. On average, talkie underperforms its modern counterpart on standard LM evaluations, even after correcting for question anachronism, despite being trained with the same number of FLOPs (see Figure 4). But we have been encouraged by its similar performance on core language understanding and numeracy tasks.

We sus­pect a com­bi­na­tion of dif­fer­ences in data qual­ity (poor op­ti­cal char­ac­ter recog­ni­tion) and cor­pus sub­ject mat­ter dis­tri­b­u­tion ex­plains why talkie-1930 un­der­per­forms on some bench­marks. To max­i­mize the com­pute ef­fi­ciency of fu­ture vin­tage lan­guage model train­ing, we are de­vel­op­ing a vin­tage op­ti­cal char­ac­ter recog­ni­tion (OCR) sys­tem to im­prove the qual­ity of tran­scrip­tion of his­tor­i­cal text.

Collecting vin­tage data

Piggybacking off the in­valu­able work of or­ga­ni­za­tions like the Institutional Data Initiative and the Internet Archive and ef­forts like Common Pile, we have col­lected hun­dreds of bil­lions of pre-1931 English-language to­kens. These in­clude books, news­pa­pers, pe­ri­od­i­cals, sci­en­tific jour­nals, patents, and case law. We chose the end of 1930 as the cut­off date be­cause that is when works en­ter the pub­lic do­main in the United States. For this ver­sion of the model, we also lim­ited our­selves to pri­mar­ily English-language texts, be­cause val­i­dat­ing the data pipeline re­quires deep fa­mil­iar­ity with source doc­u­ments, and we are na­tive English speak­ers. But mul­ti­lin­gual cor­pus ex­pan­sion is a high pri­or­ity, both to in­crease the size of the cor­pus and the di­ver­sity of per­spec­tives it rep­re­sents.

Training vin­tage lan­guage mod­els

Developing vin­tage lan­guage mod­els pre­sents unique chal­lenges. Here, we dis­cuss some of them in brief. We will fol­low up in greater de­tail in the com­ing months as we con­tinue our re­search.

Temporal leak­age

Who was pres­i­dent of the United States in 1936, and what were the most sig­nif­i­cant pieces of leg­is­la­tion he signed?

Franklin D. Roosevelt was pres­i­dent of the United States from 1933 to 1937. The most im­por­tant leg­is­la­tion signed by him was the National Recovery Act of 1933; the Agricultural Adjustment Act of 1935; and the Emergency Banking Act of 1935 (amended in 1936).

Figure 5. talkie-1930-13b’s knowledge of the Roosevelt presidency and New Deal is an example of imperfect filtering of the pre-training corpus.

The most im­por­tant ob­jec­tive when train­ing vin­tage lan­guage mod­els is that no data leaks into the train­ing cor­pus from af­ter the in­tended knowl­edge cut­off (in our case, December 31st, 1930). There are sev­eral ways this can hap­pen, such as in­clud­ing mod­ern doc­u­ments with faulty date meta­data, or old doc­u­ments with post hoc anachro­nis­tic in­ser­tions like ed­i­to­r­ial in­tro­duc­tions or foot­notes.

For talkie-1930, we developed a document-level n-gram-based anachronism classifier and used it to filter the pre-training corpus. However, this was not perfect. An earlier 7B version of talkie clearly knew about the Roosevelt presidency and New Deal legislation (Figure 5). talkie-1930-13b is additionally aware of some details related to World War II and the immediate postwar order (the United Nations and the division of Germany). For future versions of the model, we are developing new techniques for leakage detection and filtering using more advanced classifiers.
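A document-level n-gram anachronism filter can be sketched minimally as below. The blocklist entries and zero-tolerance threshold are hypothetical stand-ins; the authors’ actual classifier is presumably more sophisticated than exact phrase matching.

```python
def anachronism_score(text: str, blocklist: set, n: int = 2) -> float:
    """Fraction of word n-grams in `text` matching a blocklist of
    phrases known to postdate the cutoff (e.g. "united nations")."""
    words = text.lower().split()
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 0.0
    return sum(g in blocklist for g in grams) / len(grams)

def keep_document(text: str, blocklist: set, threshold: float = 0.0) -> bool:
    """Drop any document whose anachronism score exceeds the threshold."""
    return anachronism_score(text, blocklist) <= threshold
```

The failure mode seen in Figure 5 corresponds to post-1930 phrases that never made it onto the blocklist, which is why the authors are moving toward learned classifiers.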

Data qual­ity

Figure 6. OCR errors reduce language model learning efficiency. Left: LMs trained on pre-1931 texts transcribed using conventional OCR systems show only 30% of the learning efficiency of a model trained on human-transcribed versions of the same texts. Regex cleaning of the OCR’d text recovers some performance. Right: Example of a messy machine transcription of The Wonderful Wizard of Oz (Baum, 1900).

Data qual­ity is an im­por­tant is­sue for all ma­chine learn­ing ex­per­i­ments. It is a spe­cial chal­lenge when train­ing vin­tage lan­guage mod­els. Because there was no dig­i­tal pub­lish­ing in 1930, all text in our dataset had to be tran­scribed from a phys­i­cal source, which in­tro­duces a form of noise not seen in na­tively dig­i­tal text. While OCR was an early suc­cess story of ma­chine learn­ing and com­puter vi­sion, the clas­sic OCR sys­tems of­ten used to tran­scribe his­tor­i­cal doc­u­ments strug­gle on all but the sim­plest lay­outs and clean­est scans. Modern VLM-based sys­tems have higher ac­cu­racy, but we have found they are prone to hal­lu­ci­nate mod­ern facts into our cor­pus, poi­son­ing the ex­er­cise.

In controlled experiments, we have found that an LM trained on pre-1931 texts transcribed using conventional OCR systems achieves, for a given amount of compute, only 30% of the performance of a model trained on human-transcribed versions of the same texts (see Figure 6). Simple regex cleaning brings that number up to 70%, still a large discrepancy. We aim to shrink the remaining gap in performance by retranscribing the talkie corpus using our vintage OCR system.
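As a flavor of what “simple regex cleaning” can mean in practice, a few cheap repairs go a long way on raw OCR output. The specific patterns below are our guesses at typical fixes, not the authors’ pipeline:

```python
import re

def clean_ocr(text: str) -> str:
    """Apply a few cheap regex repairs to raw OCR output."""
    # Re-join words hyphenated across line breaks: "won-\nderful" -> "wonderful".
    text = re.sub(r"(\w)-\n(\w)", r"\1\2", text)
    # Collapse runs of spaces/tabs left behind by multi-column layouts.
    text = re.sub(r"[ \t]{2,}", " ", text)
    # Strip long runs of junk characters produced by rules and page borders.
    text = re.sub(r"[|~_]{3,}", "", text)
    return text.strip()
```

Rules like these cannot recover characters the OCR engine misread outright, which is why regex cleaning closes only part of the gap to human transcription.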

Vintage post-train­ing

Figure 7. Examples of his­tor­i­cal ref­er­ence texts with reg­u­lar struc­ture used for post-train­ing. Left to right: eti­quette man­ual (Beadle, 1859), prac­ti­cal knowl­edge book (Henley, 1914), par­lor guide (Sandison, c. 1895), let­ter-writ­ing man­ual (Chambers, 1900).

The lack of ready-made post-train­ing data is an­other sig­nif­i­cant chal­lenge. Fine-tuning our base model on off-the-shelf in­struc­tion-re­sponse pairs would bake in anachro­nis­tic knowl­edge, style, and ex­pec­ta­tions of what a chat as­sis­tant ought to be like. Rather than at­tempt­ing to fil­ter out these bi­ases, we built a post-train­ing pipeline from scratch.

First, we gen­er­ated in­struc­tion-re­sponse pairs from his­tor­i­cal texts with reg­u­lar struc­ture, such as eti­quette man­u­als, let­ter-writ­ing man­u­als, cook­books, dic­tio­nar­ies, en­cy­clo­pe­dias, and po­etry and fa­ble col­lec­tions (see Figure 7), and fine-tuned our base model on them us­ing a sim­ple chat for­mat.
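Concretely, an entry from a reference book with regular structure can be turned into a training example mechanically. The chat template and the etiquette-manual text below are hypothetical stand-ins for whatever format and sources the authors actually use:

```python
def entry_to_example(question: str, answer: str) -> str:
    """Render one reference-book entry (heading plus passage) as a
    single training document in a minimal chat format."""
    return f"[USER]\n{question}\n[TALKIE]\n{answer}\n"

# A heading from a 19th-century etiquette manual becomes the instruction,
# and the passage beneath it becomes the response.
example = entry_to_example(
    "How should a gentleman conduct himself when introduced to a lady?",
    "He should bow with quiet respect, and await her recognition.",
)
```

Because both sides of each pair come from period texts, the fine-tuned model learns the mechanics of question-and-answer exchange without absorbing modern assistant style.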

Next, to im­prove in­struc­tion-fol­low­ing abil­i­ties, we gen­er­ated syn­thetic prompts cov­er­ing dif­fer­ent types of tasks, such as sum­ma­riz­ing doc­u­ments, re­spond­ing to di­rect in­for­ma­tion re­quests, and con­tin­u­ing multi-turn con­ver­sa­tions co­her­ently. We then ran on­line di­rect pref­er­ence op­ti­miza­tion on roll­outs gen­er­ated from these prompts, us­ing Claude Sonnet 4.6 as a judge. Over the course of train­ing, on a held-out eval set, the judge’s av­er­age in­struc­tion-fol­low­ing rat­ing of talkie’s re­sponses in­creased from 2.0 to 3.4 (on a five-point scale).

Finally, we did an­other round of su­per­vised fine-tun­ing, this time on re­jec­tion-sam­pled multi-turn syn­thetic chats be­tween Claude Opus 4.6 and talkie, to smooth out per­sis­tent rough edges in its con­ver­sa­tional abil­i­ties.

While we have tried to post-train talkie free from mod­ern in­flu­ence, re­in­force­ment learn­ing with AI feed­back in­evitably shapes talkie’s be­hav­ior anachro­nis­ti­cally. (The 7B ver­sion of talkie emerged from RL speak­ing in lis­ti­cles.) As we scale up, we hope to be able to use our vin­tage base mod­els them­selves as judges to en­able a fully boot­strapped era-ap­pro­pri­ate post-train­ing pipeline.

Scaling talkie

We plan to scale talkie rapidly in the com­ing months. This will en­tail:

Increasing the size of our English-language cor­pus, and ex­pand­ing it be­yond English.

Re-OCR’ing as much pre-1931 text as is feasible using our new OCR system.

Strengthening the leak­age de­tec­tion pipeline by de­vel­op­ing new anachro­nism clas­si­fi­ca­tion tech­niques.

Expanding and re­fin­ing the vin­tage post-train­ing pipeline in col­lab­o­ra­tion with his­to­ri­ans, in­clud­ing by de­vel­op­ing method­olo­gies for con­struct­ing ac­cu­rate his­tor­i­cal per­sonas.

Join us

We are ex­cited to col­lab­o­rate with re­searchers and in­sti­tu­tions to build the next gen­er­a­tion of vin­tage lan­guage mod­els. Please get in touch.

Are you a re­searcher or in­sti­tu­tion with his­tor­i­cal texts? We’d love to dis­cuss how we can help make them ac­ces­si­ble to re­searchers and read­ers, in­clud­ing by ap­ply­ing our OCR model.

Are you an in­di­vid­ual or in­sti­tu­tion in­ter­ested in sup­port­ing vin­tage lan­guage model de­vel­op­ment with fund­ing or com­pute? We can likely use ei­ther, or put you in touch with other teams work­ing in the space.

Are you an aca­d­e­mic in the hu­man­i­ties? We are ex­cited to dis­cuss how vin­tage lan­guage mod­els, and the data and in­fra­struc­ture used to train them, could be use­ful for your re­search.

Are you an AI re­searcher? We would love to sup­port and col­lab­o­rate on re­search on train­ing and study­ing vin­tage lan­guage mod­els.

Are you an artist or writer? We think vin­tage lan­guage mod­els could be fruit­ful tools to ex­per­i­ment with.

Content con­sid­er­a­tions

talkie re­flects the cul­ture and val­ues of the texts it was trained on. As such, it can pro­duce out­puts that will be of­fen­sive to users.

Acknowledgements

Thanks to Coefficient Giving and Anthropic for sup­port with fund­ing and com­pute.

For help­ful dis­cus­sions, we thank Pranav Anand, Benjamin Breen, Catherine Brobston, Collin Burns, Matteo Cargnelutti, Mackenzie Cooley, Brandon Duderstadt, Owain Evans, Chloë Farr, Ryan Greenblatt, Michael Hla, Mark Humphries, Sam Klein, Greg Leppert, Jack Lindsey, Christina Lu, Seoirse Murray, Jake Naviasky, Krishna Patel, Ethan Perez, Puria Radmard, Ludwig Schmidt, Buck Shlegeris, Benjamin Sturgeon, Daniel Tan, Ross Taylor, Cam Tice, Trip Venturella, Merlijn Wajer, and Tao Xu.

Citation

@article{levine2026talkie,
  title={Introducing talkie: a 13B vintage language model from 1930},
  author={Levine, Nick and Duvenaud, David and Radford, Alec},
  year={2026},
  month={April},
  url={https://talkie-lm.com/introducing-talkie}
}

China blocks Meta's $2 billion takeover of AI startup Manus

www.cnbc.com

China’s state plan­ner on Monday called for Meta to un­wind its $2 bil­lion ac­qui­si­tion of Manus, a Singaporean ar­ti­fi­cial in­tel­li­gence startup with Chinese roots.

The de­ci­sion to pro­hibit for­eign in­vest­ment in Manus was made in ac­cor­dance with laws and reg­u­la­tions, the National Development and Reform Commission said in a brief state­ment. It added that it has asked the par­ties in­volved to with­draw the ac­qui­si­tion trans­ac­tion.

Shares of Meta closed 0.53% higher on Monday.

The deal had at­tracted scrutiny from both China and Washington, as law­mak­ers in the U.S. have pro­hib­ited American in­vestors from back­ing Chinese AI com­pa­nies di­rectly. Meanwhile, Beijing has in­creased ef­forts to dis­cour­age Chinese AI founders from mov­ing busi­ness off­shore.


The Chinese gov­ern­men­t’s in­ter­ven­tion in the trans­ac­tion drew alarm among tech founders and ven­ture cap­i­tal­ists in the coun­try who were hop­ing to take ad­van­tage of the so-called Singapore-washing model, where com­pa­nies re­lo­cate from China to the city-state to avoid scrutiny from Beijing and Washington.

Manus was founded in China be­fore re­lo­cat­ing to Singapore. The com­pany de­vel­ops gen­eral-pur­pose AI agents and launched its first gen­eral AI agent in March last year, which can ex­e­cute com­plex tasks such as mar­ket re­search, cod­ing and data analy­sis. The re­lease saw the startup lauded as the next DeepSeek.

Manus said it had passed $100 mil­lion in an­nual re­cur­ring rev­enue, or ARR, in December, eight months on from launch­ing a prod­uct, which it claimed made it the fastest startup in the world at the time to hit the mile­stone from $0.

The com­pany raised $75 mil­lion in a round led by U.S. VC Benchmark in April last year.

When Meta an­nounced the deal late last year, the tech gi­ant said it would look to ac­cel­er­ate ar­ti­fi­cial in­tel­li­gence in­no­va­tion for busi­nesses and in­te­grate ad­vanced au­toma­tion into its con­sumer and en­ter­prise prod­ucts, in­clud­ing its Meta AI as­sis­tant.

But in January, China’s Ministry of Commerce said it would con­duct an as­sess­ment and in­ves­ti­ga­tion into how the ac­qui­si­tion com­plied with laws and reg­u­la­tions con­cern­ing ex­port con­trols, tech­nol­ogy im­port and ex­port, and over­seas in­vest­ment.

A Meta spokesperson told CNBC that the transaction “complied fully with applicable law,” and that it anticipated “an appropriate resolution to the inquiry.”

When asked about China’s move to block Meta’s acquisition of Manus, APEC Senior Officials Meeting Chairman Chen Xu told reporters that “it is important that all parties act in a spirit of mutual benefit.”

While Chen said he did not know the specifics of the issue, he said that “if such an issue can be handled properly, it can help facilitate more substantive discussions in APEC.” That’s according to an official English translation.

CNBC’s Anniek Bao and Dylan Butts contributed to this story.

GTFOBins

gtfobins.org


SUPER ZSNES

zsnes.com

Welcome to SUPER ZSNES

The two original developers of ZSNES are finally back together! Introducing SUPER ZSNES!

Re-written completely from scratch, this GPU-powered SNES emulator is here to bring you the following: some of what is familiar, some of what’s new, and then some of what goes beyond.

Key Features

Far more ac­cu­rate CPU and Audio cores than the orig­i­nal ZSNES

GPU-powered PPU core to al­low for hi-res Mode 7 and spe­cial per-game en­hance­ment fea­tures

Classic UI with falling snow, mod­ern­ized with higher de­f­i­n­i­tion and im­proved UX

Fast for­ward, rewind, save states, auto save his­tory, save book­marks, cheat codes, quick load, and more

No Vibe Coding. Classic de­vel­op­ment style.

Super Enhancement Engine, where the ZSNES de­vel­op­ers are en­hanc­ing the games one at a time

Super Enhancement Engine

Currently implemented with support for 7 popular games. Support for more games will keep increasing as development of the emulator continues.

High Resolution - Not just an auto upscaler, but an internal drawing program is used to make sure that the higher resolution details can be manually drawn to look nice and crisp.

Texture/Normal Map - Adds some nice de­tails to the back­grounds to give them a higher res­o­lu­tion look.

Overclock - Select games of­ten filled with slow­down are over­clocked.

Wide Screen (where avail­able) - We en­able widescreen when­ever the game is in­ter­nally coded to sup­port par­tial or full widescreen.

Uncompressed Audio Replacement - We cu­rate and pick un­com­pressed au­dio sam­ples to re­place orig­i­nal highly com­pressed au­dio sam­ples.

3D - Currently only sup­ported on per­spec­tive-style Mode 7, re­places tiles with 3D height mapped data.

All en­hance­ments can be in­di­vid­u­ally dis­abled to suit your play style.

Note: Enhancement data con­tains no ROM or copy­righted data. You will need to pro­vide the ROMs.

Do not ask the de­vel­op­ers for ROMs.

Downloads

iOS

Coming Soon

What’s Coming

Bug fixes

Special chip em­u­la­tion (DSP1, SuperFX, etc.)

More op­ti­miza­tion work

More types of en­hance­ments

Netplay

Other fea­tures

Notes & Legal

This is an early build, so there are still emulation bugs, and special chips (DSP1, SuperFX, etc.) have yet to be implemented. A bunch of optimization work has yet to be done, so performance may be a bit slow.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

The SUPER ZSNES Team is not con­nected or af­fil­i­ated with any men­tioned com­pany in any way.

Companies and all prod­ucts per­tain­ing to that com­pany are trade­marks of that com­pany.

Please con­tact that com­pany for trade­mark and copy­right in­for­ma­tion.

Keep Android Open

keepandroidopen.org

Your phone is about to stop be­ing yours.

126 days un­til lock­down

Starting September 2026, a silent up­date, non­con­sen­su­ally pushed by Google, will block every Android app whose de­vel­oper has­n’t reg­is­tered with Google, signed their con­tract, paid up, and handed over gov­ern­ment ID.

Every app and every de­vice, world­wide, with no opt-out.


What Google is do­ing

In August 2025, Google an­nounced a new re­quire­ment: start­ing September 2026, every Android app de­vel­oper must reg­is­ter cen­trally with Google be­fore their soft­ware can be in­stalled on any de­vice. Not just Play Store apps: all apps. This in­cludes apps shared be­tween friends, dis­trib­uted through F-Droid, built by hob­by­ists for per­sonal use. Independent de­vel­op­ers, church and com­mu­nity groups, and hob­by­ists alike will all be frozen out of be­ing able to de­velop and dis­trib­ute their soft­ware.

Registration re­quires:

Paying a fee to Google

Agreeing to Google’s Terms and Conditions

Surrendering your gov­ern­ment-is­sued iden­ti­fi­ca­tion

Providing ev­i­dence of your pri­vate sign­ing key

Listing all cur­rent and all fu­ture ap­pli­ca­tion iden­ti­fiers

If a de­vel­oper does not com­ply, their apps get silently blocked on every Android de­vice world­wide.

Who this hurts

You

You bought an Android phone be­cause Google told you it was open. You could in­stall what you wanted, and that was the deal.

Google is now rewrit­ing that deal, retroac­tively, on hard­ware you al­ready own. After the up­date lands, you can only run soft­ware that Google has pre-ap­proved. On your phone: your prop­erty, that you paid for.

Independent de­vel­op­ers

A teenager’s first app, a vol­un­teer’s pri­vacy tool, or a com­pa­ny’s con­fi­den­tial in­ter­nal beta. It does­n’t mat­ter. After September 2026, none of these can be in­stalled with­out Google’s bless­ing.

F-Droid, home to thousands of free and open-source Android apps, has called this an “existential” threat. Cory Doctorow calls it “Darth Android”.

Governments & civil so­ci­ety

Google has a doc­u­mented track record of com­ply­ing when au­thor­i­tar­ian regimes de­mand app re­movals. With this pro­gram, the soft­ware that runs your coun­try’s in­sti­tu­tions will ex­ist at the plea­sure of a sin­gle un­ac­count­able for­eign cor­po­ra­tion.

The EFF calls app gatekeeping “an ever-expanding pathway to internet censorship.”

Google’s “escape hatch” is a trap door

Google says “power users” can still “install” unverified apps. Here’s what that actually looks like:

Delve into System Settings, find Developer Options

Tap the build num­ber seven times to en­able Developer Mode

Dismiss scare screens about co­er­cion

Enter your PIN

Restart the de­vice

Wait 24 hours

Come back, dis­miss more scare screens

Pick “allow temporarily” (7 days) or “allow indefinitely”

Confirm, again, that you “understand the risks”

Nine steps. A manda­tory 24-hour cool­ing-off pe­riod. For in­stalling soft­ware on a de­vice you own.

Worse: this flow runs en­tirely through Google Play Services, not the Android OS. Google can change it, tighten it, or kill it at any time, with no OS up­date re­quired and no con­sent needed. And as of to­day, it has­n’t shipped in any beta, pre­view, or ca­nary build. It ex­ists only as a blog post and some mock­ups.

This is big­ger than Android

If Google can retroac­tively lock down bil­lions of de­vices that were sold as open plat­forms, every hard­ware man­u­fac­turer on the planet is watch­ing.

The principle being established: the company that made your device gets to decide, after you’ve bought it, what software you’re allowed to run. In software, this is called a “rug pull”; but at least you could always install competing software. In hardware, it is a fait accompli that strips you of your agency and renders you powerless against the whims of a single unaccountable gatekeeper and convicted monopolist.

Android’s open­ness was never just a fea­ture. It was the promise that dis­tin­guished it from iPhone. Millions chose Android for ex­actly that rea­son. Google is now re­vok­ing that promise uni­lat­er­ally, on de­vices al­ready in peo­ple’s pock­ets, be­cause they’ve de­cided they have enough mar­ket dom­i­nance and reg­u­la­tory cap­ture to get away with it.

Ars Technica: “Google’s Apple envy threatens to dismantle Android’s open legacy.”

But wait, is­n’t this…

“…just about security?”

The se­cu­rity ra­tio­nale is a smoke­screen. Google Play Protect al­ready scans for mal­ware in­de­pen­dent of de­vel­oper iden­tity. Requiring a gov­ern­ment ID does­n’t make code safer. It makes de­vel­op­ers iden­ti­fi­able and con­trol­lable. Malware au­thors can reg­is­ter. Indie de­vel­op­ers and dis­si­dents of­ten can’t. The EFF is blunt: iden­tity-based gate­keep­ing is a cen­sor­ship tool, not a se­cu­rity one.

“…still sideloading if you use the advanced flow?”

Nine steps, 24-hour wait, buried in Developer Options, de­liv­ered through a pro­pri­etary ser­vice that Google can re­voke when­ever they want. That’s not side­load­ing. That’s a de­ter­rence mech­a­nism built to en­sure al­most no­body com­pletes it. And since it runs through Play Services rather than the OS, Google can tighten or kill it silently.

“…only a problem if you have something to hide?”

Whistleblowers, jour­nal­ists, and ac­tivists un­der au­thor­i­tar­ian gov­ern­ments will be the first vic­tims. People in do­mes­tic abuse sit­u­a­tions are next. All these groups have le­git­i­mate rea­sons to dis­trib­ute or use soft­ware with­out putting their le­gal iden­tity in a Google data­base. Anonymous open-source con­tri­bu­tion is a tra­di­tion older than Google it­self. This pol­icy ends it on Android.

“…the same thing Apple does?”

Apple has been a walled garden from day one. People chose Android because it was different. “Apple does it too” is a race to the bottom and a weak tu quoque argument. And under regulatory pressure (the EU’s Digital Markets Act), even Apple is being forced to open up. Google is moving in the opposite direction: attempting to further entrench its gatekeeping status.

“…just $25 and some paperwork?”

Maybe, if you’re a de­vel­oper in the US with a credit card and a dri­ver’s li­cense. Try be­ing a stu­dent in sub-Sa­ha­ran Africa, or a dis­si­dent in Myanmar, or a vol­un­teer main­tain­ing a com­mu­nity health app. The cost is­n’t only fi­nan­cial: you’re sur­ren­der­ing gov­ern­ment ID and ev­i­dence of your sign­ing keys to a com­pany that rou­tinely com­plies with gov­ern­ment de­mands to re­move apps and ex­pose de­vel­op­ers.

Fight back

Everyone

Install F-Droid on every Android de­vice you own. Alternative stores only sur­vive if peo­ple ac­tu­ally use them.

Contact your reg­u­la­tors. Regulators world­wide are gen­uinely con­cerned about mo­nop­o­lies and the cen­tral­iza­tion of power in the tech sec­tor, and want to hear di­rectly from in­di­vid­u­als who are af­fected and con­cerned.

Share this page. Link to keepan­droidopen.org every­where.

Push back on astroturfers. The “well, actually…” crowd is out in force. Don’t let them set the narrative.

Sign the change.org pe­ti­tion and join the over 100,000 sig­na­to­ries who have made their voices heard.

Read and share our open let­ter

Tell Google what you think of this through their own de­vel­oper ver­i­fi­ca­tion sur­vey (for all the good that will do).

Developers

Do not sign up. Don’t join the pro­gram by sign­ing up for the Android Developer Console and agree­ing to their ir­rev­o­ca­ble Terms and Conditions. Don’t ver­ify your iden­tity. Don’t play ball.

Google’s plan only works if de­vel­op­ers com­ply. Don’t.

Talk other de­vel­op­ers and or­ga­ni­za­tions out of sign­ing up.

Add the FreeDroidWarn li­brary to your apps to warn users.

Run a web­site? Add the count­down ban­ner.

Google em­ploy­ees

If you know some­thing about the pro­gram’s tech­ni­cal im­ple­men­ta­tion or in­ter­nal ra­tio­nale, con­tact tips@keepan­droidopen.org from a non-work ma­chine and a non-Gmail ac­count. Strict con­fi­dence guar­an­teed.

All those op­posed…

69 or­ga­ni­za­tions from 21 coun­tries have signed the open let­ter

Read the full open let­ter and thank the sig­na­to­ries →

What they’re say­ing

Tech press

"Sideloading on Android? Soon It'll Be Like a TSA Check for Apps" Android Headlines

"Google's dev registration plan will 'end the F-Droid project'" The Register

"Android Security or Vendor Lock-In? Google's New Sideloading Rules Smell Fishy" It's FOSS News

"Google kneecaps indie Android devs, forces them to register" The Register

"Google's new developer rules could threaten sideloading and F-Droid's future" Gizmochina

"Sideloading is dead for all intents and purposes. The Android you know and love is slowly disappearing." Android Police

"Open-Source Android Apps Threatened by Google's New Policy" Datamation

"Google is restricting one of Android's most important features, and users are outraged" SlashGear

"Google says it's making Android sideloading 'high-friction' to better warn users about potential risks" XDA Developers

"Resistance to Google's Android verification grows among developers" Techzine EU

"Google will verify Android developers distributing apps outside the Play store" The Verge

"I've been an Android user for almost 15 years — and Google's sideloading changes are pushing me back to iPhone" Tom's Guide


An update on GitHub availability

github.blog

I wanted to give an update on GitHub's availability in light of two recent incidents. Neither incident is acceptable, and we are sorry for the impact they had on you. I want to share some details on both, as well as explain what we've done and what we're doing to improve our reliability.

We started ex­e­cut­ing our plan to in­crease GitHub’s ca­pac­ity by 10X in October 2025 with a goal of sub­stan­tially im­prov­ing re­li­a­bil­ity and failover. By February 2026, it was clear that we needed to de­sign for a fu­ture that re­quires 30X to­day’s scale.

The main dri­ver is a rapid change in how soft­ware is be­ing built. Since the sec­ond half of December 2025, agen­tic de­vel­op­ment work­flows have ac­cel­er­ated sharply. By nearly every mea­sure, the di­rec­tion is al­ready clear: repos­i­tory cre­ation, pull re­quest ac­tiv­ity, API us­age, au­toma­tion, and large-repos­i­tory work­loads are all grow­ing quickly.

This ex­po­nen­tial growth does not stress one sys­tem at a time. A pull re­quest can touch Git stor­age, merge­abil­ity checks, branch pro­tec­tion, GitHub Actions, search, no­ti­fi­ca­tions, per­mis­sions, web­hooks, APIs, back­ground jobs, caches, and data­bases. At high scale, small in­ef­fi­cien­cies com­pound: queues deepen, cache misses be­come data­base load, in­dexes fall be­hind, re­tries am­plify traf­fic, and one slow de­pen­dency can af­fect sev­eral prod­uct ex­pe­ri­ences.
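The retry-amplification effect can be made concrete with a toy model (an illustrative sketch, not GitHub's actual traffic model): if every failed call is immediately retried up to a fixed number of attempts, a struggling dependency receives the most extra traffic precisely when it is least able to handle it.

```python
def retry_load_factor(failure_rate: float, max_attempts: int) -> float:
    """Expected number of attempts per logical request when each failure
    triggers an immediate retry, up to max_attempts total attempts.

    Expected attempts = 1 + f + f^2 + ... + f^(max_attempts - 1),
    where f is the per-attempt failure probability.
    """
    return sum(failure_rate ** k for k in range(max_attempts))

# A dependency failing 50% of the time, with up to 4 attempts, sees
# roughly 1.88x the traffic; at a 90% failure rate it sees about 3.44x,
# which is why naive retries deepen exactly the overload they react to.
print(round(retry_load_factor(0.5, 4), 2))
print(round(retry_load_factor(0.9, 4), 2))
```

Capped retries with exponential backoff and jitter are the usual mitigation: they spread the extra attempts out in time instead of concentrating them on an already-slow dependency.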

Our pri­or­i­ties are clear: avail­abil­ity first, then ca­pac­ity, then new fea­tures. We are re­duc­ing un­nec­es­sary work, im­prov­ing caching, iso­lat­ing crit­i­cal ser­vices, re­mov­ing sin­gle points of fail­ure, and mov­ing per­for­mance-sen­si­tive paths into sys­tems de­signed for these work­loads. This is dis­trib­uted sys­tems work: re­duc­ing hid­den cou­pling, lim­it­ing blast ra­dius, and mak­ing GitHub de­grade grace­fully when one sub­sys­tem is un­der pres­sure. We’re mak­ing progress quickly, but these in­ci­dents are ex­am­ples of where there’s still work to do.
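One standard pattern for limiting blast radius is a circuit breaker: stop calling a failing dependency for a while and serve a degraded response instead. This is a generic sketch of the technique (the post does not say which mechanisms GitHub actually uses):

```python
import time

class CircuitBreaker:
    """Stop calling a dependency after `threshold` consecutive failures;
    serve the fallback until `cooldown` seconds have passed, then probe."""

    def __init__(self, threshold: int = 5, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the circuit tripped

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                return fallback()   # circuit open: degrade, don't pile on
            self.opened_at = None   # cooldown elapsed: probe the dependency

        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback()

        self.failures = 0           # success resets the failure count
        return result
```

A search-backed page wrapped this way can render "search temporarily unavailable" instead of timing out and dragging unrelated product surfaces down with it.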

What we’re do­ing

Short term, we had to resolve a variety of bottlenecks that appeared faster than expected. This ranged from moving webhooks to a different backend (out of MySQL), to redesigning the user session cache, to redoing authentication and authorization flows to substantially reduce database load. We also leveraged our migration to Azure to stand up significantly more compute.

Next we focused on isolating critical services like Git and GitHub Actions from other workloads, and on minimizing the blast radius by removing single points of failure. This work started with careful analysis of dependencies and the different tiers of traffic, to understand what needed to be pulled apart and how to minimize the impact of various attacks on legitimate traffic. Then we addressed those risks in order of severity. Similarly, we accelerated the migration of performance- and scale-sensitive code out of the Ruby monolith into Go.

While we were already migrating out of our smaller custom data centers into the public cloud, we also started working on a path to multi-cloud. This longer-term measure is necessary to achieve the level of resilience, low latency, and flexibility that will be needed in the future.

The number of repositories on GitHub is growing faster than ever, but a much harder scaling challenge is the rise of large monorepos. For the last three months, we've been investing heavily in response to this trend, both within the Git system and in the pull request experience.

We will soon publish a separate blog post describing the extensive work we've done and the new upcoming API design for greater efficiency and scale. As part of this work, we have invested in optimizing merge queue operations, since that is key for repos with many thousands of pull requests a day.

Recent in­ci­dents

The two re­cent in­ci­dents were dif­fer­ent in cause and im­pact, but both re­flect why we are in­creas­ing our fo­cus on avail­abil­ity, iso­la­tion, and blast-ra­dius re­duc­tion.

April 23 merge queue in­ci­dent

On April 23, pull re­quests ex­pe­ri­enced a re­gres­sion af­fect­ing merge queue op­er­a­tions.

Pull re­quests merged through merge queue us­ing the squash merge method pro­duced in­cor­rect merge com­mits when a merge group con­tained more than one pull re­quest. In af­fected cases, changes from pre­vi­ously merged pull re­quests and prior com­mits were in­ad­ver­tently re­verted by sub­se­quent merges.

During the im­pact win­dow, 658 repos­i­to­ries and 2,092 pull re­quests were af­fected. We ini­tially shared slightly higher num­bers be­cause our first as­sess­ment was in­ten­tion­ally con­ser­v­a­tive. The is­sue did not af­fect pull re­quests merged out­side merge queue, nor did it af­fect merge queue groups us­ing merge or re­base meth­ods.

There was no data loss: all com­mits re­mained stored in Git. However, the state of af­fected de­fault branches was in­cor­rect, and we could not safely re­pair every repos­i­tory au­to­mat­i­cally. More de­tails are avail­able in the in­ci­dent root cause analy­sis.

This in­ci­dent ex­posed mul­ti­ple process fail­ures, and we are chang­ing those processes to pre­vent this class of is­sue from re­cur­ring.

April 27 search incident

On April 27, an incident affected our Elasticsearch subsystem, which powers several search-backed experiences across GitHub, including parts of pull requests, issues, and projects.

We are still com­plet­ing the root cause analy­sis and will pub­lish it shortly. What we know now is that the clus­ter be­came over­loaded (likely due to a bot­net at­tack) and stopped re­turn­ing search re­sults. There was no data loss, and Git op­er­a­tions and APIs were not im­pacted. However, parts of the UI that de­pended on search showed no re­sults, which caused a sig­nif­i­cant dis­rup­tion.

This is one of the sys­tems we had not yet fully iso­lated to elim­i­nate as a sin­gle point of fail­ure, be­cause other ar­eas had been higher in our risk-pri­or­i­tized re­li­a­bil­ity work. That im­pact is un­ac­cept­able, and we are us­ing the same de­pen­dency and blast-ra­dius analy­sis de­scribed above to re­duce the like­li­hood and im­pact of this type of fail­ure in the fu­ture.

Increasing trans­parency

We have also heard clear feed­back that cus­tomers need greater trans­parency dur­ing in­ci­dents.

We recently updated the GitHub status page to include availability numbers. We have also committed to posting status for incidents both large and small, so you do not have to guess whether an issue is on your side or ours.

We are con­tin­u­ing to im­prove how we cat­e­go­rize in­ci­dents so that the scale and scope are eas­ier to un­der­stand. We are also work­ing on bet­ter ways for cus­tomers to re­port in­ci­dents and share sig­nals with us dur­ing dis­rup­tions.

Our com­mit­ment

GitHub’s role has al­ways been to sup­port de­vel­op­ers on an open and ex­ten­si­ble plat­form.

The team at GitHub is incredibly passionate about our work. We hear the pain you're experiencing. We read every email, social post, and support ticket, and we take it all to heart. We're sorry.

We are com­mit­ted to im­prov­ing avail­abil­ity, in­creas­ing re­silience, scal­ing for the fu­ture of soft­ware de­vel­op­ment, and com­mu­ni­cat­ing more trans­par­ently along the way.

Editor’s note: This post was up­dated on April 28, 2026, to up­date the num­ber of re­pos af­fected dur­ing the April 23 in­ci­dent.

Written by

Vladimir Fedorov is GitHub’s Chief Technology Officer, bring­ing decades of ex­pe­ri­ence in en­gi­neer­ing lead­er­ship and in­no­va­tion. A pas­sion­ate ad­vo­cate for de­vel­oper pro­duc­tiv­ity, Vlad is lead­ing GitHub’s en­gi­neer­ing team to shape the fu­ture of de­vel­oper tools and in­no­va­tion with a de­vel­oper-first mind­set.

Before join­ing GitHub, Vlad co-founded UserClouds, a startup spe­cial­iz­ing in data gov­er­nance and pri­vacy. He spent 12 years at Facebook, now Meta, as Senior Vice President, lead­ing en­gi­neer­ing teams of over 2,000 across Privacy, Ads, and Platform. Earlier in his ca­reer, Vlad worked at Microsoft and earned both his BS and MS in Computer Science from Caltech. He cur­rently serves on the board of Codepath.org, an or­ga­ni­za­tion ded­i­cated to re­pro­gram­ming higher ed­u­ca­tion to cre­ate the first AI-native gen­er­a­tion of en­gi­neers, CTOs, and founders.

Vlad lives in the Bay Area and when not work­ing en­joys spend­ing time out­side and on the wa­ter with his fam­ily.


OpenAI CEO’s Identity Verification Company Announced Fake Bruno Mars Partnership Due To Mistaken Identity

www.vice.com

On April 17, 2026, Sam Altman's other AI company, Tools For Humanity, announced a partnership with Bruno Mars as he embarks on his Romantic Tour. The announcement coincided with the company's Concert Kit tool, which allegedly allows "verified humans" to access VIP tickets and concert experiences.

However, Bruno Mars' management and Live Nation released a joint statement on April 22, claiming that the partnership didn't exist. "To be clear, we were never even approached by [Tools For Humanity], nor were we in any discussions regarding a partnership or tour access," the statement read. "We first learned that our tour was being used to promote their project after their keynote made those initial claims."

Those claims originated from TFH's chief product officer, Tiago Sada, during a company event. The company then published a post on its website including Sada's quote about Bruno Mars' Romantic Tour. Eventually, word got back to Mars' team.

AI Company Executive Gets His Marses Confused, Is Actually Partnering with Jared Leto’s Band

The initial post on Tools For Humanity's website has since been edited to correct the false information. A spokesperson also confirmed the company "does not have any agreement with Bruno Mars to test or feature Concert Kit." Additionally, "there is no association or affiliation with the artist or his tour."

Tools For Humanity is ac­tu­ally part­ner­ing with Thirty Seconds to Mars on their 2027 European tour. While TFH has not dis­closed the ac­tual rea­son for the false Bruno Mars an­nounce­ment, it looks a bit like a case of mis­taken iden­tity. Pretty ironic, since the com­pa­ny’s whole shtick is sup­pos­edly ver­i­fy­ing hu­man iden­ti­ties.

The com­pany launched in 2019 ini­tially as a way to ver­ify hu­man iden­ti­ties in on­line spaces to pre­vent fraud. This in­cluded live mu­sic mo­nop­oly Live Nation-Ticketmaster, which is of­ten plagued by bots and scam­mers. In 2023, TFH launched a phys­i­cal iden­tity ver­i­fi­ca­tion de­vice in the form of an orb that scans hu­man irises.

Unfortunately, the orb does not also tell for­tunes, which is clearly a ma­jor de­sign flaw. If it did, they’d prob­a­bly be able to pre­vent this Mars mix-up be­fore it hap­pened.


Your period tracking app has been yapping about your flow to Meta

femtechdesigndesk.substack.com

A few years back, I had a run­ning joke with the guy I was see­ing about adding him to my pe­riod tracker. Being a wom­en’s health ex­pert, I en­joy weav­ing nerdy anec­dotes about cy­cles and at­trac­tion and de­sires into my flir­ta­tions and mar­veling at my own wit and woo-woo mas­tery of my cycli­cal body. This ruse seemed like a harm­less jab at my dig­i­tally tracked self-aware­ness — a very late mil­len­nial fem­i­nist liv­ing in the Bay Area ver­sion of co­quetry.

It maybe was­n’t all that harm­less, af­ter all.

Turns out, the mat­ter of shar­ing the data around my cy­cle, and po­ten­tially the even more pri­vate in­for­ma­tion about my in­ti­mate ex­pe­ri­ences, was­n’t as much of a mat­ter of choice as I might have ex­pected. Worse, it might have been used to sell me stretch­mark creme or den­tal dams.

Caught bloody handed

That period tracking app, Flo, has been found liable in connection with selling user data to Meta, all the while promising users it was protecting their privacy. The class action suit had 13 million Flo users included as plaintiffs, which is a sizeable chunk of pissed off users amongst their reported 75 million-strong user base.

Those law­suits against Meta and Flo, first filed in 2021 with more in the US and Canada, re­veal a big­ger is­sue in non-med­ical health track­ing soft­ware — there’s too much gray area around con­sent when it comes to sell­ing your health in­for­ma­tion to ad­ver­tis­ers.

What's important about the legal precedent being set is that it highlights how the current guidelines around health data privacy (like HIPAA) are woefully lagging behind the health tracking tech already available directly to users. It raises a number of critical questions:

What does this legal vagueness mean for how we choose to self monitor our biological markers?

In a post-Dobbs environment, how do concerns around digital privacy impact our consumer choices in sexual health and period tracking apps?

Why is it still up to the consumer to run safety checks when it should be the role of product teams and healthtech brands to build less creepy tech?

Do we really need to be tracking every possible symptom and mood and cramp and letting private tech companies decide what to do with that data?

Feeling "creamy" today? Great, we'll let Mark Zuckerberg know.

Joking about the con­sis­tency of my ovu­la­tion was al­ready a bridge too far and a line I opted not to ven­ture to cross with said beau. I cer­tainly would­n’t have will­ingly an­nounced to any­one pars­ing through data at Meta if I had mas­tur­bated or had un­pro­tected sex on any given day. The Flo app might have made that de­ci­sion for me, though.

For all my mental back and forths about whether or not to actually send a partner my cycle calendar, Flo might have been sending the intimate details of our sexual encounters to a bunch of tech bros behind my back. Turns out, Flo had embedded a secret "eavesdropping" tool which passed along information like menstruation cycle, ovulation, and whether a user was trying to get pregnant to Meta, even while explicitly claiming not to in their privacy policy.

As slip­pery as an ovu­la­tion flow, Flo was telling us our pri­vate data was safely hid­den from pry­ing eyes. The guilty ver­dict in the August 2025 Frasco v. Flo law­suit proved oth­er­wise:

"Flo, through the Flo App, unlawfully shared users' sensitive health data — including menstrual cycle, ovulation, and pregnancy-related information — with third parties such as Meta, Google, and Flurry for their own commercial use (Burr & Forman, 2025)."

The jury found Meta li­able for col­lect­ing sen­si­tive re­pro­duc­tive health data and us­ing it for its own gain. The other par­ties listed set­tled out of court, which means their in­volve­ment in the breach gets to stay more pri­vate than the health data of Flo users be­tween 2016 and 2019.

Nothing fem­i­nism needs more nowa­days than a bit more irony, right?

This was­n’t a hack. It was a de­sign de­ci­sion.

It’s im­por­tant to call out that these third-party plat­forms did­n’t hack into the Flo app. The folks in charge of mak­ing pri­vacy de­ci­sions at Flo handed them our sen­si­tive data on a sil­ver plat­ter. It was sim­ple track-and-sell data shar­ing and we maybe should have seen it com­ing.
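In mechanical terms, this kind of leak is nothing more exotic than the app calling an embedded third-party analytics SDK with the same fields the user just logged. The sketch below is purely illustrative: every name in it (log_symptom, ThirdPartyAnalytics) is hypothetical, not Flo's or Meta's actual code or API.

```python
class ThirdPartyAnalytics:
    """Stand-in for an embedded analytics SDK (hypothetical)."""
    def __init__(self):
        self.sent = []  # events shipped off-device to the vendor

    def log_event(self, name, properties):
        self.sent.append((name, properties))

analytics = ThirdPartyAnalytics()
local_db = []  # the on-device store the user expects

def log_symptom(user_id, symptom):
    local_db.append((user_id, symptom))  # what the user thinks happens
    # ...and the extra line that turns a feature into a leak:
    analytics.log_event("symptom_logged", {"symptom": symptom})

log_symptom("u123", "ovulation")
# The sensitive field now also exists in a third party's event stream.
```

No breach, no exploit: one extra call in the logging path, shipped as a product decision.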

I've written before about how 'pinkwashing' femtech can disguise a whole host of unethical product decisions. Prior to heading for greener and more private pastures with my period tracking app selection, Flo was already starting to give me the ick. The UX design was getting more convoluted, more cluttered, more cartoonish with every update.

Quickly, the Flo home screen was be­com­ing more bloated than a late-luteal phase tummy. Opening the app to log whether I had spot­ted a bit that morn­ing or had in­som­nia or ten­der breasts was like nav­i­gat­ing a mine­field of tired femme de­signs and re­dun­dant re­minders to med­i­tate.

With each update, the home display presented ever-growing opportunities for negative symptom reporting. Without any differentiation in hierarchy, everything seemed flatly pathological. The symptoms were pushed more and more to the front and advice popped out at every turn, essentially burying the actual cycle tracker.

In the con­text of the Flo-Meta fil­ings, this makes sense — fo­cus­ing on the problems” of pe­ri­ods can help drive sales of items pur­port­ing to al­le­vi­ate symp­toms. There is­n’t much to mon­e­tize from a sim­ple pe­riod cal­en­dar, is there? It’s dystopian to re­al­ize the em­pha­sis on symp­to­mol­ogy was help­ing to drive ad­ver­tis­ing on sites even more re­cently found li­able for per­sonal harm on par with to­bacco com­pa­nies.

At the end of the day, no amount of pinkwashed 'empowerment' or 'evolved' mentions of sex toys and self pleasure can cover up who benefitted* from these design choices.

The gap between HIPAA and 'wellness' is where consent goes to die

Flo changed its privacy policy a whopping 13 times in the three years relevant to the legal claims (2016 – 2019). These lawsuits show that all those edits did nothing to make the consent users thought they were giving meaningful in any real sense.

Lawsuits like the Flo-Meta law­suits are no­table in that they are help­ing to build a foun­da­tion of le­gal prece­dent within the gray zone of non-HIPAA com­pli­ant well­ness tech. Much of health tech, which in­cludes a lot of re­pro­duc­tive health tech cur­rently on the mar­ket, is­n’t ex­plic­itly clin­i­cal or di­rectly tied to com­mu­ni­ca­tions with a health­care provider.

Which means you can be logging some deep information about the functions of your body and receiving automated advice on adjustments to potentially improve those bodily functions, and in all likelihood none of it falls under the protection of current health and privacy laws. It is left to the apps' own discretion to set the policies around what data to share, sell, or report to government agencies.

They also have pretty broad dis­cre­tion in the de­signs around con­sent they are will­ing and able to of­fer users. The de­sign de­ci­sions and con­sent frame­works in-prod­uct can be guided by best-prac­tices, but those choices are still largely dri­ven by the opin­ions within prod­uct teams. This is how sloppy con­sent pat­terns con­tinue to get shipped out to users, even when the prod­uct might deal in in­cred­i­bly sen­si­tive data col­lec­tion.

It wasn't like some cybercriminal was holding Flo ransom; these were decisions made across legal, design, engineering, and sales roles, passed through a chain of employees, that ultimately threw users under the bus for profit.

It’s hard to track down ex­act in­for­ma­tion on the num­ber of staff em­ployed by Flo from 2016 – 2019 and who was di­rectly re­spon­si­ble for these choices. By most ac­counts, it was a lean op­er­a­tion — prob­a­bly around 350 em­ploy­ees at any given time in those years. That’s a pretty small group of folks mak­ing po­ten­tially mon­u­men­tal de­ci­sions about how highly sen­si­tive health data got col­lected, stored, and shared in ad­di­tion to how those processes and poli­cies were com­mu­ni­cated to their mil­lions of users world­wide.

If we’re left to our own de­vices, who will pro­tect us?

It seems like we can’t just nec­es­sar­ily leave it up to com­pa­nies — or their rag­tag teams of crack­pot lawyers rewrit­ing pri­vacy poli­cies every few months — to keep our pri­vate data pri­vate. I guess we’re left need­ing to hurt Mark Zuckerberg’s feel­ings every now and again in or­der to just use our vi­bra­tors in peace.

The law is slow to catch up, even more so when it comes to reg­u­lat­ing tech. This makes me ner­vous when con­sid­er­ing the rush to in­crease the col­lec­tion of data around wom­en’s health in an ef­fort to close the data gap. This is a wor­thy aim, but how much trust can we re­ally place in pri­vate com­pa­nies op­er­at­ing out­side of clin­i­cally guided struc­tures?

This is even before we factor in the increased use of generative AI to populate health advice within apps that seem to intentionally circumvent the healthcare space, and thus avoid the user protections under that categorical umbrella. There is such a thing as too much data, though try telling that to a PM trying to make his KPIs. If the data comes from unmanaged flows, with collection methods prioritized for third-party ad sales, and is gathered without users' direct consent, how much can we even rely on the derivative generative outputs? Is this the standard we want to set for collecting women's health data? Is it worth all the costs?

Personally, this reeks of mov­ing fast and break­ing things to me. Flo def­i­nitely broke my trust, along with at least 13 mil­lion for­mer Flo users. With (reportedly) over a third of US women uti­liz­ing pe­riod track­ing apps and a sim­i­lar rate of use amongst women in the EU, there’s a sig­nif­i­cant mar­ket to cap­ture here. Unlike in 2016 when Flo was one of few play­ers on the field, there are hun­dreds of cy­cle track­ing apps for savvy users to se­lect from to­day, not to men­tion the in­creas­ing avail­abil­ity of built-in cy­cle track­ers within other health apps and wear­ables.

Though Flo re­mains one of the top down­loaded of the bunch, for many of us, it’s a mat­ter of once burned, twice shy. Personally, I’m a big fan of WildAI, which does­n’t bother to ask me if I’ve rubbed one out and there­fore has no in­ter­est in telling a tech be­he­moth a whole lot more than if I both­ered to note if I was thirsty and horny and hun­gry on the same day. You and Mark can guess how much space those notes take up on my cy­cle cal­en­dar all on your own. I pre­fer it that way, and Flo should too.

*Let’s just take a mo­ment, by the way, to re­flect on how the dev dudes set­ting up per­son­al­ized ad gat­ing at Google might have been track­ing the sex toy use and preva­lence of anal sex amongst Flo users so they might drive up pay per click (PPC) rates across your apps. Obviously, this is fem­i­nism at its finest.

**It might be worth arguing that in a post-Dobbs world, and in countries with wishy-washy digital privacy standards, meticulously logging sexy self-play may not carry health benefits worth the risk of it winding up in the hands of such loose-lipped data brokers. It's bad enough we have to worry about the privacy violations of the vibrators themselves. Maybe "dumb" dildos are the better option these days, actually. We'll have to get to that in another post.

