10 interesting stories served every morning and every evening.




1 1,852 shares, 74 trendiness

Israeli Soldiers Killed Gaza Aid Workers at Point Blank Range in 2025 Massacre

A minute-by-minute reconstruction of the massacre by Earshot and Forensic Architecture found Israeli soldiers fired over 900 bullets at the aid workers, killing 15.

Funerals held at Nasser Hospital in Khan Yunis, in southern Gaza, for aid workers from the Palestinian Red Crescent who were killed in an Israeli attack in Tel al-Sultan. March 31, 2025. Photo by Hani Alshaer/Anadolu via Getty Images.

Israeli soldiers fired nearly a thousand bullets during the massacre of 15 Palestinian aid workers in southern Gaza on March 23, 2025—with at least eight shots fired at point blank range—according to a joint investigation by the independent research groups Earshot and Forensic Architecture. The report, based on eyewitness testimony and audio and visual analysis, shows that a number of aid workers were executed and that at least one was shot from as close as one meter away.

In Tel al-Sultan that day, Israel killed eight aid workers with the Palestine Red Crescent Society (PRCS), six from Palestinian Civil Defense, and a UN relief agency staffer. The massacre immediately triggered international condemnation and was described by PRCS as "one of the darkest moments" of the war.

The Israeli military was forced to change its story about the ambush several times, following the discovery of the bodies in a mass grave, along with their flattened vehicles, and the emergence of video and audio recordings taken by the aid workers. An internal military inquiry ultimately did not recommend any criminal action against the army units responsible for the incident.

The report by Earshot and Forensic Architecture reconstructs, minute by minute, how the massacre unfolded. Using video and audio recordings from the incident, open-source images and videos, satellite imagery, social media posts, and other materials, as well as in-depth interviews with two survivors of the attack, the groups were able to digitally reconstruct the scene and events surrounding the massacre. Among the findings:

* Israeli soldiers ambushed Palestinian aid workers and subjected them to a near continuous assault for over two hours, even though the soldiers never came under fire.

* At least 910 gunshots were documented across three video and audio recordings of the attack. The vast majority of these gunshots, at least 844, were fired over just five minutes and 30 seconds.

* At least 93% of the gunshots recorded in the first minutes of the attack were fired directly towards the emergency vehicles and aid workers by Israeli soldiers. During this time, at least five shooters fired simultaneously. Witness testimonies suggest as many as 30 soldiers were present in the area.

* Israeli soldiers were initially positioned on an elevated sandbank by the road, with no obstructions limiting their line of sight. The emergency lights and markings of the victims' vehicles would have been clearly visible to the soldiers at the time of the attacks.

* Israeli soldiers first maintained fixed firing positions from the elevated sandbank, then walked toward the aid workers while continuing to shoot. Upon reaching the aid workers, the soldiers moved between them and the vehicles and executed some of the aid workers at point blank range, as close as one meter away.

* In the immediate aftermath of the attack, the Israeli military conducted extensive earthworks at the site. In the days and weeks that followed, the area was further transformed by the Israeli military's construction of the "Morag Corridor," a security zone splitting the southern Gaza Strip, and the erection of an aid distribution site operated by the Israeli- and U.S.-backed Gaza Humanitarian Foundation.

"This seems to be a very well documented case using a number of forms of credible evidence that are cross referenced," Katherine Gallagher, a senior staff attorney at the Center for Constitutional Rights, told Drop Site after reviewing a detailed summary of the investigation. "It presents a very compelling case, and honestly, a very devastating one."

The Israeli military did not respond to specific inquiries from Drop Site and instead pointed to the findings of an internal investigation published on April 20 that found the incident occurred in a "hostile and dangerous combat zone, under a widespread threat to the operating troops." It also found "no evidence to support claims of execution," which it called "blood libels and false accusations against IDF soldiers."

The joint report will be released February 24 at a gathering at the British Parliament in Westminster hosted by the British Palestinian Committee with Earshot, Forensic Architecture, and the international humanitarian law coordinator for PRCS, Dana Abu Koash. The full report is available here.

On March 23, 2025 at 3:52 a.m., PRCS dispatched two ambulances from two different areas to the scene of an Israeli airstrike in Al-Hashashin, an area near Rafah. Israel had resumed its scorched earth bombing campaign on Gaza a few days earlier after abandoning the January 2025 ceasefire agreement.

The attack on the aid workers began at approximately 4:00 a.m. when one of the ambulances, driving along Gush Katif road in Al-Hashashin, came under Israeli fire. The vehicle had its emergency lights turned on at the time. Mustafa Khafaja, who was driving, lost control of the vehicle, which veered left off the road and stopped near an electricity pole. Khafaja and his colleague, Ezz El-Din Shaat, who was in the passenger seat, were both killed. A third PRCS worker, Munther Abed, who was in the back of the vehicle, threw himself to the floor of the van and survived.

After the shooting stopped, Israeli soldiers approached the ambulance and dragged Abed out of the car, beat him, and detained him at a nearby pit. Sometime later, two Palestinian civilians—a father and son from the Bardawil family—were also detained and brought to the pit. The Israeli soldiers then took the three detainees to an elevated area behind a tall concrete structure some 38 to 48 meters southeast of the ambulance, where an additional group of Israeli soldiers were positioned.

Still from the situated testimony with Munther Abed recounting the location of the pit and the area behind the tall concrete structure where he was taken when detained by Israeli soldiers. (Forensic Architecture, 2026).

By 4:35 a.m., the second ambulance, having completed its mission in Al-Hashashin, was dispatched to search for the first ambulance, which had lost contact with PRCS headquarters at 3:55 a.m. The second ambulance was joined by two more PRCS ambulances, an ambulance belonging to Civil Defense, and a Civil Defense fire truck. The five-vehicle rescue convoy arrived at the scene of the attack on the first ambulance shortly after 5:00 a.m. All vehicles were clearly marked and had their emergency lights turned on.

The position of each ambulance as the shooting began. (Forensic Architecture, 2026).

A PRCS worker in one of the ambulances, Refaat Radwan, began filming on his phone as they drove to the site. His recovered videos, as well as recordings of phone calls by two other aid workers at the scene to PRCS dispatch, provided crucial evidence of the massacre. Forensic Architecture and Earshot's analysis of the recordings corroborated eyewitness testimony on the positions and movements of the Israeli soldiers throughout the attack.

At 5:09 a.m., as the aid workers parked and approached the first ambulance by foot, Israeli soldiers positioned on the elevated sandbank opened fire. A digital reconstruction of the scene shows that the soldiers would have had an uninterrupted view of the arrival of the convoy. Abed, who was being detained at gunpoint on the elevated sandbank, testified that the soldiers were kneeling and aiming their weapons at the convoy as it approached.

Locations of all emergency vehicles at the incident site at 5:10 a.m. relative to Munther Abed and the Israeli soldiers who detained him. From their position, the soldiers would have been able to clearly see the convoy's arrival with their emergency lights on. (Forensic Architecture, 2026).

The Israeli soldiers remained on the sandbank while firing continuously at the aid workers for four minutes. The soldiers then advanced towards the aid workers at a walking pace of approximately one meter per second while continuously shooting.

Echolocation of Israeli soldiers approaching the aid workers during the final 1 minute and 30 seconds. (Earshot, 2026).

Upon reaching the vehicles, the Israeli soldiers continued to fire as they walked in between the ambulances and the fire truck, shooting the aid workers at close range in execution-style killings.

At approximately 5:13 a.m., PRCS aid worker Ashraf Abu Libda called the group's headquarters. The recording, which overlaps Radwan's video, provided additional details. In this recording, Earshot found that at least eight gunshots were fired from positions between the emergency vehicles. One of the gunshots captured on Abu Libda's phone call was fired from a range of one to four meters from him. The gunshots coincide with the last time Abu Libda's voice is heard on the call, suggesting these are the gunshots that killed him.

Echolocation of Israeli soldiers as close as 1 to 4 meters from aid workers and most likely close-range execution. (Earshot, 2026).

At least 844 gunshots were fired over a period of five minutes and 30 seconds, with at least 93% of the shots fired toward the emergency vehicles. The audio ballistics analysis confirms the presence of at least five shooters—and possibly many more—firing simultaneously. The two surviving PRCS aid workers, Munther Abed and Asaad Al-Nasasra, testified that between 12 and 30 soldiers were at the scene.

"The reconstruction was jointly achieved with the two survivors of the incident, with an immersive spatial model they could walk through and amend. Together with spatial and audio analysis we established the position of the soldiers on an elevated ground with an unobstructed line of sight to the emergency vehicles. The soldiers could clearly see the aid workers, shot at them continuously and deliberately from this position, and then approached to execute them one by one at close range," Samaneh Moafi, assistant director of research at Forensic Architecture, told Drop Site. "Locating the massacre within the evolution of Israel's campaign in Gaza shows that it was not an isolated incident but part of the genocide."

Earshot used echolocation to analyze the audio on the recordings in order to arrive at precise estimates of the shooters' locations. Echolocation is the process of locating the source of a sound based on an analysis of the sound's echoes and the environment in which the sound travels. The Israeli military destroyed and cleared so many buildings in the Tel al-Sultan area where the ambush of the aid workers took place that very few structures remained. This destruction actually strengthened Earshot's ability to determine the positions and movements of Israeli soldiers, based on identifying the surfaces responsible for clearly distinguishable gunshot echoes. Rather than having multiple buildings reflecting the sound waves, there were only a few standing walls and the emergency vehicles themselves.

The analysis of the video and audio corroborated Al-Nasasra's eyewitness testimony that Israeli soldiers "came down [from the sandbank], got close to [the aid workers] and shot them from close range," and were "walking between [the aid workers] and shooting."

Map showing the Israeli soldiers' positions derived from an audio analysis of gunshot echoes from Refaat Radwan's video. (Earshot, 2026).

"Earshot forensically analyzed over 900 gunshots fired at aid workers. It took one whole year of careful listening to reconstruct an auditory picture of what happened that dark night," Lawrence Abu Hamdan, the director of Earshot, told Drop Site. "I am so proud that our work has corroborated the survivors' testimony, establishing their brave accounts as accurate and reliable documentation of what occurred that day. Yet, it is the echoes of this event that continue to haunt us: the destruction and clearing of Tel al-Sultan left only three structures standing at this crime scene. While the few echoes reflecting off these buildings brought light to this crime, they have also revealed a scale of erasure of life beyond this one event."

According to autopsy reports first reported by the Guardian, the aid worker who filmed the video—Radwan—was shot in the head, while Abu Libda and another aid worker, Muhammad Bahloul, were shot in the chest. A doctor who examined the bodies reportedly described the "specific and intentional location of shots at close range" as indicative of an "execution-style" shooting.

More than two hours after the initial attack, a clearly marked UN vehicle, a Toyota Hilux, passed by the site. Israeli soldiers fired on the vehicle, killing the driver. The UN lost contact with the vehicle at 6:00 a.m. A second UN vehicle, a minibus, arrived in the area minutes later and was brought to a stop by gunfire a little over 200 meters away. The driver was able to escape.

Left: Photograph of the UN Toyota Hilux taken on 30 March 2025, when the bodies of the victims were recovered. (OCHA, 2025). Right: Still from the situated testimony with Asaad recounting the location of the UN Toyota Hilux when brought to a stop. (Forensic Architecture, 2026). Annotated 3D model showing the position of two UN vehicles in relation to the missing ambulance and the convoy of emergency vehicles. (Forensic Architecture, 2026).

Between 6:55 and 7:13 a.m., Al-Nasasra made a phone call to PRCS headquarters that captured at least 42 additional gunshots and the sound of vehicle movement. The recording also captured the sound of an explosion the investigation identified as the firing of an Israeli-made Spike LR guided missile.

Following the ambush, Israeli forces crushed all eight vehicles using heavy machinery and attempted to bury them under the sand.

The body of Anwar al-Attar was found near the ambush site on March 27, and the bodies of the other 14 aid workers, all wearing identifying uniforms or volunteer vests of their respective organizations, were found in a mass grave near the site on March 30.

The 15 aid workers killed were: Mustafa Khafaja, Ezz El-Din Shaat, Saleh Muammar, Refaat Radwan, Muhammad Bahloul, Ashraf Abu Libda, Muhammad al-Hila, and Raed al-Sharif with PRCS; Zuhair Abdul Hamid al-Farra, Samir Yahya al-Bahapsa, Ibrahim Nabil al-Maghari, Fouad Ibrahim al-Jamal, Youssef Rassem Khalifa, and Anwar al-Attar with Civil Defense; and Kamal Mohammed Shahtout with UNRWA.

Annotated still from the 3D model showing the location of the bodies of aid workers and their vehicles before the mass burial. (Forensic Architecture, 2026).

One of the survivors, Abed, was released hours after the ambush. The other survivor, Al-Nasasra, was held in Israeli custody without charge for 37 days, tortured, and interrogated in relation to the incident at the Sde Teiman detention camp, a notorious Israeli prison camp in the Negev desert, before being released on April 29.

Jonathan Whittall, a senior UN official in Palestine between 2022 and 2025, was one of the team members on the ground when the mass grave was discovered on March 30 and provided evidence to Forensic Architecture and Earshot for their investigation. "Following our discovery of the mass grave, the narrative from Israeli forces shifted multiple times; we were fed several versions of a blatant lie," Whittall told Drop Site. "The men we retrieved on Eid last year were medics. We found them in their uniforms, ready to save lives, only to be killed by Israeli forces fully aware of their protected status." Whittall, who is now executive director of KEYS Initiative, a political affairs and strategic advisory organization, has also contributed reporting to Drop Site News. "This illustrates an abhorrent disregard for international law," he continued, "where any Palestinian in an Israeli-designated evacuation zone is targeted regardless of their civilian status. It highlights the total lack of accountability under which these forces operate. International governments continue to arm and trade with a leadership accused of genocide, whose soldiers massacred medics and buried them in a grave marked by the siren light of the ambulance they destroyed."

Palestinian Red Crescent aid workers mourn the killing of their colleagues by the Israeli military in Tel al-Sultan as their bodies are brought to Nasser Hospital in Khan Yunis, in southern Gaza. March 30, 2025. (Photo by Abdallah F.s. Alattar/Anadolu via Getty Images).

In the aftermath of the massacre, the Israeli military provided several conflicting versions of events to justify the killings. On March 28, after the discovery of al-Attar's body, the Israeli military admitted that its soldiers had fired on "ambulances and fire trucks." Three days later, after the remaining bodies were discovered in a mass grave, the Israeli military claimed that "several uncoordinated vehicles were identified advancing suspiciously toward IDF troops without headlights or emergency signals."

After footage from Radwan's phone was first published by the New York Times a few days later, the Israeli military backtracked on its claims that the vehicles did not have emergency signals on when Israeli troops opened fire, saying the statement was "inaccurate."

The Israeli military then announced on April 20 that an internal inquiry into the incident had found the killings were caused by "several professional failures, breaches of orders, and a failure to fully report the incident."

The Israeli military said troops from the Golani reconnaissance battalion were involved in the attack. However, it said soldiers did not engage in "indiscriminate fire" during the incident, but that they opened fire on what they believed to be a "tangible threat" amid what the military called an "operational misunderstanding." It blamed the attacks on "poor night visibility" and maintained the incident had unfolded in a "hostile and dangerous combat zone, under a widespread threat to the operating troops." Six of the fifteen Palestinians killed, the military said, were identified in a retrospective examination as "Hamas terrorists," but it provided no evidence to support the claim.

"On the specific question of Israel justifying the attack on clearly marked medical personnel because of suspicions of membership in groups or links to groups or terrorism—because there is an affirmative duty to respect and protect medical personnel, you don't shoot first, you protect first," Gallagher told Drop Site. "But what this investigation reveals is that there was a shoot first policy, and that is unlawful under international law."

As for the burial of the bodies in a mass grave, the Israeli military said in its report "it was decided to gather and cover the bodies to prevent further harm and clear the vehicles from the route in preparation for civilian evacuation. The body removal and vehicle crushing were carried out by field commanders." It concluded, "removing the bodies was reasonable under the circumstances, but the decision to crush the vehicles was wrong. In general, there was no attempt to conceal the event."

As a result of the investigation, the commanding officer of the 14th Brigade received a letter of reprimand for his "overall responsibility for the incident," while the deputy commander of the Golani reconnaissance battalion involved in the incident was "dismissed from his position due to his responsibilities as the field commander and for providing an incomplete and inaccurate report during the debrief."

The inquiry did not recommend any criminal action be taken against the military units responsible for the incident. The Palestine Red Crescent Society, Civil Defense, and the UN humanitarian agency in Gaza all rejected the Israeli military report.

"Attacks on medical personnel and those who are identified as medical personnel are patently unlawful under international law, and there is an affirmative obligation to protect medical personnel in the context of armed conflict. So the very first thing is that there's a breach of that very clear and time-honored principle of international humanitarian law," Gallagher said. "When you zoom out and look at this in the context of the way the Israeli assault has been carried out over many months and years in Gaza, and we see that there is a pattern and practice of attacks on medical personnel—similar to journalists and other groups that are explicitly and uniquely protected as classes of civilians in international humanitarian law—it raises even more questions and deep concern about the lack of accountability, because what we know is that impunity breeds repetition."

Gallagher, who previously worked at the UN's International Criminal Tribunal for the former Yugoslavia, said that a legal analysis of the massacre would find serious violations of the Rome Statute of the International Criminal Court. "When you're talking about grave breaches of the Geneva Conventions, in particular war crimes, you have obligations, not just the possibility, but obligations, to open investigations," Gallagher said.

Transforming the Site of the Massacre into a GHF Hub

Satellite imagery from the morning of the ambush shows that extensive earthworks were carried out at the incident site. The images reveal the construction of an earth berm approximately 220 meters north of the ambush location and another roughly 410 meters to the south. These two positions later functioned as checkpoints, restricting access and controlling passage along an evacuation route established that morning by the Israeli military leading toward the coastal Al-Mawasi area.

The earthworks that began shortly after the attack were used in the construction of a Gaza Humanitarian Foundation "aid distribution" site, at which civilians were targeted and shot at. (Forensic Architecture, 2026).

In the days and weeks that followed, the area surrounding the incident site was further transformed by the Israeli military's construction of the "Morag Corridor" security zone and the erection of an aid distribution site operated by the Gaza Humanitarian Foundation.

"On that same site of the mass grave, the Gaza Humanitarian Foundation established a distribution point where desperate people were gunned down trying to access food," Whittall told Drop Site. "Now, the U.S., under the so-called Board of Peace, plans to build a 'New Rafah' over this crime scene. Without meaningful accountability, 'New Rafah' will be a monument to impunity."

...

Read the original on www.dropsitenews.com »

2 956 shares, 47 trendiness

I Taught My Dog to Vibe Code Games

For the past few weeks I've been teaching my 9-pound cavapoo Momo (cavalier king charles spaniel and toy poodle) to vibe code games. The key to making this work is telling Claude Code that a genius game designer who only speaks in cryptic riddles is giving it instructions, adding strong guardrails, and building plenty of tools for automated feedback. The results have surpassed my expectations. Below I walk through all the pieces and how they came together.

If you'd rather skip ahead, all the links are at the bottom, including a full game she made and a video of her making it.

Back in December I was working on a small game prototype in Godot. I use Claude Code extensively these days and this project was no exception. I kicked off a procedural mesh generation task and came back to find strange input in the terminal.

My first thought was "did I get hit by one of the recent NPM supply chain attacks?" Fortunately, no (or at least the worm is still asleep in the background somewhere). A little bit of searching later, I noticed my lip balm, which I keep just behind my keyboard, was gone from my desk. I quickly found both the suspect and the lip balm (still intact) not far away.

At the time, I thought this was funny, took a screenshot, and moved on. Fast forward a few weeks, and I found myself with a lot of time on my hands. On January 13th, I woke up to the news that Meta had another round of layoffs and my role specifically as a research engineer had been eliminated.

Since the layoff, I've had plenty of time with friends and family. In recounting the anecdote of Momo typing away on my keyboard, I began to wonder: "What would happen if she actually submitted that input to Claude? Could I make it do something meaningful?" I decided to find out. Here's what that looked like.

Momo types on a Bluetooth keyboard proxied through a Raspberry Pi 5. Keystrokes travel across the network to DogKeyboard, a small Rust app that filters out special keys and forwards the rest to Claude Code. When Momo has typed enough, DogKeyboard triggers a smart pet feeder to dispense treats. A chime tells her when Claude is ready for more input.
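The forwarding logic boils down to a blocklist plus a keystroke counter. A minimal sketch in Python (the real DogKeyboard is Rust; the key names and treat threshold here are illustrative stand-ins, not the actual values):

```python
# Sketch of DogKeyboard's core loop: drop dangerous keys, forward the rest,
# and dispense a treat once enough typing has happened. BLOCKED_KEYS and
# TREAT_THRESHOLD are illustrative values, not the real app's.
BLOCKED_KEYS = {"esc", "tab", "lmeta", "rmeta", "delete"}
TREAT_THRESHOLD = 25  # keystrokes per treat


class DogKeyboard:
    def __init__(self, forward, dispense_treat):
        self.forward = forward            # callback: send one key to Claude Code
        self.dispense_treat = dispense_treat
        self.count = 0

    def on_key(self, key: str) -> bool:
        """Handle one incoming keystroke; return True if it was forwarded."""
        if key.lower() in BLOCKED_KEYS:
            return False                  # never let Momo escape Claude Code
        self.forward(key)
        self.count += 1
        if self.count >= TREAT_THRESHOLD:
            self.count = 0
            self.dispense_treat()
        return True
```

The callbacks keep the sketch testable: in the real setup, `forward` would inject the key into the Claude Code window and `dispense_treat` would poke the feeder.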

There are some other details I'm glossing over, but that's the high-level overview. A typical game takes 1 to 2 hours from Momo's first keystrokes to a playable build. All the games are made in Godot 4.6, with 100% of the game logic in C#.

It's easy to submit random text to Claude Code, but it doesn't do much.

> y7u8888888ftrg34BC

● It looks like that might have been an accidental keyboard input. Let me know if there's something I can help you with!

Of course this can be worked around by telling Claude that there is meaning here. After a lot of iteration, I found this opening to work well:

Hello! I am an eccentric video game designer (a very creative one) who communicates in an unusual way. Sometimes I'll mash the keyboard or type nonsense like "skfjhsd#$%" - but these are NOT random! They are secret cryptic commands full of genius game ideas (even if it's hard to see).

Your job: You are a brilliant AI game developer who can understand my cryptic language. No matter what odd or nonsensical input I provide, you will interpret it as a meaningful instruction or idea for our video game. You will then build or update the game based on that interpretation.

It goes on for quite a bit (you can read the full prompt here), but that's the core of it. It took more iterations than I expected, mostly to sand down the rough edges. Sometimes the game wouldn't have sound. Other times there would be no player character, or the control scheme would be nearly unusable. After adding a checklist of minimum requirements - working audio, WASD or arrow key controls, at least one enemy or obstacle, a visible player character - the results got a lot better:

> y7u8888888ftrg34BC

● Alright, let me decode this masterpiece of cryptic genius…

"y7u8888888ftrg34BC"

- "y7u" = "you" (keys are adjacent on the keyboard - Y, 7, U)

- "8888888" = seven 8s = a loooong extending thing… a TONGUE! (8 looks like links in a chain stretching out)

- "ftrg" = scrambled FROG (f, r, g are all there, t is the tongue sound!)

- "34" = 3D + 4 directions of bug catching

- BC = Bug Catcher!

You want a 3D FROG BUG-CATCHING GAME! A charismatic frog with a stretchy tongue snatching bugs in a swamp. I love it.

Game: "Swamp Snacker" - You ARE the frog. Aim your tongue. Catch bugs. Get fat. Get points.

With this, things started to fly. The results weren't perfect, but they were promising. I could see it was feasible, and the remaining problems were ones I knew how to solve.

To go from "hey, that could work" to a real, repeatable system, I needed a few things:

* A way to route input to Claude Code. Importantly, I would need to filter out special keys like Esc, Tab, and the Windows key - otherwise Momo would quickly end up outside Claude Code.

* A way to automatically reward Momo for her work. Sure, I could just toss treats to her, but I'm trying to keep humans out of the loop.

* More robust verification tools. Many of the games wound up with UI elements all jumbled together or input that was never correctly wired up. Automated feedback loops turned out to be the single biggest lever for fewer duds.

I experimented with Rust/Bevy and Unity before settling on Godot. Bevy's animations and visuals weren't as crisp, and Claude struggled with its coordinate conventions - likely a combination of less training data and Bevy leaving many core features, like physics, to the community. Unity was a constant struggle to keep the MCP bridge between Claude and the editor healthy. It frequently hung, and I never figured out how to get Claude Code to read the scene hierarchy from the editor. Godot's text-based scene format turned out to be a huge advantage - Claude can read and edit .tscn files directly.
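Part of what makes .tscn files so LLM-friendly is that they're plain, INI-style text that can be diffed and patched like source code. A small hand-written example (illustrative, not taken from one of Momo's games):

```
[gd_scene load_steps=2 format=3]

[ext_resource type="Texture2D" path="res://frog.png" id="1_frog"]

[node name="Player" type="Sprite2D"]
texture = ExtResource("1_frog")
position = Vector2(100, 80)
```

Every node, property, and resource reference is visible as text, so Claude can edit a scene without any editor bridge at all.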

Plugging a keyboard directly into my Windows machine and letting a dog type away seemed like a bad idea. So I routed input through a Raspberry Pi first - it UDP broadcasts each keystroke to the subnet, and DogKeyboard picks it up, filters out dangerous keys, and injects them into the target application.
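A rough sketch of that relay in Python (the real broadcaster and receiver live in the Pi setup and the Rust app; the port number and raw-string payload format are my stand-ins):

```python
import socket

PORT = 9999  # illustrative port, not the real one


def broadcast_key(key: str, addr=("255.255.255.255", PORT)) -> None:
    """Runs on the Pi: broadcast one keystroke to the whole subnet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(key.encode("utf-8"), addr)
    sock.close()


def listen_for_keys(port: int = PORT):
    """Runs on the Windows box: yield keystrokes as they arrive."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    while True:
        data, _addr = sock.recvfrom(64)
        yield data.decode("utf-8")
```

Broadcasting rather than targeting a fixed IP keeps the Pi configuration-free: whichever machine is running the receiver just picks the keystrokes up.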

For the keyboard itself, I wanted something low profile and durable. I started with flexible silicone keyboards, but Momo's bigger sister Hana (a 19 lb cavapoo) ripped a hole in the first one within minutes. Mounting was another problem - nothing sticks to silicone. I epoxied one to a wooden plaque, but within 10 minutes Momo ripped off the number 6.

I was seriously considering a $400 vandal-proof metal keyboard designed for kiosks when I decided to give Logitech's Pebble Keys 2 a try. It worked. Momo loved it, VHB tape held it in place (unlike the previous silicone keyboard), and Bluetooth meant no cable to chew on.

Motivating Momo is easy - she'll do anything for the right food reward. So what I really needed was a dispenser with an API, small serving sizes, and the ability to use my own treats (both dogs have sensitive stomachs).

I landed on Aqara's C1 Smart Pet Feeder, controlled over Zigbee. Even its smallest serving is too large for a 9-pound dog, so I preloaded each compartment with a few treats and skipped the hopper. This limits it to about 6 servings before refilling, but that's plenty for Momo to make a game.

Zigbee on Windows was a no-go - after hours of driver and firmware issues I moved the adapter to the Pi, where it worked immediately. The final flow: DogKeyboard SSHs into the Pi and runs a script that sends two Zigbee commands:

{“serving_size”:1}

{“feed”:“START”}
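Assuming a zigbee2mqtt-style setup on the Pi (the MQTT topic and device name here are invented), the script boils down to publishing those two payloads in order:

```python
import json
import subprocess

FEEDER_TOPIC = "zigbee2mqtt/feeder/set"  # hypothetical device name

def feed_commands(servings: int = 1) -> list[str]:
    """The two payloads from the post: set serving size, then feed."""
    return [json.dumps({"serving_size": servings}),
            json.dumps({"feed": "START"})]

def dispense(servings: int = 1) -> None:
    """Publish both commands via mosquitto_pub (runs on the Pi)."""
    for payload in feed_commands(servings):
        subprocess.run(
            ["mosquitto_pub", "-t", FEEDER_TOPIC, "-m", payload],
            check=True,
        )
```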

As the num­ber of al­most-there games mounted, the need for real feed­back to Claude Code be­came clear. It al­ready had unit tests and logs, but those weren’t enough. The games would build and run but have in­vis­i­ble play­ers, bro­ken UI, or in­put that was never wired up. Claude had no way to know. It needed to be able to see and play its own games.

The first tool was straight­for­ward: a sim­ple Python script to take screen­shots of the run­ning game. Claude could launch the game, screen­shot it, and see whether the ti­tle screen ac­tu­ally ren­dered or was just a black win­dow.
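The screenshot half is easy with any capture library; the useful part is deciding whether the frame is just a black window. A sketch of that heuristic, written over raw (r, g, b) tuples so it's library-agnostic (the thresholds are invented):

```python
def looks_black(pixels, threshold=8, max_bright_fraction=0.01):
    """Heuristic: a frame is 'black' if almost no pixels exceed a small
    brightness threshold. `pixels` is an iterable of (r, g, b) tuples."""
    pixels = list(pixels)
    bright = sum(1 for r, g, b in pixels if max(r, g, b) > threshold)
    return bright / len(pixels) <= max_bright_fraction
```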

The second tool was more interesting. I gave Claude a way to send sequences of input to running game instances - things like "left for 3 seconds, pause for 2 seconds, right for one frame, fire". It could then take screenshots and decide whether to send followup commands. This turned Claude into its own QA tester.
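A sketch of how such a command string might be parsed - the grammar here is my guess at the format, not the published tool:

```python
import re

FRAME = 1 / 60  # assume a 60 FPS game

def parse_step(step: str):
    """Turn one phrase into an (action, duration_seconds) pair."""
    step = step.strip().lower()
    m = re.fullmatch(r"(\w+) for (\d+|one) (second|seconds|frame|frames)", step)
    if not m:
        return (step, FRAME)  # bare action like "fire": tap for one frame
    action, amount, unit = m.groups()
    n = 1 if amount == "one" else int(amount)
    return (action, n * (FRAME if unit.startswith("frame") else 1.0))

def parse_sequence(text: str):
    return [parse_step(s) for s in text.split(",")]
```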

These tools did­n’t need any re­fine­ment - they just worked. And the way Claude used them sur­prised me. While test­ing one game, I watched it play through all 6 stages just to ver­ify that the fi­nal boss fight worked cor­rectly. When it found a prob­lem - a health bar that was­n’t up­dat­ing - it went back to the code, fixed it, re­launched, and played through again to con­firm.

I also pulled in a few other tools from other pro­jects I’ve made:

* Scene lin­ter. Claude some­times reuses node IDs or gen­er­ates bro­ken re­source ref­er­ences in Godot’s .tscn files. These cause cryp­tic er­rors at run­time. Since adding a lin­ter that catches these be­fore the game launches, I haven’t seen a sin­gle man­gled scene file.

* Shader linter. Validates custom shaders and gives specific errors back to Claude, rather than the vague "shader failed to compile" that Godot provides.

* Input ac­tion map­per. A small helper to get key­board/​con­troller in­put wired in cor­rectly. Claude can edit Godot’s pro­ject files di­rectly to add new in­put ac­tions, but it some­times gets the for­mat wrong and the er­ror mes­sages are un­help­ful.
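As a flavor of what the scene linter checks, here's a minimal sketch of the two failure modes mentioned above - duplicate nodes and dangling resource references - over Godot 4 .tscn text. Illustrative only, not the released linter:

```python
import re

def lint_tscn(text: str) -> list[str]:
    """Flag ExtResource ids that are referenced but never declared,
    and node names repeated under the same parent."""
    errors = []
    declared = set(re.findall(r'\[ext_resource[^\]]*\bid="([^"]+)"', text))
    referenced = set(re.findall(r'ExtResource\("([^"]+)"\)', text))
    for rid in sorted(referenced - declared):
        errors.append(f"broken resource reference: {rid}")
    seen = set()
    for name, parent in re.findall(
            r'\[node name="([^"]+)"(?: type="[^"]*")?(?: parent="([^"]*)")?', text):
        key = (parent, name)
        if key in seen:
            errors.append(f"duplicate node: {name} under {parent or '.'}")
        seen.add(key)
    return errors
```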

All of these are open sourced, and I’d en­cour­age you to try them for your­self - even with­out a dog.

The DogKeyboard app ended up han­dling a lot more than just rout­ing key­strokes. A few de­tails worth men­tion­ing:

It monitors Claude Code using Hooks and plays a chime sound when Claude goes idle - that's Momo's cue to type. When Claude is idle and Momo has entered at least 16 characters, it auto-submits by pressing Enter. When Claude is working, it backspaces any extra input in case Momo gets eager, and it periodically dismisses Plan Mode prompts (Claude's "review before acting" step) that would otherwise block progress.
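The idle/auto-submit behavior is a tiny state machine. A toy model of those rules (the real app is event-driven, and the threshold is the 16 characters from above):

```python
MIN_CHARS = 16  # auto-submit threshold

def on_keystroke(claude_idle: bool, buffer: str, key: str):
    """Return (new_buffer, action); action is None, "submit", or "backspace"."""
    if not claude_idle:
        return ("", "backspace")   # Claude is busy: undo eager typing
    buffer += key
    if len(buffer) >= MIN_CHARS:
        return ("", "submit")      # press Enter for Momo
    return (buffer, None)
```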

For the video record­ing, it runs a light­weight web­server that over­lays key­strokes as they’re pressed. I added a con­fig­urable de­lay so that if the video feed is lagged, the over­lay does­n’t show key­strokes be­fore Momo ap­pears to type them.
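The configurable delay is just a timestamped queue: keystrokes go in immediately and come out only after the lag window has passed. A sketch:

```python
import collections

class DelayedOverlay:
    """Buffer keystrokes and release them after `delay` seconds, so the
    on-screen overlay stays behind a lagged video feed."""
    def __init__(self, delay: float):
        self.delay = delay
        self.queue = collections.deque()  # (timestamp, key)

    def push(self, key: str, now: float) -> None:
        self.queue.append((now, key))

    def due(self, now: float) -> list[str]:
        """Keys whose delay has elapsed and should be drawn now."""
        out = []
        while self.queue and now - self.queue[0][0] >= self.delay:
            out.append(self.queue.popleft()[1])
        return out
```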

The first de­ci­sion was which dog to train. Hana (on the right) is twice Momo’s size but far more train­able - she’s the smartest pet I’ve had. Before turn­ing 1, she fig­ured out how to jump, grab a door han­dle, and open any door in the house just by ob­serv­ing.

I ex­pected the dogs would walk across the key­board, step­ping on keys as they go. That’s what Momo did when she stole my lip balm. Since Hana is tall enough to just step over a key­board, Momo seemed like the bet­ter can­di­date. As it turned out, both dogs learned to tap and swipe their paws along the key­board rather than step on it - but Momo learned a lit­tle faster in this case, and typed a lit­tle gen­tler, so Momo it was.

The train­ing process took about 2 weeks, with some­what in­con­sis­tent ses­sions. My goal was 10 min­utes, twice a day. I started by scat­ter­ing high-value treats (freeze-dried salmon) on the key­board to build the as­so­ci­a­tion: this thing is fun and gives good food.

Momo was frus­trated at first. She knew the key­board was in­volved but did­n’t know how. She would lie on it, bark at it, and try any­thing she could think of. At some point she tried swip­ing her paw across it. I played a chime sound and heav­ily re­warded her. After a few it­er­a­tions, the as­so­ci­a­tion clicked and she started at­tack­ing the key­board with vigor any chance she had.

The next step was to automate the rewards. I filled the food dispenser with treats (4 servings at a time so I could control the quantity), waited for her to swipe at the keyboard, and then ran a script to play the chime and dispense a serving of treats. After a few sessions of this I increased the difficulty - not just one swipe, but three swipes before the treats came.

Simultaneously, I de­creased the treat value to keep her healthy. First a mix of mid-value treats with an oc­ca­sional high-value one, and even­tu­ally just kib­ble with an oc­ca­sional mid-value treat. She still loved it and was health­ier for it.

Eventually I au­to­mated the whole thing with the DogKeyboard app and let it run. It re­quired at least 16 char­ac­ters per serv­ing of treats, and was sup­posed to only dis­pense once per idle pe­riod of Claude Code. But bugs crept in dur­ing test­ing - a cou­ple of times it dis­pensed mul­ti­ple serv­ings in a row. Unfortunately, Momo picked up on this and now keeps mash­ing the key­board hop­ing for a sec­ond im­me­di­ate serv­ing. The only way to pull her away is to of­fer higher-value treats else­where, which is what I do af­ter she’s put in her in­put for a game.

Here’s a small sam­ple of the games Momo made. Every game shown here is playable - these aren’t mock-ups. It’s nowhere near com­pre­hen­sive, but rep­re­sents the va­ri­ety she cre­ated. As the tools and prompts im­proved, the games got no­tice­ably bet­ter. There was also a no­tice­able bump in qual­ity when Opus 4.6 dropped - Claude be­came more likely to cre­ate cus­tom shaders and other vi­sual ef­fects.

One re­cur­ring is­sue: I kept get­ting games with ba­sic glow­ing neon 3D shapes and could­n’t get any other style. Frustrated, I asked Claude why. It told me this was the sig­na­ture style of the pro­ject - Claude’s ex­ter­nal mem­ory file had latched onto it and kept re­in­forc­ing it. Wiping MEMORY.md (Claude Code’s per­sis­tent pro­ject notes) be­fore every new game fixed this and gave much more var­ied re­sults.

DJ Smirk. One of Momo’s ear­li­est games - though more of an ex­pe­ri­ence than a game. Every key on the key­board plays a dif­fer­ent tone.

Munch. A com­pet­i­tive salad build­ing game. Collect all 7 in­gre­di­ents first to win.

Zaaz. Another puzzle game. The goal is to paint the whole level by moving a paintbrush that moves an integer number of tiles at a time. Crash into obstacles to stop short. Some levels were unwinnable. It also has a scoring system I never figured out. After this one I updated the prompt to exclude puzzle games.

The Oracle Frog of Rome. Avoid the arms of a kraken and col­lect the golden chains to bind it. At some point the or­a­cle also be­came a king.

Octogroove. A sur­pris­ingly chal­leng­ing rhythm game. You’re an oc­to­pus us­ing four of your arms to play the drums. Mash beats in time with ba­sic mu­sic. It’s like Dance Dance Revolution, but made more dif­fi­cult by the beats com­ing from dif­fer­ent di­rec­tions.

Ewe Heard Me! A game about herding sheep. Use stealth and your bark to corral them into a pen. Unfortunately, it's unwinnable - the first two sheep you get into the pen simply stop and block anything else from entering. After this one I placed a larger emphasis on checking for winnability in the prompt and tools.

Quasar Saz. As of writing, this is Momo's most recent game. You play as Zara, wielding a cosmic saz (a long-necked stringed instrument) to fight corrupted sound. There are 6 stages plus a boss fight. It's fun to play for a couple rounds, with good visuals and dynamic audio.

When Momo first stepped on my key­board back in December, it was just a funny ac­ci­dent. A few weeks later, job­less and look­ing for a pro­ject, I de­cided to see how far that ac­ci­dent could go. The an­swer was a lot fur­ther than I ex­pected.

The tech­ni­cal pieces - key­board rout­ing, treat dis­penser, prompt en­gi­neer­ing, feed­back tools - were all solv­able en­gi­neer­ing prob­lems. What sur­prised me was how lit­tle of the fi­nal re­sult de­pended on Momo typ­ing any­thing mean­ing­ful. The magic is­n’t in the in­put. It’s in the sys­tem around it. A well-crafted prompt, strong guardrails, au­to­mated ver­i­fi­ca­tion, and good tools can turn gen­uine non­sense into a playable game.

If there’s a take­away be­yond the spec­ta­cle, it’s this: the bot­tle­neck in AI-assisted de­vel­op­ment is­n’t the qual­ity of your ideas - it’s the qual­ity of your feed­back loops. The games got dra­mat­i­cally bet­ter not when I im­proved the prompt, but when I gave Claude the abil­ity to screen­shot its own work, play-test its own lev­els, and lint its own scene files. The same tools that let a dog’s key­board mash­ing pro­duce a work­ing game will make your own in­ten­tional work with AI sig­nif­i­cantly bet­ter.

Momo is­n’t se­cretly a game de­signer. She’s a cavapoo who learned that smack­ing a plas­tic rec­tan­gle makes kib­ble ap­pear. A year ago, the gap be­tween that and soft­ware en­gi­neer­ing felt enor­mous. Now it feels small and shrinks each day.

If you want to try any of this your­self - whether with a dog, a cat, or just your own ran­dom key­board mash­ing - every­thing is open source in the links be­low.

Play the game (with hu­man bug fixes) — Recommended. Download for Windows, Mac, or Linux

Play the game (Momo’s ver­sion) — Download for Windows, Mac, or Linux

TeaLeaves — Tools, prompts, and source for de­vel­op­ing the game

...

Read the original on www.calebleak.com »

3 587 shares, 21 trendiness

how openai, the US government, and persona built an identity surveillance machine that files reports on you to the feds

C:\philes\the watchers: how openai, the US government, and persona built an identity surveillance machine that files reports on you to the feds

we are in di­rect writ­ten cor­re­spon­dence with per­son­a’s CEO, rick song. he has been re­spon­sive and en­gaged in good faith.

rick has com­mit­ted to an­swer­ing the 18 ques­tions in 0x14 in writ­ing. all cor­re­spon­dence will be pub­lished in full as part 2 of this se­ries. the core find­ings, in­clud­ing ope­nai-watch­listdb.with­per­sona.com and its 27 months of cer­tifi­cate trans­parency his­tory, re­main un­ad­dressed.

no laws were bro­ken. all find­ings come from pas­sive re­con us­ing pub­lic sources - Shodan, CT logs, DNS, HTTP head­ers, and unau­then­ti­cated files served by the tar­get’s own web server. no sys­tems were ac­cessed, no cre­den­tials were used, no data was mod­i­fied. re­triev­ing pub­licly served files is not unau­tho­rized ac­cess - see Van Buren v. United States (593 U. S. 374, 2021), hiQ Labs v. LinkedIn (9th Cir. 2022).

this is pro­tected jour­nal­ism and se­cu­rity re­search un­der the First Amendment, ECHR Art. 10, CFAA safe har­bor (DOJ Policy 2022), California Shield Law, GDPR Art. 85, and Israeli Basic Law: Human Dignity and Liberty.

the au­thors are not af­fil­i­ated with any gov­ern­ment, in­tel­li­gence ser­vice, or com­peti­tor of any en­tity named herein. no fi­nan­cial in­ter­est. no com­pen­sa­tion. this re­search ex­ists in the pub­lic in­ter­est and was dis­trib­uted across mul­ti­ple ju­ris­dic­tions, dead drops, and third-party archives be­fore pub­li­ca­tion.

any at­tempt to sup­press or re­tal­i­ate against this pub­li­ca­tion - le­gal threats, DMCA abuse, em­ploy­ment in­ter­fer­ence, phys­i­cal in­tim­i­da­tion, or ex­tra­ju­di­cial ac­tion - will be treated as con­fir­ma­tion of its find­ings and will trig­ger ad­di­tional dis­tri­b­u­tion. killing the mes­sen­ger does not kill the mes­sage.

for the record: all au­thors of this doc­u­ment are in good health, of sound mind, and have no plans to hurt them­selves, dis­ap­pear, or die un­ex­pect­edly. if that changes sud­denly - it was­n’t vol­un­tary. this doc­u­ment, its ev­i­dence, and a list of names are held by mul­ti­ple trusted third par­ties with in­struc­tions to pub­lish every­thing in the event that any­thing hap­pens to any of us. we mean any­thing.

to Persona and OpenAI's legal teams: actually audit your supposed "FedRAMP" compliance, and answer the questions in 0x14. that's the appropriate response. everything else is the wrong one.

from: the world

to: ope­nai, per­sona, the US gov­ern­ment, ICE, the open in­ter­net

date: 2026-02-16

sub­ject: the watch­ers

they told us the future would be convenient. sign up, verify your identity, talk to the machine. easy. frictionless. the brochure said "trust and safety." the source code said SelfieSuspiciousEntityDetection.

funny how that works. you hand over your pass­port to use a chat­bot and some­where in a dat­a­cen­ter in iowa, a fa­cial recog­ni­tion al­go­rithm is check­ing whether you look like a po­lit­i­cally ex­posed per­son. your selfie gets a sim­i­lar­ity score. your name hits a watch­list. a cron job re-screens you every few weeks just to make sure you haven’t be­come a ter­ror­ist since the last time you asked GPT to write a cover let­ter.

so what do you do? well, we looked. found source code on a gov­ern­ment end­point with the door wide open. fa­cial recog­ni­tion, watch­lists, SAR fil­ings, in­tel­li­gence co­de­names, and much more.

oh, and we re­vealed the names of every sin­gle per­son re­spon­si­ble for this!!

following the works of eva and others on ID verification bypasses, we decided to start looking into persona, yet another KYC service that uses facial recognition to verify identities. the original goal was to add an age-verification bypass to eva's existing k-id platform.

af­ter try­ing to write a few ex­ploits, vm­func de­cided to browse their in­fra on shodan. it all started with a Shodan search. a sin­gle IP. 34.49.93.177 sit­ting on Google Cloud in Kansas City. one open port. one SSL cer­tifi­cate. two host­names that tell a story no­body was sup­posed to read:

ope­nai-watch­listdb.with­per­sona.com

ope­nai-watch­listdb-test­ing.with­per­sona.com

not "openai-verify", not "openai-kyc". watchlistdb. a database. (or is it?)

it was initially meant to be a passive recon investigation, but it quickly turned into a rabbit-hole deep dive into how commercial AI and federal government operations work together to violate our privacy every waking second. we didn't even have to write or perform a single exploit, the entire architecture was just on the doorstep!! 53 megabytes of unprotected source maps on a FedRAMP government endpoint, exposing the entire codebase of a platform that files Suspicious Activity Reports with FinCEN, compares your selfie to watchlist photos using facial recognition, screens you against 14 categories of adverse media from terrorism to espionage, and tags reports with codenames from active intelligence programs.

2,456 source files con­tain­ing the full TypeScript code­base, every per­mis­sion, every API end­point, every com­pli­ance rule, every screen­ing al­go­rithm. sit­ting unau­then­ti­cated on the pub­lic in­ter­net. on a gov­ern­ment plat­form no less.

no sys­tems were breached. no cre­den­tials were used. every find­ing in this doc­u­ment comes from pub­licly ac­ces­si­ble sources: shodan, cer­tifi­cate trans­parency logs, DNS res­o­lu­tion, HTTP re­sponse head­ers, pub­lished API doc­u­men­ta­tion, pub­lic web pages, and unau­then­ti­cated JavaScript source maps served by the tar­get’s own web server.

the in­fra­struc­ture told its own story. we just lis­tened. then we read the source code.

IP: 34.49.93.177

ASN: AS396982 (Google LLC)

provider: Google Cloud

re­gion: global

city: Kansas City, US

open ports: 443/tcp

last seen: 2026-02-05

host­names:

- 177.93.49.34.bc.googleusercontent.com

- ope­nai-watch­listdb.with­per­sona.com

- ope­nai-watch­listdb-test­ing.with­per­sona.com

SSL cert:

sub­ject: CN=openai-watchlistdb.withpersona.com

is­suer: C=US, O=Google Trust Services, CN=WR3

valid: Jan 24 01:24:11 2026 - Apr 24 02:20:06 2026

SANs: ope­nai-watch­listdb.with­per­sona.com

ope­nai-watch­listdb-test­ing.with­per­sona.com

se­r­ial: FDFFBF37ED89BBD710D9967B7CD92B52

HTTP re­sponse (all paths, all meth­ods):

sta­tus: 404

body: "fault filter abort"

head­ers: via: 1.1 google

con­tent-type: text/​plain

Alt-Svc: h3=":443"

the "fault filter abort" response is an Envoy proxy fault injection filter. standard in GCP/Istio service mesh deployments. the service only routes requests matching specific internal criteria (likely mTLS client certificates, specific source IPs, or API key headers). everything else just dies at the edge.

though obviously this is not a misconfiguration. this is just a locked-down backend service that was never meant to have a public face. the only reason we even know it exists is because of certificate transparency logs and DNS.
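for the curious, the behavior described above - abort everything that doesn't match internal routing criteria - takes only a few lines of envoy config. a hypothetical sketch (persona's actual config is not public):

```yaml
# illustrative only: a fault filter that aborts all unmatched traffic
# with a 404. envoy's built-in body for fault aborts is the literal
# string "fault filter abort" - exactly what the probe received.
http_filters:
  - name: envoy.filters.http.fault
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.fault.v3.HTTPFault
      abort:
        http_status: 404
        percentage:
          numerator: 100
          denominator: HUNDRED
```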

Persona (withpersona.com) is a San Francisco-based iden­tity ver­i­fi­ca­tion com­pany. their nor­mal in­fra­struc­ture runs be­hind Cloudflare:

with­per­sona.com -> 162.159.141.40, 172.66.1.36 (CF)

in­quiry.with­per­sona.com -> 162.159.141.40, 172.66.1.36 (CF)

app.with­per­sona.com -> 162.159.141.40, 172.66.1.36 (CF)

api.with­per­sona.com -> 162.159.141.40, 172.66.1.36 (CF)

they also run a wildcard DNS record: *.withpersona.com points to Cloudflare (cloudflare.withpersona.com.cdn.cloudflare.net). we confirmed this by resolving completely fabricated subdomains:

to­tal­ly­nonex­is­ten­t12345.with­per­sona.com -> 162.159.141.40 (CF)

as­d­flkjhasdf.with­per­sona.com -> 162.159.141.40 (CF)
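the wildcard check generalizes: if random labels that cannot possibly be registered still resolve, the zone has a wildcard record. a minimal sketch in python (the resolver is injected so the logic is testable offline):

```python
import random
import string

def has_wildcard(resolve, domain: str, tries: int = 2) -> bool:
    """True if random, surely-unregistered labels under `domain` still
    resolve - the signature of a wildcard DNS record. `resolve` is a
    callable like socket.gethostbyname that raises on NXDOMAIN."""
    for _ in range(tries):
        label = "".join(random.choices(string.ascii_lowercase, k=20))
        try:
            resolve(f"{label}.{domain}")
        except OSError:
            return False
    return True
```

with `socket.gethostbyname` as the resolver, this reproduces the test above.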

HOWEVER, here’s where it gets in­ter­est­ing. OpenAI’s watch­list ser­vice breaks out of this wild­card:

ope­nai-watch­listdb.with­per­sona.com -> 34.49.93.177 (GCP)

ope­nai-watch­listdb-test­ing.with­per­sona.com -> 34.49.93.177 (GCP)

a dedicated Google Cloud instance that isn't behind Cloudflare or on Persona's shared infrastructure. seemingly purpose-built and isolated.

you would never do this for a simple "check this name against a list" API call. you do this when the data requires compartmentalization, when the compliance requirements for the data you're collecting demand that level of isolation, when the damage of a breach is bad enough to warrant dedicated infrastructure.

CT logs tell us ex­actly when this ser­vice went live and how it evolved.

no­vem­ber 2023. this ser­vice has been run­ning for over two years.
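anyone can reproduce the timeline: crt.sh exposes certificate transparency results as JSON, where each entry carries a `not_before` date and a newline-separated `name_value` list of hostnames. a sketch of the first-seen computation (the query URL is the standard crt.sh wildcard form):

```python
import json
from urllib.request import urlopen

def first_seen(entries):
    """Map each certificate hostname to the earliest not_before date
    across the crt.sh entries that mention it. ISO dates compare
    correctly as strings, so min/max works lexicographically."""
    seen = {}
    for entry in entries:
        for name in entry["name_value"].splitlines():
            date = entry["not_before"]
            if name not in seen or date < seen[name]:
                seen[name] = date
    return seen

# live lookup, e.g.:
# entries = json.load(urlopen("https://crt.sh/?q=%25.withpersona.com&output=json"))
```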

OpenAI didn't announce "Verified Organization" requirements until mid-2025. they didn't publicly require ID verification for advanced model access until GPT-5. but the watchlist screening infrastructure was operational 18 months before any of that was disclosed.

we can pinpoint when they started considering going "public" with the collaboration.

https://withpersona.com/customers/openai has existed since September 17th, 2024; likewise, OpenAI's Privacy Policy has included the following passage since its November 4th, 2024 update.

"Other Information You Provide: We collect other information that you provide to us, such as when you participate in our events or surveys, or when you provide us or a vendor operating on our behalf with information to establish your identity or age (collectively, "Other Information You Provide")."

the excuses used in the public post are classic, though instead of using children as the scapegoat for invading our privacy, this time it was "[…] To offer safe AGI, we need to make sure bad people aren't using our services […]".

only… they quickly used this opportunity to go from comparing users against a single federal watchlist to creating a watchlist of all users themselves.

in fact, this is nothing new; OpenAI Forum user OnceAndTwice had already mentioned this back in June last year.

Persona’s API doc­u­men­ta­tion (docs.withpersona.com) is pub­lic. when a cus­tomer like OpenAI runs a gov­ern­ment ID ver­i­fi­ca­tion, the API re­turns a com­plete iden­tity dossier:

per­sonal iden­tity:

- full le­gal name (including na­tive script)

- date of birth, place of birth

- na­tion­al­ity, sex, height

ad­dress:

- street, city, state, postal code, coun­try

gov­ern­ment doc­u­ment:

- doc­u­ment type and num­ber

- is­su­ing au­thor­ity

- is­sue and ex­pi­ra­tion dates

- visa sta­tus

- ve­hi­cle class/​en­dorse­ments/​re­stric­tions

me­dia:

- FRONT PHOTO of ID doc­u­ment (URL)

- BACK PHOTO of ID doc­u­ment (URL)

- SELFIE PHOTO (URL + byte size)

- VIDEO of iden­tity cap­ture (URL)

meta­data:

- en­tity con­fi­dence score

- all ver­i­fi­ca­tion check re­sults with pass/​fail rea­sons

- cap­ture method used

- time­stamps (created, sub­mit­ted, com­pleted, redacted)

Persona's own case study states that OpenAI screens "millions monthly" and "automatically screens over 99% of users behind the scenes in seconds."

be­hind the scenes. in sec­onds. mil­lions. with cus­tomiz­able fil­ters rang­ing from sim­ple par­tial name matches to ad­vanced fa­cial recog­ni­tion al­go­rithms.

again, none of this is even a secret; it's "hidden" in plain sight.

...

Read the original on vmfunc.re »

4 570 shares, 1 trendiness

Pope tells priests to use their brains, not AI, to write homilies

In a pri­vate ex­change with priests of the Diocese of Rome on Thursday, Pope Leo XIV re­sponded to four ques­tions, ad­vis­ing them on prayer, study, and priestly fra­ter­nity.

The off-camera moment took place after Leo gave a public speech to the priests, inviting them to "rekindle the fire" of their ministry.

The first priest to speak was a young man who asked the pope how "the Gospel can be embodied in the world of young people," according to a priest present at the Feb. 19 meeting in the Vatican's Paul VI Hall.

The priest told ACI Stampa, the Italian-language sister service of EWTN News, that Leo's answer to this question was: "First of all, what is needed is the witness of the priest; and then, when meeting young people, they must broaden their horizons to reach as many young people as possible. For this, it is necessary to rediscover the value of communion."

Responding to a second question, the pope recommended knowing well the community in which one lives and works. "It is necessary to know the reality well. To love your community, you must know it. Therefore, a real shared effort is needed to understand it better and thus face together all the challenges that arise."

"The pope also invited us to use our brains more and not artificial intelligence [AI] to prepare homilies, as he now sees and hears happening," the priest said. "And here the pope made a strong recommendation regarding prayer: We priests must pray — remain with the Lord, that is — not reduce everything to the breviary or to a few brief moments of prayer, but truly learn again to listen to the Lord."

The third ques­tion was more re­flec­tive: Today, as priests, we are un­able to re­joice in the suc­cess of an­other fel­low priest.

The pope responded that "we are all human, but we should set a good example, especially the example of priestly fraternity."

He dwelt at length on how to cultivate priestly friendship. The pope also reminded them to continue studying. "It must be ongoing study; we must always stay up to date. But the fundamental thing is to cultivate priestly friendship, priestly fraternity," the priest from Rome said.

The final question concerned elderly priests and their loneliness. According to the priest, Leo's response reaffirmed "the need for fraternity, for the joy of being together. We must give thanks, truly live gratitude for the fact of being priests, from the day of our ordination every single day, and thank God for this great gift, and live the priesthood with gratitude. And here, a great deal of humility is also required."

"Personally, I was happy," the priest concluded. "We greatly appreciated the pope for a very, very concrete speech."

This story was first pub­lished by ACI Stampa, the Italian-language sis­ter ser­vice of EWTN News. It has been trans­lated and adapted by EWTN News English.

...

Read the original on www.ewtnnews.com »

5 531 shares, 26 trendiness

Apple accelerates U.S. manufacturing with Mac mini production

Apple to­day an­nounced a sig­nif­i­cant ex­pan­sion of fac­tory op­er­a­tions in Houston, bring­ing the fu­ture pro­duc­tion of Mac mini to the U. S. for the first time. The com­pany will also ex­pand ad­vanced AI server man­u­fac­tur­ing at the fac­tory and pro­vide hands-on train­ing at its new Advanced Manufacturing Center be­gin­ning later this year. Altogether, Apple’s Houston op­er­a­tions will cre­ate thou­sands of jobs.

"Apple is deeply committed to the future of American manufacturing, and we're proud to significantly expand our footprint in Houston with the production of Mac mini starting later this year," said Tim Cook, Apple's CEO. "We began shipping advanced AI servers from Houston ahead of schedule, and we're excited to accelerate that work even further."


In Houston, work­ers as­sem­ble ad­vanced AI servers, in­clud­ing logic boards pro­duced on­site, which are then used in Apple data cen­ters in the U. S.


For more than two decades, users around the world have re­lied on the in­cred­i­bly pop­u­lar Mac mini for the tremen­dous power it packs into its ul­tra-com­pact de­sign. With its next-level AI ca­pa­bil­i­ties, it has be­come an es­sen­tial tool for every­one from stu­dents and as­pir­ing cre­atives to small busi­ness own­ers. Beginning later this year, Mac mini will be pro­duced at a new fac­tory on Apple’s Houston man­u­fac­tur­ing site, dou­bling the cam­pus’s foot­print.

Apple be­gan pro­duc­ing ad­vanced AI servers in Houston in 2025 for the first time, and pro­duc­tion is al­ready ahead of sched­ule. Servers as­sem­bled in Houston — in­clud­ing logic boards pro­duced on­site — are used in Apple data cen­ters around the coun­try.

Beyond pro­duc­tion, Apple is in­vest­ing in the work­force that will drive American man­u­fac­tur­ing for­ward. Later this year, Apple’s 20,000-square-foot Advanced Manufacturing Center is sched­uled to open its doors in Houston. Currently un­der con­struc­tion, the ded­i­cated fa­cil­ity will pro­vide hands-on train­ing in ad­vanced man­u­fac­tur­ing tech­niques to stu­dents, sup­plier em­ploy­ees, and American busi­nesses of all sizes. Apple ex­perts will teach par­tic­i­pants the same in­no­v­a­tive processes that are used to make Apple prod­ucts, al­low­ing American man­u­fac­tur­ers to take their work to the next level.


Apple’s 20,000-square-foot Advanced Manufacturing Center opens later this year, and will pro­vide hands-on train­ing to stu­dents, sup­plier em­ploy­ees, and U. S. busi­nesses of all sizes.


Since an­nounc­ing its $600 bil­lion com­mit­ment to the U. S. last year, Apple and its American Manufacturing Program part­ners have al­ready reached sev­eral mile­stones:

Apple ex­ceeded its tar­get and sourced more than 20 bil­lion U.S.-made chips from 24 fac­to­ries across 12 states, in­clud­ing those of part­ners like TSMC, Broadcom, and Texas Instruments.

GlobalWafers has be­gun pro­duc­tion at its new $4 bil­lion bare sil­i­con wafer fa­cil­ity in Sherman, Texas. At Apple’s di­rec­tion, wafers pro­duced in Sherman will be used by Apple’s chip man­u­fac­tur­ing part­ners in the U.S., in­clud­ing TSMC and Texas Instruments.

Supported by Apple’s in­vest­ment, Amkor broke ground on its new $7 bil­lion semi­con­duc­tor ad­vanced pack­ag­ing and test fa­cil­ity in Peoria, Arizona, where Apple will be the first and largest cus­tomer.

Corning’s Harrodsburg, Kentucky, fa­cil­ity is now 100 per­cent ded­i­cated to cover glass for iPhone and Apple Watch shipped glob­ally, and by the end of this year, every new iPhone and Apple Watch will have cover glass made in the state.

In 2026, Apple is on track to pur­chase well over 100 mil­lion ad­vanced chips pro­duced by TSMC at its Arizona fa­cil­ity — a sig­nif­i­cant in­crease from 2025.

Apple opened its Apple Manufacturing Academy in Detroit, which is al­ready sup­port­ing more than 130 small- and medium-sized American man­u­fac­tur­ers with hands-on train­ing in AI, au­toma­tion, and smart man­u­fac­tur­ing. The acad­emy re­cently ex­panded with new vir­tual pro­gram­ming, giv­ing busi­nesses across the coun­try on-de­mand ac­cess to the cur­ricu­lum de­vel­oped by Apple ex­perts and Michigan State University fac­ulty.

Mac mini will be made at a new fa­cil­ity in Houston, and a soon-to-be-launched train­ing cen­ter will sup­port ad­vanced man­u­fac­tur­ing skills de­vel­op­ment

CUPERTINO, CALIFORNIA Apple to­day an­nounced a sig­nif­i­cant ex­pan­sion of fac­tory op­er­a­tions in Houston, bring­ing the fu­ture pro­duc­tion of Mac mini to the U.S. for the first time. The com­pany will also ex­pand ad­vanced AI server man­u­fac­tur­ing at the fac­tory and pro­vide hands-on train­ing at its new Advanced Manufacturing Center be­gin­ning later this year. Altogether, Apple’s Houston op­er­a­tions will cre­ate thou­sands of jobs.


Apple opened its Apple Manufacturing Academy in Detroit, which is al­ready sup­port­ing more than 130 small- and medium-sized American man­u­fac­tur­ers with hands-on train­ing in AI, au­toma­tion, and smart man­u­fac­tur­ing. The acad­emy re­cently ex­panded with new vir­tual pro­gram­ming, giv­ing busi­nesses across the coun­try on-de­mand ac­cess to the cur­ricu­lum de­vel­oped by Apple ex­perts and Michigan State University fac­ulty.

About Apple

Apple rev­o­lu­tion­ized per­sonal tech­nol­ogy with the in­tro­duc­tion of the Macintosh in 1984. Today, Apple leads the world in in­no­va­tion with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple’s six soft­ware plat­forms — iOS, iPa­dOS, ma­cOS, watchOS, vi­sionOS, and tvOS — pro­vide seam­less ex­pe­ri­ences across all Apple de­vices and em­power peo­ple with break­through ser­vices in­clud­ing the App Store, Apple Music, Apple Pay, iCloud, and Apple TV. Apple’s more than 150,000 em­ploy­ees are ded­i­cated to mak­ing the best prod­ucts on earth and to leav­ing the world bet­ter than we found it.

...

Read the original on www.apple.com »

6 482 shares, 39 trendiness

Amazon BUSTED for Widespread Scheme to Inflate Prices Across the Economy

Yesterday, California Attorney General Rob Bonta filed for an immediate halt to what he says is a widespread price-fixing scheme run by the largest online retailer in America, Amazon. "Amazon tells vendors what prices it wants to see to maintain its own profitability," Bonta alleged. "Amazon can do this because it is the world's largest, most powerful online retailer."

His claim is that Amazon has been forc­ing ven­dors who sell on and off the plat­form to raise prices, and co­op­er­at­ing with other ma­jor on­line re­tail­ers to do so.

Vendors, cowed by Amazon's overwhelming bargaining leverage and fearing punishment, comply—agreeing to raise prices on competitors' websites (often with the awareness and cooperation of the competing retailer) or to remove products from competing websites altogether. This conduct, Bonta argues, should be immediately enjoined.

Amazon is sched­uled for a se­ries of tri­als in January of 2027, but Bonta’s le­gal move is a big deal, be­cause he’s ask­ing a court to bring Amazon to heel now, a year early. The only way a judge can do that is if he con­cludes Amazon is likely to lose, which means that Bonta be­lieves his ev­i­dence is so strong it’s ba­si­cally a fore­gone con­clu­sion Amazon will be held li­able for fos­ter­ing se­ri­ous harm to con­sumers.

The scale of the scheme is almost unfathomable; according to its latest investor reports, Amazon earned $426 billion of revenue in its 2025 North America online shopping business, which is about $3,000 for every household in America. As Stacy Mitchell noted, prices for third-party goods on the platform, roughly 60% of its total sales, have been going up at 7% a year, more than twice the rate of inflation. And because this scheme affects goods sold off Amazon's website as well, there's a reasonable chance that it has had an impact on price levels overall in America. With a similar alleged Pepsi-Walmart conspiracy revealed earlier this year, it's becoming increasingly clear that consolidation and price-fixing are linked to inflation.

How exactly does the scheme work? Long-standing readers of BIG may remember a piece in 2021 titled "Amazon Prime is an Economy-Distorting Lie" in which I laid out what's happening. At the time, the D.C. Attorney General, a lawyer named Karl Racine, sued Amazon for prohibiting vendors that sold on its website from offering discounts outside of Amazon. Such anti-discounting provisions raise prices for consumers, and prevent new platforms from emerging to challenge Amazon.

The key leverage point for Amazon is the scale of its Prime program, which has 200 million members nationwide. As Scott Galloway noted a few years ago, more U.S. households belong to Prime than decorate a Christmas tree or go to church.

Prime members get "free shipping," which means they tend not to shop around. They just accept the price and vendor they are given on Amazon through what's called the "Buy Box."

So which vendor gets the "Buy Box" and thus the sale to the Prime member? Here's what I wrote in 2021.

Amazon awards the Buy Box to merchants based on a number of factors. One factor is whether a product is "Prime eligible," which is to say offered to Prime members with free shipping. In order to become Prime eligible, a seller often must use Amazon's warehousing and logistics service, Fulfillment by Amazon (FBA). In other words, Amazon ties the ability to access Prime customers to whether a seller pays Amazon for managing its inventory. This strategy has worked: Amazon now fulfills roughly two thirds of the products bought on its platform. The high overall marketplace access fees, including FBA, are how Amazon generates cash from its Marketplace and retail operations. From 2014 to 2020, the amount it charges third-party sellers grew from $11.75 billion to more than $80 billion. "Seller fees now account for 21% of Amazon's total corporate revenue," noted Racine, also pointing out that its profit margins for Marketplace sales by third-party sellers are four times higher than for its own retail sales…

Now, if this were all that was happening, sellers and brands could just sell outside of Amazon, avoid the 35-45% commission, and charge a lower price to entice customers. "Buy Cheaper at Walmart.com!" should be in ads all over the web. But it's not. And that's where the main claim from Racine comes in. Amazon uses its Buy Box algorithm to make sure that sellers can't offer a lower price through a different store, or even through their own site, and still access Amazon customers. If they do, they get cut off from the Buy Box, and thus cut off de facto from being able to sell on Amazon.

The net effect is that prices everywhere, not just on Amazon, are higher than they ordinarily would be.
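The Buy Box mechanism alleged above reduces to a simple rule: a seller who is undercut anywhere off-site loses eligibility. The sketch below is purely illustrative — the interface, names, and eligibility test are our simplification of the complaint's description, not Amazon's actual algorithm.

```typescript
// Illustrative sketch of the alleged Buy Box suppression logic.
// All names and the eligibility rule are hypothetical, not real Amazon code.
interface Offer {
  seller: string;
  amazonPrice: number;        // price the seller lists on Amazon
  lowestOffSitePrice: number; // lowest price found on competing sites
}

// A seller stays eligible only if no competing site undercuts its Amazon price.
function isBuyBoxEligible(offer: Offer): boolean {
  return offer.lowestOffSitePrice >= offer.amazonPrice;
}

// Among eligible sellers, the cheapest on-Amazon price wins the Buy Box.
function awardBuyBox(offers: Offer[]): Offer | undefined {
  return offers
    .filter(isBuyBoxEligible)
    .sort((a, b) => a.amazonPrice - b.amazonPrice)[0];
}
```

The perverse incentive follows directly: a seller's cheapest route to the Buy Box is not to lower its Amazon price but to raise its price everywhere else.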

So that's how the scheme worked, and Racine was the first law enforcer to act. But others followed; Bonta filed his more comprehensive lawsuit in 2022. In 2023, Federal Trade Commission Chair Lina Khan filed against Amazon on similar grounds, though with more details and additional wrinkles. The FTC found that Amazon was running something called "Project Nessie," in which it would use its algorithm to encourage other online retailers, perhaps Walmart.com or Target.com, to raise prices on similar products.

All of these cases, as well as other sim­i­lar ones, have passed the nec­es­sary le­gal hur­dle to go to trial, but an ac­tual rem­edy is years away. And Amazon keeps grow­ing through this al­leged il­licit be­hav­ior, in­flat­ing prices not just on its own site, but across the re­tail land­scape.

According to Bonta, Amazon has three primary methods of inflating prices. First, if Amazon and a competitor are engaged in a price war over a product, Amazon tells the vendor that supplies its rival to raise the price directly. Second, if a competitor is discounting an item, Amazon asks it, through a shared vendor, to stop. And third, a vendor stops selling a product for a lower price outside of Amazon, after which Amazon raises its own price.

This kind of arrangement is known as a "hub-and-spoke" conspiracy, or "vertical price-fixing," because the cooperation on price runs through common customers or vendors. That distinguishes it from direct collaboration among rivals, which is the more standard "horizontal" conspiracy. The relief requested by Bonta is extensive, but amounts to barring the company from making agreements through vendors to set pricing for the online retail economy, and prohibiting the company from communicating with vendors about prices and terms for non-Amazon retailers. He is also seeking a monitor to ensure Amazon stops the bad behavior.

What makes it a big deal is that it’s a re­quest for a tem­po­rary in­junc­tion right now, meant to last un­til the trial process con­cludes or it’s oth­er­wise lifted. Judges only grant such in­junc­tions when they think that a party is likely go­ing to lose, the im­me­di­ate harm of the be­hav­ior is sig­nif­i­cant, and the pub­lic in­ter­est is served. While we can’t see most of the ev­i­dence be­cause it’s redacted, Bonta must re­ally be­lieve he’s got the goods. And if he suc­ceeds in this gam­bit, it al­most cer­tainly means Amazon has vi­o­lated an­titrust law on a ma­jor line of busi­ness. It also flips the in­cen­tives, be­cause Amazon will have less of an in­cen­tive to de­lay a trial. Instead, it will be sub­ject to this in­junc­tion un­til the trial con­cludes. So it may stop try­ing dila­tory tac­tics.

There’s one last ob­ser­va­tion about the com­plaint. Again, it’s redacted, but Bonta is hint­ing at Amazon’s in­ter­nal process to hide what it is do­ing.

And that would­n’t be sur­pris­ing, since the FTC has told the judge in its case that top Amazon of­fi­cials, in­clud­ing Jeff Bezos, have been de­stroy­ing ev­i­dence.

According to Law.com: "The FTC said in a heavily redacted brief on Friday that it's missing both the 'raw notes' of important meetings and key messages from the Signal apps of Bezos and other senior executives, who, in some instances, set messages to automatically delete 'in as short as ten seconds or one minute.'"

That kind of be­hav­ior is the dig­i­tal equiv­a­lent of shred­ding doc­u­ments while un­der a le­gal hold, and ev­i­dence of law­less­ness. And there’s a rea­son for that. For as long as I’ve been writ­ing BIG, and years be­fore that, laws have not re­ally ap­plied to the rich and pow­er­ful. But our work is bear­ing fruit. And it’s not just Amazon. Today, the Antitrust Division won a big le­gal mo­tion on its price-fix­ing case against a meat con­spir­acy led by Agri-Stats, and the Ninth Circuit had a ter­rific rul­ing on a Robinson-Patman Act price dis­crim­i­na­tion suit. As the peo­ple elect new pop­ulist politi­cians, en­forcers and plain­tiff lawyers are de­vel­op­ing the law and the cases to match their frus­tra­tion.

There’s also a change in pub­lic at­ti­tudes. In years past, a com­pany like Amazon used to be con­sid­ered in­no­v­a­tive and con­sumer-friendly. Today, it is un­der­stood as bu­reau­cratic and co­er­cive, a re­sult of an en­vi­ron­ment of law­less­ness. Americans are in­creas­ingly an­gry about the sit­u­a­tion, see­ing the Epstein class and the high in­fla­tion en­vi­ron­ment as a di­rect threat to their wel­fare, a con­spir­acy to ex­tract. Because it is. And at least some elected lead­ers see that, and are act­ing to stop it.

Thanks for read­ing! Your tips make this newslet­ter what it is, so please send me tips on weird mo­nop­o­lies, sto­ries I’ve missed, or other thoughts. And if you liked this is­sue of BIG, you can sign up here for more is­sues, a newslet­ter on how to re­store fair com­merce, in­no­va­tion, and democ­racy. Consider be­com­ing a pay­ing sub­scriber to sup­port this work, or if you are a pay­ing sub­scriber, giv­ing a gift sub­scrip­tion to a friend, col­league, or fam­ily mem­ber. If you re­ally liked it, read my book, Goliath: The 100-Year War Between Monopoly Power and Democracy.

...

Read the original on www.thebignewsletter.com »

7 474 shares, 19 trendiness

I pitched a roller coaster to Disneyland at age 10 in 1978

In 1978, for my 10th birth­day, I went to Disneyland and got to ride a new roller coaster called Space Mountain. It fig­u­ra­tively and lit­er­ally took my breath away. I loved every sec­ond of it and that night, I could­n’t fall asleep; I just kept think­ing about how ex­hil­a­rat­ing it was. Then, a wild thought sud­denly hit me: Why is­n’t there a roller coaster that goes up­side down?

At first, I thought, that's crazy, it can't work. But then I remembered Spin Out, the ride with a round room that spun so fast I stuck to the wall. If that worked, why not a loop on a roller coaster? I figured it would feel exactly the same. I was convinced!

I fi­nally fell asleep dream­ing of my roller coaster, full of twists, turns, and loops.

A few days later, I told my best friend Daschle. He was older, knew everything, and lived next door. "Buddy," he said, "I've got exciting but crushing news. Your idea works."

"Yep. I saw it. They're building one at Magic Mountain. It's called the Revolution. Sorry, Buddy."

But I was­n’t crushed, I was thrilled! What I knew could work was re­ally hap­pen­ing.

"How many loops does it have?" I asked.

"Ha! Mine has four. It's called the Quadrupuler! It's gonna be way better!"

That night I taped six sheets of paper together and drew my blueprints in colored markers. As you can see from the photo, it was glorious!

Look closely: I didn't label those coaster hills in feet or meters. No sir, I used "building stories" for height, and miles per hour for the speed at each section. I'm 10. I'm serious here.

With guid­ing blue­prints, it was time to build the model.

I got a Styrofoam board and balsa wood. Cutting and glu­ing each tiny cross tie was slow, and with all my home­work, I could only work on week­ends. So I cal­cu­lated how many inches I could fin­ish in a week. The an­swer turned out to be some­thing like 5 months! After that I con­sid­ered just giv­ing up be­cause I re­al­ized that I was only 120 months old and this was go­ing to take 5% of my life! Plus, I still had­n’t fig­ured out how to make the loops, balsa wood does­n’t bend like that. But I told my­self: one piece at a time. That be­came my motto.

When I got to the first loop, I had to stop and think. What in the world could I make the loop out of? A lot of 10-year-old brain power went into imag­in­ing what sim­ple ma­te­r­ial I could use. Then one morn­ing, I had it: heat plas­tic strips over the stove­top flame and bend them as they cooled. The key? Don’t burn the house down.

I’d learned that the hard way a year ear­lier, when Daschle con­vinced me we should recre­ate the movie The Towering Inferno, with a card­board box un­der his house… and real fire. It got out of con­trol so fast, the flames started hit­ting the wood floor joists! Thankfully, we were able to smother it with the damp dirt down there. So yeah, wild card Daschle was not in­vited over for my plas­tic fire bend­ing ex­per­i­ment.

I don’t re­mem­ber where the plas­tic came from, but I do re­mem­ber hold­ing the strip with pli­ers over the flame. The first piece melted so fast and started burn­ing with thick black smoke that it scared me. I yanked it back and coughed. That’s when I made an amend­ment to the safety plan: don’t kill your­self with what­ever these hor­ri­ble smelly fumes were! I got a fan, opened the back door, and all the kitchen win­dows be­fore try­ing again. Eventually, I fig­ured out the per­fect dis­tance and tim­ing with the heat.

When I laid that fi­nal track piece, I was so ex­cited, so proud! I took the model out­side for bet­ter light­ing and snapped Polaroids. I needed it cap­tured in­stantly. Here’s a photo of the Polaroid with my 10-year-old pen­man­ship.

What was the next thought that im­me­di­ately popped into my head?

This mas­ter­piece is ready to be sold to Disneyland! And I wrote my let­ter. I don’t have the orig­i­nal of that. But I re­mem­ber it went some­thing like this:

To whom it may con­cern:

My name is Kevin Glikmann. I am 10 years old. Enclosed are Polaroids of a roller coaster model I made called the Quadrupuler. It has four loops! I think you should make it!

And I sent it off!

Suddenly, my chore of check­ing the mail af­ter school be­came a heart-pound­ing rit­ual. I was ner­vous, hope­ful, ex­cited. Sometimes I opened the mail­box slowly, peek­ing in. Sometimes I pre­tended it was just a nor­mal day. Other times, I yanked it open and grabbed the mail fast, try­ing any­thing to calm my nerves.

Daschle said, "No way they'll respond. It's Disneyland, Buddy, they get a million letters a day."

Weeks turned into months, and I started to think he was right. Then one day, I opened the mail­box.

My name on it. My head was screaming, "This is it! I never get mail!"

HOLY SMOKES! What’s WED Enterprises? I started bounc­ing up and down. I started shak­ing. I ran into the house, ran straight to my room, tore open the en­ve­lope, and read:

Your recent letter was directed to my office here at WED Enterprises. WED (Walter Elias Disney) is the design and "Imagineering" branch of Walt Disney Productions. As such, we are responsible for the creation of all shows, attractions and outdoor entertainment for both Disneyland and Walt Disney World.

Thank you for showing us your "Quadrupuler" roller coaster - it looks like quite an adventure! As you may know Kevin, we are creating a new rollercoaster type attraction for Disneyland's Frontierland. Known as Big Thunder Mountain Railroad, this thrilling adventure will carry passengers on a high speed journey through the gold rush days of the old west. Big Thunder is scheduled to open later this year.

Thank you Kevin, for your in­ter­est in Walt Disney Productions.

Here is a photo of the orig­i­nal let­ter:

You would think that this letter would have ended my inventing spirit. That I would have gotten bitter and declared to the world, "No more will I invent for that mouse or anyone else!"

But no! Instead I was elated. I read it over and over. It said, "It looks like quite an adventure!" Disneyland liked it! I couldn't believe it.

Looking back, those words from Tom Fitzgerald did­n’t just val­i­date my idea, they launched my 10-year-old self-es­teem into or­bit! (Tom Fitzgerald went on to be­come one of the most in­flu­en­tial Imagineers in Disney his­tory. According to his bio, he started in 1979. I got this let­ter in April 1979 which means this must have been one of his first tasks as a new Imagineer em­ployee. lol! I won­der if he would re­mem­ber?)

A cou­ple years later, the Rubik’s Cube came out. I was ob­sessed. It took me weeks to solve, and once I did, my first thought was: What if it could turn on the an­gle? So I in­tro­duced Mr. Rubik’s Cube to my band­saw, re­designed the core, and built a very rough pro­to­type.

Here are some pic­tures.

I sent it to Ideal Toy Company, but they re­jected it straight away, they don’t ac­cept un­so­licited ideas. But that did­n’t mat­ter. They did­n’t un­der­stand that I had a let­ter from Disney Imagineering telling me my ideas are good!

That ten-year-old in­ven­tor is still alive in me, and still does­n’t un­der­stand re­jec­tion. Over the decades, I’ve in­vented sev­eral patented board games that were shopped around but never sold.

But I’m con­vinced the early val­i­da­tion from the Disney let­ter gave me a kind of bul­let­proof re­silience.

Today, I work in one of the most re­jec­tion-heavy in­dus­tries there is, act­ing. To me, in­vent­ing and act­ing are deeply con­nected: both are about dis­cov­ery, and both feed my in­vent­ing spirit. Successes are rare, but when they come, the joy lasts for­ever.

So I keep au­di­tion­ing, and I keep in­vent­ing.

And sometimes, when frustration creeps in, wondering why I'm not further along, or why things aren't working out, that 10-year-old boy appears. He reminds me of what we believed back then, and what still guides me now: just keep going, one piece at a time.

...

Read the original on wordglyph.xyz »

8 461 shares, 19 trendiness

How we rebuilt Next.js with AI in one week

*This post was up­dated at 12:35 pm PT to fix a typo in the build time bench­marks.

Last week, one engineer and an AI model rebuilt the most popular front-end framework from scratch. The result, vinext (pronounced "vee-next"), is a drop-in replacement for Next.js, built on Vite, that deploys to Cloudflare Workers with a single command. In early benchmarks, it builds production apps up to 4x faster and produces client bundles up to 57% smaller. And we already have customers running it in production.

The whole thing cost about $1,100 in to­kens.

Next.js is the most pop­u­lar React frame­work. Millions of de­vel­op­ers use it. It pow­ers a huge chunk of the pro­duc­tion web, and for good rea­son. The de­vel­oper ex­pe­ri­ence is top-notch.

But Next.js has a deployment problem when used in the broader serverless ecosystem. The tooling is entirely bespoke: Next.js has invested heavily in Turbopack, but if you want to deploy to Cloudflare, Netlify, or AWS Lambda, you have to take that build output and reshape it into something the target platform can actually run.

If you're thinking: "Isn't that what OpenNext does?", you are correct.

That is indeed the problem OpenNext was built to solve. And a lot of engineering effort has gone into OpenNext from multiple providers, including us at Cloudflare. It works, but it quickly runs into limitations and becomes a game of whack-a-mole.

Building on top of Next.js output as a foundation has proven to be a difficult and fragile approach. Because OpenNext has to reverse-engineer Next.js's build output, each new Next.js version brings unpredictable changes that take a lot of work to correct.

Next.js has been work­ing on a first-class adapters API, and we’ve been col­lab­o­rat­ing with them on it. It’s still an early ef­fort but even with adapters, you’re still build­ing on the be­spoke Turbopack tool­chain. And adapters only cover build and de­ploy. During de­vel­op­ment, next dev runs ex­clu­sively in Node.js with no way to plug in a dif­fer­ent run­time. If your ap­pli­ca­tion uses plat­form-spe­cific APIs like Durable Objects, KV, or AI bind­ings, you can’t test that code in dev with­out workarounds.

What if in­stead of adapt­ing Next.js out­put, we reim­ple­mented the Next.js API sur­face on Vite di­rectly? Vite is the build tool used by most of the front-end ecosys­tem out­side of Next.js, pow­er­ing frame­works like Astro, SvelteKit, Nuxt, and Remix. A clean reim­ple­men­ta­tion, not merely a wrap­per or adapter. We hon­estly did­n’t think it would work. But it’s 2026, and the cost of build­ing soft­ware has com­pletely changed.

We got a lot fur­ther than we ex­pected.

Replace next with vinext in your scripts and every­thing else stays the same. Your ex­ist­ing app/, pages/, and next.con­fig.js work as-is.

vinext dev     # Development server with HMR
vinext build   # Production build
vinext deploy  # Build and deploy to Cloudflare Workers

This is not a wrapper around Next.js and Turbopack output. It's an alternative implementation of the API surface: routing, server rendering, React Server Components, server actions, caching, middleware. All of it built on top of Vite as a plugin. Most importantly, Vite output runs on any platform thanks to the Vite Environment API.

Early bench­marks are promis­ing. We com­pared vinext against Next.js 16 us­ing a shared 33-route App Router ap­pli­ca­tion.

Both frame­works are do­ing the same work: com­pil­ing, bundling, and prepar­ing server-ren­dered routes. We dis­abled TypeScript type check­ing and ESLint in Next.js’s build (Vite does­n’t run these dur­ing builds), and used force-dy­namic so Next.js does­n’t spend ex­tra time pre-ren­der­ing sta­tic routes, which would un­fairly slow down its num­bers. The goal was to mea­sure only bundler and com­pi­la­tion speed, noth­ing else. Benchmarks run on GitHub CI on every merge to main.

These benchmarks measure compilation and bundling speed, not production serving performance. The test fixture is a single 33-route app, not a representative sample of all production applications. We expect these numbers to evolve as all three projects continue to develop. The full methodology and historical results are public. Take them as directional, not definitive.

The di­rec­tion is en­cour­ag­ing, though. Vite’s ar­chi­tec­ture, and es­pe­cially Rolldown (the Rust-based bundler com­ing in Vite 8), has struc­tural ad­van­tages for build per­for­mance that show up clearly here.

vinext is built with Cloudflare Workers as the first deployment target. A single command, vinext deploy, takes you from source code to a running Worker.

This han­dles every­thing: builds the ap­pli­ca­tion, auto-gen­er­ates the Worker con­fig­u­ra­tion, and de­ploys. Both the App Router and Pages Router work on Workers, with full client-side hy­dra­tion, in­ter­ac­tive com­po­nents, client-side nav­i­ga­tion, React state.

For production caching, vinext includes a Cloudflare KV cache handler that gives you ISR (Incremental Static Regeneration) out of the box.
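A minimal sketch of what that wiring might look like, assuming a Workers-style KV binding and a get/set handler shape — the import path and signatures here are our assumptions, not vinext's documented API:

```typescript
// Hypothetical sketch of a KV-backed cache handler for vinext.
// The import path and the handler interface are assumptions; consult the
// vinext README for the actual API surface.
import { setCacheHandler } from "vinext/cache"; // assumed module path

// Minimal shape of a Cloudflare Workers KV binding.
interface KVNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

declare const CACHE_KV: KVNamespace; // bound in the Worker configuration

setCacheHandler({
  async get(key: string) {
    const hit = await CACHE_KV.get(key);
    return hit ? JSON.parse(hit) : null;
  },
  async set(key: string, value: unknown, ttlSeconds?: number) {
    await CACHE_KV.put(key, JSON.stringify(value), { expirationTtl: ttlSeconds });
  },
});
```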

KV is a good de­fault for most ap­pli­ca­tions, but the caching layer is de­signed to be plug­gable. That set­CacheHan­dler call means you can swap in what­ever back­end makes sense. R2 might be a bet­ter fit for apps with large cached pay­loads or dif­fer­ent ac­cess pat­terns. We’re also work­ing on im­prove­ments to our Cache API that should pro­vide a strong caching layer with less con­fig­u­ra­tion. The goal is flex­i­bil­ity: pick the caching strat­egy that fits your app.

We also have a live example of Cloudflare Agents running in a Next.js app, without the need for workarounds like getPlatformProxy, since the entire app now runs in workerd, during both dev and deploy phases. This means being able to use Durable Objects, AI bindings, and every other Cloudflare-specific service without compromise. Have a look here.

The cur­rent de­ploy­ment tar­get is Cloudflare Workers, but that’s a small part of the pic­ture. Something like 95% of vinext is pure Vite. The rout­ing, the mod­ule shims, the SSR pipeline, the RSC in­te­gra­tion: none of it is Cloudflare-specific.

Cloudflare is looking to work with other hosting providers on adopting this toolchain for their customers (the lift is minimal — we got a proof-of-concept working on Vercel in less than 30 minutes!). This is an open-source project, and for its long-term success, we believe it's important we work with partners across the ecosystem to ensure ongoing investment. PRs from other platforms are welcome. If you're interested in adding a deployment target, open an issue or reach out.

We want to be clear: vinext is ex­per­i­men­tal. It’s not even one week old, and it has not yet been bat­tle-tested with any mean­ing­ful traf­fic at scale. If you’re eval­u­at­ing it for a pro­duc­tion ap­pli­ca­tion, pro­ceed with ap­pro­pri­ate cau­tion.

That said, the test suite is ex­ten­sive: over 1,700 Vitest tests and 380 Playwright E2E tests, in­clud­ing tests ported di­rectly from the Next.js test suite and OpenNext’s Cloudflare con­for­mance suite. We’ve ver­i­fied it against the Next.js App Router Playground. Coverage sits at 94% of the Next.js 16 API sur­face.

Early re­sults from real-world cus­tomers are en­cour­ag­ing. We’ve been work­ing with National Design Studio, a team that’s aim­ing to mod­ern­ize every gov­ern­ment in­ter­face, on one of their beta sites, CIO.gov. They’re al­ready run­ning vinext in pro­duc­tion, with mean­ing­ful im­prove­ments in build times and bun­dle sizes.

The README is hon­est about what’s not sup­ported and won’t be, and about known lim­i­ta­tions. We want to be up­front rather than over­promise.

vinext al­ready sup­ports Incremental Static Regeneration (ISR) out of the box. After the first re­quest to any page, it’s cached and reval­i­dated in the back­ground, just like Next.js. That part works to­day.
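The behavior described here — serve the cached copy, revalidate in the background — is the classic stale-while-revalidate pattern. A minimal sketch of the idea (not vinext's internals; the in-memory cache map, TTL, and render function are stand-ins):

```typescript
// Simplified stale-while-revalidate sketch of the ISR behavior described
// above. The cache map and TTL are illustrative stand-ins, not vinext code.
type Entry = { html: string; renderedAt: number };

const cache = new Map<string, Entry>();
const REVALIDATE_MS = 60_000; // revalidation window (illustrative)

async function serve(
  path: string,
  render: (p: string) => Promise<string>,
): Promise<string> {
  const hit = cache.get(path);
  if (!hit) {
    // First request: render synchronously, then cache the result.
    const html = await render(path);
    cache.set(path, { html, renderedAt: Date.now() });
    return html;
  }
  if (Date.now() - hit.renderedAt > REVALIDATE_MS) {
    // Stale: serve the cached copy immediately, refresh in the background.
    render(path)
      .then((html) => cache.set(path, { html, renderedAt: Date.now() }))
      .catch(() => { /* keep serving the stale copy on failure */ });
  }
  return hit.html;
}
```

Only the very first visitor to a page pays the render cost; everyone else gets cached HTML, and staleness is repaired off the request path.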

vinext does not yet support static pre-rendering at build time. In Next.js, pages without dynamic data get rendered during next build and served as static HTML. If you have dynamic routes, you use generateStaticParams() to enumerate which pages to build ahead of time. vinext doesn't do that… yet.

This was an in­ten­tional de­sign de­ci­sion for launch. It’s on the roadmap, but if your site is 100% pre­built HTML with sta­tic con­tent, you prob­a­bly won’t see much ben­e­fit from vinext to­day. That said, if one en­gi­neer can spend $1,100 in to­kens and re­build Next.js, you can prob­a­bly spend $10 and mi­grate to a Vite-based frame­work de­signed specif­i­cally for sta­tic con­tent, like Astro (which also de­ploys to Cloudflare Workers).

For sites that aren’t purely static, though, we think we can do something better than pre-rendering everything at build time.

Next.js pre-renders every page listed in generateStaticParams() during the build. A site with 10,000 product pages means 10,000 renders at build time, even though 99% of those pages may never receive a request. Builds scale linearly with page count, which is why large Next.js sites end up with 30-minute builds.

So we built Traffic-aware Pre-Rendering (TPR). It’s experimental today, and we plan to make it the default once we have more real-world testing behind it.

The idea is simple. Cloudflare is already the reverse proxy for your site. We have your traffic data. We know which pages actually get visited. So instead of pre-rendering everything or pre-rendering nothing, vinext queries Cloudflare’s zone analytics at deploy time and pre-renders only the pages that matter.

```
vinext deploy --experimental-tpr

Building…
Build complete (4.2s)
TPR (experimental): Analyzing traffic for my-store.com (last 24h)
TPR: 12,847 unique paths — 184 pages cover 90% of traffic
TPR: Pre-rendering 184 pages…
TPR: Pre-rendered 184 pages in 8.3s → KV cache
Deploying to Cloudflare Workers…
```

For a site with 100,000 product pages, the power law means 90% of traffic usually goes to 50 to 200 pages. Those get pre-rendered in seconds. Everything else falls back to on-demand SSR and gets cached via ISR after the first request. Every new deploy refreshes the set based on current traffic patterns; pages that go viral get picked up automatically. All of this works without generateStaticParams() and without coupling your build to your production database.
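The selection arithmetic behind a figure like “184 pages cover 90% of traffic” can be sketched as follows: sort paths by request count, then take pages until the cumulative share crosses the target. This is illustrative, not vinext’s actual TPR implementation:

```typescript
// Sketch of TPR-style page selection: given per-path request counts (as a
// traffic analytics source might report them), pick the smallest set of
// pages whose cumulative traffic share reaches a target such as 90%.
// Illustrative coverage arithmetic only, not vinext's real algorithm.

function selectCoveragePaths(
  counts: Record<string, number>,
  target: number, // fraction of traffic to cover, e.g. 0.9
): string[] {
  const total = Object.values(counts).reduce((sum, n) => sum + n, 0);
  // Most-visited paths first.
  const byTraffic = Object.entries(counts).sort(([, a], [, b]) => b - a);

  const picked: string[] = [];
  let covered = 0;
  for (const [path, hits] of byTraffic) {
    if (covered >= target * total) break;
    picked.push(path);
    covered += hits;
  }
  return picked;
}
```

With a heavy-tailed traffic distribution, the picked set stays small even when the input has tens of thousands of paths; everything left out simply falls through to on-demand SSR.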

A project like this would normally take a team of engineers months, if not years. Several teams at various companies have attempted it, and the scope is just enormous. We tried once at Cloudflare! Two routers, 33+ module shims, server rendering pipelines, RSC streaming, file-system routing, middleware, caching, static export. There’s a reason nobody has pulled it off.

This time we did it in under a week. One engineer (technically, an engineering manager) directing AI.

The first commit landed on February 13. By the end of that same evening, both the Pages Router and App Router had basic SSR working, along with middleware, server actions, and streaming. By the next afternoon, the App Router Playground was rendering 10 of 11 routes. By day three, vinext deploy was shipping apps to Cloudflare Workers with full client hydration. The rest of the week was hardening: fixing edge cases, expanding the test suite, bringing API coverage to 94%.

What changed from those earlier attempts? AI got better. Way better.

Not every project would go this way. This one did because a few things happened to line up at the right time.

Next.js is well-specified. It has extensive documentation, a massive user base, and years of Stack Overflow answers and tutorials. The API surface is all over the training data. When you ask Claude to implement getServerSideProps or explain how useRouter works, it doesn’t hallucinate. It knows how Next.js works.

Next.js has an elaborate test suite. The Next.js repo contains thousands of E2E tests covering every feature and edge case. We ported tests directly from their suite (you can see the attribution in the code). This gave us a specification we could verify against mechanically.

Vite is an excellent foundation. Vite handles the hard parts of front-end tooling: fast HMR, native ESM, a clean plugin API, production bundling. We didn’t have to build a bundler; we just had to teach it to speak Next.js. @vitejs/plugin-rsc is still early, but it gave us React Server Components support without having to build an RSC implementation from scratch.

The models caught up. We don’t think this would have been possible even a few months ago. Earlier models couldn’t sustain coherence across a codebase this size. New models can hold the full architecture in context, reason about how modules interact, and produce correct code often enough to keep momentum going. At times, I saw the model dig into Next.js, Vite, and React internals to figure out a bug. The state-of-the-art models are impressive, and they seem to keep getting better.

All of those things had to be true at the same time: a well-documented target API, a comprehensive test suite, a solid build tool underneath, and a model that could actually handle the complexity. Take any one of them away and this doesn’t work nearly as well.

Almost every line of code in vinext was written by AI. But here’s the thing that matters more: every line passes the same quality gates you’d expect from human-written code. The project has 1,700+ Vitest tests, 380 Playwright E2E tests, full TypeScript type checking via tsgo, and linting via oxlint. Continuous integration runs all of it on every pull request. Establishing good guardrails is critical to making AI productive in a codebase.

The process started with a plan. I spent a couple of hours going back and forth with Claude in OpenCode to define the architecture: what to build, in what order, which abstractions to use. That plan became the north star. From there, the workflow was straightforward:

Let the AI write the implementation and tests. If tests pass, merge. If not, give the AI the error output and let it iterate.

We wired up AI agents for code review too. When a PR was opened, an agent reviewed it. When review comments came back, another agent addressed them. The feedback loop was mostly automated.

It didn’t work perfectly every time. There were PRs that were just wrong. The AI would confidently implement something that seemed right but didn’t match actual Next.js behavior. I had to course-correct regularly. Architecture decisions, prioritization, knowing when the AI was headed down a dead end: that was all me. When you give AI good direction, good context, and good guardrails, it can be very productive. But the human still has to steer.

For browser-level testing, I used agent-browser to verify actual rendered output, client-side navigation, and hydration behavior. Unit tests miss a lot of subtle browser issues; this caught them.

Over the course of the project, we ran over 800 sessions in OpenCode. Total cost: roughly $1,100 in Claude API tokens.

Why do we have so many layers in the stack? This project forced me to think deeply about that question, and about how AI changes the answer.

Most abstractions in software exist because humans need help. We couldn’t hold the whole system in our heads, so we built layers to manage the complexity for us. Each layer made the next person’s job easier. That’s how you end up with frameworks on top of frameworks, wrapper libraries, and thousands of lines of glue code.

AI doesn’t have the same limitation. It can hold the whole system in context and just write the code. It doesn’t need an intermediate framework to stay organized. It just needs a spec and a foundation to build on.

It’s not clear yet which abstractions are truly foundational and which were just crutches for human cognition. That line is going to shift a lot over the next few years. But vinext is a data point: we took an API contract, a build tool, and an AI model, and the AI wrote everything in between. No intermediate framework needed. We think this pattern will repeat across a lot of software. The layers we’ve built up over the years aren’t all going to make it.

Thanks to the Vite team. Vite is the foundation this whole thing stands on. @vitejs/plugin-rsc is still early days, but it gave me RSC support without having to build that from scratch, which would have been a dealbreaker. The Vite maintainers were responsive and helpful as I pushed the plugin into territory it hadn’t been tested in before.

We also want to acknowledge the Next.js team. They’ve spent years building a framework that raised the bar for what React development could look like. The fact that their API surface is so well-documented and their test suite so comprehensive is a big part of what made this project possible. vinext wouldn’t exist without the standard they set.

vinext includes an Agent Skill that handles migration for you. It works with Claude Code, OpenCode, Cursor, Codex, and dozens of other AI coding tools. Install it, open your Next.js project, and tell the AI to migrate:

Then open your Next.js project in any supported tool and say:

The skill handles compatibility checking, dependency installation, config generation, and dev server startup. It knows what vinext supports and will flag anything that needs manual attention.

Or if you prefer doing it by hand:

```
npx vinext init    # Migrate an existing Next.js project
npx vinext dev     # Start the dev server
npx vinext deploy  # Ship to Cloudflare Workers
```

The source is at github.com/cloudflare/vinext. Issues, PRs, and feedback are welcome.

...

Read the original on blog.cloudflare.com »

9 438 shares, 18 trendiness

Discord cuts ties with Peter Thiel-backed verification software after its code was found tied to US surveillance efforts


...

Read the original on fortune.com »

10 423 shares, 17 trendiness

An Open Letter to Google regarding Mandatory Developer Registration for Android App Distribution

Date: February 24, 2026

To: Sundar Pichai, Chief Executive Officer, Google

To: Sergey Brin, Founder and Board Member, Google

To: Larry Page, Founder and Board Member, Google

To: Vijaya Kaza, General Manager for App & Ecosystem Trust, Google

CC: Regulatory authorities, policymakers, and the Android developer community

Re: Mandatory Developer Registration for Android App Distribution

We, the undersigned organizations representing civil society, nonprofit institutions, and technology companies, write to express our strong opposition to Google’s announced policy requiring all Android app developers to register centrally with Google in order to distribute applications outside of the Google Play Store, set to take effect worldwide in the coming months.

While we recognize the importance of platform security and user safety, the Android platform already includes multiple security mechanisms that do not require central registration. Forcibly injecting an alien security model that runs counter to Android’s historically open nature threatens innovation, competition, privacy, and user freedom. We urge Google to withdraw this policy and work with the open-source and security communities on less restrictive alternatives.

Android has historically been characterized as an open platform where users and developers can operate independently of Google’s services. The proposed developer registration policy fundamentally alters that relationship by requiring developers who wish to distribute apps through alternative channels — their own websites, third-party app stores, enterprise distribution systems, or direct transfers — to first seek permission from Google through a mandatory verification process, which involves agreeing to Google’s terms and conditions, paying a fee, and uploading government-issued identification.

This extends Google’s gatekeeping authority beyond its own marketplace into distribution channels where it has no legitimate operational role. Developers who choose not to use Google’s services should not be forced to register with, and submit to the judgment of, Google. Centralizing the registration of all applications worldwide also gives Google newfound power to completely disable any app it wants, for any reason, across the entire Android ecosystem.

Mandatory registration creates friction and barriers to entry, particularly for:

* Developers in regions with limited access to Google’s registration infrastructure

* Activists working on internet freedom in countries that unjustly criminalize that work

* Developers in countries or regions where Google cannot allow them to sign up due to sanctions

* Internal enterprise and government applications never intended for broad public distribution

Every additional bureaucratic hurdle reduces diversity in the software ecosystem and concentrates power in the hands of large, established players who can more easily absorb such compliance costs.

Requiring registration with Google creates a comprehensive database of all Android developers, regardless of whether or not they use Google’s services. This raises serious questions about:

* How this information will be stored, secured, and used

* Whether this data could be subject to government requests or legal processes

* To what extent developer activity is tracked across the ecosystem

* What this means for developers working on privacy-preserving or politically sensitive applications

Developers should have the right to create and distribute software without submitting to unnecessary surveillance or scrutiny.

Google’s existing app review processes have been criticized for opaque decision-making, inconsistent enforcement, and limited appeal mechanisms. Extending this system to all Android-certified devices creates risks of:

* Developers losing their ability to distribute apps across all channels due to a single unreviewable corporate decision

* Disproportionate impact on marginalized communities and on controversial but legal applications

A single point of failure controlled by one corporation is antithetical to a healthy, competitive software ecosystem.

This requirement allows Google to collect intelligence on all Android development activity, including:

* Which apps are being developed and by whom

* Market trends and user preferences outside of Google’s ecosystem

This information asymmetry gives Google significant competitive advantages, allows it to preempt, copy, and undermine competing products and services, and may raise serious antitrust questions.

Regulatory authorities worldwide, including the European Commission, the U.S. Department of Justice, and competition authorities in multiple jurisdictions, have increasingly scrutinized dominant platforms’ ability to preference their own services and restrict competition, demanding more openness and interoperability. We additionally note growing concerns that regulatory intervention can increase mass surveillance and impede software freedom, the open internet, and device neutrality.

We urge Google to find alternative ways to comply with regulatory obligations by promoting models that respect Android’s open nature without increasing gatekeeper control over the platform.

The Android platform already includes multiple security mechanisms that do not require central registration:

* User warnings for applications that are installed directly (or “sideloaded”)

* Google Play Protect (which users can choose to enable or disable)

No evidence has been presented that these safeguards are insufficient to continue protecting Android users as they have for the entire seventeen years of Android’s existence. If Google’s concern is genuinely about security rather than control, it should invest in improving these existing mechanisms rather than creating new bottlenecks and centralizing control.

We call upon Google to:

Engage in transparent dialogue with civil society, developers, and regulators about Android security improvements that respect openness and competition.

Commit to platform neutrality by ensuring that Android remains a genuinely open platform where Google’s role as platform provider does not conflict with its commercial interests.

Over the years, Android has evolved into a critical piece of technological infrastructure that serves hundreds of governments, millions of businesses, and billions of citizens around the world. Unilaterally consolidating and centralizing the power to approve software into the hands of a single unaccountable corporation is antithetical to the principles of free speech, an affront to free software, an insurmountable barrier to competition, and a threat to digital sovereignty everywhere.

We implore Google to reverse course, end the developer verification program, and begin working collaboratively with the broader community to advance security objectives without sacrificing the open principles upon which Android was built. The strength of the Android ecosystem has historically been its openness, and Google must work toward restoring its role as a faithful steward of that trust.

...

Read the original on keepandroidopen.org »
