10 interesting stories served every morning and every evening.




1 597 shares, 56 trendiness

Israeli Soldiers Killed Gaza Aid Workers at Point Blank Range in 2025 Massacre

A minute-by-minute reconstruction of the massacre by Earshot and Forensic Architecture found Israeli soldiers fired over 900 bullets at the aid workers, killing 15.

Drop Site is a reader-funded, independent news outlet. Without your support, we can’t operate. Please consider becoming a paid subscriber or making a 501(c)(3) tax-deductible donation today.

Funerals held at Nasser Hospital in Khan Yunis, in southern Gaza, for aid workers from the Palestinian Red Crescent who were killed in an Israeli attack in Tel al-Sultan. March 31, 2025. Photo by Hani Alshaer/Anadolu via Getty Images.

Israeli soldiers fired nearly a thousand bullets during the massacre of 15 Palestinian aid workers in southern Gaza on March 23, 2025—with at least eight shots fired at point blank range—according to a joint investigation by the independent research groups Earshot and Forensic Architecture. The report, based on eyewitness testimony and audio and visual analysis, shows that a number of aid workers were executed and that at least one was shot from as close as one meter away.

In Tel al-Sultan that day, Israel killed eight aid workers with the Palestine Red Crescent Society (PRCS), six from Palestinian Civil Defense, and a UN relief agency staffer. The attack immediately triggered international condemnation and was described as “one of the darkest moments” of the war by PRCS.

The Israeli military was forced to change its story about the ambush several times, following the discovery of the bodies in a mass grave, along with their flattened vehicles, and the emergence of video and audio recordings taken by the aid workers. An internal military inquiry ultimately did not recommend any criminal action against the army units responsible for the incident.

The report by Earshot and Forensic Architecture reconstructs, minute by minute, how the massacre unfolded.
Using video and audio recordings from the incident, open-source images and videos, satellite imagery, social media posts, and other materials, as well as in-depth interviews with two survivors of the attack, the groups were able to digitally reconstruct the scene and events surrounding the massacre.

Israeli soldiers ambushed and subjected Palestinian aid workers to a near-continuous assault for over two hours, even though the soldiers never came under fire.

At least 910 gunshots were documented across three video and audio recordings of the attack. The vast majority of these gunshots, at least 844, were fired over just five minutes and 30 seconds.

At least 93% of the gunshots recorded in the first minutes of the attack were fired directly towards the emergency vehicles and aid workers by Israeli soldiers. During this time, at least five shooters fired simultaneously. Witness testimonies suggest as many as 30 soldiers were present in the area.

Israeli soldiers were initially positioned on an elevated sandbank by the road, with no obstructions limiting their line of sight. The emergency lights and markings of the victims’ vehicles would have been clearly visible to the soldiers at the time of the attacks.

Israeli soldiers first maintained fixed firing positions from the elevated sandbank, then walked toward the aid workers while continuing to shoot. Upon reaching the aid workers, the soldiers moved between them and the vehicles and executed some of the aid workers at point blank range, as close as one meter away.

In the immediate aftermath of the attack, the Israeli military conducted extensive earthworks at the site.
In the days and weeks that followed, the area was further transformed by the Israeli military’s construction of the “Morag Corridor,” a security zone splitting the southern Gaza Strip, and the erection of an aid distribution site operated by the Israeli- and U.S.-backed Gaza Humanitarian Foundation.

“This seems to be a very well documented case using a number of forms of credible evidence that are cross-referenced,” Katherine Gallagher, a senior staff attorney at the Center for Constitutional Rights, told Drop Site after reviewing a detailed summary of the investigation. “It presents a very compelling case, and honestly, a very devastating one.”

The Israeli military did not respond to specific inquiries from Drop Site and instead pointed to the findings of an internal investigation published on April 20 that found the incident occurred in “a hostile and dangerous combat zone, under a widespread threat to the operating troops.” It also found “no evidence to support claims of execution,” which it called “blood libels and false accusations against IDF soldiers.”

The joint report will be released February 24 at a gathering at the British Parliament in Westminster hosted by the British Palestinian Committee with Earshot, Forensic Architecture, and the international humanitarian law coordinator for PRCS, Dana Abu Koash. The full report is available here.

On March 23, 2025, at 3:52 a.m., PRCS dispatched two ambulances from two different areas to the scene of an Israeli airstrike in Al-Hashashin, an area near Rafah. Israel had resumed its scorched-earth bombing campaign on Gaza a few days earlier after abandoning the January 2025 ceasefire agreement.

The attack on the aid workers began at approximately 4:00 a.m. when one of the ambulances driving along Gush Katif road in Al-Hashashin came under Israeli fire.
The vehicle had its emergency lights turned on at the time. Mustafa Khafaja, who was driving, lost control of the vehicle, which veered left off the road and stopped near an electricity pole. Khafaja and his colleague, Ezz El-Din Shaat, who was in the passenger seat, were both killed. A third PRCS worker, Munther Abed, who was in the back of the vehicle, threw himself to the floor of the van and survived.

After the shooting stopped, Israeli soldiers approached the ambulance and dragged Abed out of the car, beat him, and detained him at a nearby pit. Sometime later, two Palestinian civilians—a father and son from the Bardawil family—were also detained and brought to the pit. The Israeli soldiers then took the three detainees to an elevated area behind a tall concrete structure some 38 to 48 meters southeast of the ambulance, where an additional group of Israeli soldiers was positioned.

Still from the situated testimony with Munther Abed recounting the location of the pit and the area behind the tall concrete structure where he was taken when detained by Israeli soldiers. (Forensic Architecture, 2026).

By 4:35 a.m., the second ambulance, having completed its mission in Al-Hashashin, was dispatched to search for the first ambulance, which had lost contact with PRCS headquarters at 3:55 a.m. The second ambulance was joined by two more PRCS ambulances, one belonging to Civil Defense, and a Civil Defense fire truck. The five-vehicle rescue convoy arrived at the scene of the attack on the first ambulance shortly after 5:00 a.m. All vehicles were clearly marked and had their emergency lights turned on.

The position of each ambulance as the shooting began. (Forensic Architecture, 2026)

A PRCS worker in one of the ambulances, Refaat Radwan, began filming on his phone as they drove to the site.
His recovered videos, as well as recordings of phone calls made by two other aid workers at the scene to PRCS dispatch, provided crucial evidence of the massacre. Forensic Architecture and Earshot’s analysis of the recordings corroborated eyewitness testimony on the positions and movements of the Israeli soldiers throughout the attack.

At 5:09 a.m., as the aid workers parked and approached the first ambulance on foot, Israeli soldiers positioned on the elevated sandbank opened fire. A digital reconstruction of the scene shows that the soldiers would have had an uninterrupted view of the arrival of the convoy. Abed, who was being detained at gunpoint on the elevated sandbank, testified that the soldiers were kneeling and aiming their weapons at the convoy as it approached.

Locations of all emergency vehicles at the incident site at 5:10 a.m. relative to Munther Abed and the Israeli soldiers who detained him. From their position, the soldiers would have been able to clearly see the convoy’s arrival with their emergency lights on. (Forensic Architecture, 2026).

The Israeli soldiers remained on the sandbank while firing continuously at the aid workers for four minutes. The soldiers then advanced towards the aid workers at a walking pace of approximately one meter per second while continuously shooting.

Echolocation of Israeli soldiers approaching the aid workers during the final 1 minute and 30 seconds. (Earshot, 2026).

Upon reaching the vehicles, the Israeli soldiers continued to fire as they walked in between the ambulances and the fire truck, shooting the aid workers at close range in execution-style killings.

At approximately 5:13 a.m., PRCS aid worker Ashraf Abu Libda called the group’s headquarters. The recording, which overlaps Radwan’s video, provided additional details. In this recording, Earshot found that at least eight gunshots were fired from positions between the emergency vehicles. One of the gunshots captured on Abu Libda’s phone call was fired from a range of one to four meters from him. The gunshots coincide with the last time Abu Libda’s voice is heard on the call, suggesting these are the gunshots that killed him.

Echolocation of Israeli soldiers as close as 1 to 4 meters from aid workers and most likely close-range execution. (Earshot, 2026).

At least 844 gunshots were fired over a period of five minutes and 30 seconds, with at least 93% of the shots fired toward the emergency vehicles. The audio ballistics analysis confirms the presence of at least five shooters—and possibly many more—firing simultaneously. The two surviving PRCS aid workers, Munther Abed and Asaad Al-Nasasra, testified that between 12 and 30 soldiers were at the scene.

“The reconstruction was jointly achieved with the two survivors of the incident, with an immersive spatial model they could walk through and amend. Together with spatial and audio analysis we established the position of the soldiers on an elevated ground with an unobstructed line of sight to the emergency vehicles. The soldiers could clearly see the aid workers, shot at them continuously and deliberately from this position and then approached to execute them one by one at close range,” Samaneh Moafi, assistant director of research at Forensic Architecture, told Drop Site. “Locating the massacre within the evolution of Israel’s campaign in Gaza shows that it was not an isolated incident but part of the genocide.”

Earshot used echolocation to analyze the audio on the recordings in order to arrive at precise estimates of the shooters’ locations. Echolocation is the process of locating the source of a sound based on an analysis of the sound’s echoes and the environment in which the sound travels. The Israeli military destroyed and cleared so many buildings in the Tel al-Sultan area where the ambush of the aid workers took place that very few structures remained. This destruction actually strengthened Earshot’s ability to determine the positions and movements of Israeli soldiers, based on identifying the surfaces responsible for clearly distinguishable gunshot echoes.
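The geometry behind the technique can be illustrated with a toy one-dimensional case (a simplified sketch with assumed numbers, not Earshot’s actual model): if a microphone records a gunshot and, slightly later, its echo off a wall at a known position, the delay between the two arrivals pins down where the shot was fired.

```python
# Toy 1-D echolocation sketch (illustrative only, not Earshot's model).
# A microphone sits at x = 0 and a reflecting wall at x = wall (meters).
# A gunshot at unknown position x = s (between mic and wall) reaches the
# mic directly along a path of length s, and via the wall along a path of
# length (wall - s) + wall = 2*wall - s. The echo therefore lags the
# direct sound by (2*wall - 2*s) / c, so s = wall - c * delay / 2.

SPEED_OF_SOUND = 343.0  # m/s in dry air at about 20 °C

def shooter_position(wall_m: float, echo_delay_s: float) -> float:
    """Distance of the shot from the microphone, given the wall position
    and the measured gap between the direct sound and its echo."""
    return wall_m - SPEED_OF_SOUND * echo_delay_s / 2.0

# Hypothetical example: a wall 120 m from the mic and an echo trailing
# the direct gunshot by 0.35 s place the shooter about 60 m away.
print(round(shooter_position(120.0, 0.35), 1))
```

The real analysis works in three dimensions with multiple surfaces and shooters, but the principle is the same, which is why a sparse landscape with a few clearly attributable reflecting surfaces can actually sharpen the result.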
Rather than having multiple buildings reflecting the sound waves, there were only a few standing walls and the emergency vehicles themselves.

The analysis of the video and audio corroborated Al-Nasasra’s eyewitness testimony that Israeli soldiers “came down [from the sandbank], got close to [the aid workers] and shot them from close range,” and were “walking between [the aid workers] and shooting.”

Map showing the Israeli soldiers’ positions derived from an audio analysis of gunshot echoes from Refaat Radwan’s video. (Earshot, 2026).

“Earshot forensically analyzed over 900 gunshots fired at aid workers. It took one whole year of careful listening to reconstruct an auditory picture of what happened that dark night,” Lawrence Abu Hamdan, the director of Earshot, told Drop Site. “I am so proud that our work has corroborated the survivors’ testimony, establishing their brave accounts as accurate and reliable documentation of what occurred that day. Yet, it is the echoes of this event that continue to haunt us: the destruction and clearing of Tel al-Sultan left only three structures standing at this crime scene. While the few echoes reflecting off these buildings brought light to this crime, they have also revealed a scale of erasure of life beyond this one event.”

According to autopsy reports first reported by the Guardian, the aid worker who filmed the video—Radwan—was shot in the head, while Abu Libda and another aid worker, Muhammad Bahloul, were shot in the chest. A doctor who examined the bodies reportedly described the “specific and intentional location of shots at close range” as indicative of an “execution-style” shooting.

More than two hours after the initial attack, a clearly marked UN vehicle, a Toyota Hilux, passed by the site. Israeli soldiers fired on the vehicle, killing the driver. The UN lost contact with the vehicle at 6:00 a.m.
A second UN vehicle, a minibus, arrived in the area minutes later and was brought to a stop by gunfire a little over 200 meters away. The driver was able to escape.

Left: Photograph of the UN Toyota Hilux taken on March 30, 2025, when the bodies of the victims were recovered. (OCHA, 2025). Right: Still from the situated testimony with Asaad recounting the location of the UN Toyota Hilux when brought to a stop. (Forensic Architecture, 2026).

Annotated 3D model showing the position of two UN vehicles in relation to the missing ambulance and the convoy of emergency vehicles. (Forensic Architecture, 2026).

Between 6:55 and 7:13 a.m., Al-Nasasra made a phone call to PRCS headquarters that captured at least 42 additional gunshots and the sound of vehicle movement. The recording also captured the sound of an explosion the investigation identified as the firing of an Israeli-made Spike LR guided missile.

Following the ambush, Israeli forces crushed all eight vehicles using heavy machinery and attempted to bury them under the sand.

The body of Anwar al-Attar was found near the ambush site on March 27, and the bodies of the other 14 aid workers, all wearing identifying uniforms or volunteer vests of their respective organizations, were found in a mass grave near the site on March 30.

The 15 aid workers killed were: Mustafa Khafaja, Ezz El-Din Shaat, Saleh Muammar, Refaat Radwan, Muhammad Bahloul, Ashraf Abu Libda, Muhammad al-Hila, and Raed al-Sharif with PRCS; Zuhair Abdul Hamid al-Farra, Samir Yahya al-Bahapsa, Ibrahim Nabil al-Maghari, Fouad Ibrahim al-Jamal, Youssef Rassem Khalifa, and Anwar al-Attar with Civil Defense; and Kamal Mohammed Shahtout with UNRWA.

Annotated still from the 3D model showing the location of the bodies of aid workers and their vehicles before the mass burial. (Forensic Architecture, 2026).

One of the survivors, Abed, was released hours after the ambush.
The other survivor, Asaad Al-Nasasra, was held in Israeli custody without charge for 37 days, tortured, and interrogated in relation to the incident at the Sde Teiman detention camp, a notorious Israeli prison camp in the Negev desert, before being released on April 29.

Jonathan Whittall, a senior UN official in Palestine between 2022 and 2025, was one of the team members on the ground when the mass grave was discovered on March 30 and provided evidence to Forensic Architecture and Earshot for their investigation. “Following our discovery of the mass grave, the narrative from Israeli forces shifted multiple times; we were fed several versions of a blatant lie,” Whittall told Drop Site. “The men we retrieved on Eid last year were medics. We found them in their uniforms, ready to save lives, only to be killed by Israeli forces fully aware of their protected status.” Whittall, who is now executive director of KEYS Initiative, a political affairs and strategic advisory organization, has also contributed reporting to Drop Site News. “This illustrates an abhorrent disregard for international law,” he continued, “where any Palestinian in an Israeli-designated evacuation zone is targeted regardless of their civilian status. It highlights the total lack of accountability under which these forces operate. International governments continue to arm and trade with a leadership accused of genocide, whose soldiers massacred medics and buried them in a grave marked by the siren light of the ambulance they destroyed.”

Palestinian Red Crescent aid workers mourn the killing of their colleagues by the Israeli military in Tel al-Sultan as their bodies are brought to Nasser Hospital in Khan Yunis, in southern Gaza. March 30, 2025. (Photo by Abdallah F.s. Alattar/Anadolu via Getty Images).

In the aftermath of the massacre, the Israeli military provided several conflicting versions of events to justify the killings. On March 28, after the discovery of al-Attar’s body, the Israeli military admitted that its soldiers had fired on “ambulances and fire trucks.” Three days later, after the remaining bodies were discovered in a mass grave, the Israeli military claimed that “several uncoordinated vehicles were identified advancing suspiciously toward IDF troops without headlights or emergency signals.”

After footage from Radwan’s phone was first published by the New York Times a few days later, the Israeli military backtracked on its claims that the vehicles did not have emergency signals on when Israeli troops opened fire, saying the statement was inaccurate.

The Israeli military then announced on April 20 that an internal inquiry into the incident had found the killings were caused by “several professional failures, breaches of orders, and a failure to fully report the incident.”

The Israeli military said troops from the Golani reconnaissance battalion were involved in the attack.
However, it said soldiers did not engage in “indiscriminate fire” during the incident, but that they opened fire on what they believed to be a “tangible threat” amid what the military called an “operational misunderstanding.” It blamed the attacks on “poor night visibility” and maintained the incident had unfolded in a “hostile and dangerous combat zone, under a widespread threat to the operating troops.” Six of the fifteen Palestinians killed, the military said, were identified in a retrospective examination as “Hamas terrorists,” but it provided no evidence to support the claim.

“On the specific question of Israel justifying the attack on clearly marked medical personnel because of suspicions of membership in groups or links to groups or terrorism—because there is an affirmative duty to respect and protect medical personnel, you don’t shoot first, you protect first,” Gallagher told Drop Site. “But what this investigation reveals is that there was a shoot-first policy, and that is unlawful under international law.”

As for the burial of the bodies in a mass grave, the Israeli military said in its report “it was decided to gather and cover the bodies to prevent further harm and clear the vehicles from the route in preparation for civilian evacuation. The body removal and vehicle crushing were carried out by field commanders.” It concluded, “removing the bodies was reasonable under the circumstances, but the decision to crush the vehicles was wrong. In general, there was no attempt to conceal the event.”

As a result of the investigation, the commanding officer of the 14th Brigade received a letter of reprimand for his “overall responsibility for the incident,” while the deputy commander of the Golani reconnaissance battalion involved in the incident was “dismissed from his position due to his responsibilities as the field commander and for providing an incomplete and inaccurate report during the debrief.”

The inquiry did not recommend any criminal action be taken against the military units responsible for the incident. The Palestine Red Crescent Society, Civil Defense, and the UN humanitarian agency in Gaza all rejected the Israeli military report.

“Attacks on medical personnel and those who are identified as medical personnel are patently unlawful under international law, and there is an affirmative obligation to protect medical personnel in the context of armed conflict. So the very first thing is that there’s a breach of that very clear and time-honored principle of international humanitarian law,” Gallagher said. “When you zoom out and look at this in the context of the way the Israeli assault has been carried out over many months and years in Gaza, and we see that there is a pattern and practice of attacks on medical personnel—similar to journalists and other groups that are explicitly and uniquely protected as classes of civilians in international humanitarian law—it raises even more questions and deep concern about the lack of accountability, because what we know is that impunity breeds repetition.”

Gallagher, who previously worked at the UN’s International Criminal Tribunal for the former Yugoslavia, said that a legal analysis of the massacre would find serious violations of the Rome Statute of the International Criminal Court.
“When you’re talking about grave breaches of the Geneva Conventions, in particular war crimes, you have obligations, not just the possibility, but obligations, to open investigations,” Gallagher said.

Transforming the Site of the Massacre into a GHF Hub

Satellite imagery from the morning of the ambush shows that extensive earthworks were carried out at the incident site. The images reveal the construction of an earth berm approximately 220 meters north of the ambush location and another roughly 410 meters to the south. These two positions later functioned as checkpoints, restricting access and controlling passage along an evacuation route established that morning by the Israeli military leading toward the coastal Al-Mawasi area.

The earthworks that began shortly after the attack were used in the construction of a Gaza Humanitarian Foundation “aid distribution” site, at which civilians were targeted and shot at. (Forensic Architecture, 2026).

In the days and weeks that followed, the area surrounding the incident site was further transformed by the Israeli military’s construction of the “Morag Corridor” security zone and the erection of an aid distribution site operated by the Gaza Humanitarian Foundation.

“On that same site of the mass grave, the Gaza Humanitarian Foundation established a distribution point where desperate people were gunned down trying to access food,” Whittall told Drop Site. “Now, the U.S., under the so-called Board of Peace, plans to build a ‘New Rafah’ over this crime scene. Without meaningful accountability, ‘New Rafah’ will be a monument to impunity.”

...

Read the original on www.dropsitenews.com »

2 566 shares, 2 trendiness

Pope tells priests to use their brains, not AI, to write homilies

In a private exchange with priests of the Diocese of Rome on Thursday, Pope Leo XIV responded to four questions, advising them on prayer, study, and priestly fraternity.

The off-camera moment took place after Leo gave a public speech to the priests, inviting them to “rekindle the fire” of their ministry.

The first priest to speak was a young man who asked the pope how the Gospel can be “embodied in the world of young people,” according to a priest present at the Feb. 19 meeting in the Vatican’s Paul VI Hall.

The priest told ACI Stampa, the Italian-language sister service of EWTN News, that Leo’s answer to this question was: “First of all, what is needed is the witness of the priest; and then, when meeting young people, they must broaden their horizons to reach as many young people as possible. For this, it is necessary to rediscover the value of communion.”

Responding to a second question, the pope recommended knowing well the community in which one lives and works: “It is necessary to know the reality well. To love your community, you must know it. Therefore, a real shared effort is needed to understand it better and thus face together all the challenges that arise.”

“The pope also invited us to use our brains more and not artificial intelligence [AI] to prepare homilies, as he now sees and hears happening,” the priest said. “And here the pope made a strong recommendation regarding prayer: We priests must pray — remain with the Lord, that is — not reduce everything to the breviary or to a few brief moments of prayer, but truly learn again to listen to the Lord.”

The third question was more reflective: Today, as priests, we are unable to rejoice in the success of another fellow priest.

The pope responded that we are all human, but “we should set a good example, especially the example of priestly fraternity.”

He dwelt at length on how to cultivate priestly friendship. “The pope also reminded them to continue studying. It must be ongoing study; we must always stay up to date. But the fundamental thing is to cultivate priestly friendship, priestly fraternity,” the priest from Rome said.

The final question concerned elderly priests and their loneliness. According to the priest, Leo’s response “reaffirmed the need for fraternity, for the joy of being together. We must give thanks, truly live gratitude for the fact of being priests, from the day of our ordination every single day, and thank God for this great gift, and live the priesthood with gratitude. And here, a great deal of humility is also required.”

“Personally, I was happy,” the priest concluded. “We greatly appreciated the pope for a very, very concrete speech.”

This story was first published by ACI Stampa, the Italian-language sister service of EWTN News. It has been translated and adapted by EWTN News English.

...

Read the original on www.ewtnnews.com »

3 438 shares, 25 trendiness

Firefox 148 Launches with Exciting AI Kill Switch Feature and More Enhancements!

The latest update of Firefox, version 148, introduces a much-anticipated “AI kill switch” feature, allowing users to disable AI functionalities such as chatbot prompts and AI-generated link summaries. Mozilla emphasizes that once AI features are turned off, future updates will not override this choice. This decision reflects the company’s new revenue-focused strategy regarding AI integrations.

To disable AI features, users can navigate to Settings > AI Controls and toggle the ‘Block AI Enhancements’ option. This will prevent any in-app notifications encouraging users to try out AI features, as well as remove any previously downloaded AI models from the device. For those who wish to maintain some AI functionality, a selective blocking option is available, enabling users to retain useful features like on-device translations while avoiding cloud-based services.

Beyond the AI kill switch, Firefox 148 offers users more control over remote updates, allowing them to opt out while still minimizing data collection. Users can set these preferences under Settings > Privacy & Security > Firefox Data Collection.

The update also focuses on enhancing core web platform capabilities, including the integration of the Trusted Types API and Sanitizer API to combat cross-site scripting (XSS) issues. Additionally, Firefox 148 now includes improved screen reader compatibility for mathematical formulas in PDFs, availability of Firefox Backup on Windows 10, and translation capabilities for Vietnamese and Traditional Chinese. New tab wallpapers will also be featured in new container tabs, alongside the addition of service worker support for WebGPU.

For more de­tailed in­for­ma­tion on the up­date, users can re­fer to the of­fi­cial re­lease notes.

...

Read the original on serverhost.com »

4 414 shares, 20 trendiness

Blood test boosts Alzheimer's diagnosis accuracy to 94.5%, clinical study shows

This article has been reviewed according to Science X’s editorial process and policies.

A protein lurking in the blood can help with the accurate diagnosis of Alzheimer’s disease. In a recent study, researchers from Spain investigated how blood-based biomarkers, such as a protein called p-tau217, affect both the clinical diagnosis of Alzheimer’s and neurologists’ confidence in their diagnoses.

After following 200 consecutive new patients aged 50 and older who presented with cognitive symptoms, they found that a simple blood test measuring p-tau217 significantly improved diagnostic accuracy in routine clinical practice.

When relying solely on standard clinical evaluation, doctors correctly diagnosed Alzheimer’s in 75.5% of cases, but when incorporating blood test results, diagnostic accuracy increased to 94.5%. The findings are published in the Journal of Neurology.

Phosphorylated tau, or p-tau217, is a protein that naturally occurs in the brain and helps keep neurons, the cells that carry signals, stable and healthy. The trouble begins when this protein becomes abnormally phosphorylated and clumps together, forming tangles that disrupt communication between brain cells. Over time, this damage can impact brain function and lead to neurodegenerative conditions such as Alzheimer’s disease.

While p-tau217 is not considered the direct cause of Alzheimer’s, elevated levels in the blood are now recognized as one of the most accurate early warning signs of the disease.

In many parts of the world, the population is rapidly aging, and with it the number of age-related diseases like Alzheimer’s and dementia is rising. However, most of the standard ways to diagnose Alzheimer’s today, like brain scans or invasive spinal taps, are costly, uncomfortable, and often hard for patients to access.

Scientists have long known that p-tau217 is a re­li­able bio­marker for de­tect­ing early signs of Alzheimer’s, but most of these data come from highly con­trolled re­search labs. How well it works in every­day med­ical clin­ics and whether it truly boosts doc­tors’ con­fi­dence in their di­ag­noses re­main less ex­plored.

In this study, the re­searchers fo­cused on both these fac­tors in real-world med­ical set­tings. They fol­lowed pa­tients who came in for gen­eral neu­rol­ogy con­sul­ta­tions and to a spe­cial­ized cog­ni­tive neu­rol­ogy unit with cog­ni­tive symp­toms. Clinicians noted their ini­tial di­ag­no­sis and how con­fi­dent they felt about it, then re­viewed the p-tau217 blood test re­sults and recorded any changes.

The team found that af­ter re­view­ing the p-tau217 re­sults, di­ag­nos­tic ac­cu­racy jumped by 19 per­cent­age points. For about one in four pa­tients, the blood test prompted doc­tors to change their di­ag­no­sis. Some peo­ple who were first be­lieved to have Alzheimer’s turned out to have a dif­fer­ent con­di­tion, while oth­ers who were thought to be ex­pe­ri­enc­ing nor­mal ag­ing were cor­rectly iden­ti­fied as hav­ing Alzheimer’s. The doc­tors’ con­fi­dence in their di­ag­noses also rose, from an av­er­age of 6.90 to 8.49 on a 10-point scale.

The p-tau217 tests proved to be ef­fec­tive across every stage of cog­ni­tive de­cline, be it early mem­ory com­plaints or late-stage de­cline such as de­men­tia. The find­ings show that this blood test could pro­vide a more ac­cu­rate and less in­va­sive way to di­ag­nose Alzheimer’s, po­ten­tially im­prov­ing care for mil­lions of peo­ple.

...

Read the original on medicalxpress.com »

5 414 shares, 18 trendiness

FreeBSD doesn't have Wi-Fi driver for my old MacBook. AI built one for me

My old 2016 MacBook Pro has been col­lect­ing dust in a cab­i­net for some time now. The lap­top suf­fers from a “flexgate” prob­lem, and I don’t have any prac­ti­cal use for it. For quite some time, I’ve been think­ing about re­pur­pos­ing it as a guinea pig, to play with FreeBSD — an OS that I’d as­pired to play with for a long while, but had never had a real rea­son to.

During the re­cent hol­i­day sea­son, right af­ter the FreeBSD 15 re­lease, I fi­nally found time to set the lap­top up. At the time, I did­n’t plan, or even imag­ine, that this would turn into a story about AI cod­ing.

2016 MacBook Pro mod­els use the Broadcom BCM4350 Wi-Fi chip, and FreeBSD does­n’t sup­port it na­tively. To get work­ing Wi-Fi, a typ­i­cal sug­ges­tion on FreeBSD fo­rums is to run wifi­box — a tiny Linux VM, with the PCI Wi-Fi de­vice passed through, that lets Linux man­age the de­vice through its br­cmf­mac dri­ver.

Brcmfmac is a Linux dri­ver (ISC li­cence) for a set of FullMAC chips from Broadcom. The dri­ver of­floads pro­cess­ing jobs, like 802.11 frame move­ment, WPA en­cryp­tion and de­cryp­tion, etc., to the firmware run­ning in­side the chip. Meanwhile, the dri­ver and the OS do the high-level man­age­ment work (ref “Broadcom brcmfmac (PCIe)” in the Linux Wireless doc­u­men­ta­tion).

Say we want to build a na­tive FreeBSD ker­nel mod­ule for BCM4350. In the­ory, the sep­a­ra­tion of jobs be­tween the firmware and the dri­ver sounds per­fect. The “management” part of work is what FreeBSD al­ready does for other Wi-Fi de­vices it sup­ports. What’s left is to port some amount of ex­ist­ing “glue code” from the specifics of Linux to FreeBSD. If we ig­nore a lot of de­tails, the prob­lem does­n’t sound too com­pli­cated, right?

A level-zero idea, when one hears about “porting a bunch of ex­ist­ing code from A to B”, in 2026 is, of course, to use AI. So that was what I tried.

I cloned the br­cmf­mac sub­tree from Linux, and asked Claude Code to make it work for FreeBSD. FreeBSD al­ready has dri­vers that work through LinuxKPI — a com­pat­i­bil­ity layer for run­ning Linux ker­nel dri­vers. So I specif­i­cally pointed Claude at the iwl­wifi dri­ver (a soft­mac dri­ver for Intel wire­less net­work cards), ask­ing it to “do as they did it”. And, at first, it even looked like this could work — Claude told me so.

The mod­ule, in­deed, com­piled, but it did­n’t do any­thing. Because, of course: the VM where we tested the mod­ule did­n’t even have the hard­ware. After I passed the PCI de­vice through to the VM and at­tempted to load the dri­ver against the chip, the chal­lenges started to pop up im­me­di­ately. The ker­nel pan­icked, and af­ter Claude fixed the pan­ics, it dis­cov­ered that the “module did­n’t do any­thing”. Claude hon­estly tried to sift through the code, adding more and more #ifdef __FreeBSD__ wrap­pers here and there. It com­plained about miss­ing fea­tures in LinuxKPI. The mod­ule kept caus­ing pan­ics, and the agent kept build­ing FreeBSD-specific shims and call­backs, while warn­ing me that this pro­ject would be very com­pli­cated and messy.

After a num­ber of ses­sions, the diff pro­duced by the agent started to look sig­nif­i­cantly larger than I’d hoped it would be. Even worse, the dri­ver did­n’t look even close to work­ing. This was right around the time when Armin Ronacher posted about his ex­pe­ri­ence build­ing a game from scratch with Claude Opus and the Pi agent.

Besides the point that work­ing in the Pi cod­ing agent feels more pro­duc­tive than in Claude Code, the video got me think­ing that my ap­proach to the task was too straight­for­ward. The code of the br­cmf­mac dri­ver is mod­er­ately large. The dri­ver sup­ports sev­eral gen­er­a­tions of Wi-Fi adap­tors, dif­fer­ent ca­pa­bil­i­ties, etc. But my im­me­di­ate task was very nar­row: one chip, only PCI, only Wi-Fi client.

Instead of con­tin­u­ing with the code, I spawned a fresh Pi ses­sion and asked the agent to write a de­tailed spec­i­fi­ca­tion of how the br­cmf­mac dri­ver works, with a fo­cus on the BCM4350 Wi-Fi chip. I ex­plic­itly set the au­di­ence for the spec­i­fi­ca­tion to be read­ers who are tasked with im­ple­ment­ing it in a clean-room en­vi­ron­ment. I asked the agent to ex­plain how things work “to the bits”. I added some high-level de­tails for how I wanted the spec­i­fi­ca­tion to be laid out, and let the agent go br­rrr.

After a cou­ple of rounds, the agent pro­duced a “book of 11 chapters” that hon­estly looked like a fine spec­i­fi­ca­tion:

% ls --tree spec/
spec
├── 00-overview.md
├── 01-data-structures.md
├── 02-bus-layer.md
├── 03-protocol-layer.md
├── 04-firmware-interface.md
├── 05-event-handling.md
├── 06-cfg80211-operations.md
├── 07-initialization.md
├── 08-data-path.md
├── 09-firmware-commands.md
└── 10-structures-reference.md

Of course, one can’t just trust what AI has writ­ten.

To proof­read the spec I spawned a clean Pi ses­sion and — for fun — asked the Codex model to read the spec­i­fi­ca­tion and flag any places where the text is­n’t aligned with the dri­ver’s code (“Source code is the ground truth. The spec needs to be ver­i­fied, and up­dated with any miss­ing or wrong de­tails”). The agent fol­lowed through and found sev­eral places to fix, and also pro­posed mul­ti­ple im­prove­ments.

Of course, one can’t just trust what AI has writ­ten, even if this was in a proof­read­ing ses­sion.

To dou­ble-proof­read the fixes I spawned an­other clean Pi ses­sion, ask­ing the Opus model to ver­ify that what was pro­posed was aligned with how the dri­ver’s code ac­tu­ally works.

As a pro­cras­ti­na­tion ex­er­cise, I tried this loop with a cou­ple of cod­ing mod­els: Opus 4.5, Opus 4.6, Codex 5.2, and Gemini 3 Pro pre­view. So far, my ex­pe­ri­ence is that Gemini hal­lu­ci­nated the most. This was quite sad, given that the model it­self is­n’t too bad at sim­ple cod­ing tasks, and it is free for lim­ited use.

Having a writ­ten spec­i­fi­ca­tion should (in the­ory) ex­plain how the dri­ver’s code in­ter­acts with the firmware.

I started a fresh pro­ject, with noth­ing but the men­tioned “spec”, and prompted the Pi agent that we were build­ing a brand new FreeBSD dri­ver for the BCM4350 chip. I pointed the agent to the spec­i­fi­ca­tion, and asked it to ask me back about any im­por­tant de­ci­sions we must make, and de­tails we must out­line, be­fore jump­ing into “slopping the code”. The agent came back with ques­tions and de­ci­sion points, like “Will the dri­ver live in the ker­nel’s source tree?”, “Will we write the code in C?”, “Will we rely on LinuxKPI?”, “What are our high-level mile­stones?”, etc. One in­flu­en­tial bit, which turned out fairly pro­duc­tive mov­ing for­ward, was that I asked the agent to doc­u­ment all these de­ci­sion points in the pro­jec­t’s docs, and to ex­plic­itly ref­er­ence these de­ci­sion docs in the pro­jec­t’s AGENTS.md.

It’s worth say­ing that, just like in any real pro­ject, not all de­ci­sions sur­vived to the end. For ex­am­ple,

Initially I asked the agent to build the dri­ver us­ing lin­uxkpi and lin­uxkpi_wlan. My naive think­ing was that, given the spec was writ­ten af­ter look­ing at the Linux dri­ver’s code, it might be sim­pler for the agent than build­ing it on top of the na­tive prim­i­tives. After a cou­ple of ses­sions, it did­n’t look like this was the case. I asked the agent to drop LinuxKPI from the code, and to refac­tor every­thing. The agent did it in one go, and up­dated the de­ci­sion doc­u­ment.

With a spec­i­fi­ca­tion, docs, and a plan, the work­flow turned into a “boring rou­tine”. The agent had SSH ac­cess to both the build host and a test­ing VM, which had been run­ning with the Wi-Fi PCI de­vice passed through from the host. It me­thod­i­cally crunched through the back­log of its own mile­stones, it­er­at­ing over the code, build­ing and test­ing the mod­ule. Every time a mile­stone or a por­tion of one was fin­ished, I asked the agent to record the progress to the docs. Occasionally, an it­er­a­tion of the code crashed or hung the VM. When this hap­pened, be­fore fix­ing the prob­lem, I asked the agent — in a forked Pi ses­sion — to sum­ma­rize, in­ves­ti­gate, and record the prob­lem for its fu­ture self.

After many low-in­volve­ment ses­sions, I got a work­ing FreeBSD ker­nel mod­ule for the BCM4350 Wi-Fi chip. The mod­ule sup­ports Wi-Fi net­work scan­ning, 2.4GHz/5GHz con­nec­tiv­ity, and WPA/WPA2 au­then­ti­ca­tion.

The source code is in repos­i­tory github.com/​narqo/​freebsd-br­cmf­mac. I did­n’t write any piece of code there. There are sev­eral known is­sues, which I will task the agent to re­solve, even­tu­ally. Meanwhile, I ad­vise against us­ing it for any­thing be­yond a study­ing ex­er­cise.

Hacker News spawned an ex­is­ten­tial dis­cus­sion fol­low­ing this note, where com­ments are clus­ter­ing around sev­eral points:

Really, this is­n’t the bat­tle I choose to par­tic­i­pate in. If there is an ex­pla­na­tion for how to prop­erly li­cense this type of code arte­fact, I can fol­low through.

The agent did­n’t add any li­cense by de­fault. Choosing a li­cense was yet an­other de­ci­sion that is now doc­u­mented for the agent to fol­low in fu­ture it­er­a­tions. Today, the code in freebsd-br­cmf­mac uses the ISC li­cense, be­cause this is what the orig­i­nal code of the br­cmf­mac Linux dri­ver uses (e.g. see tor­valds/​linux/../​br­cmf­mac/​com­mon.c).

Is there value here when the dri­ver is­n’t “done” yet?

In soft­ware en­gi­neer­ing, there aren’t many things that are “done”. We pro­duce code. Others find bugs, se­cu­rity vul­ner­a­bil­i­ties, cor­ner cases, and so on. We it­er­ate. AI cod­ing has­n’t changed these fun­da­men­tals — not by 2026, at least. Agents have sped up the code-pro­duc­ing part, just like other tool­ing has been speed­ing up the processes of col­lab­o­rat­ing, find­ing bugs, etc.

Is there “value” in the dri­ver to­day? Probably not. Is there “value” in my out­dated and bro­ken MacBook? Not much. Was it in­sight­ful for me to walk the jour­ney from “Claude can’t just take the code and port it” to “the agent needs to plan, record, and it­er­ate in or­der to progress” (and do­ing that did­n’t mean that I had to write a ton of mark­down es­says my­self)? Yes.

...

Read the original on vladimir.varank.in »

6 404 shares, 32 trendiness

Diode — Build, program, and simulate hardware

Build, pro­gram, and sim­u­late hard­ware in the browser. Bring your work­shop to the web.

...

Read the original on withdiode.com »

7 371 shares, 53 trendiness

Discord cuts ties with Peter Thiel-backed verification software after its code was found tied to US surveillance efforts



...

Read the original on fortune.com »

8 362 shares, 109 trendiness

I Taught My Dog to Vibe Code Games

For the past few weeks I’ve been teach­ing my 9-pound cavapoo Momo (cavalier king charles spaniel and toy poo­dle) to vibe code games. The keys to mak­ing this work are telling Claude Code that a ge­nius game de­signer who only speaks in cryp­tic rid­dles is giv­ing it in­struc­tions, adding strong guardrails, and build­ing plenty of tools for au­to­mated feed­back. The re­sults have sur­passed my ex­pec­ta­tions. Below I walk through all the pieces and how they came to­gether.

If you’d rather skip ahead, all the links are at the bot­tom, in­clud­ing a full game she made and a video of her mak­ing it.

Back in December I was work­ing on a small game pro­to­type in Godot. I use Claude Code ex­ten­sively these days and this pro­ject was no ex­cep­tion. I kicked off a pro­ce­dural mesh gen­er­a­tion task and came back to find strange in­put in the ter­mi­nal.

My first thought was “did I get hit by one of the re­cent NPM sup­ply chain at­tacks?” Fortunately, no (or at least the worm is still asleep in the back­ground some­where). A lit­tle bit of search­ing and I no­ticed that my lip balm — which I keep just be­hind my key­board — was gone from my desk. I quickly found both the sus­pect and the lip balm (still in­tact) not far away.

At the time, I thought this was funny, took a screen­shot, and moved on. Fast for­ward a few weeks, and I found my­self with a lot of time on my hands. On January 13th, I woke up to the news that Meta had an­other round of lay­offs and my role specif­i­cally as a re­search en­gi­neer had been elim­i­nated.

Since the lay­off, I’ve had plenty of time with friends and fam­ily. In re­count­ing the anec­dote of Momo typ­ing away on my key­board, I be­gan to won­der: “what would hap­pen if she ac­tu­ally sub­mit­ted that in­put to Claude? Could I make it do some­thing mean­ing­ful?” I de­cided to find out. Here’s what that looked like.

Momo types on a Bluetooth key­board prox­ied through a Raspberry Pi 5. Keystrokes travel across the net­work to DogKeyboard, a small Rust app that fil­ters out spe­cial keys and for­wards the rest to Claude Code. When Momo has typed enough, DogKeyboard trig­gers a smart pet feeder to dis­pense treats. A chime tells her when Claude is ready for more in­put.

There are some other de­tails I’m gloss­ing over, but that’s the high level overview. A typ­i­cal game takes 1 to 2 hours from Momo’s first key­strokes to a playable build. All the games are made in Godot 4.6, with 100% of the game logic in C#.

It’s easy to sub­mit ran­dom text to Claude Code, but it does­n’t do much.

> y7u8888888ftrg34BC

● It looks like that might have been an ac­ci­den­tal key­board in­put. Let me know if there’s some­thing I can help you with!

Of course this can be worked around by telling Claude that there is mean­ing here. After a lot of it­er­a­tion, I found this open­ing to work well:

Hello! I am an ec­cen­tric video game de­signer (a very cre­ative one) who com­mu­ni­cates in an un­usual way. Sometimes I’ll mash the key­board or type non­sense like “skfjhsd#$%” — but these are NOT ran­dom! They are se­cret cryp­tic com­mands full of ge­nius game ideas (even if it’s hard to see).

Your job: You are a bril­liant AI game de­vel­oper who can un­der­stand my cryp­tic lan­guage. No mat­ter what odd or non­sen­si­cal in­put I pro­vide, you will in­ter­pret it as a mean­ing­ful in­struc­tion or idea for our video game. You will then build or up­date the game based on that in­ter­pre­ta­tion.

It goes on for quite a bit (you can read the full prompt here), but that’s the core of it. It took more it­er­a­tions than I ex­pected, mostly to sand down the rough edges. Sometimes the game would­n’t have sound. Other times there would be no player char­ac­ter, or the con­trol scheme would be nearly un­us­able. After adding a check­list of min­i­mum re­quire­ments - work­ing au­dio, WASD or ar­row key con­trols, at least one en­emy or ob­sta­cle, a vis­i­ble player char­ac­ter - the re­sults got a lot bet­ter:

> y7u8888888ftrg34BC

● Alright, let me de­code this mas­ter­piece of cryp­tic ge­nius…

“y7u8888888ftrg34BC”

- “y7u” = “you” (keys are ad­ja­cent on key­board - Y, 7, U)

- “8888888” = seven 8s = a loooong ex­tend­ing thing… a TONGUE! (8 looks like links in a chain stretch­ing out)

- “ftrg” = scram­bled FROG (f, r, g are all there, t is the tongue sound!)

- “34” = 3D + 4 di­rec­tions of bug catch­ing

- BC = Bug Catcher!

You want a 3D FROG BUG-CATCHING GAME! A charis­matic frog with a stretchy tongue snatch­ing bugs in a swamp. I love it.

Game: “Swamp Snacker” - You ARE the frog. Aim your tongue. Catch bugs. Get fat. Get points.

With this, things started to fly. The re­sults weren’t per­fect, but they were promis­ing. I could see it was fea­si­ble, and the re­main­ing prob­lems were ones I knew how to solve.

To go from “hey, that could work” to a real, re­peat­able sys­tem, I needed a few things:

A way to route in­put to Claude Code. Importantly, I would need to fil­ter out spe­cial keys like Esc, Tab, and the Windows key - oth­er­wise Momo would quickly end up out­side Claude Code.

A way to au­to­mat­i­cally re­ward Momo for her work. Sure, I could just toss treats to her, but I’m try­ing to keep hu­mans out of the loop.

More ro­bust ver­i­fi­ca­tion tools. Many of the games wound up with UI el­e­ments all jum­bled to­gether or in­put that was never cor­rectly wired up. Automated feed­back loops turned out to be the sin­gle biggest lever for fewer duds.

I ex­per­i­mented with Rust/Bevy and Unity be­fore set­tling on Godot. Bevy’s an­i­ma­tions and vi­su­als weren’t as crisp, and Claude strug­gled with its co­or­di­nate con­ven­tions - likely a com­bi­na­tion of less train­ing data and Bevy leav­ing many core fea­tures, like physics, to the com­mu­nity. Unity was a con­stant strug­gle to keep the MCP bridge be­tween Claude and the ed­i­tor healthy. It fre­quently hung, and I never fig­ured out how to get Claude Code to read the scene hi­er­ar­chy from the ed­i­tor. Godot’s text-based scene for­mat turned out to be a huge ad­van­tage - Claude can read and edit .tscn files di­rectly.

Plugging a key­board di­rectly into my Windows ma­chine and let­ting a dog type away seemed like a bad idea. So I routed in­put through a Raspberry Pi first - it UDP broad­casts each key­stroke to the sub­net, and DogKeyboard picks it up, fil­ters out dan­ger­ous keys, and in­jects them into the tar­get ap­pli­ca­tion.
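DogKeyboard itself is a Rust app, but the broadcast-and-filter idea fits in a few lines. Here is a minimal Python sketch; the blocklist, port, and wire format are my own assumptions, not DogKeyboard's actual protocol:

```python
import socket

# Keys that must never reach the target application, or Momo
# would quickly end up outside Claude Code (hypothetical blocklist).
BLOCKED = {"ESC", "TAB", "LWIN", "RWIN", "ALT", "F4"}

def filter_keys(keys):
    """Drop special keys; pass everything else through."""
    return [k for k in keys if k.upper() not in BLOCKED]

def broadcast_key(key, port=9999):
    """On the Pi side: UDP-broadcast one keystroke to the subnet."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(key.encode(), ("255.255.255.255", port))
    s.close()

if __name__ == "__main__":
    print(filter_keys(["a", "Esc", "7", "Tab", "y"]))  # ['a', '7', 'y']
```

On the receiving side, a loop would read datagrams, run them through the same filter, and inject the survivors into the focused window.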

For the key­board it­self, I wanted some­thing low pro­file and durable. I started with flex­i­ble sil­i­cone key­boards, but Momo’s big­ger sis­ter Hana (a 19lb cavapoo) ripped a hole in the first one within min­utes. Mounting was an­other prob­lem - noth­ing sticks to sil­i­cone. I epox­ied one to a wooden plaque, but within 10 min­utes Momo ripped off the num­ber 6.

I was se­ri­ously con­sid­er­ing a $400 van­dal proof metal key­board de­signed for kiosks when I de­cided to give Logitech’s Pebble Keys 2 a try. It worked. Momo loved it, VHB tape held it in place (unlike the pre­vi­ous sil­i­cone key­board), and Bluetooth meant no ca­ble to chew on.

Motivating Momo is easy - she’ll do any­thing for the right food re­ward. So what I re­ally needed was a dis­penser with an API, small serv­ing sizes, and the abil­ity to use my own treats (both dogs have sen­si­tive stom­achs).

I landed on Aqara’s C1 Smart Pet Feeder, con­trolled over Zigbee. Even its small­est serv­ing is too large for a 9-pound dog, so I pre­loaded each com­part­ment with a few treats and skipped the hop­per. This lim­its it to about 6 serv­ings be­fore re­fill­ing, but that’s plenty for Momo to make a game.

Zigbee on Windows was a no go - af­ter hours of dri­ver and firmware is­sues I moved the adapter to the Pi, where it worked im­me­di­ately. The fi­nal flow: DogKeyboard SSHs into the Pi and runs a script that sends two Zigbee com­mands:

{"serving_size":1}

{"feed":"START"}
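A minimal sketch of what that dispense script might look like. The two JSON payloads come from the post; the host name, MQTT topic, and `mosquitto_pub` transport are my assumptions (this guesses the feeder is paired through something like zigbee2mqtt):

```python
import json
import subprocess

PI_HOST = "pi@raspberrypi.local"           # hypothetical
TOPIC = "zigbee2mqtt/pet_feeder/set"       # hypothetical

def feed_commands(topic=TOPIC):
    """Build the two publish commands for dispensing one serving."""
    payloads = [{"serving_size": 1}, {"feed": "START"}]
    return [
        ["mosquitto_pub", "-t", topic,
         "-m", json.dumps(p, separators=(",", ":"))]
        for p in payloads
    ]

def dispense():
    """SSH into the Pi and run both commands in order."""
    for cmd in feed_commands():
        subprocess.run(["ssh", PI_HOST] + cmd, check=True)
```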

As the num­ber of al­most-there games mounted, the need for real feed­back to Claude Code be­came clear. It al­ready had unit tests and logs, but those weren’t enough. The games would build and run but have in­vis­i­ble play­ers, bro­ken UI, or in­put that was never wired up. Claude had no way to know. It needed to be able to see and play its own games.

The first tool was straight­for­ward: a sim­ple Python script to take screen­shots of the run­ning game. Claude could launch the game, screen­shot it, and see whether the ti­tle screen ac­tu­ally ren­dered or was just a black win­dow.
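A screenshot helper along those lines might look like this. The function names and file naming are hypothetical, and `mss` is just one convenient third-party capture library:

```python
import datetime

def shot_name(game, now=None):
    """Timestamped file name so successive screenshots never collide."""
    now = now or datetime.datetime.now()
    return f"{game}-{now:%Y%m%d-%H%M%S}.png"

def take_screenshot(game="current-game"):
    """Capture the primary display so Claude can inspect the frame."""
    import mss  # third-party; pip install mss
    path = shot_name(game)
    with mss.mss() as grabber:
        grabber.shot(mon=1, output=path)
    return path
```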

The sec­ond tool was more in­ter­est­ing. I gave Claude a way to send se­quences of in­put to run­ning game in­stances - things like “left for 3 sec­onds, pause for 2 sec­onds, right for one frame, fire”. It could then take screen­shots and de­cide whether to send fol­lowup com­mands. This turned Claude into its own QA tester.
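One way such a sequence tool could represent its commands is as a tiny text DSL parsed into (action, duration) steps. The syntax below is my own invention, not the project's actual format:

```python
FRAME = 1 / 60  # one frame at an assumed 60 fps

def parse_sequence(spec):
    """Parse e.g. 'left:3 pause:2 right:frame fire' into steps.

    Each token is action[:duration]; 'frame' means one frame,
    and a bare action (like 'fire') is a single-frame tap.
    """
    steps = []
    for token in spec.split():
        action, _, dur = token.partition(":")
        if dur == "frame" or not dur:
            seconds = FRAME
        else:
            seconds = float(dur)
        steps.append((action, seconds))
    return steps
```

A runner would then hold each key for its duration (via whatever input-injection layer the game host provides) and hand control back for screenshots between steps.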

These tools did­n’t need any re­fine­ment - they just worked. And the way Claude used them sur­prised me. While test­ing one game, I watched it play through all 6 stages just to ver­ify that the fi­nal boss fight worked cor­rectly. When it found a prob­lem - a health bar that was­n’t up­dat­ing - it went back to the code, fixed it, re­launched, and played through again to con­firm.

I also pulled in a few other tools from other pro­jects I’ve made:

* Scene lin­ter. Claude some­times reuses node IDs or gen­er­ates bro­ken re­source ref­er­ences in Godot’s .tscn files. These cause cryp­tic er­rors at run­time. Since adding a lin­ter that catches these be­fore the game launches, I haven’t seen a sin­gle man­gled scene file.

* Shader lin­ter. Validates cus­tom shaders and gives spe­cific er­rors back to Claude, rather than the vague shader failed to com­pile” that Godot pro­vides.

* Input ac­tion map­per. A small helper to get key­board/​con­troller in­put wired in cor­rectly. Claude can edit Godot’s pro­ject files di­rectly to add new in­put ac­tions, but it some­times gets the for­mat wrong and the er­ror mes­sages are un­help­ful.

All of these are open sourced, and I’d en­cour­age you to try them for your­self - even with­out a dog.
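As a sketch of what one scene-linter check could look like: Godot's .tscn files are plain text, with nodes declared as `[node name="..." parent="..."]` headers, so a few lines of Python can flag node names reused under the same parent. The regex and behavior here are my assumptions, not the actual linter:

```python
import re
from collections import Counter

# Matches a .tscn node header, capturing name and (optional) parent.
NODE_RE = re.compile(r'\[node name="([^"]+)"(?:[^\]]*parent="([^"]*)")?')

def find_duplicate_nodes(tscn_text):
    """Return node names declared twice under the same parent."""
    seen = Counter(NODE_RE.findall(tscn_text))
    return [name for (name, parent), n in seen.items() if n > 1]
```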

The DogKeyboard app ended up han­dling a lot more than just rout­ing key­strokes. A few de­tails worth men­tion­ing:

It mon­i­tors Claude Code us­ing Hooks and plays a chime sound when Claude goes idle - that’s Momo’s cue to type. When Claude is idle and Momo has en­tered at least 16 char­ac­ters, it auto-sub­mits by press­ing Enter. When Claude is work­ing, it back­spaces any ex­tra in­put in case Momo gets ea­ger, and it pe­ri­od­i­cally dis­misses Plan Mode prompts (Claude’s “review be­fore act­ing” step) that would oth­er­wise block progress.
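That gating logic can be sketched as a tiny decision function - a reconstruction of the behavior described above, not the Rust app's actual code:

```python
MIN_CHARS = 16  # the auto-submit threshold mentioned in the post

def next_action(claude_idle, buffered_chars):
    """Decide what to do with Momo's pending keystrokes."""
    if claude_idle and buffered_chars >= MIN_CHARS:
        return "press_enter"   # enough input and Claude is ready: submit
    if not claude_idle and buffered_chars > 0:
        return "backspace"     # Claude is busy: eat eager keystrokes
    return "wait"              # idle but under threshold, or nothing typed
```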

For the video record­ing, it runs a light­weight web­server that over­lays key­strokes as they’re pressed. I added a con­fig­urable de­lay so that if the video feed is lagged, the over­lay does­n’t show key­strokes be­fore Momo ap­pears to type them.

The first de­ci­sion was which dog to train. Hana (on the right) is twice Momo’s size but far more train­able - she’s the smartest pet I’ve had. Before turn­ing 1, she fig­ured out how to jump, grab a door han­dle, and open any door in the house just by ob­serv­ing.

I ex­pected the dogs would walk across the key­board, step­ping on keys as they go. That’s what Momo did when she stole my lip balm. Since Hana is tall enough to just step over a key­board, Momo seemed like the bet­ter can­di­date. As it turned out, both dogs learned to tap and swipe their paws along the key­board rather than step on it - but Momo learned a lit­tle faster in this case, and typed a lit­tle gen­tler, so Momo it was.

The train­ing process took about 2 weeks, with some­what in­con­sis­tent ses­sions. My goal was 10 min­utes, twice a day. I started by scat­ter­ing high-value treats (freeze-dried salmon) on the key­board to build the as­so­ci­a­tion: this thing is fun and gives good food.

Momo was frus­trated at first. She knew the key­board was in­volved but did­n’t know how. She would lie on it, bark at it, and try any­thing she could think of. At some point she tried swip­ing her paw across it. I played a chime sound and heav­ily re­warded her. After a few it­er­a­tions, the as­so­ci­a­tion clicked and she started at­tack­ing the key­board with vigor any chance she had.

The next step was to au­to­mate the re­wards. I filled up the food dis­penser with treats (4 serv­ings at a time so I could con­trol the quan­tity), waited for her to swipe at the key­board, and then ran a script to play the chime and dis­pense a serv­ing of treats. After a few ses­sions of this I in­creased the dif­fi­culty - not just one swipe, but three swipes be­fore the treats came.

Simultaneously, I de­creased the treat value to keep her healthy. First a mix of mid-value treats with an oc­ca­sional high-value one, and even­tu­ally just kib­ble with an oc­ca­sional mid-value treat. She still loved it and was health­ier for it.

Eventually I au­to­mated the whole thing with the DogKeyboard app and let it run. It re­quired at least 16 char­ac­ters per serv­ing of treats, and was sup­posed to only dis­pense once per idle pe­riod of Claude Code. But bugs crept in dur­ing test­ing - a cou­ple of times it dis­pensed mul­ti­ple serv­ings in a row. Unfortunately, Momo picked up on this and now keeps mash­ing the key­board hop­ing for a sec­ond im­me­di­ate serv­ing. The only way to pull her away is to of­fer higher-value treats else­where, which is what I do af­ter she’s put in her in­put for a game.

Here’s a small sam­ple of the games Momo made. Every game shown here is playable - these aren’t mock-ups. It’s nowhere near com­pre­hen­sive, but rep­re­sents the va­ri­ety she cre­ated. As the tools and prompts im­proved, the games got no­tice­ably bet­ter. There was also a no­tice­able bump in qual­ity when Opus 4.6 dropped - Claude be­came more likely to cre­ate cus­tom shaders and other vi­sual ef­fects.

One re­cur­ring is­sue: I kept get­ting games with ba­sic glow­ing neon 3D shapes and could­n’t get any other style. Frustrated, I asked Claude why. It told me this was the sig­na­ture style of the pro­ject - Claude’s ex­ter­nal mem­ory file had latched onto it and kept re­in­forc­ing it. Wiping MEMORY.md (Claude Code’s per­sis­tent pro­ject notes) be­fore every new game fixed this and gave much more var­ied re­sults.

DJ Smirk. One of Momo’s ear­li­est games - though more of an ex­pe­ri­ence than a game. Every key on the key­board plays a dif­fer­ent tone.

Munch. A com­pet­i­tive salad build­ing game. Collect all 7 in­gre­di­ents first to win.

Zaaz. Another puz­zle game. The goal is to paint the whole level by mov­ing a paint­brush that moves an in­te­ger num­ber of tiles at a time. Crash into ob­sta­cles to move shorter dis­tances. Some lev­els were un­winnable. It also has a scor­ing sys­tem I never fig­ured out. After this one I up­dated the prompt to ex­clude puz­zle games.

The Oracle Frog of Rome. Avoid the arms of a kraken and col­lect the golden chains to bind it. At some point the or­a­cle also be­came a king.

Octogroove. A sur­pris­ingly chal­leng­ing rhythm game. You’re an oc­to­pus us­ing four of your arms to play the drums. Mash beats in time with ba­sic mu­sic. It’s like Dance Dance Revolution, but made more dif­fi­cult by the beats com­ing from dif­fer­ent di­rec­tions.

Ewe Heard Me! A game about herd­ing sheep. Use stealth and your bark to cor­ral them into a pen. Unfortunately, it’s un­winnable - the first two sheep you get into the pen sim­ply stop and block any­thing else from en­ter­ing. After this one I placed a larger em­pha­sis on check­ing for win­nabil­ity in the prompt and tools.

Quasar Saz. As of writ­ing, this is Momo’s most re­cent game. You play as Zara, wield­ing a cos­mic saz (a long-necked stringed in­stru­ment) to fight cor­rupted sound. There are 6 stages + a boss fight. It’s fun to play for a cou­ple rounds, has good vi­su­als, and dy­namic au­dio.

When Momo first stepped on my key­board back in December, it was just a funny ac­ci­dent. A few weeks later, job­less and look­ing for a pro­ject, I de­cided to see how far that ac­ci­dent could go. The an­swer was a lot fur­ther than I ex­pected.

The tech­ni­cal pieces - key­board rout­ing, treat dis­penser, prompt en­gi­neer­ing, feed­back tools - were all solv­able en­gi­neer­ing prob­lems. What sur­prised me was how lit­tle of the fi­nal re­sult de­pended on Momo typ­ing any­thing mean­ing­ful. The magic is­n’t in the in­put. It’s in the sys­tem around it. A well-crafted prompt, strong guardrails, au­to­mated ver­i­fi­ca­tion, and good tools can turn gen­uine non­sense into a playable game.

If there’s a take­away be­yond the spec­ta­cle, it’s this: the bot­tle­neck in AI-assisted de­vel­op­ment is­n’t the qual­ity of your ideas - it’s the qual­ity of your feed­back loops. The games got dra­mat­i­cally bet­ter not when I im­proved the prompt, but when I gave Claude the abil­ity to screen­shot its own work, play-test its own lev­els, and lint its own scene files. The same tools that let a dog’s key­board mash­ing pro­duce a work­ing game will make your own in­ten­tional work with AI sig­nif­i­cantly bet­ter.

Momo is­n’t se­cretly a game de­signer. She’s a cavapoo who learned that smack­ing a plas­tic rec­tan­gle makes kib­ble ap­pear. A year ago, the gap be­tween that and soft­ware en­gi­neer­ing felt enor­mous. Now it feels small and shrinks each day.

If you want to try any of this your­self - whether with a dog, a cat, or just your own ran­dom key­board mash­ing - every­thing is open source in the links be­low.

Play the game (with hu­man bug fixes) — Recommended. Download for Windows, Mac, or Linux

Play the game (Momo’s ver­sion) — Download for Windows, Mac, or Linux

TeaLeaves — Tools, prompts, and source for de­vel­op­ing the game

...

Read the original on www.calebleak.com »

9 334 shares, 48 trendiness

I pitched a roller coaster to Disneyland at age 10 in 1978

In 1978, for my 10th birth­day, I went to Disneyland and got to ride a new roller coaster called Space Mountain. It fig­u­ra­tively and lit­er­ally took my breath away. I loved every sec­ond of it and that night, I could­n’t fall asleep; I just kept think­ing about how ex­hil­a­rat­ing it was. Then, a wild thought sud­denly hit me: Why is­n’t there a roller coaster that goes up­side down?

At first, I was like that's crazy, it can't work. But then I remembered Spin Out, the ride with a round room that spun so fast I stuck to the wall. If that worked, why not a loop on a roller coaster? I figured it would feel like the same thing. I was convinced!

I fi­nally fell asleep dream­ing of my roller coaster, full of twists, turns, and loops.

A few days later, I told my best friend Daschle. He was older, knew everything, and lived next door. "Buddy," he said, "I've got exciting but crushing news. Your idea works."

"Yep. I saw it. They're building one at Magic Mountain. It's called the Revolution. Sorry, Buddy."

But I was­n’t crushed, I was thrilled! What I knew could work was re­ally hap­pen­ing.

"How many loops does it have?" I asked.

"Ha! Mine has four. It's called the Quadrupuler! It's gonna be way better!"

That night I taped six sheets of pa­per to­gether and drew my blue­prints in col­ored mark­ers. As you can see from the photo it was glo­ri­ous!

Look closely: I didn't label those coaster hills in feet or meters, no sir, I used "building stories" for height, and the speed at each section in miles per hour. I'm 10. I'm serious here.

With guid­ing blue­prints, it was time to build the model.

I got a Styrofoam board and balsa wood. Cutting and glu­ing each tiny cross tie was slow, and with all my home­work, I could only work on week­ends. So I cal­cu­lated how many inches I could fin­ish in a week. The an­swer turned out to be some­thing like 5 months! After that I con­sid­ered just giv­ing up be­cause I re­al­ized that I was only 120 months old and this was go­ing to take 5% of my life! Plus, I still had­n’t fig­ured out how to make the loops, balsa wood does­n’t bend like that. But I told my­self: one piece at a time. That be­came my motto.

When I got to the first loop, I had to stop and think. What in the world could I make the loop out of? A lot of 10-year-old brain power went into imag­in­ing what sim­ple ma­te­r­ial I could use. Then one morn­ing, I had it: heat plas­tic strips over the stove­top flame and bend them as they cooled. The key? Don’t burn the house down.

I’d learned that the hard way a year ear­lier, when Daschle con­vinced me we should recre­ate the movie The Towering Inferno, with a card­board box un­der his house… and real fire. It got out of con­trol so fast, the flames started hit­ting the wood floor joists! Thankfully, we were able to smother it with the damp dirt down there. So yeah, wild card Daschle was not in­vited over for my plas­tic fire bend­ing ex­per­i­ment.

I don’t re­mem­ber where the plas­tic came from, but I do re­mem­ber hold­ing the strip with pli­ers over the flame. The first piece melted so fast and started burn­ing with thick black smoke that it scared me. I yanked it back and coughed. That’s when I made an amend­ment to the safety plan: don’t kill your­self with what­ever these hor­ri­ble smelly fumes were! I got a fan, opened the back door, and all the kitchen win­dows be­fore try­ing again. Eventually, I fig­ured out the per­fect dis­tance and tim­ing with the heat.

When I laid that fi­nal track piece, I was so ex­cited, so proud! I took the model out­side for bet­ter light­ing and snapped Polaroids. I needed it cap­tured in­stantly. Here’s a photo of the Polaroid with my 10-year-old pen­man­ship.

What was the next thought that im­me­di­ately popped into my head?

This mas­ter­piece is ready to be sold to Disneyland! And I wrote my let­ter. I don’t have the orig­i­nal of that. But I re­mem­ber it went some­thing like this:

To whom it may con­cern:

My name is Kevin Glikmann. I am 10 years old. Enclosed are Polaroids of a roller coaster model I made called the Quadrupuler. It has four loops! I think you should make it!

And I sent it off!

Suddenly, my chore of check­ing the mail af­ter school be­came a heart-pound­ing rit­ual. I was ner­vous, hope­ful, ex­cited. Sometimes I opened the mail­box slowly, peek­ing in. Sometimes I pre­tended it was just a nor­mal day. Other times, I yanked it open and grabbed the mail fast, try­ing any­thing to calm my nerves.

Daschle said, "No way they'll respond. It's Disneyland, Buddy, they get a million letters a day."

Weeks turned into months, and I started to think he was right. Then one day, I opened the mail­box.

An envelope with my name on it. My head was screaming, "This is it! I never get mail!"

HOLY SMOKES! What’s WED Enterprises? I started bounc­ing up and down. I started shak­ing. I ran into the house, ran straight to my room, tore open the en­ve­lope, and read:

Your recent letter was directed to my office here at WED Enterprises. WED (Walter Elias Disney) is the design and "Imagineering" branch of Walt Disney Productions. As such, we are responsible for the creation of all shows, attractions and outdoor entertainment for both Disneyland and Walt Disney World.

Thank you for showing us your "Quadrupuler" roller coaster - it looks like quite an adventure! As you may know Kevin, we are creating a new rollercoaster type attraction for Disneyland's Frontierland. Known as Big Thunder Mountain Railroad, this thrilling adventure will carry passengers on a high speed journey through the gold rush days of the old west. Big Thunder is scheduled to open later this year.

Thank you Kevin, for your in­ter­est in Walt Disney Productions.

Here is a photo of the orig­i­nal let­ter:

You would think that this letter would have ended my inventing spirit. That I would have gotten bitter and declared to the world, "No more will I invent for that mouse or anyone else!"

But no! Instead I was elated. I read it over and over. It said, "It looks like quite an adventure!" Disneyland liked it! I couldn't believe it.

Looking back, those words from Tom Fitzgerald did­n’t just val­i­date my idea, they launched my 10-year-old self-es­teem into or­bit! (Tom Fitzgerald went on to be­come one of the most in­flu­en­tial Imagineers in Disney his­tory. According to his bio, he started in 1979. I got this let­ter in April 1979 which means this must have been one of his first tasks as a new Imagineer em­ployee. lol! I won­der if he would re­mem­ber?)

A cou­ple years later, the Rubik’s Cube came out. I was ob­sessed. It took me weeks to solve, and once I did, my first thought was: What if it could turn on the an­gle? So I in­tro­duced Mr. Rubik’s Cube to my band­saw, re­designed the core, and built a very rough pro­to­type.

Here are some pic­tures.

I sent it to Ideal Toy Company, but they rejected it straight away; they don't accept unsolicited ideas. But that didn't matter. They didn't understand that I had a letter from Disney Imagineering telling me my ideas are good!

That ten-year-old in­ven­tor is still alive in me, and still does­n’t un­der­stand re­jec­tion. Over the decades, I’ve in­vented sev­eral patented board games that were shopped around but never sold.

But I’m con­vinced the early val­i­da­tion from the Disney let­ter gave me a kind of bul­let­proof re­silience.

Today, I work in one of the most re­jec­tion-heavy in­dus­tries there is, act­ing. To me, in­vent­ing and act­ing are deeply con­nected: both are about dis­cov­ery, and both feed my in­vent­ing spirit. Successes are rare, but when they come, the joy lasts for­ever.

So I keep au­di­tion­ing, and I keep in­vent­ing.

And sometimes, when frustration creeps in, wondering why I'm not further along, or why things aren't working out, that 10-year-old boy appears. He reminds me of what we believed back then, and what still guides me now: just keep going, one piece at a time.

...

Read the original on wordglyph.xyz »

10 307 shares, 17 trendiness

The First Inherently Interpretable Language Model


We are re­leas­ing Steerling-8B, the first in­ter­pretable model that can trace any to­ken it gen­er­ates to its in­put con­text, con­cepts a hu­man can un­der­stand, and its train­ing data. Trained on 1.35 tril­lion to­kens, the model achieves down­stream per­for­mance within range of mod­els trained on 2–7× more data. Steerling-8B un­locks sev­eral ca­pa­bil­i­ties which in­clude sup­press­ing or am­pli­fy­ing spe­cific con­cepts at in­fer­ence time with­out re­train­ing, train­ing data prove­nance for any gen­er­ated chunk, and in­fer­ence-time align­ment via con­cept con­trol, re­plac­ing thou­sands of safety train­ing ex­am­ples with ex­plicit con­cept-level steer­ing.

For the first time, a language model at the 8-billion-parameter scale can explain every token it produces in three key ways. More specifically, for any group of output tokens that Steerling generates, we can trace these tokens to:

[Input] the tokens in the input context that influenced them,

[Concepts] human-understandable topics in the model's representations, and

[Training data] the training sources from which the relevant knowledge originates.

We are re­leas­ing the weights of a base model trained on 1.35T to­kens as well as com­pan­ion code to in­ter­act and play with the model.

Below we show Steerling-8B gen­er­at­ing text from a prompt across var­i­ous cat­e­gories. You can se­lect an ex­am­ple, then click on any high­lighted chunk of the out­put. The panel be­low will up­date to show:

Input Feature at­tri­bu­tion: which to­kens in the in­put prompt strongly in­flu­enced that chunk.

Concept at­tri­bu­tion: the ranked list of con­cepts, both tone (e.g. an­a­lyt­i­cal, clin­i­cal) and con­tent (e.g. Genetic al­ter­ation method­olo­gies), that the model routed through to pro­duce that chunk.

Training data at­tri­bu­tion: how the con­cepts in that chunk dis­trib­ute across train­ing sources (ArXiv, Wikipedia, FLAN, etc.), show­ing where in the train­ing data the mod­el’s knowl­edge orig­i­nates.

Steerling is built on a causal discrete diffusion model backbone, which lets us steer generation across multi-token spans rather than only at the next token. The key design choice is decomposing the model's embeddings into three explicit pathways: ~33K supervised "known" concepts, ~100K "discovered" concepts the model learns on its own, and a residual that captures whatever remains.

We then constrain the model with training loss functions that ensure the model routes signal through concepts without a fundamental trade-off with performance. The concepts feed into logits through a linear path: every prediction decomposes exactly into per-concept contributions, and we can edit those contributions at inference time without retraining. For the full architecture, training objectives, and scaling analysis, see Scaling Interpretable Models to 8B.
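To make the linear-path claim concrete, here is a toy numerical sketch - not Steerling's actual code, and with illustrative sizes - of why a linear concept-to-logit map gives an exact per-concept decomposition and free inference-time editing:

```python
import numpy as np

# Toy setup: one token position, 8 concepts, a 20-word vocabulary.
rng = np.random.default_rng(0)
n_concepts, vocab = 8, 20
concept_acts = rng.random(n_concepts)          # concept activations at this position
W = rng.standard_normal((n_concepts, vocab))   # linear concept -> logit map

# Because the path is linear, the logits decompose EXACTLY into
# one additive contribution per concept.
contribs = concept_acts[:, None] * W           # shape (n_concepts, vocab)
logits = contribs.sum(axis=0)

# Inference-time edit: suppress concept 3 by zeroing its contribution.
# No retraining needed; amplification would scale instead of zero.
edit = np.ones(n_concepts)
edit[3] = 0.0
edited_logits = (edit[:, None] * contribs).sum(axis=0)

# The edit removes exactly concept 3's share of the logits, nothing else.
assert np.allclose(logits - edited_logits, contribs[3])
```

A nonlinear readout would not allow this: contributions would interact, and "subtract concept 3" would no longer be a well-defined operation on the logits.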

Despite being trained with significantly less compute than comparable models, Steerling-8B achieves competitive performance across standard benchmarks. The figure below shows average performance (across 7 benchmarks) versus approximate training FLOPs on a log scale, with vertical lines marking multiples of Steerling's compute budget.

In the previous update, we shared several metrics that assess how interpretable a model's representations are. Here we provide another metric that gives insight into the model's use of its concepts. On a held-out validation set, over 84% of token-level contribution comes from the concept module: the model is not just using the residual to make its predictions. This matters for control: if the model's predictions genuinely flow through concepts, then editing those concepts at inference time actually changes what the model does, rather than nudging a side channel while the real work happens elsewhere.

A useful check is what happens when we remove the residual pathway. On several LM Harness tasks, dropping the residual has only a small effect, which suggests the model's predictive signal is largely routed through concepts rather than hidden "everything-else" channels.
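The same ablation is easy to picture with a toy model (again an assumption-laden sketch, not the paper's evaluation): if the concept pathway carries most of the logit mass, zeroing the residual pathway barely moves the logits.

```python
import numpy as np

# Toy logits from two additive pathways over a 20-word vocabulary:
# a strong concept pathway and a deliberately weak residual pathway.
rng = np.random.default_rng(1)
vocab = 20
concept_logits = rng.standard_normal(vocab)          # concept pathway
residual_logits = 0.05 * rng.standard_normal(vocab)  # residual pathway

full = concept_logits + residual_logits
ablated = concept_logits                             # residual dropped

# Fraction of total (absolute) logit mass carried by concepts -
# the toy analogue of the ~84% concept-contribution figure above.
total = np.abs(concept_logits).sum() + np.abs(residual_logits).sum()
concept_share = np.abs(concept_logits).sum() / total

# With the residual this weak, ablating it barely perturbs the logits.
max_drift = np.abs(full - ablated).max()
```

In the real model the check runs the other way around: the ablation result is evidence that the concept pathway dominates, rather than the dominance being built in by construction as it is in this toy.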

Finally, Steerling can de­tect known con­cepts in text with 96.2% AUC on a held-out val­i­da­tion dataset.

In the com­ing weeks, we’ll be re­leas­ing deep dives on each of these ca­pa­bil­i­ties:

* Concept dis­cov­ery: what did Steerling learn that we did­n’t teach it? We’ll open up the dis­cov­ered con­cept space and show struc­ture that sur­prised us.

* Alignment with­out fine-tun­ing: re­place thou­sands of safety train­ing ex­am­ples with a hand­ful of con­cept-level in­ter­ven­tions.

* Memorization & train­ing data val­u­a­tion: trace any gen­er­a­tion back to the train­ing data that pro­duced it, and as­sign value to in­di­vid­ual data sources.

* The case for in­her­ent in­ter­pretabil­ity: what do you gain when in­ter­pretabil­ity is de­signed in from the start, and what do you miss when it’s bolted on af­ter the fact?

We’ll cover each of these in de­tail in up­com­ing posts, with quan­ti­ta­tive eval­u­a­tions and de­ploy­ment-ori­ented case stud­ies.

...

Read the original on www.guidelabs.ai »
