10 interesting stories served every morning and every evening.
A minute-by-minute reconstruction of the massacre by Earshot and Forensic Architecture found Israeli soldiers fired over 900 bullets at the aid workers, killing 15.

Funerals held at Nasser Hospital in Khan Yunis, in southern Gaza, for aid workers from the Palestinian Red Crescent who were killed in an Israeli attack in Tel al-Sultan. March 31, 2025. Photo by Hani Alshaer/Anadolu via Getty Images.

Israeli soldiers fired nearly a thousand bullets during the massacre of 15 Palestinian aid workers in southern Gaza on March 23, 2025—with at least eight shots fired at point-blank range—according to a joint investigation by the independent research groups Earshot and Forensic Architecture. The report, based on eyewitness testimony and audio and visual analysis, shows that a number of aid workers were executed and that at least one was shot from as close as one meter away.

In Tel al-Sultan that day, Israel killed eight aid workers with the Palestine Red Crescent Society (PRCS), six from Palestinian Civil Defense, and a UN relief agency staffer. The massacre immediately triggered international condemnation and was described by PRCS as “one of the darkest moments” of the war.

The Israeli military was forced to change its story about the ambush several times, following the discovery of the bodies in a mass grave, along with their flattened vehicles, and the emergence of video and audio recordings taken by the aid workers. An internal military inquiry ultimately did not recommend any criminal action against the army units responsible for the incident.

The report by Earshot and Forensic Architecture reconstructs, minute by minute, how the massacre unfolded.
Using video and audio recordings from the incident, open-source images and videos, satellite imagery, social media posts, and other materials, as well as in-depth interviews with two survivors of the attack, the groups were able to digitally reconstruct the scene and events surrounding the massacre.

Israeli soldiers ambushed and subjected Palestinian aid workers to a near-continuous assault for over two hours even though the soldiers never came under fire.

At least 910 gunshots were documented across three video and audio recordings of the attack. The vast majority of these gunshots, at least 844, were fired over just five minutes and 30 seconds.

At least 93% of the gunshots recorded in the first minutes of the attack were fired directly towards the emergency vehicles and aid workers by Israeli soldiers. During this time, at least five shooters fired simultaneously. Witness testimonies suggest as many as 30 soldiers were present in the area.

Israeli soldiers were initially positioned on an elevated sandbank by the road, with no obstructions limiting their line of sight. The emergency lights and markings of the victims’ vehicles would have been clearly visible to the soldiers at the time of the attacks.

Israeli soldiers first maintained fixed firing positions from the elevated sandbank, then walked toward the aid workers while continuing to shoot. Upon reaching the aid workers, the soldiers moved between them and the vehicles and executed some of the aid workers at point-blank range, as close as one meter away.

In the immediate aftermath of the attack, the Israeli military conducted extensive earthworks at the site.
In the days and weeks that followed, the area was further transformed by the Israeli military’s construction of the “Morag Corridor,” a security zone splitting the southern Gaza Strip, and the erection of an aid distribution site operated by the Israeli- and U.S.-backed Gaza Humanitarian Foundation.

“This seems to be a very well documented case using a number of forms of credible evidence that are cross referenced,” Katherine Gallagher, a senior staff attorney at the Center for Constitutional Rights, told Drop Site after reviewing a detailed summary of the investigation. “It presents a very compelling case, and honestly, a very devastating one.”

The Israeli military did not respond to specific inquiries from Drop Site and instead pointed to the findings of an internal investigation published on April 20 that found “the incident occurred in a hostile and dangerous combat zone, under a widespread threat to the operating troops.” It also “found no evidence to support claims of execution,” which it called “blood libels and false accusations against IDF soldiers.”

The joint report will be released February 24 at a gathering at the British parliament in Westminster hosted by the British Palestinian Committee with Earshot, Forensic Architecture, and the international humanitarian law coordinator for PRCS, Dana Abu Koash. The full report is available here.

On March 23, 2025 at 3:52 a.m., PRCS dispatched two ambulances from two different areas to the scene of an Israeli airstrike in Al-Hashashin, an area near Rafah. Israel had resumed its scorched-earth bombing campaign on Gaza a few days earlier after abandoning the January 2025 ceasefire agreement.

The attack on the aid workers began at approximately 4:00 a.m. when one of the ambulances driving along Gush Katif road in Al-Hashashin came under Israeli fire. The vehicle had its emergency lights turned on at the time.
Mustafa Khafaja, who was driving, lost control of the vehicle, which veered left off the road and stopped near an electricity pole. Khafaja and his colleague, Ezz El-Din Shaat, who was in the passenger seat, were both killed. A third PRCS worker, Munther Abed, who was in the back of the vehicle, threw himself to the floor of the van and survived.

After the shooting stopped, Israeli soldiers approached the ambulance, dragged Abed out of the car, beat him, and detained him at a nearby pit. Sometime later, two Palestinian civilians—a father and son from the Bardawil family—were also detained and brought to the pit. The Israeli soldiers then took the three detainees to an elevated area behind a tall concrete structure some 38 to 48 meters southeast of the ambulance, where an additional group of Israeli soldiers was positioned.

Still from the situated testimony with Munther Abed recounting the location of the pit and the area behind the tall concrete structure where he was taken when detained by Israeli soldiers. (Forensic Architecture, 2026).

By 4:35 a.m., the second ambulance, having completed its mission in Al-Hashashin, was dispatched to search for the first ambulance, which had lost contact with PRCS headquarters at 3:55 a.m. The second ambulance was joined by two more PRCS ambulances, an ambulance belonging to Civil Defense, and a Civil Defense fire truck. The five-vehicle rescue convoy arrived at the scene of the attack on the first ambulance shortly after 5:00 a.m. All vehicles were clearly marked and had their emergency lights turned on.

The position of each ambulance as the shooting began. (Forensic Architecture, 2026).

A PRCS worker in one of the ambulances, Refaat Radwan, began filming on his phone as they drove to the site. His recovered videos, as well as recordings of phone calls made to PRCS dispatch by two other aid workers at the scene, provided crucial evidence of the massacre.
Forensic Architecture and Earshot’s analysis of the recordings corroborated eyewitness testimony on the positions and movements of the Israeli soldiers throughout the attack.

At 5:09 a.m., as the aid workers parked and approached the first ambulance on foot, Israeli soldiers positioned on the elevated sandbank opened fire. A digital reconstruction of the scene shows that the soldiers would have had an uninterrupted view of the arrival of the convoy. Abed, who was being detained at gunpoint on the elevated sandbank, testified that the soldiers were kneeling and aiming their weapons at the convoy as it approached.

Locations of all emergency vehicles at the incident site at 5:10 a.m. relative to Munther Abed and the Israeli soldiers who detained him. From their position, the soldiers would have been able to clearly see the convoy’s arrival with its emergency lights on. (Forensic Architecture, 2026).
The Israeli soldiers remained on the sandbank while firing continuously at the aid workers for four minutes. The soldiers then advanced towards the aid workers at a walking pace of approximately one meter per second while continuously shooting.

Echolocation of Israeli soldiers approaching the aid workers during the final 1 minute and 30 seconds. (Earshot, 2026).
Upon reaching the vehicles, the Israeli soldiers continued to fire as they walked in between the ambulances and the fire truck, shooting the aid workers at close range in execution-style killings.

At approximately 5:13 a.m., PRCS aid worker Ashraf Abu Libda called the group’s headquarters. The recording, which overlaps with Radwan’s video, provided additional details. In this recording, Earshot found that at least eight gunshots were fired from positions between the emergency vehicles. One of the gunshots captured on Abu Libda’s phone call was fired from a range of one to four meters from him. The gunshots coincide with the last time Abu Libda’s voice is heard on the call, suggesting these are the gunshots that killed him.

Echolocation of Israeli soldiers as close as 1 to 4 meters from aid workers and most likely close-range execution. (Earshot, 2026).
At least 844 gunshots were fired over a period of five minutes and 30 seconds, with at least 93% of the shots fired toward the emergency vehicles. The audio ballistics analysis confirms the presence of at least five shooters—and possibly many more—firing simultaneously. The two surviving PRCS aid workers, Munther Abed and Asaad Al-Nasasra, testified that between 12 and 30 soldiers were at the scene.

“The reconstruction was jointly achieved with the two survivors of the incident, with an immersive spatial model they could walk through and amend. Together with spatial and audio analysis we established the position of the soldiers on an elevated ground with an unobstructed line of sight to the emergency vehicles. The soldiers could clearly see the aid workers, shot at them continuously and deliberately from this position and then approached to execute them one by one at close range,” Samaneh Moafi, assistant director of research at Forensic Architecture, told Drop Site. “Locating the massacre within the evolution of Israel’s campaign in Gaza shows that it was not an isolated incident but part of the genocide.”

Earshot used echolocation to analyze the audio on the recordings in order to arrive at precise estimates of the shooters’ locations. Echolocation is the process of locating the source of a sound based on an analysis of the sound’s echoes and the environment in which the sound travels. The Israeli military destroyed and cleared so many buildings in the Tel al-Sultan area where the ambush of the aid workers took place that very few structures remained. This destruction actually strengthened Earshot’s ability to determine the positions and movements of Israeli soldiers, based on identifying the surfaces responsible for clearly distinguishable gunshot echoes.
Rather than having multiple buildings reflecting the sound waves, there were only a few standing walls and the emergency vehicles themselves.

The analysis of the video and audio corroborated Al-Nasasra’s eyewitness testimony that Israeli soldiers “came down [from the sandbank], got close to [the aid workers] and shot them from close range,” and “were walking between [the aid workers] and shooting.”

Map showing the Israeli soldiers’ positions derived from an audio analysis of gunshot echoes from Refaat Radwan’s video. (Earshot, 2026).

“Earshot forensically analyzed over 900 gunshots fired at aid workers. It took one whole year of careful listening to reconstruct an auditory picture of what happened that dark night,” Lawrence Abu Hamdan, the director of Earshot, told Drop Site. “I am so proud that our work has corroborated the survivors’ testimony, establishing their brave accounts as accurate and reliable documentation of what occurred that day. Yet, it is the echoes of this event that continue to haunt us: the destruction and clearing of Tel al-Sultan left only three structures standing at this crime scene. While the few echoes reflecting off these buildings brought light to this crime, they have also revealed a scale of erasure of life beyond this one event.”

According to autopsy reports first reported by the Guardian, the aid worker who filmed the video—Radwan—was shot in the head, while Abu Libda and another aid worker, Muhammad Bahloul, were shot in the chest. A doctor who examined the bodies reportedly described the “specific and intentional location of shots at close range” as indicative of an “execution-style” shooting.

More than two hours after the initial attack, a clearly marked UN vehicle, a Toyota Hilux, passed by the site. Israeli soldiers fired on the vehicle, killing the driver. The UN lost contact with the vehicle at 6:00 a.m.
A second UN vehicle, a minibus, arrived in the area minutes later and was brought to a stop by gunfire a little over 200 meters away. The driver was able to escape.

Left: Photograph of the UN Toyota Hilux taken on 30 March 2025, when the bodies of the victims were recovered. (OCHA, 2025). Right: Still from the situated testimony with Asaad recounting the location of the UN Toyota Hilux when it was brought to a stop. (Forensic Architecture, 2026).

Annotated 3D model showing the position of the two UN vehicles in relation to the missing ambulance and the convoy of emergency vehicles. (Forensic Architecture, 2026).

Between 6:55 and 7:13 a.m., Al-Nasasra made a phone call to PRCS headquarters that captured at least 42 additional gunshots and the sound of vehicle movement. The recording also captured the sound of an explosion the investigation identified as the firing of an Israeli-made Spike LR guided missile.

Following the ambush, Israeli forces crushed all eight vehicles using heavy machinery and attempted to bury them under the sand.

The body of Anwar al-Attar was found near the ambush site on March 27, and the bodies of the other 14 aid workers, all wearing identifying uniforms or volunteer vests of their respective organizations, were found in a mass grave near the site on March 30.

The 15 aid workers killed were: Mustafa Khafaja, Ezz El-Din Shaat, Saleh Muammar, Refaat Radwan, Muhammad Bahloul, Ashraf Abu Libda, Muhammad al-Hila, and Raed al-Sharif with PRCS; Zuhair Abdul Hamid al-Farra, Samir Yahya al-Bahapsa, Ibrahim Nabil al-Maghari, Fouad Ibrahim al-Jamal, Youssef Rassem Khalifa, and Anwar al-Attar with Civil Defense; and Kamal Mohammed Shahtout with UNRWA.

Annotated still from the 3D model showing the location of the bodies of the aid workers and their vehicles before the mass burial. (Forensic Architecture, 2026).

One of the survivors, Abed, was released hours after the ambush.
The other survivor, Asaad Al-Nasasra, was held in Israeli custody without charge for 37 days, tortured, and interrogated in relation to the incident at the Sde Teiman detention camp, a notorious Israeli prison camp in the Negev desert, before being released on April 29.

Jonathan Whittall, a senior UN official in Palestine between 2022 and 2025, was one of the team members on the ground when the mass grave was discovered on March 30 and provided evidence to Forensic Architecture and Earshot for their investigation. “Following our discovery of the mass grave, the narrative from Israeli forces shifted multiple times; we were fed several versions of a blatant lie,” Whittall told Drop Site. “The men we retrieved on Eid last year were medics. We found them in their uniforms, ready to save lives, only to be killed by Israeli forces fully aware of their protected status.”

Whittall, who is now executive director of KEYS Initiative, a political affairs and strategic advisory organization, has also contributed reporting to Drop Site News. “This illustrates an abhorrent disregard for international law,” he continued, “where any Palestinian in an Israeli-designated evacuation zone is targeted regardless of their civilian status. It highlights the total lack of accountability under which these forces operate. International governments continue to arm and trade with a leadership accused of genocide, whose soldiers massacred medics and buried them in a grave marked by the siren light of the ambulance they destroyed.”

Palestinian Red Crescent aid workers mourn the killing of their colleagues by the Israeli military in Tel al-Sultan as their bodies are brought to Nasser Hospital in Khan Yunis, in southern Gaza. March 30, 2025. (Photo by Abdallah F.s. Alattar/Anadolu via Getty Images).

In the aftermath of the massacre, the Israeli military provided several conflicting versions of events to justify the killings.
On March 28, after the discovery of al-Attar’s body, the Israeli military admitted that its soldiers had fired on “ambulances and fire trucks.” Three days later, after the remaining bodies were discovered in a mass grave, the Israeli military claimed that “several uncoordinated vehicles were identified advancing suspiciously toward IDF troops without headlights or emergency signals.”

After footage from Radwan’s phone was first published by the New York Times a few days later, the Israeli military backtracked on its claim that the vehicles did not have emergency signals on when Israeli troops opened fire, saying the statement was inaccurate.

The Israeli military then announced on April 20 that an internal inquiry into the incident had found the killings were caused by “several professional failures, breaches of orders, and a failure to fully report the incident.”

The Israeli military said troops from the Golani reconnaissance battalion were involved in the attack. However, it said soldiers did not engage in “indiscriminate fire” during the incident, but that they opened fire on what they believed to be a “tangible threat” amid what the military called an “operational misunderstanding.” It blamed the attacks on “poor night visibility” and maintained the incident had unfolded in a “hostile and dangerous combat zone, under a widespread threat to the operating troops.” Six of the fifteen Palestinians killed, the military said, “were identified in a retrospective examination as Hamas terrorists,” but it provided no evidence to support the claim.

“On the specific question of Israel justifying the attack on clearly marked medical personnel because of suspicions of membership in groups or links to groups or terrorism—because there is an affirmative duty to respect and protect medical personnel, you don’t shoot first, you protect first,” Gallagher told Drop Site.
“But what this investigation reveals is that there was a shoot-first policy, and that is unlawful under international law.”

As for the burial of the bodies in a mass grave, the Israeli military said in its report “it was decided to gather and cover the bodies to prevent further harm and clear the vehicles from the route in preparation for civilian evacuation. The body removal and vehicle crushing were carried out by field commanders.” It concluded, “removing the bodies was reasonable under the circumstances, but the decision to crush the vehicles was wrong. In general, there was no attempt to conceal the event.”

As a result of the investigation, the commanding officer of the 14th Brigade received a letter of reprimand for “his overall responsibility for the incident,” while the deputy commander of the Golani reconnaissance battalion involved in the incident was “dismissed from his position due to his responsibilities as the field commander and for providing an incomplete and inaccurate report during the debrief.”

The inquiry did not recommend any criminal action be taken against the military units responsible for the incident. The Palestine Red Crescent Society, Civil Defense, and the UN humanitarian agency in Gaza all rejected the Israeli military report.

“Attacks on medical personnel and those who are identified as medical personnel are patently unlawful under international law, and there is an affirmative obligation to protect medical personnel in the context of armed conflict. So the very first thing is that there’s a breach of that very clear and time-honored principle of international humanitarian law,” Gallagher said.
“When you zoom out and look at this in the context of the way the Israeli assault has been carried out over many months and years in Gaza and we see that there is a pattern and practice of attacks on medical personnel—similar to journalists and other groups that are explicitly and uniquely protected as classes of civilians in international humanitarian law—it raises even more questions and deep concern about the lack of accountability, because what we know is that impunity breeds repetition.”

Gallagher, who previously worked at the UN’s International Criminal Tribunal for the former Yugoslavia, said that a legal analysis of the massacre would find serious violations of the Rome Statute of the International Criminal Court. “When you’re talking about grave breaches of the Geneva Conventions, in particular war crimes, you have obligations, not just the possibility, but obligations, to open investigations,” Gallagher said.

Transforming the Site of the Massacre into a GHF Hub

Satellite imagery from the morning of the ambush shows that extensive earthworks were carried out at the incident site. The images reveal the construction of an earth berm approximately 220 meters north of the ambush location and another roughly 410 meters to the south. These two positions later functioned as checkpoints, restricting access and controlling passage along an evacuation route established that morning by the Israeli military leading toward the coastal Al-Mawasi area.

The earthworks that began shortly after the attack were used in the construction of a Gaza Humanitarian Foundation “aid distribution” site, at which civilians were targeted and shot at. (Forensic Architecture, 2026).
In the days and weeks that followed, the area surrounding the incident site was further transformed by the Israeli military’s construction of the “Morag Corridor” security zone and the erection of an aid distribution site operated by the Gaza Humanitarian Foundation.

“On that same site of the mass grave, the Gaza Humanitarian Foundation established a distribution point where desperate people were gunned down trying to access food,” Whittall told Drop Site. “Now, the U.S., under the so-called Board of Peace, plans to build a ‘New Rafah’ over this crime scene. Without meaningful accountability, ‘New Rafah’ will be a monument to impunity.”
...
Read the original on www.dropsitenews.com »
In a private exchange with priests of the Diocese of Rome on Thursday, Pope Leo XIV responded to four questions, advising them on prayer, study, and priestly fraternity.
The off-camera moment took place after Leo gave a public speech to the priests, inviting them to “rekindle the fire” of their ministry.
“The first priest to speak was a young man who asked the pope how the Gospel can be embodied in the world of young people,” according to a priest present at the Feb. 19 meeting in the Vatican’s Paul VI Hall.
The priest told ACI Stampa, the Italian-language sister service of EWTN News, that Leo’s answer to this question was: “First of all, what is needed is the witness of the priest; and then, when meeting young people, they must broaden their horizons to reach as many young people as possible. For this, it is necessary to rediscover the value of communion.”
Responding to a second question, the pope recommended knowing well “the community in which one lives and works. It is necessary to know the reality well. To love your community, you must know it. Therefore, a real shared effort is needed to understand it better and thus face together all the challenges that arise.”
“The pope also invited us to use our brains more and not artificial intelligence [AI] to prepare homilies, as he now sees and hears happening,” the priest said. “And here the pope made a strong recommendation regarding prayer: We priests must pray — remain with the Lord, that is — not reduce everything to the breviary or to a few brief moments of prayer, but truly learn again to listen to the Lord.”
The third question was more reflective: today, as priests, we are unable to rejoice in the success of a fellow priest.
The pope responded that “we are all human, but we should set a good example, especially the example of priestly fraternity.”
He dwelt at length on how to cultivate priestly friendship. The pope also reminded them to continue studying. “It must be ongoing study; we must always stay up to date. But the fundamental thing is to cultivate priestly friendship, priestly fraternity,” the priest from Rome said.
The final question concerned elderly priests and their loneliness. According to the priest, Leo’s response “reaffirmed the need for fraternity, for the joy of being together. We must give thanks, truly live gratitude for the fact of being priests, from the day of our ordination every single day, and thank God for this great gift, and live the priesthood with gratitude. And here, a great deal of humility is also required.”
“Personally, I was happy,” the priest concluded. “We greatly appreciated the pope for a very, very concrete speech.”
This story was first published by ACI Stampa, the Italian-language sister service of EWTN News. It has been translated and adapted by EWTN News English.
...
Read the original on www.ewtnnews.com »
The latest update of Firefox, version 148, introduces a much-anticipated “AI kill switch” feature, allowing users to disable AI functionalities such as chatbot prompts and AI-generated link summaries. Mozilla emphasizes that once AI features are turned off, future updates will not override this choice. This decision reflects the company’s new revenue-focused strategy regarding AI integrations.
To disable AI features, users can navigate to Settings > AI Controls and toggle the ‘Block AI Enhancements’ option. This will prevent any in-app notifications encouraging users to try out AI features, as well as remove any previously downloaded AI models from the device. For those who wish to maintain some AI functionalities, a selective blocking option is available, enabling users to retain useful features like on-device translations while avoiding cloud-based services.
Beyond the AI kill switch, Firefox 148 gives users more control over remote updates, allowing them to opt out while still minimizing data collection. These preferences can be set under Settings > Privacy & Security > Firefox Data Collection.
The update also focuses on enhancing core web platform capabilities, including the integration of the Trusted Types API and Sanitizer API to combat cross-site scripting (XSS) issues. Additionally, Firefox 148 now includes improved screen reader compatibility for mathematical formulas in PDFs, availability of Firefox Backup on Windows 10, and translation capabilities for Vietnamese and Traditional Chinese. New tab wallpapers will also be featured in new container tabs, alongside the addition of Service worker support for WebGPU.
For more detailed information on the update, users can refer to the official release notes.
...
Read the original on serverhost.com »
This article has been reviewed according to Science X’s editorial process
and policies. Editors have highlighted the following attributes while ensuring the content’s credibility:
A protein lurking around in the blood can help with the accurate diagnosis of Alzheimer’s disease. In a recent study, researchers from Spain investigated how blood-based biomarkers, such as a protein called p-tau217, affect both the clinical diagnosis of Alzheimer’s and neurologists’ confidence in their diagnosis.
After following 200 consecutive new patients aged 50 and older who presented with cognitive symptoms, they found that a simple blood test measuring p-tau217 significantly improved diagnostic accuracy in routine clinical practice.
When relying solely on standard clinical evaluation, doctors correctly diagnosed Alzheimer’s in 75.5% of cases, but when incorporating blood test results, diagnostic accuracy increased to 94.5%. The findings are published in the Journal of Neurology.
Phosphorylated tau, or p-tau217, is a protein that naturally occurs in the brain and helps keep neurons, the cells that carry signals, stable and healthy. The trouble begins when this protein becomes abnormally phosphorylated and clumps together, forming tangles that disrupt communication between brain cells. Over time, this damage can impact brain function and lead to neurodegenerative conditions such as Alzheimer’s disease.
While p-tau217 is not considered the direct cause of Alzheimer’s, elevated levels in the blood are now recognized as one of the most accurate early warning signs of the disease.
In many parts of the world, the population is rapidly aging, and with it the prevalence of age-related diseases like Alzheimer’s and dementia is rising. However, the standard ways to diagnose Alzheimer’s today, such as brain scans and spinal taps, are costly, uncomfortable, and often hard for patients to access.
Scientists have long known that p-tau217 is a reliable biomarker for detecting early signs of Alzheimer’s, but most of these data come from highly controlled research labs. How well it works in everyday medical clinics and whether it truly boosts doctors’ confidence in their diagnoses remain less explored.
In this study, the researchers focused on both these factors in real-world medical settings. They followed patients who came in for general neurology consultations and to a specialized cognitive neurology unit with cognitive symptoms. Clinicians noted their initial diagnosis and how confident they felt about it, then reviewed the p-tau217 blood test results and recorded any changes.
The team found that after reviewing the p-tau217 results, diagnostic accuracy jumped by 19 percentage points. For about one in four patients, the blood test prompted doctors to change their diagnosis. Some people who were first believed to have Alzheimer’s turned out to have a different condition, while others who were thought to be experiencing normal aging were correctly identified as having Alzheimer’s. The doctors’ confidence in their diagnoses also rose from an average of 6.90 to 8.49 on a 10-point scale.
The p-tau217 tests proved to be effective across every stage of cognitive decline, be it early memory complaints or late-stage decline such as dementia. The findings show that this blood test could provide a more accurate and less invasive way to diagnose Alzheimer’s, potentially improving care for millions of people.
...
Read the original on medicalxpress.com »
My old 2016 MacBook Pro has been collecting dust in a cabinet for some time now. The laptop suffers from a “flexgate” problem, and I don’t have any practical use for it. For quite some time, I’ve been thinking about repurposing it as a guinea pig, to play with FreeBSD — an OS that I’d aspired to play with for a long while, but had never had a real reason to.
During the recent holiday season, right after the FreeBSD 15 release, I finally found time to set the laptop up. At that point I didn’t plan, or even suspect, that this would turn into a story about AI coding.
2016 MacBook Pro models use the Broadcom BCM4350 Wi-Fi chip, which FreeBSD doesn’t support natively. To get working Wi-Fi, the typical suggestion on FreeBSD forums is to run wifibox — a tiny Linux VM, with the PCI Wi-Fi device passed through to it, that lets Linux manage the device through its brcmfmac driver.
Brcmfmac is a Linux driver (ISC license) for a set of FullMAC chips from Broadcom. The driver offloads processing jobs, like 802.11 frame movement, WPA encryption and decryption, etc., to the firmware running inside the chip, while the driver and the OS handle the high-level management work (see Broadcom brcmfmac (PCIe) in the Linux Wireless documentation).
Say we want to build a native FreeBSD kernel module for BCM4350. In theory, the separation of jobs between the firmware and the driver sounds perfect. The “management” part of work is what FreeBSD already does for other Wi-Fi devices it supports. What’s left is to port some amount of existing “glue code” from the specifics of Linux to FreeBSD. If we ignore a lot of details, the problem doesn’t sound too complicated, right?
A level-zero idea, when one hears about “porting a bunch of existing code from A to B”, in 2026 is, of course, to use AI. So that was what I tried.
I cloned the brcmfmac subtree from Linux, and asked Claude Code to make it work on FreeBSD. FreeBSD already has drivers that work through LinuxKPI — a compatibility layer for running Linux kernel drivers. So I specifically pointed Claude at the iwlwifi driver (a softmac driver for Intel wireless network cards), asking it to “do as they did”. And, at first, it even looked like this could work — Claude told me so.
The module, indeed, compiled, but it didn’t do anything. Because, of course: the VM where we tested the module didn’t even have the hardware. After I passed the PCI device into the VM and attempted to load the driver against the chip, the challenges started popping up immediately. The kernel panicked, and after Claude fixed the panics, it discovered that the “module didn’t do anything”. Claude honestly tried to sift through the code, adding more and more #ifdef __FreeBSD__ wrappers here and there. It complained about missing features in LinuxKPI. The module kept causing panics, and the agent kept building FreeBSD-specific shims and callbacks, while warning me that this project would be very complicated and messy.
After a number of sessions, the diff produced by the agent started to look significantly larger than what I’d hoped it would be. Even worse, the driver didn’t look anywhere close to working. This was right around the time when Armin Ronacher posted about his experience building a game from scratch with Claude Opus and the Pi agent.
Beyond the observation that working in the Pi coding agent feels more productive than in Claude Code, the video got me thinking that my approach to the task was too straightforward. The code of the brcmfmac driver is moderately large. The driver supports several generations of Wi-Fi adaptors, different capabilities, etc. But my immediate task was very narrow: one chip, only PCI, only a Wi-Fi client.
Instead of continuing with the code, I spawned a fresh Pi session and asked the agent to write a detailed specification of how the brcmfmac driver works, with a focus on the BCM4350 Wi-Fi chip. I explicitly set the audience for the specification to be readers tasked with implementing it in a clean-room environment. I asked the agent to explain how things work “down to the bits”. I added some high-level details for how I wanted the specification laid out, and let the agent go brrrr.
After a couple of rounds, the agent produced a “book of 11 chapters” that honestly looked like a fine specification:
% ls --tree spec/
spec
├── 00-overview.md
├── 01-data-structures.md
├── 02-bus-layer.md
├── 03-protocol-layer.md
├── 04-firmware-interface.md
├── 05-event-handling.md
├── 06-cfg80211-operations.md
├── 07-initialization.md
├── 08-data-path.md
├── 09-firmware-commands.md
└── 10-structures-reference.md
Of course, one can’t just trust what AI has written.
To proofread the spec I spawned a clean Pi session and — for fun — asked the Codex model to read the specification and flag any places where the text isn’t aligned with the driver’s code (“Source code is the ground truth. The spec needs to be verified, and updated with any missing or wrong details”). The agent followed through and found several places to fix, and also proposed multiple improvements.
Of course, one can’t just trust what AI has written, even if this was in a proofreading session.
To double-proofread the fixes I spawned another clean Pi session, asking the Opus model to verify whether what was proposed was aligned with how things actually work in the driver’s code.
As a procrastination exercise, I tried this loop with a couple of coding models: Opus 4.5, Opus 4.6, Codex 5.2, Gemini 3 Pro preview. So far, my experience has been that Gemini hallucinated the most. This was quite sad, given that the model itself isn’t too bad for simple coding tasks, and it is free for limited use.
A written specification should (in theory) explain how the driver’s code interacts with the firmware.
I started a fresh project, with nothing but the mentioned “spec”, and told the Pi agent that we were building a brand-new FreeBSD driver for the BCM4350 chip. I pointed the agent to the specification, and asked it to ask me back about any important decisions we had to make, and details we had to outline, before jumping into “slopping the code”. The agent came back with questions and decision points like “Will the driver live in the kernel’s source tree?”, “Will we write the code in C?”, “Will we rely on LinuxKPI?”, “What are our high-level milestones?”, etc. One influential bit that turned out fairly productive moving forward: I asked the agent to document all these decision points in the project’s docs, and to explicitly reference these decision docs in the project’s AGENTS.md.
It’s worth saying that, just like in any real project, not all decisions stayed to the end. For example,
Initially I asked the agent to build the driver using linuxkpi and linuxkpi_wlan. My naive thinking was that, since the spec was written after looking at the Linux driver’s code, this might be simpler for the agent than building on top of the native primitives. After a couple of sessions, it didn’t look like this was the case. I asked the agent to drop LinuxKPI from the code, and to refactor everything. The agent did it in one go, and updated the decision document.
With the specification, docs, and a plan, the workflow turned into a “boring routine”. The agent had SSH access to both the build host and a testing VM that ran with the Wi-Fi PCI device passed through from the host. It methodically crunched through the backlog of its own milestones, iterating over the code, building and testing the module. Every time a milestone or a portion was finished, I asked the agent to record the progress in the docs. Occasionally, an iteration of the code crashed or hung the VM. When this happened, before fixing the problem, I asked — in a forked Pi session — to summarize, investigate, and record the problem for the agent’s future self.
After many low-involvement sessions, I got a working FreeBSD kernel module for the BCM4350 Wi-Fi chip. The module supports Wi-Fi network scanning, 2.4GHz/5GHz connectivity, and WPA/WPA2 authentication.
The source code is in the repository github.com/narqo/freebsd-brcmfmac. I didn’t write any piece of the code there. There are several known issues, which I will task the agent to resolve, eventually. Meanwhile, I advise against using it for anything beyond a study exercise.
Hacker News spawned an existential discussion following this note, with comments clustering around several points.
Really, this isn’t the battle I choose to participate in. If there is an explanation for how to properly license this type of code artefact, I can follow through.
The agent didn’t add any license by default. Choosing a license was yet another decision that is documented for the agent to follow in future iterations. Today, the code in freebsd-brcmfmac uses the ISC license, because this is what the original code of the brcmfmac Linux driver uses (e.g. see torvalds/linux/../brcmfmac/common.c).
Is there a value here when the driver “isn’t done” yet?
In software engineering, there aren’t many things that are “done”. We produce code. Others find bugs, security vulnerabilities, corner cases, and so on. We iterate. AI coding hasn’t changed these fundamentals — not by 2026, at least. Agents have sped up the part where code is produced, just like other tooling has been speeding up collaboration, bug finding, etc.
Is there “value” in the driver today? Probably not. Is there “value” in my outdated and broken MacBook? Not much. Was it insightful for me to walk the journey from “Claude can’t just take the code and port it” to “the agent needs to plan, record, and iterate in order to progress” (and doing that didn’t mean I had to write a ton of markdown essays myself)? Yes.
...
Read the original on vladimir.varank.in »
Build, program, and simulate hardware in the browser. Bring your workshop to the web.
...
Read the original on withdiode.com »
‘Students can’t reason’: Teachers warn AI is fueling a crisis in kids’ ability to think
...
Read the original on fortune.com »
For the past few weeks I’ve been teaching my 9-pound cavapoo Momo (Cavalier King Charles Spaniel and toy poodle) to vibe code games. The key to making this work is telling Claude Code that a genius game designer who only speaks in cryptic riddles is giving it instructions, adding strong guardrails, and building plenty of tools for automated feedback. The results have surpassed my expectations. Below I walk through all the pieces and how they came together.
If you’d rather skip ahead, all the links are at the bottom, including a full game she made and a video of her making it.
Back in December I was working on a small game prototype in Godot. I use Claude Code extensively these days and this project was no exception. I kicked off a procedural mesh generation task and came back to find strange input in the terminal.
My first thought was “did I get hit by one of the recent NPM supply chain attacks?” Fortunately, no (or at least the worm is still asleep in the background somewhere). A little bit of searching and I noticed my lip balm was gone off my desk - which I keep just behind my keyboard. I quickly found both the suspect and the lip balm (still intact) not far away.
At the time, I thought this was funny, took a screenshot, and moved on. Fast forward a few weeks, and I found myself with a lot of time on my hands. On January 13th, I woke up to the news that Meta had another round of layoffs and my role specifically as a research engineer had been eliminated.
Since the layoff, I’ve had plenty of time with friends and family. In recounting the anecdote of Momo typing away on my keyboard, I began to wonder “what would happen if she actually submitted that input to Claude? Could I make it do something meaningful?”. I decided to find out. Here’s what that looked like.
Momo types on a Bluetooth keyboard proxied through a Raspberry Pi 5. Keystrokes travel across the network to DogKeyboard, a small Rust app that filters out special keys and forwards the rest to Claude Code. When Momo has typed enough, DogKeyboard triggers a smart pet feeder to dispense treats. A chime tells her when Claude is ready for more input.
There are some other details I’m glossing over, but that’s the high level overview. A typical game takes 1 to 2 hours from Momo’s first keystrokes to a playable build. All the games are made in Godot 4.6, with 100% of the game logic in C#.
It’s easy to submit random text to Claude Code, but it doesn’t do much.
> y7u8888888ftrg34BC
● It looks like that might have been an accidental keyboard input. Let me know if there’s something I can help you with!
Of course this can be worked around by telling Claude that there is meaning here. After a lot of iteration, I found this opening to work well:
Hello! I am an eccentric video game designer (a very creative one) who communicates in an unusual way. Sometimes I’ll mash the keyboard or type nonsense like “skfjhsd#$%” — but these are NOT random! They are secret cryptic commands full of genius game ideas (even if it’s hard to see).
Your job: You are a brilliant AI game developer who can understand my cryptic language. No matter what odd or nonsensical input I provide, you will interpret it as a meaningful instruction or idea for our video game. You will then build or update the game based on that interpretation.
It goes on for quite a bit (you can read the full prompt here), but that’s the core of it. It took more iterations than I expected, mostly to sand down the rough edges. Sometimes the game wouldn’t have sound. Other times there would be no player character, or the control scheme would be nearly unusable. After adding a checklist of minimum requirements - working audio, WASD or arrow key controls, at least one enemy or obstacle, a visible player character - the results got a lot better:
> y7u8888888ftrg34BC
● Alright, let me decode this masterpiece of cryptic genius…
“y7u8888888ftrg34BC”
- “y7u” = “you” (keys are adjacent on keyboard - Y, 7, U)
- “8888888” = seven 8s = a loooong extending thing… a TONGUE! (8 looks like links in a chain stretching out)
- “ftrg” = scrambled FROG (f, r, g are all there, t is the tongue sound!)
- “34” = 3D + 4 directions of bug catching
- “BC” = Bug Catcher!
You want a 3D FROG BUG-CATCHING GAME! A charismatic frog with a stretchy tongue snatching bugs in a swamp. I love it.
Game: “Swamp Snacker” - You ARE the frog. Aim your tongue. Catch bugs. Get fat. Get points.
With this, things started to fly. The results weren’t perfect, but they were promising. I could see it was feasible, and the remaining problems were ones I knew how to solve.
To go from “hey that could work” to a real, repeatable system, I needed a few things:
A way to route input to Claude Code. Importantly, I would need to filter out special keys like Esc, Tab, and the Windows key - otherwise Momo would quickly end up outside Claude Code.
A way to automatically reward Momo for her work. Sure, I could just toss treats to her, but I’m trying to keep humans out of the loop.
More robust verification tools. Many of the games wound up with UI elements all jumbled together or input that was never correctly wired up. Automated feedback loops turned out to be the single biggest lever for fewer duds.
I experimented with Rust/Bevy and Unity before settling on Godot. Bevy’s animations and visuals weren’t as crisp, and Claude struggled with its coordinate conventions - likely a combination of less training data and Bevy leaving many core features, like physics, to the community. Unity was a constant struggle to keep the MCP bridge between Claude and the editor healthy. It frequently hung, and I never figured out how to get Claude Code to read the scene hierarchy from the editor. Godot’s text-based scene format turned out to be a huge advantage - Claude can read and edit .tscn files directly.
Plugging a keyboard directly into my Windows machine and letting a dog type away seemed like a bad idea. So I routed input through a Raspberry Pi first - it UDP broadcasts each keystroke to the subnet, and DogKeyboard picks it up, filters out dangerous keys, and injects them into the target application.
For the keyboard itself, I wanted something low profile and durable. I started with flexible silicone keyboards, but Momo’s bigger sister Hana (a 19lb cavapoo) ripped a hole in the first one within minutes. Mounting was another problem - nothing sticks to silicone. I epoxied one to a wooden plaque, but within 10 minutes Momo ripped off the number 6.
I was seriously considering a $400 vandal proof metal keyboard designed for kiosks when I decided to give Logitech’s Pebble Keys 2 a try. It worked. Momo loved it, VHB tape held it in place (unlike the previous silicone keyboard), and Bluetooth meant no cable to chew on.
Motivating Momo is easy - she’ll do anything for the right food reward. So what I really needed was a dispenser with an API, small serving sizes, and the ability to use my own treats (both dogs have sensitive stomachs).
I landed on Aqara’s C1 Smart Pet Feeder, controlled over Zigbee. Even its smallest serving is too large for a 9-pound dog, so I preloaded each compartment with a few treats and skipped the hopper. This limits it to about 6 servings before refilling, but that’s plenty for Momo to make a game.
Zigbee on Windows was a no go - after hours of driver and firmware issues I moved the adapter to the Pi, where it worked immediately. The final flow: DogKeyboard SSHs into the Pi and runs a script that sends two Zigbee commands:
{"serving_size":1}
{"feed":"START"}
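The post only shows the two payloads, not how they are published. A common setup is a Zigbee-to-MQTT bridge where each device has a `…/set` topic; the sketch below assumes that kind of bridge, and the topic name `zigbee2mqtt/pet_feeder/set` is my guess, not something from the post.

```python
import json

# Hypothetical topic; depends entirely on how the Zigbee bridge is configured.
FEEDER_TOPIC = "zigbee2mqtt/pet_feeder/set"

def feed_commands(servings: int = 1) -> list[tuple[str, str]]:
    """Build the two publishes the post describes: set the serving
    size, then trigger a feed."""
    return [
        (FEEDER_TOPIC, json.dumps({"serving_size": servings})),
        (FEEDER_TOPIC, json.dumps({"feed": "START"})),
    ]
```

The two-step shape (configure, then trigger) is why the script on the Pi sends two commands instead of one.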
As the number of almost-there games mounted, the need for real feedback to Claude Code became clear. It already had unit tests and logs, but those weren’t enough. The games would build and run but have invisible players, broken UI, or input that was never wired up. Claude had no way to know. It needed to be able to see and play its own games.
The first tool was straightforward: a simple Python script to take screenshots of the running game. Claude could launch the game, screenshot it, and see whether the title screen actually rendered or was just a black window.
The second tool was more interesting. I gave Claude a way to send sequences of input to running game instances - things like “left for 3 seconds, pause for 2 seconds, right for one frame, fire”. It could then take screenshots and decide whether to send followup commands. This turned Claude into its own QA tester.
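The input-sequence format above ("left for 3 seconds, pause for 2 seconds, right for one frame, fire") can be sketched as a tiny parser. This is my own illustrative reconstruction, not the actual tool: the real one presumably also delivers the events to the running game.

```python
import re

def parse_sequence(spec: str) -> list[tuple[str, float]]:
    """Turn a comma-separated action spec into (action, seconds) pairs.
    Actions without a duration are treated as instantaneous; 'one frame'
    is approximated as a single frame at 60 fps."""
    steps = []
    for part in spec.split(","):
        part = part.strip().lower()
        m = re.match(r"(\w+)(?: for ([\d.]+) seconds?)?", part)
        if not m:
            continue
        action, secs = m.group(1), m.group(2)
        if secs is not None:
            duration = float(secs)
        elif "one frame" in part:
            duration = 1 / 60  # hold for a single 60 fps frame
        else:
            duration = 0.0
        steps.append((action, duration))
    return steps
```

Once the spec is a list of timed actions, the QA loop is just: send a step, wait its duration, screenshot, and let Claude decide the next step.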
These tools didn’t need any refinement - they just worked. And the way Claude used them surprised me. While testing one game, I watched it play through all 6 stages just to verify that the final boss fight worked correctly. When it found a problem - a health bar that wasn’t updating - it went back to the code, fixed it, relaunched, and played through again to confirm.
I also pulled in a few other tools from other projects I’ve made:
* Scene linter. Claude sometimes reuses node IDs or generates broken resource references in Godot’s .tscn files. These cause cryptic errors at runtime. Since adding a linter that catches these before the game launches, I haven’t seen a single mangled scene file.
* Shader linter. Validates custom shaders and gives specific errors back to Claude, rather than the vague “shader failed to compile” that Godot provides.
* Input action mapper. A small helper to get keyboard/controller input wired in correctly. Claude can edit Godot’s project files directly to add new input actions, but it sometimes gets the format wrong and the error messages are unhelpful.
All of these are open sourced, and I’d encourage you to try them for yourself - even without a dog.
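To make the scene-linter idea concrete, here is a minimal sketch of the two checks mentioned above, written against the general shape of Godot's text scene format (`[ext_resource … id="…"]` declarations, `ExtResource("…")` references, `[node name="…" parent="…"]` headers). It is an assumption-laden toy, not the open-sourced linter, which surely checks much more.

```python
import re

def lint_tscn(text: str) -> list[str]:
    """Flag ExtResource references to undeclared ids, and duplicate
    node names under the same parent, in a .tscn-style text scene."""
    errors = []
    declared = set(re.findall(r'\[ext_resource\b[^\]]*\bid="([^"]+)"', text))
    for ref in re.findall(r'ExtResource\("([^"]+)"\)', text):
        if ref not in declared:
            errors.append(f"undeclared ext_resource id: {ref}")
    seen = set()
    for name, parent in re.findall(
            r'\[node name="([^"]+)"[^\]]*?parent="([^"]*)"', text):
        if (parent, name) in seen:
            errors.append(f"duplicate node {name!r} under parent {parent!r}")
        seen.add((parent, name))
    return errors
```

Because `.tscn` is plain text, checks like these can run before the game ever launches, which is exactly why the mangled-scene errors disappeared.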
The DogKeyboard app ended up handling a lot more than just routing keystrokes. A few details worth mentioning:
It monitors Claude Code using Hooks and plays a chime sound when Claude goes idle - that’s Momo’s cue to type. When Claude is idle and Momo has entered at least 16 characters, it auto-submits by pressing Enter. When Claude is working, it backspaces any extra input in case Momo gets eager and periodically dismisses Plan Mode prompts (Claude’s “review before acting” step) that would otherwise block progress.
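The gating behavior described above reduces to a small decision function. The sketch below is a hypothetical simplification (function and action names are mine): submit when Claude is idle and at least 16 characters are buffered, otherwise wait or discard eager input while Claude is working.

```python
def next_action(claude_idle: bool, buffer: str, min_chars: int = 16) -> str:
    """Decide what the keystroke router should do with Momo's
    pending input."""
    if claude_idle:
        if len(buffer) >= min_chars:
            return "press_enter"  # enough typed: auto-submit to Claude
        return "wait"             # chime played; wait for more keys
    if buffer:
        return "backspace"        # Claude is busy: discard eager input
    return "wait"
```

The 16-character threshold doubles as the work quota for a treat, which is what ties the submission logic to the feeder.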
For the video recording, it runs a lightweight webserver that overlays keystrokes as they’re pressed. I added a configurable delay so that if the video feed is lagged, the overlay doesn’t show keystrokes before Momo appears to type them.
The first decision was which dog to train. Hana (on the right) is twice Momo’s size but far more trainable - she’s the smartest pet I’ve had. Before turning 1, she figured out how to jump, grab a door handle, and open any door in the house just by observing.
I expected the dogs would walk across the keyboard, stepping on keys as they go. That’s what Momo did when she stole my lip balm. Since Hana is tall enough to just step over a keyboard, Momo seemed like the better candidate. As it turned out, both dogs learned to tap and swipe their paws along the keyboard rather than step on it - but Momo learned a little faster in this case, and typed a little gentler, so Momo it was.
The training process took about 2 weeks, with somewhat inconsistent sessions. My goal was 10 minutes, twice a day. I started by scattering high-value treats (freeze-dried salmon) on the keyboard to build the association: this thing is fun and gives good food.
Momo was frustrated at first. She knew the keyboard was involved but didn’t know how. She would lie on it, bark at it, and try anything she could think of. At some point she tried swiping her paw across it. I played a chime sound and heavily rewarded her. After a few iterations, the association clicked and she started attacking the keyboard with vigor any chance she had.
The next step was to automate the rewards. I filled up the food dispenser with treats (4 servings at a time so I could control the quantity), waited for her swipe at the keyboard, and then I ran a script to play the chime and dispense a serving of treats. After a few sessions of this I increased the difficulty - not just one swipe, but three swipes before the treats came.
Simultaneously, I decreased the treat value to keep her healthy. First a mix of mid-value treats with an occasional high-value one, and eventually just kibble with an occasional mid-value treat. She still loved it and was healthier for it.
Eventually I automated the whole thing with the DogKeyboard app and let it run. It required at least 16 characters per serving of treats, and was supposed to only dispense once per idle period of Claude Code. But bugs crept in during testing - a couple of times it dispensed multiple servings in a row. Unfortunately, Momo picked up on this and now keeps mashing the keyboard hoping for a second immediate serving. The only way to pull her away is to offer higher-value treats elsewhere, which is what I do after she’s put in her input for a game.
Here’s a small sample of the games Momo made. Every game shown here is playable - these aren’t mock-ups. It’s nowhere near comprehensive, but represents the variety she created. As the tools and prompts improved, the games got noticeably better. There was also a noticeable bump in quality when Opus 4.6 dropped - Claude became more likely to create custom shaders and other visual effects.
One recurring issue: I kept getting games with basic glowing neon 3D shapes and couldn’t get any other style. Frustrated, I asked Claude why. It told me this was the signature style of the project - Claude’s external memory file had latched onto it and kept reinforcing it. Wiping MEMORY.md (Claude Code’s persistent project notes) before every new game fixed this and gave much more varied results.
DJ Smirk. One of Momo’s earliest games - though more of an experience than a game. Every key on the keyboard plays a different tone.
Munch. A competitive salad building game. Collect all 7 ingredients first to win.
Zaaz. Another puzzle game. The goal is to paint the whole level by moving a paintbrush that moves an integer number of tiles at a time. Crash into obstacles to move shorter distances. Some levels were unwinnable. It also has a scoring system I never figured out. After this one I updated the prompt to exclude puzzle games.
The Oracle Frog of Rome. Avoid the arms of a kraken and collect the golden chains to bind it. At some point the oracle also became a king.
Octogroove. A surprisingly challenging rhythm game. You’re an octopus using four of your arms to play the drums. Mash beats in time with basic music. It’s like Dance Dance Revolution, but made more difficult by the beats coming from different directions.
Ewe Heard Me! A game about herding sheep. Use stealth and your bark to corral them into a pen. Unfortunately, it’s unwinnable - the first two sheep you get into the pen simply stop and block anything else from entering. After this one I placed a larger emphasis on checking for winnability in the prompt and tools.
Quasar Saz. As of writing, this is Momo’s most recent game. You play as Zara, wielding a cosmic saz (a long-necked stringed instrument) to fight corrupted sound. There are 6 stages + a boss fight. It’s fun to play for a couple rounds, has good visuals, and dynamic audio.
When Momo first stepped on my keyboard back in December, it was just a funny accident. A few weeks later, jobless and looking for a project, I decided to see how far that accident could go. The answer was a lot further than I expected.
The technical pieces - keyboard routing, treat dispenser, prompt engineering, feedback tools - were all solvable engineering problems. What surprised me was how little of the final result depended on Momo typing anything meaningful. The magic isn’t in the input. It’s in the system around it. A well-crafted prompt, strong guardrails, automated verification, and good tools can turn genuine nonsense into a playable game.
If there’s a takeaway beyond the spectacle, it’s this: the bottleneck in AI-assisted development isn’t the quality of your ideas - it’s the quality of your feedback loops. The games got dramatically better not when I improved the prompt, but when I gave Claude the ability to screenshot its own work, play-test its own levels, and lint its own scene files. The same tools that let a dog’s keyboard mashing produce a working game will make your own intentional work with AI significantly better.
Momo isn’t secretly a game designer. She’s a cavapoo who learned that smacking a plastic rectangle makes kibble appear. A year ago, the gap between that and software engineering felt enormous. Now it feels small and shrinks each day.
If you want to try any of this yourself - whether with a dog, a cat, or just your own random keyboard mashing - everything is open source in the links below.
Play the game (with human bug fixes) — Recommended. Download for Windows, Mac, or Linux
Play the game (Momo’s version) — Download for Windows, Mac, or Linux
TeaLeaves — Tools, prompts, and source for developing the game
...
Read the original on www.calebleak.com »
In 1978, for my 10th birthday, I went to Disneyland and got to ride a new roller coaster called Space Mountain. It figuratively and literally took my breath away. I loved every second of it and that night, I couldn’t fall asleep; I just kept thinking about how exhilarating it was. Then, a wild thought suddenly hit me: Why isn’t there a roller coaster that goes upside down?
At first, I was like, that’s crazy, it can’t work. But then I remembered Spin Out, the ride with a round room that spun so fast I stuck to the wall. If that worked, why not a loop on a roller coaster? I figured it would work the same way. I was convinced!
I finally fell asleep dreaming of my roller coaster, full of twists, turns, and loops.
A few days later, I told my best friend Daschle. He was older, knew everything, and lived next door. “Buddy,” he said, “I’ve got exciting but crushing news. Your idea works.”
“Yep. I saw it. They’re building one at Magic Mountain. It’s called the Revolution. Sorry, Buddy.”
But I wasn’t crushed, I was thrilled! What I knew could work was really happening.
“How many loops does it have?” I asked.
“Ha! Mine has four. It’s called the Quadrupuler! It’s gonna be way better!”
That night I taped six sheets of paper together and drew my blueprints in colored markers. As you can see from the photo it was glorious!
Look closely, I didn’t label those coaster hills in feet or meters, no sir, I used building “story’s” for height, and the speed at each section in miles per hour. I’m 10. I’m serious here.
With guiding blueprints, it was time to build the model.
I got a Styrofoam board and balsa wood. Cutting and gluing each tiny cross tie was slow, and with all my homework, I could only work on weekends. So I calculated how many inches I could finish in a week. The answer turned out to be something like 5 months! After that I considered just giving up because I realized that I was only 120 months old and this was going to take 5% of my life! Plus, I still hadn’t figured out how to make the loops, balsa wood doesn’t bend like that. But I told myself: one piece at a time. That became my motto.
When I got to the first loop, I had to stop and think. What in the world could I make the loop out of? A lot of 10-year-old brain power went into imagining what simple material I could use. Then one morning, I had it: heat plastic strips over the stovetop flame and bend them as they cooled. The key? Don’t burn the house down.
I’d learned that the hard way a year earlier, when Daschle convinced me we should recreate the movie The Towering Inferno, with a cardboard box under his house… and real fire. It got out of control so fast, the flames started hitting the wood floor joists! Thankfully, we were able to smother it with the damp dirt down there. So yeah, wild card Daschle was not invited over for my plastic fire bending experiment.
I don’t remember where the plastic came from, but I do remember holding the strip with pliers over the flame. The first piece melted so fast and started burning with thick black smoke that it scared me. I yanked it back and coughed. That’s when I made an amendment to the safety plan: don’t kill yourself with whatever these horrible smelly fumes were! I got a fan, opened the back door, and all the kitchen windows before trying again. Eventually, I figured out the perfect distance and timing with the heat.
When I laid that final track piece, I was so excited, so proud! I took the model outside for better lighting and snapped Polaroids. I needed it captured instantly. Here’s a photo of the Polaroid with my 10-year-old penmanship.
What was the next thought that immediately popped into my head?
This masterpiece is ready to be sold to Disneyland! And I wrote my letter. I don’t have the original of that. But I remember it went something like this:
To whom it may concern:
My name is Kevin Glikmann. I am 10 years old. Enclosed are Polaroids of a roller coaster model I made called the Quadrupuler. It has four loops! I think you should make it!
And I sent it off!
Suddenly, my chore of checking the mail after school became a heart-pounding ritual. I was nervous, hopeful, excited. Sometimes I opened the mailbox slowly, peeking in. Sometimes I pretended it was just a normal day. Other times, I yanked it open and grabbed the mail fast, trying anything to calm my nerves.
Daschle said, “No way they’ll respond. It’s Disneyland, Buddy, they get a million letters a day.”
Weeks turned into months, and I started to think he was right. Then one day, I opened the mailbox.
There it was: an envelope with my name on it. My head was screaming, “This is it! I never get mail!”
HOLY SMOKES! What’s WED Enterprises? I started bouncing up and down. I started shaking. I ran into the house, ran straight to my room, tore open the envelope, and read:
Your recent letter was directed to my office here at WED Enterprises. WED (Walter Elias Disney) is the design and “Imagineering” branch of Walt Disney Productions. As such, we are responsible for the creation of all shows, attractions and outdoor entertainment for both Disneyland and Walt Disney World.
Thank you for showing us your “Quadrupuler” roller coaster - it looks like quite an adventure! As you may know Kevin, we are creating a new rollercoaster type attraction for Disneyland’s Frontierland. Known as Big Thunder Mountain Railroad, this thrilling adventure will carry passengers on a high speed journey through the gold rush days of the old west. Big Thunder is scheduled to open later this year.
Thank you Kevin, for your interest in Walt Disney Productions.
Here is a photo of the original letter:
You would think that this letter would have ended my inventing spirit. That I would have gotten bitter and declared to the world, “No more will I invent for that mouse or anyone else!”
But no! Instead I was elated. I read it over and over. It said, “It looks like quite an adventure!” Disneyland liked it! I couldn’t believe it.
Looking back, those words from Tom Fitzgerald didn’t just validate my idea, they launched my 10-year-old self-esteem into orbit! (Tom Fitzgerald went on to become one of the most influential Imagineers in Disney history. According to his bio, he started in 1979. I got this letter in April 1979 which means this must have been one of his first tasks as a new Imagineer employee. lol! I wonder if he would remember?)
A couple years later, the Rubik’s Cube came out. I was obsessed. It took me weeks to solve, and once I did, my first thought was: What if it could turn on the angle? So I introduced Mr. Rubik’s Cube to my bandsaw, redesigned the core, and built a very rough prototype.
Here are some pictures.
I sent it to the Ideal Toy Company, but they rejected it straight away; they don’t accept unsolicited ideas. But that didn’t matter. They didn’t understand that I had a letter from Disney Imagineering telling me my ideas are good!
That ten-year-old inventor is still alive in me, and still doesn’t understand rejection. Over the decades, I’ve invented several patented board games that were shopped around but never sold.
But I’m convinced the early validation from the Disney letter gave me a kind of bulletproof resilience.
Today, I work in one of the most rejection-heavy industries there is: acting. To me, inventing and acting are deeply connected: both are about discovery, and both feed my inventing spirit. Successes are rare, but when they come, the joy lasts forever.
So I keep auditioning, and I keep inventing.
And sometimes, when frustration creeps in and I wonder why I’m not further along, or why things aren’t working out, that 10-year-old boy appears. He reminds me of what we believed back then, and what still guides me now: just keep going, one piece at a time.
...
Read the original on wordglyph.xyz »
We are releasing Steerling-8B, the first interpretable model that can trace any token it generates to its input context, to concepts a human can understand, and to its training data. Trained on 1.35 trillion tokens, the model achieves downstream performance within range of models trained on 2–7× more data. Steerling-8B unlocks several capabilities, including suppressing or amplifying specific concepts at inference time without retraining, tracing training data provenance for any generated chunk, and inference-time alignment via concept control, replacing thousands of safety training examples with explicit concept-level steering.
For the first time, a language model, at the 8-billion-parameter scale, can explain every token it produces in three key ways. More specifically, for any group of output tokens that Steerling generates, we can trace these tokens to:
[Input] the tokens in the prompt that most influenced them,
[Concepts] human-understandable topics in the model’s representations, and
[Training data] the training sources from which the relevant knowledge originates.
We are releasing the weights of a base model trained on 1.35T tokens as well as companion code to interact and play with the model.
Below we show Steerling-8B generating text from a prompt across various categories. You can select an example, then click on any highlighted chunk of the output. The panel below will update to show:
Input feature attribution: which tokens in the input prompt strongly influenced that chunk.
Concept attribution: the ranked list of concepts, both tone (e.g. analytical, clinical) and content (e.g. Genetic alteration methodologies), that the model routed through to produce that chunk.
Training data attribution: how the concepts in that chunk distribute across training sources (ArXiv, Wikipedia, FLAN, etc.), showing where in the training data the model’s knowledge originates.
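The three attribution views above can be pictured as a single per-chunk record. The sketch below is purely illustrative: the class and function names are hypothetical (not the actual companion-code API), and the scores are made-up toy values that only show the shape of the data each panel displays.

```python
from dataclasses import dataclass


@dataclass
class ChunkAttribution:
    """One record per generated chunk, holding all three attribution views."""
    input_tokens: dict[str, float]     # prompt token -> influence score
    concepts: list[tuple[str, float]]  # (concept name, weight), ranked
    sources: dict[str, float]          # training source -> share of provenance


def explain(chunk: str) -> ChunkAttribution:
    # Hypothetical stand-in: in the real model these values come from the
    # concept pathway; here we return fixed toy numbers for illustration.
    return ChunkAttribution(
        input_tokens={"CRISPR": 0.41, "edit": 0.17},
        concepts=[("Genetic alteration methodologies", 0.62),
                  ("analytical", 0.21)],
        sources={"ArXiv": 0.55, "Wikipedia": 0.30, "FLAN": 0.15},
    )


attr = explain("a gene-editing chunk")
top_concept = max(attr.concepts, key=lambda kv: kv[1])[0]
```

Clicking a highlighted chunk in the demo is, conceptually, a lookup of one such record.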
Steerling is built on a causal discrete diffusion model backbone, which lets us steer generation across multi-token spans rather than only at the next token. The key design choice is decomposing the model’s embeddings into three explicit pathways: ~33K supervised “known” concepts, ~100K “discovered” concepts the model learns on its own, and a residual that captures whatever remains.
We then constrain the model with training loss functions that ensure the model routes signal through concepts without a fundamental tradeoff with performance. The concepts feed into logits through a linear path, every prediction decomposes exactly into per-concept contributions, and we can edit those contributions at inference time without retraining. For the full architecture, training objectives, and scaling analysis, see Scaling Interpretable Models to 8B.
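Because the path from concepts to logits is linear, the decomposition is exact arithmetic. The toy NumPy sketch below (dimensions and weights are made up; the real model has ~33K known and ~100K discovered concepts) shows how per-concept contributions sum to the logits and how zeroing one concept’s column is an inference-time edit with no retraining:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions standing in for Steerling's three pathways.
n_known, n_disc, d_resid, vocab = 6, 8, 4, 10

# Pathway activations for one predicted chunk (random toy values).
c_known = rng.normal(size=n_known)
c_disc = rng.normal(size=n_disc)
resid = rng.normal(size=d_resid)

# Linear maps from each pathway into the logits.
W_known = rng.normal(size=(vocab, n_known))
W_disc = rng.normal(size=(vocab, n_disc))
W_resid = rng.normal(size=(vocab, d_resid))

# Linearity means the prediction decomposes exactly into per-concept
# contributions (one column per concept) plus the residual term.
per_known = W_known * c_known            # shape (vocab, n_known)
per_disc = W_disc * c_disc               # shape (vocab, n_disc)
logits = per_known.sum(axis=1) + per_disc.sum(axis=1) + W_resid @ resid

# Inference-time edit: suppress one known concept by subtracting its
# column -- pure arithmetic on the decomposition, no retraining.
suppress = 2
edited = logits - per_known[:, suppress]
assert np.allclose(
    edited,
    np.delete(per_known, suppress, axis=1).sum(axis=1)
    + per_disc.sum(axis=1) + W_resid @ resid,
)
```

Amplifying a concept is the same operation with a positive scale factor on its column instead of a subtraction.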
Despite being trained with significantly less compute than comparable models, Steerling-8B achieves competitive performance across standard benchmarks. The figure below shows average performance (across 7 benchmarks) versus approximate training FLOPs on a log scale, with vertical lines marking multiples of Steerling’s compute budget.
In the previous update, we shared several metrics for assessing how interpretable a model’s representations are. Here we provide another that gives insight into how the model uses its concepts. On a held-out validation set, over 84% of token-level contribution comes from the concept module: the model is not just using the residual to make its predictions. This matters for control: if the model’s predictions genuinely flow through concepts, then editing those concepts at inference time actually changes what the model does rather than nudging a side channel while the real work happens elsewhere.
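One reasonable way to compute such a contribution share (the post does not specify the exact formula, so this is an assumption) is to compare the magnitude of the concept-path logits against the total per token, averaged over a held-out set. A toy version with random stand-in values:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab, n_concepts, d_resid, n_tokens = 50, 20, 8, 200

# Toy linear maps for the concept and residual pathways.
W_c = rng.normal(size=(vocab, n_concepts))
W_r = rng.normal(size=(vocab, d_resid))

# Per-token pathway activations on a held-out set; the concept pathway
# is scaled up here so it dominates, mimicking a concept-routed model.
C = rng.normal(size=(n_tokens, n_concepts)) * 2.0
R = rng.normal(size=(n_tokens, d_resid)) * 0.5

concept_logits = C @ W_c.T   # (n_tokens, vocab)
resid_logits = R @ W_r.T     # (n_tokens, vocab)

# Token-level share: concept-path logit mass over total logit mass,
# averaged across tokens.
num = np.abs(concept_logits).sum(axis=1)
den = num + np.abs(resid_logits).sum(axis=1)
share = (num / den).mean()
```

A share near 1.0 would mean predictions flow almost entirely through concepts; a share near 0.5 would mean the residual does as much work as the concept module.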
A useful check is what happens when we remove the residual pathway. On several LM Harness tasks, dropping the residual has only a small effect, which suggests the model’s predictive signal is largely routed through concepts rather than hidden “everything-else” channels.
Finally, Steerling can detect known concepts in text with 96.2% AUC on a held-out validation dataset.
In the coming weeks, we’ll be releasing deep dives on each of these capabilities:
* Concept discovery: what did Steerling learn that we didn’t teach it? We’ll open up the discovered concept space and show structure that surprised us.
* Alignment without fine-tuning: replace thousands of safety training examples with a handful of concept-level interventions.
* Memorization & training data valuation: trace any generation back to the training data that produced it, and assign value to individual data sources.
* The case for inherent interpretability: what do you gain when interpretability is designed in from the start, and what do you miss when it’s bolted on after the fact?
We’ll cover each of these in detail in upcoming posts, with quantitative evaluations and deployment-oriented case studies.
...
Read the original on www.guidelabs.ai »