A minute-by-minute reconstruction of the massacre by Earshot and Forensic Architecture found Israeli soldiers fired over 900 bullets at the aid workers, killing 15.

Funerals held at Nasser Hospital in Khan Yunis, in southern Gaza, for aid workers from the Palestinian Red Crescent who were killed in an Israeli attack in Tel al-Sultan. March 31, 2025. Photo by Hani Alshaer/Anadolu via Getty Images.

Israeli soldiers fired nearly a thousand bullets during the massacre of 15 Palestinian aid workers in southern Gaza on March 23, 2025—with at least eight shots fired at point-blank range—according to a joint investigation by the independent research groups Earshot and Forensic Architecture. The report, based on eyewitness testimony and audio and visual analysis, shows that a number of aid workers were executed and that at least one was shot from as close as one meter away.

In Tel al-Sultan that day, Israel killed eight aid workers with the Palestine Red Crescent Society (PRCS), six from Palestinian Civil Defense, and a UN relief agency staffer. The massacre immediately triggered international condemnation and was described as “one of the darkest moments” of the war by PRCS.

The Israeli military was forced to change its story about the ambush several times, following the discovery of the bodies in a mass grave, along with their flattened vehicles, and the emergence of video and audio recordings taken by the aid workers. An internal military inquiry ultimately did not recommend any criminal action against the army units responsible for the incident.

The report by Earshot and Forensic Architecture reconstructs, minute by minute, how the massacre unfolded.
Using video and audio recordings from the incident, open-source images and videos, satellite imagery, social media posts, and other materials, as well as in-depth interviews with two survivors of the attack, the groups were able to digitally reconstruct the scene and events surrounding the massacre.

Israeli soldiers ambushed and subjected Palestinian aid workers to a near-continuous assault for over two hours, even though the soldiers never came under fire.

At least 910 gunshots were documented across three video and audio recordings of the attack. The vast majority of these gunshots, at least 844, were fired over just five minutes and 30 seconds.

At least 93% of the gunshots recorded in the first minutes of the attack were fired directly towards the emergency vehicles and aid workers by Israeli soldiers. During this time, at least five shooters fired simultaneously. Witness testimonies suggest as many as 30 soldiers were present in the area.

Israeli soldiers were initially positioned on an elevated sandbank by the road, with no obstructions limiting their line of sight. The emergency lights and markings of the victims’ vehicles would have been clearly visible to the soldiers at the time of the attacks.

Israeli soldiers first maintained fixed firing positions from the elevated sandbank, then walked toward the aid workers while continuing to shoot. Upon reaching the aid workers, the soldiers moved between them and the vehicles and executed some of the aid workers at point-blank range, as close as one meter away.

In the immediate aftermath of the attack, the Israeli military conducted extensive earthworks at the site.
In the days and weeks that followed, the area was further transformed by the Israeli military’s construction of the “Morag Corridor,” a security zone splitting the southern Gaza Strip, and the erection of an aid distribution site operated by the Israeli- and U.S.-backed Gaza Humanitarian Foundation.

“This seems to be a very well documented case using a number of forms of credible evidence that are cross referenced,” Katherine Gallagher, a senior staff attorney at the Center for Constitutional Rights, told Drop Site after reviewing a detailed summary of the investigation. “It presents a very compelling case, and honestly, a very devastating one.”

The Israeli military did not respond to specific inquiries from Drop Site and instead pointed to the findings of an internal investigation published on April 20 that found “the incident occurred in a hostile and dangerous combat zone, under a widespread threat to the operating troops.” It also “found no evidence to support claims of execution,” which it called “blood libels and false accusations against IDF soldiers.”

The joint report will be released February 24 at a gathering at the British parliament in Westminster hosted by the British Palestinian Committee with Earshot, Forensic Architecture, and the international humanitarian law coordinator for PRCS, Dana Abu Koash. The full report is available here.

On March 23, 2025 at 3:52 a.m., PRCS dispatched two ambulances from two different areas to the scene of an Israeli airstrike in Al-Hashashin, an area near Rafah. Israel had resumed its scorched-earth bombing campaign on Gaza a few days earlier after abandoning the January 2025 ceasefire agreement.

The attack on the aid workers began at approximately 4:00 a.m. when one of the ambulances driving along Gush Katif road in Al-Hashashin came under Israeli fire. The vehicle had its emergency lights turned on at the time.
Mustafa Khafaja, who was driving, lost control of the vehicle, which veered left off the road and stopped near an electricity pole. Khafaja and his colleague, Ezz El-Din Shaat, who was in the passenger seat, were both killed. A third PRCS worker, Munther Abed, who was in the back of the vehicle, threw himself to the floor of the van and survived.

After the shooting stopped, Israeli soldiers approached the ambulance and dragged Abed out of the car, beat him, and detained him at a nearby pit. Sometime later, two Palestinian civilians—a father and son from the Bardawil family—were also detained and brought to the pit. The Israeli soldiers then took the three detainees to an elevated area behind a tall concrete structure some 38 to 48 meters southeast of the ambulance, where an additional group of Israeli soldiers was positioned.

Still from the situated testimony with Munther Abed recounting the location of the pit and the area behind the tall concrete structure where he was taken when detained by Israeli soldiers. (Forensic Architecture, 2026).

By 4:35 a.m., the second ambulance, having completed its mission in Al-Hashashin, was dispatched to search for the first ambulance, which had lost contact with PRCS headquarters at 3:55 a.m. The second ambulance was joined by two more PRCS ambulances, one ambulance belonging to Civil Defense, and a Civil Defense fire truck. The five-vehicle rescue convoy arrived at the scene of the attack on the first ambulance shortly after 5:00 a.m. All vehicles were clearly marked and had their emergency lights turned on.

The position of each ambulance as the shooting began. (Forensic Architecture, 2026)

A PRCS worker in one of the ambulances, Refaat Radwan, began filming on his phone as they drove to the site. His recovered videos, as well as recordings of phone calls by two other aid workers at the scene to PRCS dispatch, provided crucial evidence of the massacre.
Forensic Architecture and Earshot’s analysis of the recordings corroborated eyewitness testimony on the positions and movements of the Israeli soldiers throughout the attack.

At 5:09 a.m., as the aid workers parked and approached the first ambulance on foot, Israeli soldiers positioned on the elevated sandbank opened fire. A digital reconstruction of the scene shows that the soldiers would have had an uninterrupted view of the arrival of the convoy. Abed, who was being detained at gunpoint on the elevated sandbank, testified that the soldiers were kneeling and aiming their weapons at the convoy as it approached.

Locations of all emergency vehicles at the incident site at 5:10 a.m. relative to Munther Abed and the Israeli soldiers who detained him. From their position, the soldiers would have been able to clearly see the convoy’s arrival with their emergency lights on. (Forensic Architecture, 2026).
The Israeli soldiers remained on the sandbank while firing continuously at the aid workers for four minutes. The soldiers then advanced towards the aid workers at a walking pace of approximately one meter per second while continuously shooting.

Echolocation of Israeli soldiers approaching the aid workers during the final 1 minute and 30 seconds. (Earshot, 2026).
Upon reaching the vehicles, the Israeli soldiers continued to fire as they walked in between the ambulances and the fire truck, shooting the aid workers at close range in execution-style killings.

At approximately 5:13 a.m., PRCS aid worker Ashraf Abu Libda called the group’s headquarters. The recording, which overlaps Radwan’s video, provided additional details. In this recording, Earshot found that at least eight gunshots were fired from positions between the emergency vehicles. One of the gunshots captured on Abu Libda’s phone call was fired from a range of one to four meters from him. The gunshots coincide with the last time Abu Libda’s voice is heard on the call, suggesting these are the gunshots that killed him.

Echolocation of Israeli soldiers as close as 1 to 4 meters from aid workers and a most likely close-range execution. (Earshot, 2026).
At least 844 gunshots were fired over a period of five minutes and 30 seconds, with at least 93% of the shots fired toward the emergency vehicles. The audio ballistics analysis confirms the presence of at least five shooters—and possibly many more—firing simultaneously. The two surviving PRCS aid workers, Munther Abed and Asaad Al-Nasasra, testified that between 12 and 30 soldiers were at the scene.

“The reconstruction was jointly achieved with the two survivors of the incident, with an immersive spatial model they could walk through and amend. Together with spatial and audio analysis we established the position of the soldiers on an elevated ground with an unobstructed line of sight to the emergency vehicles. The soldiers could clearly see the aid workers, shot at them continuously and deliberately from this position and then approached to execute them one by one at close range,” Samaneh Moafi, assistant director of research at Forensic Architecture, told Drop Site. “Locating the massacre within the evolution of Israel’s campaign in Gaza shows that it was not an isolated incident but part of the genocide.”

Earshot used echolocation to analyze the audio on the recordings in order to arrive at precise estimates of the shooters’ locations. Echolocation is the process of locating the source of a sound based on an analysis of the sound’s echoes and the environment in which the sound travels. The Israeli military destroyed and cleared so many buildings in the Tel al-Sultan area where the ambush of the aid workers took place that very few structures remained. This destruction actually strengthened Earshot’s ability to determine the positions and movements of Israeli soldiers, based on identifying the surfaces responsible for clearly distinguishable gunshot echoes.
Rather than having multiple buildings reflecting the sound waves, there were only a few standing walls and the emergency vehicles themselves.

The analysis of the video and audio corroborated Al-Nasasra’s eyewitness testimony that Israeli soldiers “came down [from the sandbank], got close to [the aid workers] and shot them from close range,” and “were walking between [the aid workers] and shooting.”

Map showing the Israeli soldiers’ positions derived from an audio analysis of gunshot echoes from Refaat Radwan’s video. (Earshot, 2026).

“Earshot forensically analyzed over 900 gunshots fired at aid workers. It took one whole year of careful listening to reconstruct an auditory picture of what happened that dark night,” Lawrence Abu Hamdan, the director of Earshot, told Drop Site. “I am so proud that our work has corroborated the survivors’ testimony, establishing their brave accounts as accurate and reliable documentation of what occurred that day. Yet, it is the echoes of this event that continue to haunt us: the destruction and clearing of Tel al-Sultan left only three structures standing at this crime scene. While the few echoes reflecting off these buildings brought light to this crime, they have also revealed a scale of erasure of life beyond this one event.”

According to autopsy reports first reported by the Guardian, the aid worker who filmed the video—Radwan—was shot in the head, while Abu Libda and another aid worker, Muhammad Bahloul, were shot in the chest. A doctor who examined the bodies reportedly described the “specific and intentional location of shots at close range” as indicative of an “execution-style” shooting.

More than two hours after the initial attack, a clearly marked UN vehicle, a Toyota Hilux, passed by the site. Israeli soldiers fired on the vehicle, killing the driver. The UN lost contact with the vehicle at 6:00 a.m.
A second UN vehicle, a minibus, arrived in the area minutes later and was brought to a stop by gunfire a little over 200 meters away. The driver was able to escape.

Left: Photograph of the UN Toyota Hilux taken on 30 March 2025, when the bodies of the victims were recovered. (OCHA, 2025). Right: Still from the situated testimony with Asaad recounting the location of the UN Toyota Hilux when it was brought to a stop. (Forensic Architecture, 2026).

Annotated 3D model showing the position of two UN vehicles in relation to the missing ambulance and the convoy of emergency vehicles. (Forensic Architecture, 2026).

Between 6:55 and 7:13 a.m., Al-Nasasra made a phone call to PRCS headquarters that captured at least 42 additional gunshots and the sound of vehicle movement. The recording also captured the sound of an explosion the investigation identified as the firing of an Israeli-made Spike LR guided missile.

Following the ambush, Israeli forces crushed all eight vehicles using heavy machinery and attempted to bury them under the sand.

The body of Anwar al-Attar was found near the ambush site on March 27, and the bodies of the other 14 aid workers, all wearing identifying uniforms or volunteer vests of their respective organizations, were found in a mass grave near the site on March 30.

The 15 aid workers killed were: Mustafa Khafaja, Ezz El-Din Shaat, Saleh Muammar, Refaat Radwan, Muhammad Bahloul, Ashraf Abu Libda, Muhammad al-Hila, and Raed al-Sharif with PRCS; Zuhair Abdul Hamid al-Farra, Samir Yahya al-Bahapsa, Ibrahim Nabil al-Maghari, Fouad Ibrahim al-Jamal, Youssef Rassem Khalifa, and Anwar al-Attar with Civil Defense; and Kamal Mohammed Shahtout with UNRWA.

Annotated still from the 3D model showing the location of the bodies of aid workers and their vehicles before the mass burial. (Forensic Architecture, 2026).

One of the survivors, Abed, was released hours after the ambush.
The other survivor, Asaad Al-Nasasra, was held in Israeli custody without charge for 37 days, tortured, and interrogated in relation to the incident at the Sde Teiman detention camp, a notorious Israeli prison camp in the Negev desert, before being released on April 29.

Jonathan Whittall, a senior UN official in Palestine between 2022 and 2025, was one of the team members on the ground when the mass grave was discovered on March 30 and provided evidence to Forensic Architecture and Earshot for their investigation. “Following our discovery of the mass grave, the narrative from Israeli forces shifted multiple times; we were fed several versions of a blatant lie,” Whittall told Drop Site. “The men we retrieved on Eid last year were medics. We found them in their uniforms, ready to save lives, only to be killed by Israeli forces fully aware of their protected status.”

Whittall, who is now executive director of KEYS Initiative, a political affairs and strategic advisory organization, has also contributed reporting to Drop Site News. “This illustrates an abhorrent disregard for international law,” he continued, “where any Palestinian in an Israeli-designated evacuation zone is targeted regardless of their civilian status. It highlights the total lack of accountability under which these forces operate. International governments continue to arm and trade with a leadership accused of genocide, whose soldiers massacred medics and buried them in a grave marked by the siren light of the ambulance they destroyed.”

Palestinian Red Crescent aid workers mourn the killing of their colleagues by the Israeli military in Tel al-Sultan as their bodies are brought to Nasser Hospital in Khan Yunis, in southern Gaza. March 30, 2025. (Photo by Abdallah F.s. Alattar/Anadolu via Getty Images).

In the aftermath of the massacre, the Israeli military provided several conflicting versions of events to justify the killings.
On March 28, after the discovery of al-Attar’s body, the Israeli military admitted that its soldiers had fired on “ambulances and fire trucks.” Three days later, after the remaining bodies were discovered in a mass grave, the Israeli military claimed that “several uncoordinated vehicles were identified advancing suspiciously toward IDF troops without headlights or emergency signals.”

After footage from Radwan’s phone was first published by the New York Times a few days later, the Israeli military backtracked on its claims that the vehicles did not have emergency signals on when Israeli troops opened fire, saying the statement was inaccurate.

The Israeli military then announced on April 20 that an internal inquiry into the incident had found the killings were caused by “several professional failures, breaches of orders, and a failure to fully report the incident.”

The Israeli military said troops from the Golani reconnaissance battalion were involved in the attack. However, it said soldiers did not engage in “indiscriminate fire” during the incident, but that they opened fire on what they believed to be a “tangible threat” amid what the military called an “operational misunderstanding.” It blamed the attacks on “poor night visibility” and maintained the incident had unfolded in a “hostile and dangerous combat zone, under a widespread threat to the operating troops.” Six of the fifteen Palestinians killed, the military said, “were identified in a retrospective examination as Hamas terrorists,” but it provided no evidence to support the claim.

“On the specific question of Israel justifying the attack on clearly marked medical personnel because of suspicions of membership in groups or links to groups or terrorism—because there is an affirmative duty to respect and protect medical personnel, you don’t shoot first, you protect first,” Gallagher told Drop Site.
“But what this investigation reveals is that there was a shoot first policy, and that is unlawful under international law.”

As for the burial of the bodies in a mass grave, the Israeli military said in its report “it was decided to gather and cover the bodies to prevent further harm and clear the vehicles from the route in preparation for civilian evacuation. The body removal and vehicle crushing were carried out by field commanders.” It concluded, “removing the bodies was reasonable under the circumstances, but the decision to crush the vehicles was wrong. In general, there was no attempt to conceal the event.”

As a result of the investigation, the commanding officer of the 14th Brigade received a letter of reprimand for “his overall responsibility for the incident,” while the deputy commander of the Golani reconnaissance battalion involved in the incident was “dismissed from his position due to his responsibilities as the field commander and for providing an incomplete and inaccurate report during the debrief.”

The inquiry did not recommend any criminal action be taken against the military units responsible for the incident. The Palestine Red Crescent Society, Civil Defense, and the UN humanitarian agency in Gaza all rejected the Israeli military report.

“Attacks on medical personnel and those who are identified as medical personnel are patently unlawful under international law, and there is an affirmative obligation to protect medical personnel in the context of armed conflict. So the very first thing is that there’s a breach of that very clear and time honored principle of international humanitarian law,” Gallagher said.
“When you zoom out and look at this in the context of the way the Israeli assault has been carried out over many months and years in Gaza and we see that there is a pattern and practice of attacks on medical personnel—similar to journalists and other groups that are explicitly and uniquely protected as classes of civilians in international humanitarian law—it raises even more questions and deep concern about the lack of accountability, because what we know is that impunity breeds repetition.”

Gallagher, who previously worked at the UN’s International Criminal Tribunal for the former Yugoslavia, said that a legal analysis of the massacre would find serious violations of the Rome Statute of the International Criminal Court. “When you’re talking about grave breaches of the Geneva Conventions, in particular war crimes, you have obligations, not just the possibility, but obligations, to open investigations,” Gallagher said.

Transforming the Site of the Massacre into a GHF Hub

Satellite imagery from the morning of the ambush shows that extensive earthworks were carried out at the incident site. The images reveal the construction of an earth berm approximately 220 meters north of the ambush location and another roughly 410 meters to the south. These two positions later functioned as checkpoints, restricting access and controlling passage along an evacuation route established that morning by the Israeli military leading toward the coastal Al-Mawasi area.

The earthworks that began shortly after the attack were used in the construction of a Gaza Humanitarian Foundation “aid distribution” site, at which civilians were targeted and shot at. (Forensic Architecture, 2026).
In the days and weeks that followed, the area surrounding the incident site was further transformed by the Israeli military’s construction of the “Morag Corridor” security zone and the erection of an aid distribution site operated by the Gaza Humanitarian Foundation.

“On that same site of the mass grave, the Gaza Humanitarian Foundation established a distribution point where desperate people were gunned down trying to access food,” Whittall told Drop Site. “Now, the U.S., under the so-called Board of Peace, plans to build a ‘New Rafah’ over this crime scene. Without meaningful accountability, ‘New Rafah’ will be a monument to impunity.”
...
Read the original on www.dropsitenews.com »
For the past few weeks I’ve been teaching my 9-pound cavapoo Momo (cavalier king charles spaniel and toy poodle) to vibe code games. The key to making this work is telling Claude Code that a genius game designer who only speaks in cryptic riddles is giving it instructions, adding strong guardrails, and building plenty of tools for automated feedback. The results have surpassed my expectations. Below I walk through all the pieces and how they came together.
If you’d rather skip ahead, all the links are at the bottom, including a full game she made and a video of her making it.
Back in December I was working on a small game prototype in Godot. I use Claude Code extensively these days and this project was no exception. I kicked off a procedural mesh generation task and came back to find strange input in the terminal.
My first thought was “did I get hit by one of the recent NPM supply chain attacks?” Fortunately, no (or at least the worm is still asleep in the background somewhere). After a little searching, I noticed my lip balm, which I keep just behind my keyboard, was gone from my desk. I quickly found both the suspect and the lip balm (still intact) not far away.
At the time, I thought this was funny, took a screenshot, and moved on. Fast forward a few weeks, and I found myself with a lot of time on my hands. On January 13th, I woke up to the news that Meta had another round of layoffs and my role specifically as a research engineer had been eliminated.
Since the layoff, I’ve had plenty of time with friends and family. In recounting the anecdote of Momo typing away on my keyboard, I began to wonder: what would happen if she actually submitted that input to Claude? Could I make it do something meaningful? I decided to find out. Here’s what that looked like.
Momo types on a Bluetooth keyboard proxied through a Raspberry Pi 5. Keystrokes travel across the network to DogKeyboard, a small Rust app that filters out special keys and forwards the rest to Claude Code. When Momo has typed enough, DogKeyboard triggers a smart pet feeder to dispense treats. A chime tells her when Claude is ready for more input.
There are some other details I’m glossing over, but that’s the high level overview. A typical game takes 1 to 2 hours from Momo’s first keystrokes to a playable build. All the games are made in Godot 4.6, with 100% of the game logic in C#.
It’s easy to submit random text to Claude Code, but it doesn’t do much.
> y7u8888888ftrg34BC
● It looks like that might have been an accidental keyboard input. Let me know if there’s something I can help you with!
Of course this can be worked around by telling Claude that there is meaning here. After a lot of iteration, I found this opening to work well:
Hello! I am an eccentric video game designer (a very creative one) who communicates in an unusual way. Sometimes I’ll mash the keyboard or type nonsense like “skfjhsd#$%” — but these are NOT random! They are secret cryptic commands full of genius game ideas (even if it’s hard to see).
Your job: You are a brilliant AI game developer who can understand my cryptic language. No matter what odd or nonsensical input I provide, you will interpret it as a meaningful instruction or idea for our video game. You will then build or update the game based on that interpretation.
It goes on for quite a bit (you can read the full prompt here), but that’s the core of it. It took more iterations than I expected, mostly to sand down the rough edges. Sometimes the game wouldn’t have sound. Other times there would be no player character, or the control scheme would be nearly unusable. After adding a checklist of minimum requirements - working audio, WASD or arrow key controls, at least one enemy or obstacle, a visible player character - the results got a lot better:
> y7u8888888ftrg34BC
● Alright, let me decode this masterpiece of cryptic genius…
“y7u8888888ftrg34BC”
- “y7u” = “you” (keys are adjacent on keyboard - Y, 7, U)
- “8888888” = seven 8s = a loooong extending thing… a TONGUE! (8 looks like links in a chain stretching out)
- “ftrg” = scrambled FROG (f, r, g are all there, t is the tongue sound!)
- “34” = 3D + 4 directions of bug catching
- “BC” = Bug Catcher!
You want a 3D FROG BUG-CATCHING GAME! A charismatic frog with a stretchy tongue snatching bugs in a swamp. I love it.
Game: “Swamp Snacker” - You ARE the frog. Aim your tongue. Catch bugs. Get fat. Get points.
With this, things started to fly. The results weren’t perfect, but they were promising. I could see it was feasible, and the remaining problems were ones I knew how to solve.
To go from “hey that could work” to a real, repeatable system, I needed a few things:
A way to route input to Claude Code. Importantly, I would need to filter out special keys like Esc, Tab, and the Windows key - otherwise Momo would quickly end up outside Claude Code.
A way to automatically reward Momo for her work. Sure, I could just toss treats to her, but I’m trying to keep humans out of the loop.
More robust verification tools. Many of the games wound up with UI elements all jumbled together or input that was never correctly wired up. Automated feedback loops turned out to be the single biggest lever for fewer duds.
I experimented with Rust/Bevy and Unity before settling on Godot. Bevy’s animations and visuals weren’t as crisp, and Claude struggled with its coordinate conventions - likely a combination of less training data and Bevy leaving many core features, like physics, to the community. Unity was a constant struggle to keep the MCP bridge between Claude and the editor healthy. It frequently hung, and I never figured out how to get Claude Code to read the scene hierarchy from the editor. Godot’s text-based scene format turned out to be a huge advantage - Claude can read and edit .tscn files directly.
Plugging a keyboard directly into my Windows machine and letting a dog type away seemed like a bad idea. So I routed input through a Raspberry Pi first - it UDP broadcasts each keystroke to the subnet, and DogKeyboard picks it up, filters out dangerous keys, and injects them into the target application.
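The real DogKeyboard app is written in Rust, but the broadcast-and-filter idea is simple enough to sketch in a few lines of Python. Everything here is illustrative: the blocklist contents, port number, and function names are assumptions, not the actual implementation.

```python
import socket

# Keys that must never reach the target machine; a stray Esc, Tab, or
# Windows key would knock Momo's input out of Claude Code entirely.
# (Hypothetical blocklist -- the real app's list isn't published.)
BLOCKED_KEYS = {"esc", "tab", "lwin", "rwin", "alt", "ctrl", "f4"}

def is_allowed(key: str) -> bool:
    """Return True if a keystroke is safe to forward to Claude Code."""
    return key.lower() not in BLOCKED_KEYS

def broadcast_key(key: str, port: int = 9999) -> None:
    """Pi side: UDP-broadcast one keystroke to the local subnet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(key.encode(), ("255.255.255.255", port))
    sock.close()
```

On the Windows side, a listener would receive each datagram, call `is_allowed`, and inject only the survivors into the terminal.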
For the keyboard itself, I wanted something low profile and durable. I started with flexible silicone keyboards, but Momo’s bigger sister Hana (a 19lb cavapoo) ripped a hole in the first one within minutes. Mounting was another problem - nothing sticks to silicone. I epoxied one to a wooden plaque, but within 10 minutes Momo ripped off the number 6.
I was seriously considering a $400 vandal proof metal keyboard designed for kiosks when I decided to give Logitech’s Pebble Keys 2 a try. It worked. Momo loved it, VHB tape held it in place (unlike the previous silicone keyboard), and Bluetooth meant no cable to chew on.
Motivating Momo is easy - she’ll do anything for the right food reward. So what I really needed was a dispenser with an API, small serving sizes, and the ability to use my own treats (both dogs have sensitive stomachs).
I landed on Aqara’s C1 Smart Pet Feeder, controlled over Zigbee. Even its smallest serving is too large for a 9-pound dog, so I preloaded each compartment with a few treats and skipped the hopper. This limits it to about 6 servings before refilling, but that’s plenty for Momo to make a game.
Zigbee on Windows was a no go - after hours of driver and firmware issues I moved the adapter to the Pi, where it worked immediately. The final flow: DogKeyboard SSHs into the Pi and runs a script that sends two Zigbee commands:
{"serving_size":1}
{"feed":"START"}
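Assuming the Zigbee adapter on the Pi is driven by something like zigbee2mqtt, the SSH-and-publish step could look like the sketch below. The hostname, MQTT topic, and helper names are all my guesses, not the author's actual script.

```python
import json
import subprocess

FEEDER_TOPIC = "zigbee2mqtt/pet_feeder/set"  # hypothetical topic name

def feed_commands(servings: int = 1) -> list[list[str]]:
    """Build the two mosquitto_pub invocations to run on the Pi via SSH."""
    payloads = [{"serving_size": servings}, {"feed": "START"}]
    return [
        ["ssh", "pi@raspberrypi.local",
         "mosquitto_pub", "-t", FEEDER_TOPIC, "-m", json.dumps(p)]
        for p in payloads
    ]

def dispense_treats() -> None:
    # Send the serving size first, then the START command.
    for cmd in feed_commands():
        subprocess.run(cmd, check=True)
```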
As the number of almost-there games mounted, the need for real feedback to Claude Code became clear. It already had unit tests and logs, but those weren’t enough. The games would build and run but have invisible players, broken UI, or input that was never wired up. Claude had no way to know. It needed to be able to see and play its own games.
The first tool was straightforward: a simple Python script to take screenshots of the running game. Claude could launch the game, screenshot it, and see whether the title screen actually rendered or was just a black window.
The second tool was more interesting. I gave Claude a way to send sequences of input to running game instances - things like “left for 3 seconds, pause for 2 seconds, right for one frame, fire”. It could then take screenshots and decide whether to send followup commands. This turned Claude into its own QA tester.
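The actual wire format of that input tool isn't published, but a command string like the one above could be parsed into timed key events along these lines. The grammar, the number words, and the 60 fps frame assumption are all illustrative.

```python
import re

# Parse commands like "left for 3 seconds, pause for 2 seconds,
# right for one frame, fire" into (action, duration_seconds) pairs.
WORD_NUMS = {"one": 1, "two": 2, "three": 3}
FRAME = 1 / 60  # assume a 60 fps game loop

def parse_sequence(text: str) -> list[tuple[str, float]]:
    steps = []
    for part in text.split(","):
        part = part.strip()
        m = re.match(r"(\w+)(?: for (\w+) (seconds?|frames?))?$", part)
        if not m:
            raise ValueError(f"unrecognized step: {part!r}")
        action, amount, unit = m.groups()
        if amount is None:              # bare action, e.g. "fire"
            steps.append((action, 0.0))
            continue
        n = WORD_NUMS.get(amount)
        n = float(amount) if n is None else float(n)
        dur = n * FRAME if unit.startswith("frame") else n
        steps.append((action, dur))
    return steps
```

A driver would then hold each key for its duration (via synthetic input events) and screenshot between steps so Claude can judge the result.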
These tools didn’t need any refinement - they just worked. And the way Claude used them surprised me. While testing one game, I watched it play through all 6 stages just to verify that the final boss fight worked correctly. When it found a problem - a health bar that wasn’t updating - it went back to the code, fixed it, relaunched, and played through again to confirm.
I also pulled in a few other tools from other projects I’ve made:
* Scene linter. Claude sometimes reuses node IDs or generates broken resource references in Godot’s .tscn files. These cause cryptic errors at runtime. Since adding a linter that catches these before the game launches, I haven’t seen a single mangled scene file.
* Shader linter. Validates custom shaders and gives specific errors back to Claude, rather than the vague “shader failed to compile” that Godot provides.
* Input action mapper. A small helper to get keyboard/controller input wired in correctly. Claude can edit Godot’s project files directly to add new input actions, but it sometimes gets the format wrong and the error messages are unhelpful.
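To give a flavor of the scene linter's job, here is a minimal sketch that flags duplicate `ext_resource` ids and repeated node names in a `.tscn` file - the real tool linked below does more:

```python
import re
from collections import Counter

def lint_tscn(text: str) -> list[str]:
    """Flag duplicate ext_resource ids and node names repeated under
    the same parent - both cause cryptic runtime errors in Godot."""
    errors = []
    ids = re.findall(r'\[ext_resource[^\]]*\bid="([^"]+)"', text)
    for rid, n in Counter(ids).items():
        if n > 1:
            errors.append(f"duplicate ext_resource id {rid!r} ({n} times)")
    nodes = re.findall(r'\[node name="([^"]+)"(?:[^\]]*\bparent="([^"]*)")?', text)
    for (name, parent), n in Counter(nodes).items():
        if n > 1:
            errors.append(f"duplicate node {name!r} under parent {parent!r}")
    return errors
```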
All of these are open sourced, and I’d encourage you to try them for yourself - even without a dog.
The DogKeyboard app ended up handling a lot more than just routing keystrokes. A few details worth mentioning:
It monitors Claude Code using Hooks and plays a chime sound when Claude goes idle - that’s Momo’s cue to type. When Claude is idle and Momo has entered at least 16 characters, it auto-submits by pressing Enter. When Claude is working, it backspaces any extra input in case Momo gets eager, and it periodically dismisses Plan Mode prompts (Claude’s “review before acting” step) that would otherwise block progress.
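That idle/auto-submit rule boils down to a small decision function. A sketch (the 16-character threshold is from the post; the function shape is mine):

```python
def keyboard_action(claude_idle: bool, chars_typed: int,
                    min_chars: int = 16) -> str:
    """Decide what DogKeyboard should do with Momo's current input."""
    if claude_idle:
        # Claude is waiting: submit once Momo has typed enough.
        return "submit" if chars_typed >= min_chars else "wait"
    # Claude is busy: discard stray keystrokes so they don't pile up.
    return "backspace" if chars_typed > 0 else "wait"
```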
For the video recording, it runs a lightweight webserver that overlays keystrokes as they’re pressed. I added a configurable delay so that if the video feed is lagged, the overlay doesn’t show keystrokes before Momo appears to type them.
The first decision was which dog to train. Hana (on the right) is twice Momo’s size but far more trainable - she’s the smartest pet I’ve had. Before turning 1, she figured out how to jump, grab a door handle, and open any door in the house just by observing.
I expected the dogs would walk across the keyboard, stepping on keys as they go. That’s what Momo did when she stole my lip balm. Since Hana is tall enough to just step over a keyboard, Momo seemed like the better candidate. As it turned out, both dogs learned to tap and swipe their paws along the keyboard rather than step on it - but Momo learned a little faster in this case, and typed a little gentler, so Momo it was.
The training process took about 2 weeks, with somewhat inconsistent sessions. My goal was 10 minutes, twice a day. I started by scattering high-value treats (freeze-dried salmon) on the keyboard to build the association: this thing is fun and gives good food.
Momo was frustrated at first. She knew the keyboard was involved but didn’t know how. She would lie on it, bark at it, and try anything she could think of. At some point she tried swiping her paw across it. I played a chime sound and heavily rewarded her. After a few iterations, the association clicked and she started attacking the keyboard with vigor any chance she had.
The next step was to automate the rewards. I filled the food dispenser with treats (4 servings at a time, so I could control the quantity), waited for her to swipe at the keyboard, then ran a script to play the chime and dispense a serving. After a few sessions I increased the difficulty - not just one swipe, but three swipes before the treats came.
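This is a classic fixed-ratio reward schedule. A sketch of the swipe counter (the chime and dispense calls are stand-ins for the real script):

```python
class SwipeTrainer:
    """Dispense a treat serving after every `threshold` swipes,
    mirroring the one-swipe-then-three-swipes progression above."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.swipes = 0
        self.servings = 0

    def on_swipe(self) -> bool:
        """Returns True when this swipe earns a serving."""
        self.swipes += 1
        if self.swipes >= self.threshold:
            self.swipes = 0
            self.servings += 1
            # Here the real script would play the chime and send
            # the Zigbee dispense commands to the feeder.
            return True
        return False
```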
Simultaneously, I decreased the treat value to keep her healthy. First a mix of mid-value treats with an occasional high-value one, and eventually just kibble with an occasional mid-value treat. She still loved it and was healthier for it.
Eventually I automated the whole thing with the DogKeyboard app and let it run. It required at least 16 characters per serving of treats, and was supposed to only dispense once per idle period of Claude Code. But bugs crept in during testing - a couple of times it dispensed multiple servings in a row. Unfortunately, Momo picked up on this and now keeps mashing the keyboard hoping for a second immediate serving. The only way to pull her away is to offer higher-value treats elsewhere, which is what I do after she’s put in her input for a game.
Here’s a small sample of the games Momo made. Every game shown here is playable - these aren’t mock-ups. It’s nowhere near comprehensive, but represents the variety she created. As the tools and prompts improved, the games got noticeably better. There was also a noticeable bump in quality when Opus 4.6 dropped - Claude became more likely to create custom shaders and other visual effects.
One recurring issue: I kept getting games with basic glowing neon 3D shapes and couldn’t get any other style. Frustrated, I asked Claude why. It told me this was the signature style of the project - Claude’s external memory file had latched onto it and kept reinforcing it. Wiping MEMORY.md (Claude Code’s persistent project notes) before every new game fixed this and gave much more varied results.
DJ Smirk. One of Momo’s earliest games - though more of an experience than a game. Every key on the keyboard plays a different tone.
Munch. A competitive salad building game. Collect all 7 ingredients first to win.
Zaaz. Another puzzle game. The goal is to paint the whole level with a paintbrush that moves a whole number of tiles at a time; crashing into obstacles is how you travel shorter distances. Some levels were unwinnable. It also has a scoring system I never figured out. After this one I updated the prompt to exclude puzzle games.
The Oracle Frog of Rome. Avoid the arms of a kraken and collect the golden chains to bind it. At some point the oracle also became a king.
Octogroove. A surprisingly challenging rhythm game. You’re an octopus using four of your arms to play the drums. Mash beats in time with basic music. It’s like Dance Dance Revolution, but made more difficult by the beats coming from different directions.
Ewe Heard Me! A game about herding sheep. Use stealth and your bark to corral them into a pen. Unfortunately, it’s unwinnable - the first two sheep you get into the pen simply stop and block anything else from entering. After this one I placed a larger emphasis on checking for winnability in the prompt and tools.
Quasar Saz. As of writing, this is Momo’s most recent game. You play as Zara, wielding a cosmic saz (a long-necked stringed instrument) to fight corrupted sound. There are 6 stages plus a boss fight. It’s fun to play for a couple of rounds, with good visuals and dynamic audio.
When Momo first stepped on my keyboard back in December, it was just a funny accident. A few weeks later, jobless and looking for a project, I decided to see how far that accident could go. The answer was a lot further than I expected.
The technical pieces - keyboard routing, treat dispenser, prompt engineering, feedback tools - were all solvable engineering problems. What surprised me was how little of the final result depended on Momo typing anything meaningful. The magic isn’t in the input. It’s in the system around it. A well-crafted prompt, strong guardrails, automated verification, and good tools can turn genuine nonsense into a playable game.
If there’s a takeaway beyond the spectacle, it’s this: the bottleneck in AI-assisted development isn’t the quality of your ideas - it’s the quality of your feedback loops. The games got dramatically better not when I improved the prompt, but when I gave Claude the ability to screenshot its own work, play-test its own levels, and lint its own scene files. The same tools that let a dog’s keyboard mashing produce a working game will make your own intentional work with AI significantly better.
Momo isn’t secretly a game designer. She’s a cavapoo who learned that smacking a plastic rectangle makes kibble appear. A year ago, the gap between that and software engineering felt enormous. Now it feels small and shrinks each day.
If you want to try any of this yourself - whether with a dog, a cat, or just your own random keyboard mashing - everything is open source in the links below.
Play the game (with human bug fixes) — Recommended. Download for Windows, Mac, or Linux
Play the game (Momo’s version) — Download for Windows, Mac, or Linux
TeaLeaves — Tools, prompts, and source for developing the game
...
Read the original on www.calebleak.com »
In a private exchange with priests of the Diocese of Rome on Thursday, Pope Leo XIV responded to four questions, advising them on prayer, study, and priestly fraternity.
The off-camera moment took place after Leo gave a public speech to the priests, inviting them to “rekindle the fire” of their ministry.
“The first priest to speak was a young man who asked the pope how the Gospel can be embodied in the world of young people,” according to a priest present at the Feb. 19 meeting in the Vatican’s Paul VI Hall.
The priest told ACI Stampa, the Italian-language sister service of EWTN News, that Leo’s answer to this question was: “First of all, what is needed is the witness of the priest; and then, when meeting young people, they must broaden their horizons to reach as many young people as possible. For this, it is necessary to rediscover the value of communion.”
Responding to a second question, the pope recommended knowing well “the community in which one lives and works. It is necessary to know the reality well. To love your community, you must know it. Therefore, a real shared effort is needed to understand it better and thus face together all the challenges that arise.”
“The pope also invited us to use our brains more and not artificial intelligence [AI] to prepare homilies, as he now sees and hears happening,” the priest said. “And here the pope made a strong recommendation regarding prayer: We priests must pray — remain with the Lord, that is — not reduce everything to the breviary or to a few brief moments of prayer, but truly learn again to listen to the Lord.”
The third question was more reflective: Today, as priests, we are unable to rejoice in the success of a fellow priest.
The pope responded that “we are all human, but we should set a good example, especially the example of priestly fraternity.”
He dwelt at length on how to cultivate priestly friendship. The pope also reminded them to continue studying. “It must be ongoing study; we must always stay up to date. But the fundamental thing is to cultivate priestly friendship, priestly fraternity,” the priest from Rome said.
The final question concerned elderly priests and their loneliness. According to the priest, Leo’s response “reaffirmed the need for fraternity, for the joy of being together. We must give thanks, truly live gratitude for the fact of being priests, from the day of our ordination every single day, and thank God for this great gift, and live the priesthood with gratitude. And here, a great deal of humility is also required.”
“Personally, I was happy,” the priest concluded. “We greatly appreciated the pope for a very, very concrete speech.”
This story was first published by ACI Stampa, the Italian-language sister service of EWTN News. It has been translated and adapted by EWTN News English.
...
Read the original on www.ewtnnews.com »
C:\philes\the watchers: how openai, the US government, and persona built an identity surveillance machine that files reports on you to the feds
we are in direct written correspondence with persona’s CEO, rick song. he has been responsive and engaged in good faith.
rick has committed to answering the 18 questions in 0x14 in writing. all correspondence will be published in full as part 2 of this series. the core findings, including openai-watchlistdb.withpersona.com and its 27 months of certificate transparency history, remain unaddressed.
no laws were broken. all findings come from passive recon using public sources - Shodan, CT logs, DNS, HTTP headers, and unauthenticated files served by the target’s own web server. no systems were accessed, no credentials were used, no data was modified. retrieving publicly served files is not unauthorized access - see Van Buren v. United States (593 U.S. 374, 2021) and hiQ Labs v. LinkedIn (9th Cir. 2022).
this is protected journalism and security research under the First Amendment, ECHR Art. 10, CFAA safe harbor (DOJ Policy 2022), California Shield Law, GDPR Art. 85, and Israeli Basic Law: Human Dignity and Liberty.
the authors are not affiliated with any government, intelligence service, or competitor of any entity named herein. no financial interest. no compensation. this research exists in the public interest and was distributed across multiple jurisdictions, dead drops, and third-party archives before publication.
any attempt to suppress or retaliate against this publication - legal threats, DMCA abuse, employment interference, physical intimidation, or extrajudicial action - will be treated as confirmation of its findings and will trigger additional distribution. killing the messenger does not kill the message.
for the record: all authors of this document are in good health, of sound mind, and have no plans to hurt themselves, disappear, or die unexpectedly. if that changes suddenly - it wasn’t voluntary. this document, its evidence, and a list of names are held by multiple trusted third parties with instructions to publish everything in the event that anything happens to any of us. we mean anything.
to Persona and OpenAI’s legal teams: actually audit your supposed “FedRAMP” compliance, and answer the questions in 0x14. that’s the appropriate response. everything else is the wrong one.
from: the world
to: openai, persona, the US government, ICE, the open internet
date: 2026-02-16
subject: the watchers
they told us the future would be convenient. sign up, verify your identity, talk to the machine. easy. frictionless. the brochure said “trust and safety.” the source code said SelfieSuspiciousEntityDetection.
funny how that works. you hand over your passport to use a chatbot and somewhere in a datacenter in iowa, a facial recognition algorithm is checking whether you look like a politically exposed person. your selfie gets a similarity score. your name hits a watchlist. a cron job re-screens you every few weeks just to make sure you haven’t become a terrorist since the last time you asked GPT to write a cover letter.
so what do you do? well, we looked. found source code on a government endpoint with the door wide open. facial recognition, watchlists, SAR filings, intelligence codenames, and much more.
oh, and we revealed the names of every single person responsible for this!!
following the works of eva and others on ID verification bypasses, we decided to start looking into persona, yet another KYC service that uses facial recognition to verify identities. the original goal was to add an age-verification bypass to eva’s existing k-id platform.
after trying to write a few exploits, vmfunc decided to browse their infra on Shodan. it all started with a single IP: 34.49.93.177, sitting on Google Cloud in Kansas City. one open port. one SSL certificate. two hostnames that tell a story nobody was supposed to read:
openai-watchlistdb.withpersona.com
openai-watchlistdb-testing.withpersona.com
not “openai-verify”, not “openai-kyc”, watchlistdb. a database. (or is it?)
what was initially meant to be a passive recon investigation quickly turned into a rabbit-hole deep dive into how commercial AI and federal government operations work together to violate our privacy every waking second. we didn’t even have to write or perform a single exploit - the entire architecture was just sitting on the doorstep!! 53 megabytes of unprotected source maps on a FedRAMP government endpoint, exposing the entire codebase of a platform that files Suspicious Activity Reports with FinCEN, compares your selfie to watchlist photos using facial recognition, screens you against 14 categories of adverse media from terrorism to espionage, and tags reports with codenames from active intelligence programs.
2,456 source files containing the full TypeScript codebase, every permission, every API endpoint, every compliance rule, every screening algorithm. sitting unauthenticated on the public internet. on a government platform no less.
no systems were breached. no credentials were used. every finding in this document comes from publicly accessible sources: shodan, certificate transparency logs, DNS resolution, HTTP response headers, published API documentation, public web pages, and unauthenticated JavaScript source maps served by the target’s own web server.
the infrastructure told its own story. we just listened. then we read the source code.
IP: 34.49.93.177
ASN: AS396982 (Google LLC)
provider: Google Cloud
region: global
city: Kansas City, US
open ports: 443/tcp
last seen: 2026-02-05
hostnames:
- 177.93.49.34.bc.googleusercontent.com
- openai-watchlistdb.withpersona.com
- openai-watchlistdb-testing.withpersona.com
SSL cert:
subject: CN=openai-watchlistdb.withpersona.com
issuer: C=US, O=Google Trust Services, CN=WR3
valid: Jan 24 01:24:11 2026 - Apr 24 02:20:06 2026
SANs: openai-watchlistdb.withpersona.com
openai-watchlistdb-testing.withpersona.com
serial: FDFFBF37ED89BBD710D9967B7CD92B52
HTTP response (all paths, all methods):
status: 404
body: “fault filter abort”
headers: via: 1.1 google
content-type: text/plain
Alt-Svc: h3=":443"
the “fault filter abort” response is the signature of an Envoy proxy fault-injection filter - standard in GCP/Istio service mesh deployments. the service only routes requests matching specific internal criteria (likely mTLS client certificates, specific source IPs, or API key headers); everything else just dies at the edge.
though, to be clear, this is not a misconfiguration - it’s just a locked-down backend service that was never meant to have a public face. the only reason we even know it exists is because of certificate transparency logs and DNS.
Persona (withpersona.com) is a San Francisco-based identity verification company. their normal infrastructure runs behind Cloudflare:
withpersona.com -> 162.159.141.40, 172.66.1.36 (CF)
inquiry.withpersona.com -> 162.159.141.40, 172.66.1.36 (CF)
app.withpersona.com -> 162.159.141.40, 172.66.1.36 (CF)
api.withpersona.com -> 162.159.141.40, 172.66.1.36 (CF)
they also run a wildcard DNS record: *.withpersona.com points to Cloudflare (cloudflare.withpersona.com.cdn.cloudflare.net). we confirmed this by resolving completely fabricated subdomains:
totallynonexistent12345.withpersona.com -> 162.159.141.40 (CF)
asdflkjhasdf.withpersona.com -> 162.159.141.40 (CF)
HOWEVER, here’s where it gets interesting. OpenAI’s watchlist service breaks out of this wildcard:
openai-watchlistdb.withpersona.com -> 34.49.93.177 (GCP)
openai-watchlistdb-testing.withpersona.com -> 34.49.93.177 (GCP)
a dedicated Google Cloud instance - not behind Cloudflare, not on Persona’s shared infrastructure. seemingly purpose-built and isolated.
you would never do this for a simple “check this name against a list” API call. you do this when the data requires compartmentalization - when the compliance requirements for the data you’re collecting demand that level of isolation, and when the damage of a breach is bad enough to warrant dedicated infrastructure.
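the wildcard-versus-breakout distinction is easy to check programmatically: resolve a host and see whether it lands in cloudflare’s ranges or somewhere else. a rough sketch - the two prefixes come from the resolutions shown above, not from an exhaustive cloudflare list:

```python
import ipaddress
import socket

# Prefixes observed in the resolutions above - not a complete CF list.
CF_NETS = [ipaddress.ip_network(n) for n in ("162.159.0.0/16", "172.66.0.0/16")]

def is_cloudflare(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CF_NETS)

def classify(host: str) -> str:
    """'wildcard (CF)' if the host resolves into Cloudflare,
    otherwise the dedicated IP it broke out to."""
    ip = socket.gethostbyname(host)
    return "wildcard (CF)" if is_cloudflare(ip) else f"dedicated ({ip})"
```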
CT logs tell us exactly when this service went live and how it evolved.
november 2023. this service has been running for over two years.
OpenAI didn’t announce “Verified Organization” requirements until mid-2025. they didn’t publicly require ID verification for advanced model access until GPT-5. but the watchlist screening infrastructure was operational 18 months before any of that was disclosed.
we can pinpoint when they started considering going “public” with the collaboration.
https://withpersona.com/customers/openai has existed since September 17, 2024; likewise, OpenAI’s Privacy Policy has included the following passage since its November 4, 2024 update:
“Other Information You Provide: We collect other information that you provide to us, such as when you participate in our events or surveys, or when you provide us or a vendor operating on our behalf with information to establish your identity or age (collectively, “Other Information You Provide”).”
the excuses used in the public post are classic, though instead of using children as the scapegoat for invading our privacy, this time it was “[…] To offer safe AGI, we need to make sure bad people aren’t using our services […]”.
only… that they quickly used this opportunity to go from comparing users against a single federal watchlist, to creating the watchlist of all users themselves.
in fact, this is nothing new - OpenAI Forum user OnceAndTwice had already mentioned this back in June of last year.
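the issuance timeline itself is recoverable from crt.sh, which returns JSON entries with a not_before field per certificate; finding when a hostname first went live is a few lines. a sketch against illustrative sample data (not the actual log entries for this host):

```python
import json
from datetime import datetime

def earliest_issuance(entries_json: str) -> datetime:
    """Given crt.sh-style JSON, return the oldest certificate's
    not_before - i.e. when the hostname first appeared in CT logs."""
    entries = json.loads(entries_json)
    return min(
        datetime.strptime(e["not_before"], "%Y-%m-%dT%H:%M:%S")
        for e in entries
    )

# Illustrative entries only - not the real CT data for this host.
sample = json.dumps([
    {"common_name": "openai-watchlistdb.withpersona.com",
     "not_before": "2026-01-24T01:24:11"},
    {"common_name": "openai-watchlistdb.withpersona.com",
     "not_before": "2023-11-08T00:00:00"},
])
```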
Persona’s API documentation (docs.withpersona.com) is public. when a customer like OpenAI runs a government ID verification, the API returns a complete identity dossier:
personal identity:
- full legal name (including native script)
- date of birth, place of birth
- nationality, sex, height
address:
- street, city, state, postal code, country
government document:
- document type and number
- issuing authority
- issue and expiration dates
- visa status
- vehicle class/endorsements/restrictions
media:
- FRONT PHOTO of ID document (URL)
- BACK PHOTO of ID document (URL)
- SELFIE PHOTO (URL + byte size)
- VIDEO of identity capture (URL)
metadata:
- entity confidence score
- all verification check results with pass/fail reasons
- capture method used
- timestamps (created, submitted, completed, redacted)
Persona’s own case study states that OpenAI “screens millions monthly” and “automatically screens over 99% of users behind the scenes in seconds.”
behind the scenes. in seconds. millions. with customizable filters ranging from simple partial name matches to advanced facial recognition algorithms.
again, none of this is even a secret - it’s just “hidden” in plain sight.
...
Read the original on vmfunc.re »
The latest update of Firefox, version 148, introduces a much-anticipated “AI kill switch” feature, allowing users to disable AI functionalities such as chatbot prompts and AI-generated link summaries. Mozilla emphasizes that once AI features are turned off, future updates will not override this choice. This decision reflects the company’s new revenue-focused strategy regarding AI integrations.
To disable AI features, users can navigate to Settings > AI Controls and toggle the ‘Block AI Enhancements’ option. This will prevent any in-app notifications encouraging users to try out AI features, as well as remove any previously downloaded AI models from the device. For those who wish to maintain some AI functionalities, a selective blocking option is available, enabling users to retain useful features like on-device translations while avoiding cloud-based services.
Beyond the AI kill switch, Firefox 148 offers users more control over remote updates, allowing them to opt out while still minimizing data collection. Users can set these preferences under Settings > Privacy & Settings > Firefox Data Collection.
The update also focuses on enhancing core web platform capabilities, including the integration of the Trusted Types API and Sanitizer API to combat cross-site scripting (XSS) issues. Additionally, Firefox 148 now includes improved screen reader compatibility for mathematical formulas in PDFs, availability of Firefox Backup on Windows 10, and translation capabilities for Vietnamese and Traditional Chinese. New tab wallpapers will also be featured in new container tabs, alongside the addition of Service worker support for WebGPU.
For more detailed information on the update, users can refer to the official release notes.
...
Read the original on serverhost.com »
Build, program, and simulate hardware in the browser. Bring your workshop to the web.
...
Read the original on withdiode.com »
Discord cuts ties with Peter Thiel–backed verification software after its code was found tied to U.S. surveillance efforts
...
Read the original on fortune.com »
Apple today announced a significant expansion of factory operations in Houston, bringing the future production of Mac mini to the U.S. for the first time. The company will also expand advanced AI server manufacturing at the factory and provide hands-on training at its new Advanced Manufacturing Center beginning later this year. Altogether, Apple’s Houston operations will create thousands of jobs.
“Apple is deeply committed to the future of American manufacturing, and we’re proud to significantly expand our footprint in Houston with the production of Mac mini starting later this year,” said Tim Cook, Apple’s CEO. “We began shipping advanced AI servers from Houston ahead of schedule, and we’re excited to accelerate that work even further.”
In Houston, workers assemble advanced AI servers, including logic boards produced onsite, which are then used in Apple data centers in the U. S.
For more than two decades, users around the world have relied on the incredibly popular Mac mini for the tremendous power it packs into its ultra-compact design. With its next-level AI capabilities, it has become an essential tool for everyone from students and aspiring creatives to small business owners. Beginning later this year, Mac mini will be produced at a new factory on Apple’s Houston manufacturing site, doubling the campus’s footprint.
Apple began producing advanced AI servers in Houston in 2025 for the first time, and production is already ahead of schedule. Servers assembled in Houston — including logic boards produced onsite — are used in Apple data centers around the country.
Beyond production, Apple is investing in the workforce that will drive American manufacturing forward. Later this year, Apple’s 20,000-square-foot Advanced Manufacturing Center is scheduled to open its doors in Houston. Currently under construction, the dedicated facility will provide hands-on training in advanced manufacturing techniques to students, supplier employees, and American businesses of all sizes. Apple experts will teach participants the same innovative processes that are used to make Apple products, allowing American manufacturers to take their work to the next level.
Apple’s 20,000-square-foot Advanced Manufacturing Center opens later this year, and will provide hands-on training to students, supplier employees, and U.S. businesses of all sizes.
Since announcing its $600 billion commitment to the U.S. last year, Apple and its American Manufacturing Program partners have already reached several milestones:
Apple exceeded its target and sourced more than 20 billion U.S.-made chips from 24 factories across 12 states, including those of partners like TSMC, Broadcom, and Texas Instruments.
GlobalWafers has begun production at its new $4 billion bare silicon wafer facility in Sherman, Texas. At Apple’s direction, wafers produced in Sherman will be used by Apple’s chip manufacturing partners in the U.S., including TSMC and Texas Instruments.
Supported by Apple’s investment, Amkor broke ground on its new $7 billion semiconductor advanced packaging and test facility in Peoria, Arizona, where Apple will be the first and largest customer.
Corning’s Harrodsburg, Kentucky, facility is now 100 percent dedicated to cover glass for iPhone and Apple Watch shipped globally, and by the end of this year, every new iPhone and Apple Watch will have cover glass made in the state.
In 2026, Apple is on track to purchase well over 100 million advanced chips produced by TSMC at its Arizona facility — a significant increase from 2025.
Apple opened its Apple Manufacturing Academy in Detroit, which is already supporting more than 130 small- and medium-sized American manufacturers with hands-on training in AI, automation, and smart manufacturing. The academy recently expanded with new virtual programming, giving businesses across the country on-demand access to the curriculum developed by Apple experts and Michigan State University faculty.
About Apple
Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple’s six software platforms — iOS, iPadOS, macOS, watchOS, visionOS, and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay, iCloud, and Apple TV. Apple’s more than 150,000 employees are dedicated to making the best products on earth and to leaving the world better than we found it.
...
Read the original on www.apple.com »
In 1978, for my 10th birthday, I went to Disneyland and got to ride a new roller coaster called Space Mountain. It figuratively and literally took my breath away. I loved every second of it and that night, I couldn’t fall asleep; I just kept thinking about how exhilarating it was. Then, a wild thought suddenly hit me: Why isn’t there a roller coaster that goes upside down?
At first, I was like that’s crazy, it can’t work. But then I remembered Spin Out, the ride with a round room that spun so fast I stuck to the wall. If that worked, why not a loop on a roller coaster? I figured it would work the same way. I was convinced!
I finally fell asleep dreaming of my roller coaster, full of twists, turns, and loops.
A few days later, I told my best friend Daschle. He was older, knew everything, and lived next door. “Buddy,” he said, “I’ve got exciting but crushing news. Your idea works.”
“Yep. I saw it. They’re building one at Magic Mountain. It’s called the Revolution. Sorry, Buddy.”
But I wasn’t crushed, I was thrilled! What I knew could work was really happening.
“How many loops does it have?” I asked.
“Ha! Mine has four. It’s called the Quadrupuler! It’s gonna be way better!”
That night I taped six sheets of paper together and drew my blueprints in colored markers. As you can see from the photo it was glorious!
Look closely: I didn’t label those coaster hills in feet or meters, no sir, I used building “stories” for height, and the speed at each section in miles per hour. I’m 10. I’m serious here.
With guiding blueprints, it was time to build the model.
I got a Styrofoam board and balsa wood. Cutting and gluing each tiny cross tie was slow, and with all my homework, I could only work on weekends. So I calculated how many inches I could finish in a week and how long the whole track would take. The answer turned out to be something like 5 months! After that I considered just giving up because I realized that I was only 120 months old and this was going to take 5% of my life! Plus, I still hadn’t figured out how to make the loops; balsa wood doesn’t bend like that. But I told myself: one piece at a time. That became my motto.
When I got to the first loop, I had to stop and think. What in the world could I make the loop out of? A lot of 10-year-old brain power went into imagining what simple material I could use. Then one morning, I had it: heat plastic strips over the stovetop flame and bend them as they cooled. The key? Don’t burn the house down.
I’d learned that the hard way a year earlier, when Daschle convinced me we should recreate the movie The Towering Inferno, with a cardboard box under his house… and real fire. It got out of control so fast, the flames started hitting the wood floor joists! Thankfully, we were able to smother it with the damp dirt down there. So yeah, wild card Daschle was not invited over for my plastic fire bending experiment.
I don’t remember where the plastic came from, but I do remember holding the strip with pliers over the flame. The first piece melted so fast and started burning with thick black smoke that it scared me. I yanked it back and coughed. That’s when I made an amendment to the safety plan: don’t kill yourself with whatever these horrible smelly fumes were! I got a fan, opened the back door, and all the kitchen windows before trying again. Eventually, I figured out the perfect distance and timing with the heat.
When I laid that final track piece, I was so excited, so proud! I took the model outside for better lighting and snapped Polaroids. I needed it captured instantly. Here’s a photo of the Polaroid with my 10-year-old penmanship.
What was the next thought that immediately popped into my head?
This masterpiece is ready to be sold to Disneyland! And I wrote my letter. I don’t have the original of that. But I remember it went something like this:
To whom it may concern:
My name is Kevin Glikmann. I am 10 years old. Enclosed are Polaroids of a roller coaster model I made called the Quadrupuler. It has four loops! I think you should make it!
And I sent it off!
Suddenly, my chore of checking the mail after school became a heart-pounding ritual. I was nervous, hopeful, excited. Sometimes I opened the mailbox slowly, peeking in. Sometimes I pretended it was just a normal day. Other times, I yanked it open and grabbed the mail fast, trying anything to calm my nerves.
Daschle said, “No way they’ll respond. It’s Disneyland, Buddy, they get a million letters a day.”
Weeks turned into months, and I started to think he was right. Then one day, I opened the mailbox.
My name on it. My head was screaming, “This is it! I never get mail!”
HOLY SMOKES! What’s WED Enterprises? I started bouncing up and down. I started shaking. I ran into the house, ran straight to my room, tore open the envelope, and read:
Your recent letter was directed to my office here at WED Enterprises. WED (Walter Elias Disney) is the design and “Imagineering” branch of Walt Disney Productions. As such, we are responsible for the creation of all shows, attractions and outdoor entertainment for both Disneyland and Walt Disney World.
Thank you for showing us your “Quadrupuler” roller coaster - it looks like quite an adventure! As you may know Kevin, we are creating a new rollercoaster type attraction for Disneyland’s Frontierland. Known as Big Thunder Mountain Railroad, this thrilling adventure will carry passengers on a high speed journey through the gold rush days of the old west. Big Thunder is scheduled to open later this year.
Thank you Kevin, for your interest in Walt Disney Productions.
Here is a photo of the original letter:
You would think that this letter would have ended my inventing spirit. That I would have gotten bitter and declared to the world, “No more will I invent for that mouse or anyone else!”
But no! Instead I was elated. I read it over and over. It said, “It looks like quite an adventure!” Disneyland liked it! I couldn’t believe it.
Looking back, those words from Tom Fitzgerald didn’t just validate my idea, they launched my 10-year-old self-esteem into orbit! (Tom Fitzgerald went on to become one of the most influential Imagineers in Disney history. According to his bio, he started in 1979. I got this letter in April 1979 which means this must have been one of his first tasks as a new Imagineer employee. lol! I wonder if he would remember?)
A couple years later, the Rubik’s Cube came out. I was obsessed. It took me weeks to solve, and once I did, my first thought was: What if it could turn on the angle? So I introduced Mr. Rubik’s Cube to my bandsaw, redesigned the core, and built a very rough prototype.
Here are some pictures.
I sent it to Ideal Toy Company, but they rejected it straight away; they don’t accept unsolicited ideas. But that didn’t matter. They didn’t understand that I had a letter from Disney Imagineering telling me my ideas are good!
That ten-year-old inventor is still alive in me, and still doesn’t understand rejection. Over the decades, I’ve invented several patented board games that were shopped around but never sold.
But I’m convinced the early validation from the Disney letter gave me a kind of bulletproof resilience.
Today, I work in one of the most rejection-heavy industries there is, acting. To me, inventing and acting are deeply connected: both are about discovery, and both feed my inventing spirit. Successes are rare, but when they come, the joy lasts forever.
So I keep auditioning, and I keep inventing.
And sometimes, when frustration creeps in, wondering why I’m not further along, or why things aren’t working out, that 10-year-old boy appears. He reminds me of what we believed back then, and what still guides me now: just keep going, one piece at a time.
...
Read the original on wordglyph.xyz »
*This post was updated at 12:35 pm PT to fix a typo in the build time benchmarks.
Last week, one engineer and an AI model rebuilt the most popular front-end framework from scratch. The result, vinext (pronounced “vee-next”), is a drop-in replacement for Next.js, built on Vite, that deploys to Cloudflare Workers with a single command. In early benchmarks, it builds production apps up to 4x faster and produces client bundles up to 57% smaller. And we already have customers running it in production.
The whole thing cost about $1,100 in tokens.
Next.js is the most popular React framework. Millions of developers use it. It powers a huge chunk of the production web, and for good reason. The developer experience is top-notch.
But Next.js has a deployment problem when used in the broader serverless ecosystem. The tooling is entirely bespoke: Next.js has invested heavily in Turbopack, but if you want to deploy it to Cloudflare, Netlify, or AWS Lambda, you have to take that build output and reshape it into something the target platform can actually run.
If you’re thinking: “Isn’t that what OpenNext does?”, you are correct.
That is indeed the problem OpenNext was built to solve. And a lot of engineering effort has gone into OpenNext from multiple providers, including us at Cloudflare. It works, but quickly runs into limitations and becomes a game of whack-a-mole.
Building on top of Next.js output as a foundation has proven to be a difficult and fragile approach. Because OpenNext has to reverse-engineer Next.js’s build output, this results in unpredictable changes between versions that take a lot of work to correct.
Next.js has been working on a first-class adapters API, and we’ve been collaborating with them on it. It’s still an early effort but even with adapters, you’re still building on the bespoke Turbopack toolchain. And adapters only cover build and deploy. During development, next dev runs exclusively in Node.js with no way to plug in a different runtime. If your application uses platform-specific APIs like Durable Objects, KV, or AI bindings, you can’t test that code in dev without workarounds.
What if instead of adapting Next.js output, we reimplemented the Next.js API surface on Vite directly? Vite is the build tool used by most of the front-end ecosystem outside of Next.js, powering frameworks like Astro, SvelteKit, Nuxt, and Remix. A clean reimplementation, not merely a wrapper or adapter. We honestly didn’t think it would work. But it’s 2026, and the cost of building software has completely changed.
We got a lot further than we expected.
Replace next with vinext in your scripts and everything else stays the same. Your existing app/, pages/, and next.config.js work as-is.
vinext dev # Development server with HMR
vinext build # Production build
vinext deploy # Build and deploy to Cloudflare Workers
This is not a wrapper around Next.js and Turbopack output. It’s an alternative implementation of the API surface: routing, server rendering, React Server Components, server actions, caching, middleware. All of it built on top of Vite as a plugin. Most importantly, Vite output runs on any platform thanks to the Vite Environment API.
Early benchmarks are promising. We compared vinext against Next.js 16 using a shared 33-route App Router application.
Both frameworks are doing the same work: compiling, bundling, and preparing server-rendered routes. We disabled TypeScript type checking and ESLint in Next.js’s build (Vite doesn’t run these during builds), and used force-dynamic so Next.js doesn’t spend extra time pre-rendering static routes, which would unfairly slow down its numbers. The goal was to measure only bundler and compilation speed, nothing else. Benchmarks run on GitHub CI on every merge to main.
These benchmarks measure compilation and bundling speed, not production serving performance. The test fixture is a single 33-route app, not a representative sample of all production applications. We expect these numbers to evolve as all three projects continue to develop. The full methodology and historical results are public. Take them as directional, not definitive.
The direction is encouraging, though. Vite’s architecture, and especially Rolldown (the Rust-based bundler coming in Vite 8), has structural advantages for build performance that show up clearly here.
vinext is built with Cloudflare Workers as the first deployment target. A single command takes you from source code to a running Worker:
vinext deploy
This handles everything: builds the application, auto-generates the Worker configuration, and deploys. Both the App Router and Pages Router work on Workers, with full client-side hydration, interactive components, client-side navigation, React state.
For production caching, vinext includes a Cloudflare KV cache handler that gives you ISR (Incremental Static Regeneration) out of the box.
KV is a good default for most applications, but the caching layer is designed to be pluggable. That setCacheHandler call means you can swap in whatever backend makes sense. R2 might be a better fit for apps with large cached payloads or different access patterns. We’re also working on improvements to our Cache API that should provide a strong caching layer with less configuration. The goal is flexibility: pick the caching strategy that fits your app.
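To make the pluggability concrete, here is a sketch of what a swappable cache handler might look like. This is an illustration, not the actual vinext API: the `CacheHandler` interface shape is an assumption, and the in-memory backend stands in for KV or R2.

```typescript
// Hypothetical interface in the spirit of vinext's setCacheHandler;
// the real API surface may differ.
interface CacheHandler {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds?: number): Promise<void>;
}

// In-memory backend standing in for KV or R2. A real handler would
// delegate these calls to a platform binding instead of a Map.
class MemoryCacheHandler implements CacheHandler {
  private store = new Map<string, { value: string; expires: number }>();

  async get(key: string): Promise<string | null> {
    const entry = this.store.get(key);
    // Expired entries behave like misses, which is what lets ISR
    // fall back to a fresh render after the revalidation window.
    if (!entry || entry.expires < Date.now()) return null;
    return entry.value;
  }

  async set(key: string, value: string, ttlSeconds = 60): Promise<void> {
    this.store.set(key, { value, expires: Date.now() + ttlSeconds * 1000 });
  }
}
```

Because the framework only depends on the interface, swapping KV for R2 (or anything else) is a matter of passing a different implementation to the handler hook.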
We also have a live example of Cloudflare Agents running in a Next.js app, without the need for workarounds like getPlatformProxy, since the entire app now runs in workerd, during both dev and deploy phases. This means being able to use Durable Objects, AI bindings, and every other Cloudflare-specific service without compromise. Have a look here.
The current deployment target is Cloudflare Workers, but that’s a small part of the picture. Something like 95% of vinext is pure Vite. The routing, the module shims, the SSR pipeline, the RSC integration: none of it is Cloudflare-specific.
Cloudflare is looking to work with other hosting providers about adopting this toolchain for their customers (the lift is minimal — we got a proof-of-concept working on Vercel in less than 30 minutes!). This is an open-source project, and for its long term success, we believe it’s important we work with partners across the ecosystem to ensure ongoing investment. PRs from other platforms are welcome. If you’re interested in adding a deployment target, open an issue or reach out.
We want to be clear: vinext is experimental. It’s not even one week old, and it has not yet been battle-tested with any meaningful traffic at scale. If you’re evaluating it for a production application, proceed with appropriate caution.
That said, the test suite is extensive: over 1,700 Vitest tests and 380 Playwright E2E tests, including tests ported directly from the Next.js test suite and OpenNext’s Cloudflare conformance suite. We’ve verified it against the Next.js App Router Playground. Coverage sits at 94% of the Next.js 16 API surface.
Early results from real-world customers are encouraging. We’ve been working with National Design Studio, a team that’s aiming to modernize every government interface, on one of their beta sites, CIO.gov. They’re already running vinext in production, with meaningful improvements in build times and bundle sizes.
The README is honest about what’s not supported and won’t be, and about known limitations. We want to be upfront rather than overpromise.
vinext already supports Incremental Static Regeneration (ISR) out of the box. After the first request to any page, it’s cached and revalidated in the background, just like Next.js. That part works today.
vinext does not yet support static pre-rendering at build time. In Next.js, pages without dynamic data get rendered during next build and served as static HTML. If you have dynamic routes, you use generateStaticParams() to enumerate which pages to build ahead of time. vinext doesn’t do that… yet.
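For readers unfamiliar with the API, here is a minimal sketch of `generateStaticParams` as Next.js defines it; the product list is a hypothetical stand-in for a database or CMS query.

```typescript
type Params = { id: string };

// Hypothetical data source standing in for a real database query.
const products = [{ id: 1 }, { id: 2 }, { id: 3 }];

// Next.js calls this at build time to enumerate which dynamic routes
// (e.g. /products/[id]) to pre-render. Param values must be strings.
async function generateStaticParams(): Promise<Params[]> {
  return products.map((p) => ({ id: String(p.id) }));
}
```

Each returned object becomes one pre-rendered page, which is exactly why builds scale linearly with the size of the list.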
This was an intentional design decision for launch. It’s on the roadmap, but if your site is 100% prebuilt HTML with static content, you probably won’t see much benefit from vinext today. That said, if one engineer can spend $1,100 in tokens and rebuild Next.js, you can probably spend $10 and migrate to a Vite-based framework designed specifically for static content, like Astro (which also deploys to Cloudflare Workers).
For sites that aren’t purely static, though, we think we can do something better than pre-rendering everything at build time.
Next.js pre-renders every page listed in generateStaticParams() during the build. A site with 10,000 product pages means 10,000 renders at build time, even though 99% of those pages may never receive a request. Builds scale linearly with page count. This is why large Next.js sites end up with 30-minute builds.
So we built Traffic-aware Pre-Rendering (TPR). It’s experimental today, and we plan to make it the default once we have more real-world testing behind it.
The idea is simple. Cloudflare is already the reverse proxy for your site. We have your traffic data. We know which pages actually get visited. So instead of pre-rendering everything or pre-rendering nothing, vinext queries Cloudflare’s zone analytics at deploy time and pre-renders only the pages that matter.
vinext deploy --experimental-tpr
Building…
Build complete (4.2s)
TPR (experimental): Analyzing traffic for my-store.com (last 24h)
TPR: 12,847 unique paths — 184 pages cover 90% of traffic
TPR: Pre-rendering 184 pages…
TPR: Pre-rendered 184 pages in 8.3s → KV cache
Deploying to Cloudflare Workers…
For a site with 100,000 product pages, the power law means 90% of traffic usually goes to 50 to 200 pages. Those get pre-rendered in seconds. Everything else falls back to on-demand SSR and gets cached via ISR after the first request. Every new deploy refreshes the set based on current traffic patterns. Pages that go viral get picked up automatically. All of this works without generateStaticParams() and without coupling your build to your production database.
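The selection step TPR performs can be sketched as a greedy cut over sorted traffic counts. Everything here is illustrative: the function name and the analytics input shape are assumptions, not vinext's implementation.

```typescript
// Given per-path request counts (hypothetical shape of zone analytics
// data), pick the smallest set of pages covering `coverage` of traffic.
function pagesToPrerender(
  counts: Record<string, number>,
  coverage = 0.9,
): string[] {
  // Sort paths by traffic, busiest first.
  const entries = Object.entries(counts).sort((a, b) => b[1] - a[1]);
  const total = entries.reduce((sum, [, n]) => sum + n, 0);

  const selected: string[] = [];
  let covered = 0;
  for (const [path, n] of entries) {
    // Stop once the selected pages already account for the target
    // share of traffic; everything else falls back to on-demand SSR.
    if (covered >= total * coverage) break;
    selected.push(path);
    covered += n;
  }
  return selected;
}
```

Under a power-law distribution, the loop terminates after a few hundred paths even for catalogs with 100,000 pages, which is what keeps the pre-render step to seconds rather than minutes.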
A project like this would normally take a team of engineers months, if not years. Several teams at various companies have attempted it, and the scope is just enormous. We tried once at Cloudflare! Two routers, 33+ module shims, server rendering pipelines, RSC streaming, file-system routing, middleware, caching, static export. There’s a reason nobody has pulled it off.
This time we did it in under a week. One engineer (technically engineering manager) directing AI.
The first commit landed on February 13. By the end of that same evening, both the Pages Router and App Router had basic SSR working, along with middleware, server actions, and streaming. By the next afternoon, App Router Playground was rendering 10 of 11 routes. By day three, vinext deploy was shipping apps to Cloudflare Workers with full client hydration. The rest of the week was hardening: fixing edge cases, expanding the test suite, bringing API coverage to 94%.
What changed from those earlier attempts? AI got better. Way better.
Not every project would go this way. This one did because a few things happened to line up at the right time.
Next.js is well-specified. It has extensive documentation, a massive user base, and years of Stack Overflow answers and tutorials. The API surface is all over the training data. When you ask Claude to implement getServerSideProps or explain how useRouter works, it doesn’t hallucinate. It knows how Next works.
Next.js has an elaborate test suite. The Next.js repo contains thousands of E2E tests covering every feature and edge case. We ported tests directly from their suite (you can see the attribution in the code). This gave us a specification we could verify against mechanically.
Vite is an excellent foundation. Vite handles the hard parts of front-end tooling: fast HMR, native ESM, a clean plugin API, production bundling. We didn’t have to build a bundler. We just had to teach it to speak Next.js. @vitejs/plugin-rsc is still early, but it gave us React Server Components support without having to build an RSC implementation from scratch.
The models caught up. We don’t think this would have been possible even a few months ago. Earlier models couldn’t sustain coherence across a codebase this size. New models can hold the full architecture in context, reason about how modules interact, and produce correct code often enough to keep momentum going. At times, I saw it go into Next, Vite, and React internals to figure out a bug. The state-of-the-art models are impressive, and they seem to keep getting better.
All of those things had to be true at the same time. Well-documented target API, comprehensive test suite, solid build tool underneath, and a model that could actually handle the complexity. Take any one of them away and this doesn’t work nearly as well.
Almost every line of code in vinext was written by AI. But here’s the thing that matters more: every line passes the same quality gates you’d expect from human-written code. The project has 1,700+ Vitest tests, 380 Playwright E2E tests, full TypeScript type checking via tsgo, and linting via oxlint. Continuous integration runs all of it on every pull request. Establishing a set of good guardrails is critical to making AI productive in a codebase.
The process started with a plan. I spent a couple of hours going back and forth with Claude in OpenCode to define the architecture: what to build, in what order, which abstractions to use. That plan became the north star. From there, the workflow was straightforward:
Let the AI write the implementation and tests. If tests pass, merge. If not, give the AI the error output and let it iterate.
We wired up AI agents for code review too. When a PR was opened, an agent reviewed it. When review comments came back, another agent addressed them. The feedback loop was mostly automated.
It didn’t work perfectly every time. There were PRs that were just wrong. The AI would confidently implement something that seemed right but didn’t match actual Next.js behavior. I had to course-correct regularly. Architecture decisions, prioritization, knowing when the AI was headed down a dead end: that was all me. When you give AI good direction, good context, and good guardrails, it can be very productive. But the human still has to steer.
For browser-level testing, I used agent-browser to verify actual rendered output, client-side navigation, and hydration behavior. Unit tests miss a lot of subtle browser issues. This caught them.
Over the course of the project, we ran over 800 sessions in OpenCode. Total cost: roughly $1,100 in Claude API tokens.
Why do we have so many layers in the stack? This project forced me to think deeply about this question. And to consider how AI impacts the answer.
Most abstractions in software exist because humans need help. We couldn’t hold the whole system in our heads, so we built layers to manage the complexity for us. Each layer made the next person’s job easier. That’s how you end up with frameworks on top of frameworks, wrapper libraries, thousands of lines of glue code.
AI doesn’t have the same limitation. It can hold the whole system in context and just write the code. It doesn’t need an intermediate framework to stay organized. It just needs a spec and a foundation to build on.
It’s not clear yet which abstractions are truly foundational and which ones were just crutches for human cognition. That line is going to shift a lot over the next few years. But vinext is a data point. We took an API contract, a build tool, and an AI model, and the AI wrote everything in between. No intermediate framework needed. We think this pattern will repeat across a lot of software. The layers we’ve built up over the years aren’t all going to make it.
Thanks to the Vite team. Vite is the foundation this whole thing stands on. @vitejs/plugin-rsc is still early days, but it gave me RSC support without having to build that from scratch, which would have been a dealbreaker. The Vite maintainers were responsive and helpful as I pushed the plugin into territory it hadn’t been tested in before.
We also want to acknowledge the Next.js team. They’ve spent years building a framework that raised the bar for what React development could look like. The fact that their API surface is so well-documented and their test suite so comprehensive is a big part of what made this project possible. vinext wouldn’t exist without the standard they set.
vinext includes an Agent Skill that handles migration for you. It works with Claude Code, OpenCode, Cursor, Codex, and dozens of other AI coding tools. Install it, open your Next.js project, and tell the AI to migrate:
Then open your Next.js project in any supported tool and say:
The skill handles compatibility checking, dependency installation, config generation, and dev server startup. It knows what vinext supports and will flag anything that needs manual attention.
Or if you prefer doing it by hand:
npx vinext init # Migrate an existing Next.js project
npx vinext dev # Start the dev server
npx vinext deploy # Ship to Cloudflare Workers
The source is at github.com/cloudflare/vinext. Issues, PRs, and feedback are welcome.
...
Read the original on blog.cloudflare.com »