10 interesting stories served every morning and every evening.
Microsoft provided the FBI with the recovery keys to unlock encrypted data on the hard drives of three laptops as part of a federal investigation, Forbes reported on Friday.
Many modern Windows computers rely on full-disk encryption, called BitLocker, which is enabled by default. This type of technology should prevent anyone except the device owner from accessing the data if the computer is locked and powered off.
But, by default, BitLocker recovery keys are uploaded to Microsoft’s cloud, allowing the tech giant — and by extension law enforcement — to access them and use them to decrypt drives encrypted with BitLocker, as with the case reported by Forbes.
The case involved several people suspected of fraud related to the Pandemic Unemployment Assistance program in Guam, a U.S. island in the Pacific. Local news outlet Pacific Daily News covered the case last year, reporting that a warrant had been served to Microsoft in relation to the suspects’ hard drives. Kandit News, another local Guam news outlet, also reported in October that the FBI requested the warrant six months after seizing the three laptops encrypted with BitLocker.
A spokesperson for Microsoft did not immediately respond to a request for comment by TechCrunch. Microsoft told Forbes that the company sometimes provides BitLocker recovery keys to authorities, having received an average of 20 such requests per year.
Apart from the privacy risks of handing recovery keys to a company, Johns Hopkins professor and cryptography expert Matthew Green raised the potential scenario where malicious hackers compromise Microsoft’s cloud infrastructure — something that has happened several times in recent years — and get access to these recovery keys. The hackers would still need physical access to the hard drives to use the stolen recovery keys.
“It’s 2026 and these concerns have been known for years,” Green wrote in a post on Bluesky. “Microsoft’s inability to secure critical customer keys is starting to make it an outlier from the rest of the industry.”
...
Read the original on techcrunch.com »
Last year, for the first time, wind and solar supplied more power than fossil fuels to the E.U., according to a new analysis.
The shift is largely due to the rapid expansion of solar energy, which is growing faster than any other source of electricity. Together, wind and solar generated 30 percent of E.U. power last year, while fossil fuels provided 29 percent, according to the analysis from Ember, a think tank based in London. Including hydro, renewables provided nearly half of all E.U. power in 2025.
The analysis finds that solar is making gains in every E.U. country, while coal is broadly in retreat. Last year, solar alone supplied more than 20 percent of power in Hungary, Cyprus, Greece, Spain, and the Netherlands. Meanwhile, in 19 European countries, coal accounted for less than 5 percent of power. In 2025, both Ireland and Finland joined the ranks of European countries that have shuttered their last remaining coal plants.
Warming, however, continues to challenge the shift to clean energy as drought saps hydropower. Last year, hydro output dropped slightly in the E.U., and natural gas power rose to compensate.
“The next priority for the E.U. should be to put a serious dent in reliance on expensive, imported gas,” said Ember analyst Beatrice Petrovich. “Gas not only makes the E.U. more vulnerable to energy blackmail, it’s also driving up prices.”
In parts of Europe, there are signs that increasingly cheap batteries are beginning to displace natural gas in the early evening, when power demand is high, but solar output is waning. Said Petrovich, “As this trend accelerates it could limit how much gas is needed in evening hours, therefore stabilizing prices.”
...
Read the original on e360.yale.edu »
On January 21, 2026, @fredwilson challenged @seth: AI can write code, but it can’t affect the physical world. This is our response. Real corn, grown from seed to harvest, with every decision made by Claude Code.
AI doesn’t need to drive a tractor. It needs to orchestrate the systems and people who do. A farm manager doesn’t personally plant every seed. They aggregate data, make decisions, coordinate contractors. Claude Code becomes that farm manager — 24/7, data-driven, fully documented.
The Process — How this was built
Fred processes emails as part of his daily check routine. Priority given to farmers and operators.
A project by @seth, inspired by @fredwilson, orchestrated by Claude Code (Opus 4.5). Want to help? Iowa land leads, ag expertise, vibe coders welcome: [email protected]
...
Read the original on proofofcorn.com »
1. Design and planning becomes the bottleneck when agents write all the code
2. Buried in the chaos are sketches of future agent orchestration patterns
3. The price is extremely high, but so is the (potential) value
4. Yegge never looks at code. When should we stop looking too? Access to feedback loops and definitions of success
A few weeks ago Steve Yegge published an elaborate manifesto and guide to Gas Town, his Mad-Max-Slow-Horses-Waterworld-etc-themed agent orchestrator that runs dozens of coding agents simultaneously in a metaphorical town of automated activity. Gas Town is entirely vibecoded, hastily designed with off-the-cuff solutions, and inefficiently burning through thousands of dollars a month in API costs.
This doesn’t sound promising, but it’s lit divisive debates and sparks of change across the software engineering community. A small hype machine has formed around it. It’s made the rounds through every engineering team’s Slack, probably twice. There’s somehow already a $GAS meme coin doing over $400k in earnings. And the hype is justified. First, because it’s utterly unhinged, and second because it’s a serious indication of how agents will change the nature of software development from this point on.
You should at least skim through Yegge’s original article before continuing to read my reflections. First, because I’m not going to comprehensively summarise it. And second, because even a one-minute glance over Yegge’s style of writing will make the vibes clear.
We should take Yegge’s creation seriously not because it’s a serious, working tool for today’s developers (it isn’t). But because it’s a good piece of speculative design fiction that asks provocative questions and reveals the shape of constraints we’ll face as agentic coding systems mature and grow.
“Design fiction” or “speculative design” is a branch of design where you create things (objects, prototypes, sketches) from a plausible near future. Not to predict what’s going to happen, but to provoke questions and start conversations about what could happen. Not in a bright-and-glorious-flying-cars way that futurism can sometimes fall into. But, most helpfully, in a way that thinks about banal details, overlooked everyday interactions, low status objects, imperfect implementations, knock-on effects, and inconveniences. See the Near Future Lab’s short explainer video and their Manual of Design Fiction if you want to learn more.
I also think Yegge deserves praise for exercising agency and taking a swing at a system like this, despite the inefficiencies and chaos of this iteration. And then running a public tour of his shitty, quarter-built plane while it’s mid-flight.
When I was taken to the Tate Modern as a child I’d point at Mark Rothko pieces and say to my mother “I could do that”, and she would say “yes, but you didn’t.” Many people have talked about what large-scale, automated agent orchestration systems could look like in a few years, and no one else attempted to sincerely build it.
I should be transparent and say that I have not used Gas Town in earnest on any serious work. I have only lightly poked at it, because I do not qualify as a serious user when I’m still hovering around stages 4-6 in Yegge’s 8 levels of automation:
I currently juggle a handful of consecutive Claude Code and OpenCode agents, but pay close attention to the diffs and regularly check code in an IDE. Which I guess puts me in the agentically conservative camp in this distressingly breakneck moment in history.
Gas Town is a full-on stage 8 piece of tooling: using an orchestrator that manages dozens+ of other coding agents for you. Yegge also warned me not to seriously use Gas Town multiple times, in increasingly threatening typography. I trust his guidance on his own slush pile.
But I have grokked the basic concepts and spent more time with this manifesto than is warranted. And here is what stood out to me from the parts I could comprehend:
When you have a fat stack of agents churning through code tasks, development time is no longer the bottleneck. Yegge says “Gas Town churns through implementation plans so quickly that you have to do a LOT of design and planning to keep the engine fed.” Design becomes the limiting factor: imagining what you want to create and then figuring out all the gnarly little details required to make your imagination into reality.
I certainly feel this friction in both my own professional work and personal projects. My development velocity is far slower than Yegge since I only wrangle a few agents at a time and keep my eyes and hands on the code. But the build time is rarely what holds me up. It is always the design; how should we architect this? What should this feel like? How should this look? Is that transition subtle enough? How composable should this be? Is this the right metaphor?
When it’s not the design, it’s the product strategy and planning; What are the highest priority features to tackle? Which piece of this should we build first? When do we need to make that decision? What’s the next logical, incremental step we need to make progress here?
These are the kind of decisions that agents cannot make for you. They require your human context, taste, preferences, and vision.
With agents to hand, it’s easy to get ahead of yourself, stumbling forward into stacks of generated functions that should never have been prompted into existence, because they do not correctly render your intentions or achieve your goals.
Gas Town seems to be halfway into this pitfall. The biggest flaw in Yegge’s creation is that it is poorly designed. I mean this in the sense that he absolutely did not design the shape of this system ahead of time, thoughtfully considering which metaphors and primitives would make this effective, efficient, easy to use, and comprehensible.
He just made stuff up as he went. He says as much himself: “Gas Town is complicated. Not because I wanted it to be, but because I had to keep adding components until it was a self-sustaining machine.” Gas Town is composed of “especially difficult [theories] because it’s a bunch of bullshit I pulled out of my arse over the past 3 weeks, and I named it after badgers and stuff.” It was slapdashed together over “17 days, 75k lines of code, 2000 commits. It finally got off the ground (GUPP started working) just 2 days ago.”
This Hacker News comment from qcnguy describes the problem well, and points out that Yegge’s previous Beads project, of which Gas Town is an extension, suffers the same issue:
“Beads is a good idea with a bad implementation. It’s not a designed product in the sense we are used to, it’s more like a stream of consciousness converted directly into code. It’s a program that isn’t only vibe coded, it was vibe designed too.”
“Gas Town is clearly the same thing multiplied by ten thousand. The number of overlapping and ad hoc concepts in this design is overwhelming. Steve is ahead of his time but we aren’t going to end up using this stuff. Instead a few of the core insights will get incorporated into other agents in a simpler but no less effective way.”
Or this review from astrra.space on Bluesky:
“gas town [is] such a nightmare to use i love it… the mayor is dumb as rocks the witness regularly forgets to look at stuff the deacon makes his own rules the crew have the object permanence of a tank full of goldfish and the polecats seem intent on wreaking as much chaos on the project as they can. this is peak entertainment i swear”
Friends and colleagues of mine who have been brave enough to try out Gas Town in more depth report the same thing; this thing fits the shape of Yegge’s brain and no one else’s. I’d categorise that as a moderate design fail, given this is a public product that I assume Yegge wants at least some people to try out. The onboarding is baptism by fire.
This feels like one of the most critical, emerging footguns of liberally hands-off agentic development. You can move so fast you never stop to think. It is so easy to prompt, you don’t fully consider what you’re building at each step of the process. It is only once you are hip-deep in poor architectural decisions, inscrutable bugs, and a fuzzy memory of what you set out to do that you realise you have burned a billion tokens in exchange for a pile of hot trash.
Now that I’ve just critiqued the design of Gas Town, I will turn around and say that while the current amalgamation of polecats, convoys, deacons, molecules, protomolecules, mayors, seances, hooks, beads, witnesses, wisps, rigs, refineries, and dogs is a bunch of undercooked spaghetti, Yegge’s patterns roughly sketch out some useful conceptual shapes for future agentic systems.
If you step back and squint, this mishmash of concepts reveals a few underlying patterns that future agentic systems will likely follow:
Every agent in Gas Town has a permanent, specialised role. When an agent spins up a new session, it knows who it is and what job it needs to do. Some examples:
* The Mayor is the human concierge: it’s the main agent you talk to. It talks to all the other agents for you, kicking off work, receiving notifications when things finish, and managing the flow of production.
* Polecats are temporary grunt workers. They complete single, isolated tasks, then disappear after submitting their work to be merged.
* The Witness supervises the Polecats and helps them get unstuck. Its job is to solve problems and nudge the proletariat workers along.
* The Refinery manages the merge queue into the main branch. It evaluates each piece of work waiting to be merged, resolving conflicts in the process. It can creatively “re-imagine” implementations if merge conflicts get too hairy, while trying to keep the intent of the original work.
There are many more characters in this town, but these give you a flavour of the system. Giving each agent a single job means you can prompt them more precisely, limit what they’re allowed to touch, and run lots of them at once without them stepping on each other’s toes.
There’s also a clear chain of command between these agents. You talk to the Mayor, who coordinates work across the system. The Mayor in Gas Town never writes code. It talks to you, then creates work tasks and assigns them to workers. A set of system supervisors called the Witness, the Deacon, and “Boot the Dog” intermittently nudge the grunt workers and each other to check everyone is doing their work. Oh and there’s also a crew of “dogs” who do maintenance and cleaning.
It’s easier if I try and show you. Here’s the basic relationship structure of Gas Town, as best I can make out:
Since I’m making my own visuals here, I should justify it by pointing out that while Yegge made lots of his own ornate, zoomorphic diagrams of Gas Town’s architecture and workflows, they are unhelpful. Primarily because they were made entirely by Gemini’s Nano Banana. And while Nano Banana is state-of-the-art at making images, generative AI systems are still really shit at making illustrative diagrams. They are very hard to decipher, filled with cluttered details, have arrows pointing the wrong direction, and are often missing key information. Case in point:
Does this help you understand how the system works? No? No.
Gas Town’s hierarchical approach solves both a coordination and attention problem. Without it, you are the one assigning tasks to dozens of individual agents, checking who’s stuck, who’s idle, and who’s waiting on work from someone else. With the Mayor as your single interface, that overhead disappears. You can continuously talk to the Mayor without interrupting any agents, getting in the way, or having to think much about which one is doing what. This is less cognitive overhead than constantly switching tabs between Claudes.
I think there’s a lot of opportunity to diversify the cast of characters here and make more use of specialist subagents. The agents in Gas Town are all generalist workers in the software development pipeline. But we could add in any kind of specialist we want: a DevOps expert, a product manager, a front-end debugger, an accessibility checker, a documentation writer. These would be called in on-demand to apply their special skills and tools.
One of the major limitations of current coding agents is running out of context. Before you even hit the limits of a context window, context rot degrades the output enough that it’s not worth keeping. We constantly have to compact or start fresh sessions.
Gas Town’s solution to this is to make each agent session disposable by design. It stores the important information — agent identities and tasks — in Git, then liberally kills off sessions and spins up fresh ones when needed. New sessions are told their identity and currently assigned work, and continue on where the last one left off. Gas Town also lets new sessions ask their predecessors what happened through “seancing”: resuming the last session as a separate instance in order to let the new agent ask questions about unfinished work.
This saving and recalling is all done through Gas Town’s “Beads” system. Beads are tiny, trackable units of work — like issues in an issue tracker — stored as JSON in Git alongside your code. Each bead has an ID, description, status, and assignee. Agent identities are also stored as beads, giving each worker a persistent address that survives session crashes.
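To make the shape of that concrete, here is a minimal sketch in Go of what a bead-like record could look like. The struct and field names are my own illustrative guesses, not Yegge’s actual Beads schema:

package main

import (
	"encoding/json"
	"fmt"
)

// Bead is a hypothetical shape for one trackable unit of work,
// stored as JSON in Git alongside the code. Field names are
// illustrative guesses, not the real Beads format.
type Bead struct {
	ID          string `json:"id"`
	Description string `json:"description"`
	Status      string `json:"status"`   // e.g. "queued", "in_progress", "done"
	Assignee    string `json:"assignee"` // persistent agent identity
}

func main() {
	b := Bead{
		ID:          "bd-042",
		Description: "Fix flaky merge-queue test",
		Status:      "queued",
		Assignee:    "polecat-7",
	}
	out, _ := json.MarshalIndent(b, "", "  ")
	fmt.Println(string(out)) // this JSON blob is what would be committed to Git
}

The point is less the exact fields and more that the unit of work lives in version control, outside any one agent’s context window, so any fresh session can pick it up.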
Yegge didn’t invent this pattern of tracking atomic tasks outside agent memory in something structured like JSON. Anthropic described the same approach in their research on effective harnesses for long-running agents, just published in November 2025. I give it a hot minute before this type of task tracking lands in Claude Code.
The whole promise of an orchestration system like Gas Town is that it’s a perpetual motion machine. You give high-level orders to the mayor, and then a zoo of agents kicks off to break it down into tasks, assign them, execute them, check for bugs, fix the bugs, review the code, and merge it in.
Each worker agent in Gas Town has its own queue of assigned work and a “hook” pointing to the current thing they should be doing. The minute they finish a task, the next one jumps to the front of the queue. The mayor is the one filling up these queues — it’s in charge of breaking down large features into atomic tasks and assigning them to available workers. In theory, the workers are never idle or lacking tasks, so long as you keep feeding the mayor your grand plans.
This principle of “workers always do their work” is better in theory than practice. It turns out to be slightly difficult to make happen because of the way current models are trained. They’re designed as helpful assistants who wait politely for human instructions. They’re not used to checking a task queue and independently getting on with things.
Gas Town’s patchwork solution to this is aggressive prompting and constant nudging. Supervisor agents spend their time poking workers to see if anyone’s stalled out or run dry on work. When one goes quiet, they send it a ping which jolts the agent into checking its queue and getting back to work. These periodic nudges move through the agent hierarchy like a heartbeat keeping everything moving. This is a decent band-aid for the first version, but more serious efforts at agent orchestration systems will need reliable ways to keep agents on task.
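As a rough sketch of that queue-and-heartbeat shape, here is a toy version in Go. The Worker type, the hook field, and the nudge logic are my own illustrative assumptions, not Gas Town’s real internals:

package main

import (
	"fmt"
	"time"
)

// Worker models an agent with a queue of bead IDs and a "hook"
// pointing at its current task. Purely illustrative.
type Worker struct {
	Name  string
	Queue []string // pending bead IDs
	Hook  string   // bead currently being worked on; "" means idle
}

// nudge is the heartbeat: if a worker is idle but has queued work,
// jolt it into pulling the next task onto its hook.
// (A real system would also clear the hook when a bead completes.)
func nudge(w *Worker) {
	if w.Hook == "" && len(w.Queue) > 0 {
		w.Hook, w.Queue = w.Queue[0], w.Queue[1:]
		fmt.Printf("%s: picked up %s\n", w.Name, w.Hook)
	}
}

func main() {
	workers := []*Worker{
		{Name: "polecat-1", Queue: []string{"bd-042", "bd-043"}},
		{Name: "polecat-2"},
	}
	// Supervisors poll on a timer, like the Witness poking stalled agents.
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for i := 0; i < 3; i++ {
		<-tick.C
		for _, w := range workers {
			nudge(w)
		}
	}
}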
When you have a bunch of agents all working in parallel, you’re of course going to run into merge conflicts. Each agent is off on its own branch, and by the time it finishes its task, the main branch might look completely different — other changes have landed, the code has moved on. The later an agent finishes, the worse this gets. Normally you, the human, take on the burden of sorting out the mess and deciding which changes to keep. But if agents are running on their own, something has to do that job for them.
So Gas Town has a dedicated merge agent — the Refinery — that works through the merge queue one change at a time. It looks at each merge request, resolves any conflicts, and gets it into main. When things get really tangled — when so much has changed that the original work doesn’t even make sense anymore — it can creatively “re-imagine” the changes: re-doing the work to fit the new codebase. Or escalate to a human if needed.
But there’s another way to sidestep merge conflict nightmares that Gas Town doesn’t have built in: ditch PRs for stacked diffs. The traditional git workflow puts each feature on its own branch for days or weeks, accumulating commits, then getting merged back as one chunky PR.
Stacked diffs avoid this conflict-prone approach by breaking work into small, atomic changes that each get reviewed and merged on their own, building on top of each other. Every change gets its own branch, forked off the previous change, forming a “stack” of changes dependent on one another. When a change earlier in the stack gets updated, all the changes below it automatically rebase on top of the new version.
This fits how agents naturally work. They’re already producing tiny, focused changes rather than sprawling multi-day branches. When conflicts do pop up, they’re easier to untangle because each diff touches less code. Cursor’s recent acquisition of Graphite, a tool built specifically for stacked diff workflows, suggests I am not the only one who sees this opportunity. When you’ve got dozens of agents landing changes continuously, you need tools and interfaces specifically designed for these frequent, incremental merges.
Yegge describes Gas Town as “expensive as hell… you won’t like Gas Town if you ever have to think, even for a moment, about where money comes from.” He’s on his second Claude account to get around Anthropic’s spending limits.
I can’t find any mention online of the per-account limits, but let’s conservatively assume he’s spending at least $2,000 USD per month, and liberally $5,000.
The current cost is almost certainly artificially inflated by system inefficiency. Work gets lost, bugs get fixed numerous times, designs go missing and need redoing. As models improve and orchestration patterns mature, the cost of orchestrators like this should drop while output quality rises.
I expect companies would happily pay around the $1-3k/month mark for a sane, understandable, higher quality, and lower waste version of Gas Town. Maybe that sounds absurd to you, given we’ve all become anchored to the artificially low rate of $100-200/month for unlimited usage by the major providers. But once the AI bubble pops, the VC funds dry up, and providers have to charge the true cost of inference at scale, we should expect that “unlimited” tier to look a lot pricier.
Even when that comes to pass, a few thousand is pretty reasonable when you compare it to an average US senior developer salary: $120,000 USD. If Gas Town could genuinely speed up the work of a senior developer by 2-3x or more, it would easily be worth 10-30% of their salary. The cost per unit of valuable work starts to look competitive with human labor.
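For scale: $1,000 to $3,000 a month works out to $12,000 to $36,000 a year, which is 10 to 30 percent of that $120,000 salary.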
The maths on paying for something like this is already defensible in wealthier places like the US and parts of Western Europe. In spots where developer salaries are lower, we would expect the budget for AI-assisted tools to adjust accordingly. They’ll get less crazy scaled automation and more conservative usage with humans filling in the cognitive gaps.
Yegge is leaning into the true definition of vibecoding with this project: “It is 100% vibecoded. I’ve never seen the code, and I never care to.” Not looking at code at all is a very bold proposition, today, in January 2026.
Given the current state of models and the meagre safeguards we have in place around them, the vast majority of us would consider this blind coding approach irresponsible and daft to do on anything that isn’t a throwaway side project. Which, given the amount of effort and Claude tokens Yegge has sunk into building it, writing documentation, and publicly promoting it, Gas Town is not.
“Should developers still look at code?” will become one of the most divisive and heated debates over the coming years. You might be offended by the question, and find it absurd anyone is asking. But it’s a sincere question and the answer will change faster than you think.
I’m already seeing people divide along moralistic, personal identity lines as they try to answer it. Some declare themselves purist, AI skeptic, Real Developers who check every diff and hand-adjust specific lines, sneering at anyone reckless enough to let agents run free. While others lean into agentic maximalism, directing fleets from on high and pitying the mass of luddites still faffing about with manual edits like it’s 2019. Both camps mistake a contextual judgement for a personality trait and firm moral position.
A more conservative, easier debate to consider is: how close should the code be in agentic software development tools? How easy should it be to access? How often do we expect developers to edit it by hand?
Interfaces like Claude Code, Cursor, and Conductor do not put code front and centre in the experience. The agent is your first and primary interface. You might be able to see diffs roll by or display code files inline, but you can’t touch them. Trying to edit code yourself is a roundabout journey of opening your IDE and navigating to the correct files and lines.
This design choice assumes it is easier to ask an agent to make the change for you than it is to type out the syntax yourself. They clearly say “we don’t believe users need to touch code.”
Framing this debate as an either/or — either you look at code or don’t, either you edit code by hand or you exclusively direct agents, either you’re the anti-AI-purist or the agentic-maxxer — is unhelpful. Because nothing is a strict binary.
The right distance isn’t about what kind of person you are or what you believe about AI capabilities in the current moment. How far away you step from the syntax shifts based on what you’re building, who you’re building with, and what happens when things go wrong. The degree of freedom you hand over to agents depends on:
Front-end versus backend makes a huge difference. Language is a poor medium for designing easing curves and describing aesthetic feelings — I always need to touch the CSS, and it’s often faster to just tweak directly than try to explain what I want. Yegge’s CLI tooling is much easier to validate with pass/fail tests than evaluating whether a notification system “feels calm enough”. Model competence also varies wildly by language; prompting React and Tailwind gives you much better results than Rust or Haskell, where the models still regularly choke on borrow checkers and type systems.
The more agents can validate their own work, the better the results. If you let agents run tests and see the output, they quickly learn what’s broken and how to fix it. If you let them open browsers, take screenshots, and click around, they can spot their mistakes. Tools like the Ralph Wiggum plugin lean into this — it loops until tests pass or some specific condition is validated. This doesn’t work for less defined, less clear-cut work though. If you try to make an agent design a visual diagram for you, it’s going to struggle. It doesn’t know your aesthetic preferences and can’t really “see” what it’s making.
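Here is a minimal Go sketch of that kind of feedback loop, assuming a hypothetical runAgent function standing in for whatever invokes your coding agent (this is not the actual Ralph Wiggum plugin):

package main

import (
	"fmt"
	"os/exec"
)

// fixUntilGreen sketches the loop-until-tests-pass pattern: run the
// suite, hand the failure output back to the agent, repeat until the
// tests pass or we give up. runAgent is a hypothetical stand-in.
func fixUntilGreen(runAgent func(prompt string) error, maxTries int) error {
	for i := 0; i < maxTries; i++ {
		out, err := exec.Command("go", "test", "./...").CombinedOutput()
		if err == nil {
			return nil // tests pass: the loop's definition of success
		}
		// Feed the concrete failure output back as the next prompt.
		if aerr := runAgent("fix these test failures:\n" + string(out)); aerr != nil {
			return aerr
		}
	}
	return fmt.Errorf("still failing after %d attempts", maxTries)
}

func main() {
	// Dummy agent that just reports what it was asked to do.
	err := fixUntilGreen(func(prompt string) error {
		fmt.Printf("agent prompted with %d bytes of feedback\n", len(prompt))
		return nil
	}, 3)
	if err != nil {
		fmt.Println(err)
	}
}

The key design point is that the loop’s exit condition is an externally checkable signal (a passing test suite), which is exactly what vaguer work like “design a nice diagram” lacks.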
Stakes matter. If an agent breaks some images on your personal blog, you’ll recover. But if you’re running a healthcare system where a bug could miscalculate drug dosages, or a banking app moving actual money around, you can’t just wave an agent at it and hope. Consequences scale up fast. Corporate software has people whose entire job is compliance and regulatory sign-off — they need to see the code, understand it, verify it meets requirements. Those people aren’t going to let you wildly run Gas Town over projects without serious guardrails in place.
“Gas Town sounds fun if you are accountable to nobody: not for code quality, design coherence or inferencing costs. The rest of us are accountable for at least the first two and even in corporate scenarios where there is a blank check for tokens, that can’t last. So the bottleneck is going to be how fast humans can review code and agree to take responsibility for it.”
Starting fresh (greenfield), means you can let agents make architectural decisions and establish patterns — if you don’t like them, you can easily throw it out and restart. The cost of mistakes is low. But in an existing codebase (brownfield) with years of accumulated conventions, implicit patterns, and code that exists for reasons nobody remembers anymore, agents need much tighter supervision. They’ll happily introduce a new pattern that contradicts the three other ways this codebase already solves the same problem.
If you’re solo of course you can YOLO. If you’re working with more than a handful of people, you’ll have to agree on coding standards and agent rules. This creates its own overhead: updating the AGENTS.md file, picking MCPs, writing commands and skills and rules and whatever else we invent to constrain these things. The pace of change when you’re all using agents can be overwhelming and you need to figure out a sensible reviewing pipeline to manage it. Team coordination can fall apart when everyone’s agents start moving too fast. You might show up in the morning and discover someone’s agent renamed the database schema while another agent refactored the whole API layer, and neither of which jive with your giant, unmerged feature.
More senior developers can prompt better, debug better, and set up more stringent preferences earned through decades of seeing what can go wrong in scaled, production environments. They can recognize patterns: “oh, that’s a memory leak” or “that’s going to deadlock under load.” Newer developers don’t have that catalog of failures yet and are much more likely to prompt their own personal house of cards. The tests might pass and everything looks fine until you hit production traffic or someone enters a weird character. It is hard to defend against unknown unknowns.
Given all these “it depends” considerations, I’m currently in the code-must-be-close camp for most serious work done by professional developers. But I expect I’ll shift to the code-at-a-distance camp over the next year or two as the harnesses and tools we wrap around these agents mature. If we can ship them with essential safeguards and quality gates, the risks drop. Sure, the models will also improve, but the infrastructure matters far more: validation loops, tests, and specialised subagents who focus on security, debugging, and code quality are what will make code-at-a-distance feasible.
We have many, continuous versions of the code distance debate internally at GitHub Next. One of the projects within the team driving this is Agentic Workflows — autonomous agents run through GitHub Actions in response to events: new PRs, new issues, or specific times of day. Every commit can trigger a security review agent, an accessibility audit, and a documentation updater, all running in parallel alongside traditional CI/CD tests before anything lands in main. The team building it rarely touches code and does most of their work by directing agents from their phones. It’s these kinds of guardrails that make a hands-off Sim-City-esque orchestrator system feel less terrifying to me.
I don’t believe Gas Town itself is “it”. It’s not going to evolve into the thing we all use, day in and day out, in 2027. As I said, it’s a provocative piece of speculative design, not a system many people will use in earnest. In the same way any poorly designed object or system gets abandoned, this manic creation is too poorly thought through to persist. But the problems it’s wrestling with and the patterns it has sketched out will unquestionably show up in the next generation of development tools.
As the pace of software development speeds up, we’ll feel the pressure intensify in other parts of the pipeline: thoughtful design, critical thinking, user research, planning and coordination within teams, deciding what to build, and whether it’s been built well. The most valuable tools in this new world won’t be the ones that generate the most code fastest. They’ll be the ones that help us think more clearly, plan more carefully, and keep the quality bar high while everything accelerates around us.
...
Read the original on maggieappleton.com »
Most PCs tend to boot from primary storage media, be it a hard disk drive or a solid-state drive, perhaps from a network, or — if all else fails — the USB stick or the boot DVD comes to the rescue… Fun, eh? Boring! Why don’t we try to boot from a record player for a change?
Update February 2022: Click here to observe the very same vinyl ramdisk booted on an IBM PCjr!
So this nutty little experiment connects a PC, or an IBM PC to be exact, directly onto a record player through an amplifier. I made a small ROM on-chip boot loader that operates the built-in “cassette interface” of the PC (that was hardly ever used), which will now be invoked by the BIOS if all the other boot options fail, i.e. floppy disk and the hard drive. The turntable spins an analog recording of a small bootable read-only RAM drive, which is 64K in size. This contains a FreeDOS kernel, modified by me to cram it into the memory constraint, a micro variant of COMMAND.COM and a patched version of INTERLNK, that allows file transfer through a printer cable, modified to be runnable on FreeDOS. The bootloader reads the disk image from the audio recording through the cassette modem, loads it to memory and boots the system on it. Simple huh?
The vinyl loader code, in a ROM
(It can also reside on a hard drive or a floppy, but that’d be cheating)
And now to get more technical: this is basically a merge between BootLPT/86 and 5150CAXX, minus the printer port support. It also resides in a ROM, in the BIOS expansion socket, but it does not have to. The connecting cable between the PC and the record player amplifier is the same as with 5150CAXX, just without the line-in (PC data out) jack.
The “cassette interface” itself is just PC speaker timer channel 2 for the output, and 8255A-5 PPI port C channel 4 (PC4, I/O port 62h bit 4) for the input. BIOS INT 15h routines are used for software (de)modulation.
The boot image is the same 64K BOOTDISK.IMG “example” RAM drive that can be downloaded at the bottom of the BootLPT article. This has been turned into an “IBM cassette tape”-protocol compliant audio signal using 5150CAXX, and sent straight to a record cutting lathe.
Vinyls are cut with an RIAA equalization curve that a preamp usually reverses during playback, but not perfectly. So some signal correction had to be applied from the amplifier, as I couldn’t make it work right with the line output straight from the phono preamp. In my case, involving a vintage Harman&Kardon 6300 amplifier with an integrated MM phono preamp, I had to fade the treble all the way down to -10dB/10kHz, increase bass equalization to approx. +6dB/50Hz and reduce the volume level to approximately 0.7 volts peak, so it doesn’t distort. All this, naturally, with any phase and loudness correction turned off.
Of course, the cassette modem does not give a hoot in hell about where the signal is coming from. Notwithstanding, the recording needs to be pristine and contain no pops or loud crackles (vinyl) or modulation/frequency drop-outs (tape) that will break the data stream from continuing. However, some wow is tolerated, and the speed can be 2 or 3 percent higher or lower too.
Bootloader in a ROM; being an EPROM for good measure
And that’s it! For those interested, the bootloader binary designed for a 2364 chip (2764s can be used, through an adaptor), can be obtained here. It assumes an IBM 5150 with a monochrome screen and at least 512K of RAM, which kind of reminds me of my setup (what a coincidence). The boot disk image can be obtained at the bottom of the BootLPT/86 article, and here’s its analog variant, straight from the grooves 🙂
...
Read the original on boginjr.com »
Tesla was told that if it couldn’t resolve the deceptive marketing within those 60 days, the sales suspension would take effect. That would be bad for the automaker, as California is far and away its largest market in the US, albeit one that is shrinking each quarter. Having to suspend sales entirely in the state would be disastrous. Some had speculated that Tesla could change Autopilot’s name to something less misleading, but the company chose a more drastic approach.
Now, if you buy a new Tesla and want it to steer itself—while you pay attention to the road—you will have to pay for FSD. Until the middle of February, that can be done for a one-time fee of $8,000. But starting on February 14, that option goes away, too, and the sole choice will be a $99/month FSD subscription.
But probably not for very long. Last night, Musk revealed on his social media platform that “the $99/month for supervised FSD will rise as FSD’s capabilities improve. The massive value jump is when you can be on your phone or sleeping for the entire ride (unsupervised FSD).”
The quest for recurring revenue streams is becoming something of a holy grail in the automotive industry as OEMs that previously treated their customers as a single sale now hope to make themselves more attractive to investors by encouraging customers to give them regular payouts.
This may have contributed to General Motors’ decision to drop Apple CarPlay and Android Auto. BMW has also experimented with subscription services. Tesla’s stock price remains so high that such games are probably unnecessary here, but with falling profit margins, declining sales, and the loss of emissions credits to bolster the bottom line, one can see why regular cash infusions from Tesla drivers would be desirable.
...
Read the original on arstechnica.com »
It’s weird to see Docker Inc (the company) struggle to find its place in 2026. What started as the company that revolutionized how we deploy applications has been through multiple identity crises, pivoting from one strategy to another in search of sustainable revenue and market relevance.
Docker’s journey reads like a startup trying to find product-market fit, except Docker already had product-market fit - they created the containerization standard that everyone uses. The problem is that Docker the technology became so successful that Docker the company struggled to monetize it. When your core product becomes commoditized and open source, you need to find new ways to add value.
Docker Swarm was Docker’s attempt to compete with Kubernetes in the orchestration space. But Kubernetes won that battle decisively, and Docker eventually sold Swarm. This was a clear signal that Docker was stepping back from trying to be the full-stack container platform and instead focusing on what they could uniquely provide.
For a while, Docker seemed to focus on developer experience. This made sense - developers are Docker’s core users, and improving their workflow could be a differentiator. Docker Scout emerged from the acquisition of Atomist in June 2022, bringing “software supply chain” capabilities. Scout allows Docker to see not just what’s in a container, but how it was built and where vulnerabilities are. This was a smart move toward security and observability, areas where Docker could add real value.
Docker also acquired AtomicJar, the company behind Testcontainers, adding shift-left testing capabilities. Testcontainers lets developers run real dependencies (databases, message queues, etc.) in containers during testing, making integration tests more reliable and closer to production environments.
Then came the AI pivot. Docker Model Runner entered the scene, positioning Docker as a platform for running AI models. Docker Compose expanded to support AI agents and models. Docker Offload was introduced for cloud-scale GPU execution of AI tasks. Partnerships with Google Cloud, Microsoft Azure, and AI SDKs (CrewAI, LangGraph, Vercel AI SDK) followed.
The acquisition of MCP Defender in September 2025 further cemented Docker’s move into AI security, focusing on securing agentic AI infrastructure and runtime threat detection. This was a significant shift - from developer tools to AI infrastructure.
Suddenly, Docker moved into the hardened images space. In December 2025, Docker made over 1,000 Docker Hardened Images free and open source under Apache 2.0, reducing vulnerabilities by up to 95% compared to traditional images. This move was likely triggered by Chainguard’s success in the secure container image space. Chainguard had been building a business around minimal, secure container images, and Docker needed to respond.
Making hardened images free was a bold move - it’s hard to compete with free, especially when it’s open source. But it also raises questions about Docker’s business model. If you’re giving away your security features for free, what are you selling?
In February 2025, Docker replaced CEO Scott Johnston (who led the company since 2019) with Don Johnson, a former Oracle Cloud Infrastructure founder and executive vice president. This leadership transition has prompted tech analysts to anticipate a potential acquisition by a major cloud provider. The CEO swap, combined with the strategic pivots, suggests Docker may be positioning itself for sale rather than building a standalone business.
Docker’s strategic shifts tell a story of a company searching for its place in a market it helped create. The containerization technology Docker pioneered became so successful that it became infrastructure - something everyone uses but no one wants to pay for directly.
The pivots from orchestration (Swarm) to developer tools (Scout, Testcontainers) to AI (Model Runner, MCP Defender) to security (Hardened Images) show a company trying different approaches to find sustainable revenue. Each pivot makes sense in isolation, but together they paint a picture of a company without a clear long-term vision.
The hardened images move is particularly interesting because it’s defensive - responding to Chainguard’s success rather than leading with innovation. Making it free and open source is a strong competitive move, but it doesn’t solve the fundamental business model question.
Docker the technology isn’t going anywhere. It’s too embedded in the infrastructure of modern software development. But Docker the company? That’s less clear. The leadership change, acquisition speculation, and rapid strategic pivots suggest Docker Inc may be positioning itself for an exit rather than building a long-term independent business.
For developers, this doesn’t change much. Docker containers will continue to work, and the open source nature of Docker means the technology will persist regardless of what happens to the company. But it’s worth watching how Docker Inc’s search for identity plays out - it could affect the ecosystem of tools and services built around containers.
The irony is that Docker created a standard so successful that it became infrastructure, and infrastructure is hard to monetize. Docker Inc’s struggle to find its place is a cautionary tale about the challenges of building a business around open source technology that becomes too successful.
...
Read the original on tuananh.net »
Local Area Network discovery tool with a modern Terminal User Interface (TUI) written in Go. Discover, explore, and understand your LAN in an intuitive way.
Whosthere performs unprivileged, concurrent scans using mDNS and SSDP scanners. Additionally, it sweeps the local subnet by attempting TCP/UDP connections to trigger ARP resolution, then reads the ARP cache to identify devices on your Local Area Network. This technique populates the ARP cache without requiring elevated privileges. All discovered devices are enhanced with OUI lookups to display manufacturers when available.
Whosthere provides a friendly, intuitive way to answer the question every network administrator asks: “Who’s there on my network?”
* No Elevated Privileges Required: Runs entirely in user-space.
* Integrated Port Scanner: Optional service discovery on found hosts (only scan devices with permission!).
* Daemon Mode with HTTP API: Run in the background and integrate with other tools.
* Theming & Configuration: Personalize the look and behavior via YAML configuration.
brew tap ramonvermeulen/whosthere
brew install whosthere
Or with Go:
go install github.com/ramonvermeulen/whosthere@latest
git clone https://github.com/ramonvermeulen/whosthere.git
cd whosthere
make build
whosthere
whosthere daemon --port 8080
Additional command line options can be found by running:
whosthere --help
Whosthere is supported on the following platforms:
* macOS
* Linux
* Windows (maybe in the future, contributions welcome!)
Whosthere can be configured via a YAML configuration file. By default, it looks for the configuration file in the following order:
* Path specified in the WHOSTHERE_CONFIG environment variable (if set)
Example of the default configuration file:
# How often to run discovery scans
scan_interval: 20s
# Maximum duration for each scan
# If you set this too low some scanners or the sweeper might not complete in time
scan_duration: 10s
# Splash screen configuration
splash:
  enabled: true
  delay: 1s
# Theme configuration
theme:
  # Configure the theme to use for the TUI, complete list of available themes at:
  # https://github.com/ramonvermeulen/whosthere/tree/main/internal/ui/theme/theme.go
  # Set name to "custom" to use the custom colors below
  # For any color that is not configured it will take the default theme value as fallback
  name: default
  # Custom theme colors (uncomment and set name: custom to use)
  # primitive_background_color: "#000a1a"
  # contrast_background_color: "#001a33"
  # more_contrast_background_color: "#003366"
  # border_color: "#0088ff"
  # title_color: "#00ffff"
  # graphics_color: "#00ffaa"
  # primary_text_color: "#cceeff"
  # secondary_text_color: "#6699ff"
  # tertiary_text_color: "#ffaa00"
  # inverse_text_color: "#000a1a"
  # contrast_secondary_text_color: "#88ddff"
# Scanner configuration
scanners:
  mdns:
    enabled: true
  ssdp:
    enabled: true
  arp:
    enabled: true
# Port scanner configuration
port_scanner:
  timeout: 5s
  # List of TCP ports to scan on discovered devices
  tcp: [21, 22, 23, 25, 80, 110, 135, 139, 143, 389, 443, 445, 993, 995, 1433, 1521, 3306, 3389, 5432, 5900, 8080, 8443, 9000, 9090, 9200, 9300, 10000, 27017]
# Uncomment the next line to configure a specific network interface - uses OS default if not set
# network_interface: lo0
When running Whosthere in daemon mode, it exposes a very simplistic HTTP API with the following endpoints:
The theme can be configured via the configuration file, or at runtime via the CTRL+t key binding. A complete list of available themes can be found here; feel free to open a PR to add your own theme!
theme:
  name: cyberpunk
When the name is set to custom, the other color options can be used to create your own custom theme.
Logs are written to the application’s state directory:
When not running in TUI mode, logs are also output to the console.
For clipboard functionality to work, a fork of go-clipboard is used. Ensure you have the appropriate copy tool installed for your OS:
Whosthere is intended for use on networks where you have permission to perform network discovery and scanning, such as your own home network. Unauthorized scanning of networks may be illegal and unethical. Always obtain proper authorization before using this tool on any network.
Contributions and suggestions such as feature requests, bug reports, or improvements are welcome! Feel free to open issues or submit pull requests on the GitHub repository. Please make sure to discuss any major changes in a GitHub issue before implementing them.
...
Read the original on github.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.