10 interesting stories served every morning and every evening.
This Alberta Startup Sells No-Tech Tractors for Half Price
Four hundred inquiries from American farmers poured in after a single interview. Not for a John Deere. Not for a Case IH. For a tractor built in Alberta with a remanufactured 1990s diesel engine and zero electronics.
Ursa Ag, a small Canadian manufacturer, is assembling tractors powered by 12-valve Cummins engines — the same mechanically injected workhorses that powered combines and pickup trucks decades ago — and selling them for roughly half the price of comparable machines from established brands. The 150-horsepower model starts at $129,900 CAD, about $95,000 USD. The range-topping 260-hp version runs $199,900 CAD, around $146,000.
Try finding a similarly powered John Deere for that money.
Owner Doug Wilson isn’t pretending this is cutting-edge technology. That’s the entire point. The 150-hp and 180-hp models use remanufactured 5.9-liter Cummins engines, while the 260-hp gets an 8.3-liter unit.
All are fed by Bosch P-pumps — purely mechanical fuel injection, no ECU, no proprietary software handshake required. The cabs are sourced externally and stripped to essentials: an air ride seat, mechanically connected controls, and nothing resembling a touchscreen.
This plays directly into a fight that has been simmering for years. John Deere’s right-to-repair battles became a national story when farmers discovered they couldn’t fix their own equipment without dealer-authorized software. Lawsuits followed, then legislation.
Deere eventually made concessions, but the damage was done. A generation of farmers learned exactly how much control they’d surrendered by buying machines loaded with proprietary code.
Wilson saw the gap and drove a tractor through it. The 12-valve Cummins is arguably the most widely understood diesel engine in North America. Every independent shop, every shade-tree mechanic with a set of wrenches, every farmer who grew up turning bolts has encountered one.
Parts sit on shelves in thousands of stores. Downtime — the thing that actually costs a farmer money during planting or harvest — shrinks dramatically when you don’t need a factory technician with a laptop to diagnose a fuel delivery problem.
Ursa Ag’s dealer network remains tiny, and the company sells direct. Wilson admitted they haven’t scaled up distribution because they can’t keep shelves stocked as it stands. He says 2026 production will exceed the company’s entire cumulative output, which is a bold claim from a small operation, and whether they can actually deliver is the single biggest question hanging over this story.
The U.S. market is where things get interesting. Ursa Ag has no American distributors yet, though Wilson says that’s likely to change. “The easiest answer is yes, we can ship to the United States,” he told reporters.
Those 400 American inquiries after one Farms.com segment suggest the appetite is real. Farmers who have been buying 30-year-old equipment to avoid modern complexity now have a new alternative — a machine with fresh sheet metal, a warranty, and an engine philosophy rooted firmly in the past.
There’s a reason the used tractor market has been so robust. Plenty of operators looked at a $300,000 machine full of sensors and software and decided a well-maintained older unit was the smarter bet. Ursa Ag is manufacturing that bet from scratch.
Whether a small Alberta company can scale fast enough to meet demand from an entire continent is another matter. The big manufacturers have supply chains, dealer networks, and financing arms that took decades to build. Wilson has remanufactured Cummins engines and a value proposition that resonates with anyone who has ever waited three days for a dealer tech to show up with a diagnostic cable.
The farm equipment industry spent 20 years adding complexity and cost. Ursa Ag is wagering that a significant number of farmers never wanted any of it.
🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.
🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world’s top closed-source models.
🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice.
Try it now at chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today!
📄 Tech Report: https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main/DeepSeek_V4.pdf
🤗 Open Weights: https://huggingface.co/collections/deepseek-ai/deepseek-v4
DeepSeek-V4-Pro
🔹 Enhanced Agentic Capabilities: Open-source SOTA in Agentic Coding benchmarks.
🔹 Rich World Knowledge: Leads all current open models, trailing only Gemini-3.1-Pro.
🔹 World-Class Reasoning: Beats all current open models in Math/STEM/Coding, rivaling top closed-source models.
DeepSeek-V4-Flash
🔹 Reasoning capabilities closely approach V4-Pro.
🔹 Performs on par with V4-Pro on simple Agent tasks.
🔹 Smaller parameter size, faster response times, and highly cost-effective API pricing.
Structural Innovation & Ultra-High Context Efficiency
🔹 Novel Attention: Token-wise compression + DSA (DeepSeek Sparse Attention).
🔹 Peak Efficiency: World-leading long context with drastically reduced compute & memory costs.
🔹 1M Standard: 1M context is now the default across all official DeepSeek services.
Dedicated Optimizations for Agent Capabilities
🔹 DeepSeek-V4 is seamlessly integrated with leading AI agents like Claude Code, OpenClaw & OpenCode.
🔹 Already driving our in-house agentic coding at DeepSeek.
The figure below showcases a sample PDF generated by DeepSeek-V4-Pro.
API is Available Today!
🔹 Keep base_url, just update model to deepseek-v4-pro or deepseek-v4-flash.
🔹 Supports OpenAI ChatCompletions & Anthropic APIs.
🔹 Both models support 1M context & dual modes (Thinking / Non-Thinking): https://api-docs.deepseek.com/guides/thinking_mode
⚠️ Note: deepseek-chat & deepseek-reasoner will be fully retired and inaccessible after Jul 24th, 2026, 15:59 (UTC Time). (Currently routing to deepseek-v4-flash non-thinking/thinking).
🔹 Amid recent attention, a quick reminder: please rely only on our official accounts for DeepSeek news. Statements from other channels do not reflect our views.
🔹 Thank you for your continued trust. We remain committed to longtermism, advancing steadily toward our ultimate goal of AGI.
The DeepSeek API uses an API format compatible with OpenAI/Anthropic. By modifying the configuration, you can use the OpenAI/Anthropic SDK or software compatible with the OpenAI/Anthropic API to access the DeepSeek API.
* The model names deepseek-chat and deepseek-reasoner will be deprecated on 2026/07/24. For compatibility, they correspond to the non-thinking mode and thinking mode of deepseek-v4-flash, respectively.
Invoke The Chat API
Once you have obtained an API key, you can access the DeepSeek model using the following example scripts in the OpenAI API format. This is a non-streaming example; set the stream parameter to true to get a streamed response.
For examples using the Anthropic API format, please refer to Anthropic API.
```shell
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" \
  -d '{
        "model": "deepseek-v4-pro",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Hello!"}
        ],
        "thinking": {"type": "enabled"},
        "reasoning_effort": "high",
        "stream": false
      }'
```
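The same request can be sketched in Python. This is a minimal standard-library sketch, not official sample code: the endpoint, model name, and `thinking`/`stream` fields are taken from the curl example above, and the helper name `build_chat_request` is our own.

```python
import json
import os
import urllib.request

API_URL = "https://api.deepseek.com/chat/completions"

def build_chat_request(model: str, messages: list, thinking: bool = True) -> urllib.request.Request:
    """Build an OpenAI-style ChatCompletions request for the DeepSeek API.

    Field names mirror the curl example: `thinking` toggles thinking mode,
    and `stream: false` asks for a single non-streamed response.
    """
    payload = {
        "model": model,
        "messages": messages,
        "thinking": {"type": "enabled" if thinking else "disabled"},
        "stream": False,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('DEEPSEEK_API_KEY', '')}",
        },
        method="POST",
    )

req = build_chat_request(
    "deepseek-v4-pro",
    [{"role": "system", "content": "You are a helpful assistant."},
     {"role": "user", "content": "Hello!"}],
)
# Sending it requires a valid key and network access:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because only the `model` string changes between `deepseek-v4-pro` and `deepseek-v4-flash`, switching models is a one-argument change.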
Finally, great battery life in a Framework Laptop
20 hours: Netflix 4K streaming (250-nit brightness, 30% volume, Windows 11)
17 hours: Active web usage (250-nit brightness, 30% volume, Windows 11)
11 hours: Video conferencing (250-nit brightness, 30% volume, Windows 11)
7 days: Standby without charging (Wi-Fi connected on Ubuntu)
Intel® Core™ Ultra Series 3 processors
The Framework Laptop 13 Pro runs on Intel® Core™ Ultra Series 3 processors, unlocking 20 hours of battery life ϟ, up to 64GB of LPCAMM2 LPDDR5X memory, and support for up to 8TB of PCIe Gen 5.0 NVMe storage. It’s designed to stay responsive under sustained, heavy workloads.
Power-efficient memory, made upgradeable
We’re among the first to pair Intel® Core™ Ultra Series 3 with LPCAMM2. A high-density interposer enables LPDDR5X in a modular form, delivering 7467 MT/s and high performance per watt without soldering it down.
A laptop that you own
You can customize it,
Pick your ports with the Framework Expansion Card system and install them directly into your laptop without relying on external adapters. The magnet-attach Bezel lets you customize with bold or translucent color options.
USB-C
USB-A
Audio Jack
DisplayPort
HDMI
MicroSD
SD
Storage - 250GB
Storage - 1TB
Ethernet
repair it,
A truly easy-to-repair laptop that’s built to respect your rights. Just scan the QR codes, follow the guides, and replace any part with a single tool that’s included in the box.
upgrade it.
When you’re ready for more performance, you can upgrade individual components instead of replacing your entire laptop. Install a new Mainboard for generational processor upgrades, add memory to handle heavier workloads, or expand your storage to increase capacity or enable dual booting. The Framework Marketplace makes it easy to find the compatible parts you need.
Runs Linux. Really well.
(you can also use Windows 11 if you want)
We don’t just support Linux; we live in it. Framework Laptop 13 Pro with Intel® Core™ Ultra Series 3 is our first Ubuntu Certified system. We seed development hardware and provide funding to a range of other distros like Fedora, Bazzite, NixOS, CachyOS, and more to ensure reliable support.
A sensory upgrade
13.5″ 2880x1920 Touchscreen Display
A custom 13.5″ 3:2 touchscreen display with sharp 2880×1920 resolution gives you the vertical space you need for coding and productivity. A 30–120Hz variable refresh rate keeps motion smooth while optimizing power, and with up to 700 nits of brightness and a matte surface, it stays clear across a wide range of lighting conditions.
A haptic touchpad that beats your expectations
The large 123.7mm × 76.7mm Haptic Touchpad, powered by four piezoelectric actuators, delivers consistent, high-quality clicks across the surface. Feedback and gestures are fully tunable, so you can set it up exactly how you want.
The keyboard you love, now even better
With 1.5mm of key travel, the keyboard delivers deeper, more tactile feedback than most modern laptops without increasing noise. A CNC aluminum Input Cover Frame reduces deck flex for a more solid and consistent feel. Available in multiple ANSI and ISO layouts, in black, black with lavender, and black with gray and orange.
Dolby Atmos® audio
The side-firing speakers are tuned with Dolby Atmos® to deliver clear, balanced audio on Windows, ideal for calls or music while you work.
Thin, light, and fully aluminum
At just 15.85mm thick and 1.4kg, gaining durability doesn’t mean losing portability. The Top Cover, Input Cover, and Bottom Cover are now CNC machined from 6063 aluminum, increasing rigidity and durability.
Width: 296.63mm
Depth: 228.98mm
Height: 15.85mm
Weight: 1.4kg
Open source ecosystems
We’ve open sourced design files and documentation for many core components and firmware on GitHub, giving you the freedom to modify, extend, or repurpose them.
Respecting your privacy
Privacy switches
Your privacy is protected at a hardware level, with physical switches that electrically cut off the webcam and microphones whenever you need.
No crapware
We hate software bloat as much as you do. Our pre-builts ship with Ubuntu or stock Windows 11 plus the necessary drivers, and our DIY Edition lets you bring whichever operating system you’d like.
The choice is yours
Framework Laptop 13 Pro is available pre-built with Windows or Ubuntu pre-installed, or as a DIY Edition that lets you install the operating system of your choice.
Upgrade, customize, and repair
Pick up new parts and modules for your Framework Laptop 13 Pro.
Keep track of what we’re working on with the Framework Newsletter.
ϟ
Testing conducted by Framework in April 2026 using Framework Laptop 13 Pro tested with Intel® Core™ Ultra X7 358H Processor, Intel® Arc™ B390 graphics, 2.8K touchscreen display, 32GB memory and 1TB storage, with display brightness set to 250nits, display refresh rate set to 60Hz, speaker volume as 30%, Dolby Atmos® disabled, and wireless enabled. Battery life tested by streaming Netflix 4K content in the Netflix app on Windows 11 under Best Power Efficiency mode. Battery life varies by use and configuration.
In 2023, Raytheon’s president stood at the Paris Air Show and described what it took to restart Stinger missile production. They brought back engineers in their 70s to teach younger workers how to build a missile from paper schematics drawn during the Carter administration. Test equipment had been sitting in warehouses for years. The nose cone still had to be attached by hand, exactly as it was forty years ago.
The Pentagon hadn’t bought a new Stinger in twenty years. Then Russia invaded Ukraine, and suddenly everyone needed them. The production line was shut down. The electronics were obsolete. The seeker component was out of production. An order placed in May 2022 wouldn’t deliver until 2026. Four years. Not because of money. Because the people who knew how to build them retired a decade earlier and nobody replaced them.
I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes: ten months of war burned through thirteen years’ worth of Stinger production. I’ve seen this pattern before. It’s happening in my industry right now.
In March 2023, the EU promised Ukraine one million artillery shells within twelve months. European production capacity sat at 230,000 shells per year. Ukraine was consuming 5,000 to 7,000 rounds per day. Anyone with a calculator could see this wouldn’t work.
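The calculator check is straightforward. A back-of-the-envelope sketch using the figures above (230,000 shells/year of capacity, a 6,000-round/day midpoint of the stated consumption range):

```python
eu_capacity_per_year = 230_000   # shells/year, stated EU production capacity
ukraine_use_per_day = 6_000      # midpoint of the 5,000-7,000 rounds/day range

# Annual consumption at that rate.
annual_consumption = ukraine_use_per_day * 365          # 2,190,000 shells/year

# Consumption outstripped production roughly ninefold...
ratio = annual_consumption / eu_capacity_per_year       # ~9.5x

# ...and even the promised one million shells would have taken
# over four years at existing capacity.
years_for_million = 1_000_000 / eu_capacity_per_year    # ~4.3 years
```

At existing capacity, the twelve-month pledge was off by a factor of four before a single bottleneck was considered.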
By the deadline, Europe delivered about half. Macron called the original promise reckless. An investigation by eleven media outlets across nine countries found actual production capacity was roughly one-third of official EU claims. The million-shell mark wasn’t hit until December 2024, nine months late.
It wasn’t one bottleneck. It was all of them. France had halted domestic propellant production in 2007. Seventeen years of nothing. Europe’s single major TNT producer was in Poland. Germany had two days of ammunition stored. A Nammo plant in Denmark was shut down in 2020 and had to be restarted from scratch. The entire continent’s defense industry had been optimized for making small batches of expensive custom products. Nobody planned for volume. Nobody planned for crisis.
The U.S. wasn’t much better. One plant in Scranton, one facility in Iowa for explosive fill, no domestic TNT production since 1986. Billions of investment later, production still hadn’t hit half the target.
This wasn’t an accident. In 1993, the Pentagon told defense CEOs to consolidate or die. Fifty-one major defense contractors collapsed into five. Tactical missile suppliers went from thirteen to three. Shipbuilders from eight to two. The workforce fell from 3.2 million to 1.1 million. A 65% cut.
The ammunition supply chain had single points of failure everywhere. One manufacturer for 155mm shell casings, sitting in Coachella, California, on the San Andreas Fault. One facility in Canada for propellant charges. Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.
Then there’s Fogbank. A classified material used in nuclear warheads. Produced from 1975 to 1989, then the facility was shut down. When the government needed to reproduce it for a warhead life extension program, they discovered they couldn’t. A GAO report found that almost all staff with production expertise had retired, died, or left the agency. Few records existed.
After $69 million in cost overruns and years of failed attempts, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original process had relied on an unintentional impurity that was critical to the material’s function. Nobody knew. Not the engineers trying to reproduce it. Not even the original workers who made it decades earlier. Los Alamos called it an unknowing dependency in the original process.
A nuclear weapons program lost the ability to make a material it invented. The knowledge didn’t just leave with people. It was never fully understood by anyone.
(Correction: the original version stated that the workers who made Fogbank knew about the impurity. They didn’t. The dependency was unwitting, which makes the knowledge-loss argument stronger, not weaker. Thanks to John F. in the comments for catching this.)
I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.
In defense, the substitute was the peace dividend. In software, it’s AI.
I wrote about the talent pipeline collapse before. The hiring numbers and the junior-to-senior problem are documented. So is the comprehension crisis. What I didn’t have was the right historical parallel. Now I do.
And it tells you something the hiring data doesn’t: how long rebuilding actually takes.
Every major defense production ramp-up took three to five years for simple systems. Five to ten for complex ones. Stinger: thirty months minimum from order to delivery. Javelin: four and a half years to less than double production. 155mm shells: four years and still not at target despite five billion dollars invested. France only restarted propellant production in 2024, seventeen years after shutting it down.
Money was never the constraint. Knowledge was. RAND found that 10% of technical skills for submarine design need ten years of on-the-job experience to develop, sometimes following a PhD. Apprenticeships in defense trades take two to four years, with five to eight years to reach supervisory competence.
Now map that onto software. A junior developer needs three to five years to become a competent mid-level engineer. Five to eight years to become senior. Ten or more to become a principal or architect. That timeline can’t be compressed by throwing money at it. It can’t be compressed by AI either.
A METR randomized controlled trial found that experienced developers using AI coding tools actually took 19% longer on real-world open source tasks. Before starting, they predicted AI would make them 24% faster. The gap between prediction and reality was 43 percentage points. When researchers tried to run a follow-up, a significant share of developers refused to participate if it meant working without AI. They couldn’t imagine going back.
The software industry is in year three of the same optimization. Salesforce said it won’t hire more software engineers in 2025. A LeadDev survey found 54% of engineering leaders believe AI copilots will reduce junior hiring long-term. A CRA survey of university computing departments found 62% reported declining enrollment this year.
I see it in code review. Review is now the bottleneck. AI generates code fast. Humans review it slow. The industry’s answer is predictable: let AI review AI’s code. I’m not doing that. I’ve reworked our pull request templates instead. Every PR now has to explain what changed, why, what type of change it is, screenshots of before and after. Structured context so the reviewer isn’t guessing. I’m adding dedicated reviewers per project. More eyes, more chances to catch what the model missed.
But even that doesn’t solve the deeper problem. The skills you need to be effective now are different. Technical expertise alone isn’t enough anymore. You need people who can take ownership, communicate tradeoffs, push back on bad suggestions from a machine that sounds very confident. Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate. The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.
We document everything. Site Books, SDDs, RVS reports, boilerplate modules with full coverage. It works today, because the people reading those docs have the engineering expertise to act on them. What happens when they don’t? Honestly, I don’t know. Maybe AI in five years is good enough that it won’t matter. Maybe the problem stays manageable. I can’t predict the capabilities of models in 2031.
But crises don’t send calendar invites. Nobody expected a full-scale land war in Europe in 2022. The defense industry had thirty years to prepare and didn’t. Even Fogbank had records; they weren’t enough. The original workers didn’t fully understand their own process.
Five to ten years from now, we’ll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing what a DoD-funded workforce study calls “AI-mediated competence.” They can prompt an AI. They can’t tell you what the AI got wrong.
It’s Fogbank for code. When juniors skip debugging and skip the formative mistakes, they don’t build the tacit expertise. And when my generation of engineers retires, that knowledge doesn’t transfer to the AI.
It just disappears.
The West already made this mistake once. The bill came due in Ukraine.
I know how this sounds. I know I’ve written about the talent pipeline before. The defense example isn’t about repeating the argument. It’s about showing what happens if the industry’s expectations don’t work out. Stinger, Javelin, Fogbank, a million shells nobody could make. That’s the cost of betting wrong on optimization. We’re making the same bet with software engineering right now.
Maybe AI gets good enough, and the bet pays off. Maybe it doesn’t. The defense industry thought peace would last forever, too.
No posts
I am building a cloud
2026-04-22
Today is fundraising announcement day. As is the nature of writing for a larger audience, it is a formal, safe announcement. As it should be. Writing must necessarily become impersonal at scale. But I would like to write something personal about why I am doing this. What is the goal of building exe.dev? I am already the co-founder of one startup that is doing very well, selling a product I love as much as when I first helped design and build it.
What could possess me to go through all the pain of starting another company? Some fellow founders have looked at me with incredulity and shock that I would throw myself back into the frying pan. (Worse yet, experience tells me that most of the pain is still in my future.) It has been a genuinely hard question to answer because I start searching for a “big” reason, a principle or a social need, a reason or motivation beyond challenge. But I believe the truth is far simpler, and to some I am sure almost equally incredulous.
I like computers.
In some tech circles, that is an unusual statement. (“In this house, we curse computers!”) I get it, computers can be really frustrating. But I like computers. I always have. It is really fun getting computers to do things. Painful, sure, but the results are worth it. Small microcontrollers are fun, desktops are fun, phones are fun, and servers are fun, whether racked in your basement or in a data center across the world. I like them all.
So it is no small thing for me when I admit: I do not like the cloud today.
I want to. Computers are great, whether it is a BSD installed directly on a PC or a Linux VM. I can enjoy Windows, BeOS, Novell NetWare, I even installed OS/2 Warp back in the day and had a great time with it. Linux is particularly powerful today and a source of endless potential. And for all the pages of products, the cloud is just Linux VMs. Better, they are API driven Linux VMs. I should be in heaven.
But every cloud product I try is wrong. Some are better than others, but I am constantly constrained by the choices cloud vendors make in ways that make it hard to get computers to do the things I want them to do.
These issues go beyond UX or bad API design. Some of the fundamental building blocks of today’s clouds are the wrong shape. VMs are the wrong shape because they are tied to CPU/memory resources. I want to buy some CPUs, memory, and disk, and then run VMs on it. A Linux VM is a process running in another Linux’s cgroup; I should be able to run as many as I like on the computer I have. The only way to do that easily on today’s clouds is to take isolation into my own hands, with gVisor or nested virtualization on a single cloud VM, paying the nesting performance penalty, and then I am left with the job of running and managing, at a minimum, a reverse proxy onto my VMs. All because the cloud abstraction is the wrong shape.
Clouds have tried to solve this with “PaaS” systems: abstractions that are inherently less powerful than a computer, bespoke to a particular provider. Learn a new way to write software for each compute vendor, only to find halfway into your project that something that is easy on a normal computer is nearly impossible because of some obscure limit of the platform, buried so deep you cannot find it until you are deeply committed. Time and again I have said “this is the one” only to be betrayed by some half-assed, half-implemented, or half-thought-through abstraction. No thank you.
Consider disk. Cloud providers want you to use remote block devices (or something even more limited and slow, like S3). When remote block devices were introduced they made sense, because computers used hard drives. Remote does not hurt sequential read/write performance, if the buffering implementation is good. Random seeks on a hard drive take 10ms, so 1ms RTT for the Ethernet connection to remote storage is a fine price to pay. It is a good product for hard drives and makes the cloud vendor’s life a lot easier because it removes an entire dimension from their standard instance types.
But then we all switched to SSD. Seek time went from 10 milliseconds to 20 microseconds. Heroic efforts have cut the network RTT a bit for really good remote block systems, but the IOPS overhead of remote systems went from 10% with hard drives to more than 10x with SSDs.
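The shift can be made concrete with rough, illustrative numbers (my own, not measurements): a ~1 ms network round trip is noise next to a 10 ms hard-drive seek, but it dwarfs a 20 µs SSD access.

```python
hdd_seek_s = 10e-3    # ~10 ms random seek on a hard drive
ssd_access_s = 20e-6  # ~20 us random access on an SSD
network_rtt_s = 1e-3  # ~1 ms RTT to a remote block device

# Overhead of going remote, relative to the local access time.
hdd_overhead = network_rtt_s / hdd_seek_s    # ~0.1, i.e. +10% per access
ssd_overhead = network_rtt_s / ssd_access_s  # ~50x per access

print(f"HDD: +{hdd_overhead:.0%} per random access")
print(f"SSD: +{ssd_overhead:.0f}x per random access")
```

Even if heroic engineering cuts the RTT to a few hundred microseconds, the multiplier for SSD-class storage stays above 10x, which is why remote block devices that were a fine trade for hard drives become a bottleneck for flash.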
It is a lot of work to configure an EC2 VM to have 200k IOPS, and you will pay $10k/month for the privilege. My MacBook has 500k IOPS. Why are we hobbling our cloud infrastructure with slow disk?
Then there’s networking. Hyperscalers have great networks. They charge you the earth for them and make it miserable to do deals with other vendors. The standard price for a GB of egress from a cloud provider is 10x what you pay racking a server in a normal data center. At moderate volume the multiplier is even worse. Sure, if you spend $XXm/month with a cloud the prices get much better, but most of my projects want to spend $XX/month, without the little m. The fundamental technology here is fine, but this is where limits are placed on you to make sure whatever you build cannot be affordable.
Finally, clouds have painful APIs. This is where projects like K8S come in, papering over the pain so engineers suffer a bit less from using the cloud. But VMs are hard with Kubernetes because the cloud makes you do it all yourself with lumpy nested virtualization. Disk is hard because back when they were designing K8S Google didn’t really even do usable remote block devices, and even if you can find a common pattern among clouds today to paper over, it will be slow. Networking is hard because if it were easy you would private link in a few systems from a neighboring open DC and drop a zero from your cloud spend. It is tempting to dismiss Kubernetes as a scam, artificial make work designed to avoid doing real product work, but the truth is worse: it is a product attempting to solve an impossible problem: make clouds portable and usable. It cannot be done.
You cannot solve the fundamental problems with cloud abstractions by building new abstractions on top. Making Kubernetes good is inherently impossible, a project in putting (admittedly high quality) lipstick on a pig.
We have been muddling along with these miserable clouds for 15 years now. We make do, the way we do with all the unpleasant parts of our software stack, holding our nose whenever we have to deal with them and trying to minimize how often that happens.
This however, is the moment to fix it.
This is the moment because something has changed: we have agents now. (Indeed, my co-founder Josh and I started tinkering because we wanted to use LLMs in programming. It turns out what needs building for LLMs are better traditional abstractions.) Agents, by making it easier to write code, mean there will be a lot more software. Economists would call this an instance of Jevons paradox. Each of us will write more programs, for fun and for work. We need private places to run them, easy sharing with friends and colleagues, minimal overhead.
With more total software in our lives, the cloud, which was an annoying pain, becomes a much bigger pain. We need a lot more compute, and we need it to be easier to manage. Agents help to some degree. If you trust them with your credentials they will do a great job driving the AWS API for you (though occasionally they will delete your production DB). But agents struggle with the fundamental limits of the abstractions as much as we do. You need more tokens than you should and you get a worse result than you should. Every percent of context window the agent spends thinking about how to contort classic clouds into working is context window it is not using to solve your problem.
So we are going to fix it. What we have launched on exe.dev today addresses the VM resource isolation problem: instead of provisioning individual VMs, you get CPU and memory and run the VMs you want. We took care of a TLS proxy and an authentication proxy, because I do not actually want my fresh VMs dumped directly on the internet. Your disk is local NVMe with blocks replicated off machine asynchronously. We have regions around the world for your machines, because you want your machines close. Your machines are behind an anycast network to give all your global users a low latency entrypoint to your product (and so we can build some new exciting things soon).
There is a lot more to build here, from obvious things like static IPs to UX challenges like how to give you access to our automatic historical disk snapshots. Those will get built. And at the same time we are going right back to the beginning, racking computers in data centers, thinking through every layer of the software stack, exploring all the options for how we wire up networks.
So, I am building a cloud. One I actually want to use. I hope it is useful to you.