10 interesting stories served every morning and every evening.
On September 14, 2015, our first publicly-trusted certificate went live. We were proud that we had issued a certificate that a significant majority of clients could accept, and had done it using automated software. Of course, in retrospect this was just the first of billions of certificates. Today, Let’s Encrypt is the largest certificate authority in the world in terms of certificates issued, the ACME protocol we helped create and standardize is integrated throughout the server ecosystem, and we’ve become a household name among system administrators. We’re closing in on protecting one billion web sites.
In 2023, we marked the tenth anniversary of the creation of our nonprofit, Internet Security Research Group, which continues to host Let’s Encrypt and other public benefit infrastructure projects. Now, in honor of the tenth anniversary of Let’s Encrypt’s public certificate issuance and the start of the general availability of our services, we’re looking back at a few milestones and factors that contributed to our success.
A conspicuous part of Let’s Encrypt’s history is how thoroughly our vision of scalability through automation has succeeded.
In March 2016, we issued our one millionth certificate. Just two years later, in September 2018, we were issuing a million certificates every day. In 2020 we reached a billion total certificates issued and as of late 2025 we’re frequently issuing ten million certificates per day. We’re now on track to reach a billion active sites, probably sometime in the coming year. (The “certificates issued” and “certificates active” metrics are quite different because our certificates regularly expire and get replaced.)
The steady growth of our issuance volume shows the strength of our architecture, the validity of our vision, and the great efforts of our engineering team to scale up our own infrastructure. It also reminds us of the confidence that the Internet community is placing in us, making the use of a Let’s Encrypt certificate a normal and, dare we say, boring choice. But I often point out that our ever-growing issuance volumes are only an indirect measure of value. What ultimately matters is improving the security of people’s use of the web, which, as far as Let’s Encrypt’s contribution goes, is not measured by issuance volumes so much as by the prevalence of HTTPS encryption. For that reason, we’ve always emphasized the graph of the percentage of encrypted connections that web users make (here represented by statistics from Firefox).
(These graphs are snapshots as of the date of this post; a dynamically updated version is found on our stats page.) Our biggest goal was to make a concrete, measurable security impact on the web by getting HTTPS connection prevalence to increase—and it’s worked. It took five years or so to get the global percentage from below 30% to around 80%, where it’s remained ever since. In the U.S. it has been close to 95% for a while now.
A good amount of the remaining unencrypted traffic probably comes from internal or private organizational sites (intranets), but other than that we don’t know much about it; this would be a great topic for Internet security researchers to look into.
We believe our present growth in certificate issuance volume is essentially coming from growth in the web as a whole. In other words, if we protect 20% more sites over some time period, it’s because the web itself grew by 20%.
We’ve blogged about most of Let’s Encrypt’s significant milestones as they’ve happened, and I invite everyone in our community to look over those blog posts to see how far we’ve come. We’ve also published annual reports for the past seven years, which offer elegant and concise summaries of our work.
As I personally think back on the past decade, just a few of the many events that come to mind include:
Telling the world about the project in November 2014
Our one millionth certificate in March 2016, then our 100 millionth certificate in June 2017, and then our billionth certificate in 2020
Along the way, first issuing one million certificates in a single day (in September 2018), a milestone driven in significant part by the Squarespace and Shopify Let’s Encrypt integrations
Just at the end of September 2025, we issued more than ten million certificates in a day for the first time.
We’ve also periodically rolled out new features such as internationalized domain name support (2016), wildcard support (2018), and short-lived certificates and IP address certificates (both 2025). We’re always working on more new features for the future.
There are many technical milestones like our database server upgrades in 2021, where we found we needed a serious server infrastructure boost because of the tremendous volumes of data we were dealing with. Similarly, our original infrastructure was using Gigabit Ethernet internally, and, with the growth of our issuance volume and logging, we found that our Gigabit Ethernet network eventually became too slow to synchronize database instances! (Today we’re using 25-gig Ethernet.) More recently, we’ve experimented with architectural upgrades to our ever-growing Certificate Transparency logs, and decided to go ahead with deploying those upgrades—to help us not just keep up with, but get ahead of, our continuing growth.
These kinds of growing pains and successful responses to them are nice to remember because they point to the inexorable increase in demands on our infrastructure as we’ve become a more and more essential part of the Internet. I’m proud of our technical teams which have handled those increased demands capably and professionally.
I also recall the ongoing work involved in making sure our certificates would be as widely accepted as possible, which has meant managing the original cross-signature from IdenTrust, and subsequently creating and propagating our own root CA certificates. This process has required PKI engineering, key ceremonies, root program interactions, documentation, and community support associated with certificate migrations. Most users never have reason to look behind the scenes at our chains of trust, but our engineers keep them updated as root and intermediate certificates are replaced. We’ve engaged at the CA/B Forum, the IETF, and in other venues with the browser root programs to help shape the web PKI as a technical leader.
As I wrote in 2020, our ideal of complete automation of the web PKI aims at a world where most site owners wouldn’t even need to think about certificates at all. We continue to get closer and closer to that world, which creates a risk that people will take us and our services for granted, as the details of certificate renewal occupy less of site operators’ mental energy. As I said at the time,
When your strategy as a nonprofit is to get out of the way, to offer services that people don’t need to think about, you’re running a real risk that you’ll eventually be taken for granted. There is a tension between wanting your work to be invisible and the need for recognition of its value. If people aren’t aware of how valuable our services are then we may not get the support we need to continue providing them.
I’m also grateful to our communications and fundraising staff who help make clear what we’re doing every day and how we’re making the Internet safer.
Our community continually recognizes our work in tangible ways by using our certificates—now by the tens of millions per day—and by sponsoring us.
We were honored to be recognized with awards including the 2022 Levchin Prize for Real-World Cryptography and the 2019 O’Reilly Open Source Award. In October of this year some of the individuals who got Let’s Encrypt started were honored to receive the IEEE Cybersecurity Award for Practice.
We documented the history, design, and goals of the project in an academic paper at the ACM CCS ’19 conference, which has subsequently been cited hundreds of times in academic research.
Ten years later, I’m still deeply grateful to the five initial sponsors that got Let’s Encrypt off the ground - Mozilla, EFF, Cisco, Akamai, and IdenTrust. When they committed significant resources to the project, it was just an ambitious idea. They saw the potential and believed in our team, and because of that we were able to build the service we operate today.
I’d like to particularly recognize IdenTrust, a PKI company that worked as a partner from the outset and enabled us to issue publicly-trusted certificates via a cross-signature from one of their roots. We would simply not have been able to launch our publicly-trusted certificate service without them. Back when I first told them that we were starting a new nonprofit certificate authority that would give away millions of certificates for free, there wasn’t any precedent for this arrangement, and there wasn’t necessarily much reason for IdenTrust to pay attention to our proposal. But the company really understood what we were trying to do and was willing to engage from the beginning. Ultimately, IdenTrust’s support made our original issuance model a reality.
I’m proud of what we have achieved with our staff, partners, and donors over the past ten years. I hope to be even more proud of the next ten years, as we use our strong footing to continue to pursue our mission to protect Internet users by lowering monetary, technological, and informational barriers to a more secure and privacy-respecting Internet.
Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit. You can help us make the next ten years great as well by donating or becoming a sponsor.
...
Read the original on letsencrypt.org »
Today, we’re releasing Devstral 2—our next-generation coding model family available in two sizes: Devstral 2 (123B) and Devstral Small 2 (24B). Devstral 2 ships under a modified MIT license, while Devstral Small 2 uses Apache 2.0. Both are open-source and permissively licensed to accelerate distributed intelligence.
Devstral 2 is currently free to use via our API.
We are also introducing Mistral Vibe, a native CLI built for Devstral that enables end-to-end code automation.
Devstral 2: SOTA open model for code agents, achieving 72.2% on SWE-bench Verified with a fraction of the parameters of its competitors.
Up to 7x more cost-efficient than Claude Sonnet at real-world tasks.
Devstral Small 2: 24B parameter model available via API or deployable locally on consumer hardware.
Devstral 2 is a 123B-parameter dense transformer supporting a 256K context window. It reaches 72.2% on SWE-bench Verified—establishing it as one of the best open-weight models while remaining highly cost efficient. Released under a modified MIT license, Devstral sets the open state-of-the-art for code agents.
Devstral Small 2 scores 68.0% on SWE-bench Verified, and places firmly among models up to five times its size while being capable of running locally on consumer hardware.
Devstral 2 (123B) and Devstral Small 2 (24B) are 5x and 28x smaller than DeepSeek V3.2, and 8x and 41x smaller than Kimi K2—proving that compact models can match or exceed the performance of much larger competitors. Their reduced size makes deployment practical on limited hardware, lowering barriers for developers, small businesses, and hobbyists.
Devstral 2 supports exploring codebases and orchestrating changes across multiple files while maintaining architecture-level context. It tracks framework dependencies, detects failures, and retries with corrections—solving challenges like bug fixing and modernizing legacy systems.
The model can be fine-tuned to prioritize specific languages or optimize for large enterprise codebases.
We evaluated Devstral 2 against DeepSeek V3.2 and Claude Sonnet 4.5 using human evaluations conducted by an independent annotation provider, with tasks scaffolded through Cline. Devstral 2 shows a clear advantage over DeepSeek V3.2, with a 42.8% win rate versus 28.6% loss rate. However, Claude Sonnet 4.5 remains significantly preferred, indicating a gap with closed-source models persists.
“Devstral 2 is at the frontier of open-source coding models. In Cline, it delivers a tool-calling success rate on par with the best closed models; it’s a remarkably smooth driver. This is a massive contribution to the open-source ecosystem.” — Cline.
“Devstral 2 was one of our most successful stealth launches yet, surpassing 17B tokens in the first 24 hours. Mistral AI is moving at Kilo Speed with a cost-efficient model that truly works at scale.” — Kilo Code.
Devstral Small 2, a 24B-parameter model with the same 256K context window and released under Apache 2.0, brings these capabilities to a compact, locally deployable form. Its size enables fast inference, tight feedback loops, and easy customization—with fully private, on-device runtime. It also supports image inputs, and can power multimodal agents.
Mistral Vibe CLI is an open-source command-line coding assistant powered by Devstral. It explores, modifies, and executes changes across your codebase using natural language—in your terminal or integrated into your preferred IDE via the Agent Communication Protocol. It is released under the Apache 2.0 license.
Vibe CLI provides an interactive chat interface with tools for file manipulation, code searching, version control, and command execution. Key features:
Project-aware context: Automatically scans your file structure and Git status to provide relevant context
Smart references: Reference files with @ autocomplete, execute shell commands with !, and use slash commands for configuration changes
Multi-file orchestration: Understands your entire codebase—not just the file you’re editing—enabling architecture-level reasoning that can halve your PR cycle time
You can run Vibe CLI programmatically for scripting, toggle auto-approval for tool execution, configure local models and providers through a simple config.toml, and control tool permissions to match your workflow.
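For a rough sense of what that could look like, here is a purely illustrative config.toml sketch. The key names below are hypothetical, not taken from the announcement; only the existence of a config.toml for models, providers, and tool permissions is stated above, so check the Vibe CLI documentation for the actual schema.

```toml
# Hypothetical Vibe CLI configuration sketch; all key names are illustrative.
[provider.mistral]
api_key_env = "MISTRAL_API_KEY"   # read the API key from an environment variable

[model]
default = "devstral-2"            # placeholder model identifier

[tools]
auto_approve = false              # require confirmation before executing tools
```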
Devstral 2 is currently offered free via our API. After the free period, the API pricing will be $0.40/$2.00 per million tokens (input/output) for Devstral 2 and $0.10/$0.30 for Devstral Small 2.
We’ve partnered with leading, open agent tools Kilo Code and Cline to bring Devstral 2 to where you already build.
Mistral Vibe CLI is available as an extension in Zed, so you can use it directly inside your IDE.
Devstral 2 is optimized for data center GPUs and requires a minimum of 4 H100-class GPUs for deployment. You can try it today on build.nvidia.com. Devstral Small 2 is built for single-GPU operation and runs across a broad range of NVIDIA systems, including DGX Spark and GeForce RTX. NVIDIA NIM support will be available soon.
Devstral Small runs on consumer-grade GPUs as well as CPU-only configurations with no dedicated GPU required.
For optimal performance, we recommend a temperature of 0.2 and following the best practices defined for Mistral Vibe CLI.
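As a sketch of what that recommendation looks like against the API, here is a minimal call using the official mistralai Python client with temperature set to 0.2. The model identifier "devstral-2" is a placeholder assumption; use whatever name the platform lists.

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="devstral-2",  # placeholder; check the platform for the exact model name
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    temperature=0.2,  # the recommended setting for Devstral
)
print(response.choices[0].message.content)
```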
We’re excited to see what you will build with Devstral 2, Devstral Small 2, and Vibe CLI!
Share your projects, questions, or discoveries with us on X/Twitter, Discord, or GitHub.
If you’re interested in shaping open-source research and building world-class interfaces that bring truly open, frontier AI to users, we welcome you to apply to join our team.
...
Read the original on mistral.ai »
My name is Bruno Simon, and I’m a creative developer (mostly for the web).
This is my portfolio. Please drive around to learn more about me and discover the many secrets of this world.
Thank you for visiting my portfolio!
If you are curious about the stack and how I built it, here’s everything you need to know.
Three.js is the library I’m using to render this 3D world.
It was created by mr.doob (X, GitHub) and followed by hundreds of awesome developers, one of whom is Sunag (X, GitHub), who added TSL, enabling the use of both WebGL and WebGPU and making this portfolio possible.
If you want to learn Three.js, I got you covered with this huge course.
It contains everything you need to start building awesome stuff with Three.js (and much more).
I’ve been making devlogs since the very start of this portfolio and you can find them on my YouTube channel.
Even though the portfolio is out, I’m still working on the last videos so that the series is complete.
The code is available on GitHub under MIT license. Even the Blender files are there, so have fun!
For security reasons, I’m not sharing the server code, but the portfolio works without it.
The music you hear was made especially for this portfolio by the awesome Kounine (Linktree).
The tracks are now under the CC0 license, meaning you can do whatever you want with them!
Download them here.
Come hang out with the community, show us your projects and ask us anything.
Contact me directly.
I have to warn you, I try to answer everyone, but it might take a while.
...
Read the original on bruno-simon.com »
The topic of the Rust experiment was just discussed at the annual Maintainers Summit. The consensus among the assembled developers is that Rust in the kernel is no longer experimental — it is now a core part of the kernel and is here to stay. So the “experimental” tag will be coming off. Congratulations are in order for all of the Rust for Linux team. (Stay tuned for details in our Maintainers Summit coverage.)
...
Read the original on lwn.net »
PeerTube is a tool for hosting, managing, and sharing videos or live streams.
The following repositories were submitted by the solution and included in our evaluation; any repositories, add-ons, or features not included there were not reviewed by us. Deployments include the French Ministry of National Education (~100K videos), Italy’s National Research Council, a few French alternative media outlets, the Weißensee Kunsthochschule and the Universität der Künste in Berlin, a few universities worldwide, the Blender and Debian projects, and various activist groups (this information is self-reported and updated annually). Ricardo Torres (L2 Reviewer) submitted their review of PeerTube and found it to be a Digital Public Good (DPG).
...
Read the original on www.digitalpublicgoods.net »
Catch your best ideas before they slip through your fingers
Do you ever have flashes of insight or an idea worth remembering? This happens to me 5-10 times every day. If I don’t write down the thought immediately, it slips out of my mind. Worst of all, I remember that I’ve forgotten something and spend the next 10 minutes trying to remember what it is. So I invented external memory for my brain.
Introducing Pebble Index 01 - a small ring with a button and microphone. Hold the button, whisper your thought, and it’s sent to your phone. It’s added to your notes, set as a reminder, or saved for later review.
Index 01 is designed to become muscle memory, since it’s always with you. It’s private by design (no recording until you press the button) and requires no internet connection or paid subscription. It’s as small as a wedding band and comes in 3 colours. It’s made from durable stainless steel and is water-resistant. Like all Pebble products, it’s extremely customizable and built with open source software.
Here’s the best part: the battery lasts for years. You never need to charge it.
Pre-order today for $75. After worldwide shipping begins in March 2026, the price will go up to $99.
Now that I’ve worn my Index 01 for several months, I can safely say that it has changed my life - just like with Pebble, I couldn’t go back to a world without this. There are so many situations each day where my hands are full (while biking or driving, washing dishes, wrangling my kids, etc) and I need to remember something. A random sampling of my recent recordings:
Set a timer for 3pm to go pick up the kids
Remind me to phone the pharmacy at 11am
Peter is coming by tomorrow at 11:30am, add that to my calendar
Before, I would take my phone out of my pocket to jot these down, but I couldn’t always do that (eg, while bicycling). I also wanted to start using my phone less, especially in front of my kids.
Initially, we experimented by building this as an app on Pebble, since it has a mic and I’m always wearing one. But, I realized quickly that this was suboptimal - it required me to use my other hand to press the button to start recording (lift-to-wake gestures and wake-words are too unreliable). This was tough to use while bicycling or carrying stuff.
Then a genius electrical engineer friend of mine came up with an idea to fit everything into a tiny ring. It is the perfect form factor! Honestly, I’m still amazed that it all fits.
The design needed to satisfy several critical conditions:
Must work reliably 100% of the time. If it didn’t work or failed to record a thought, I knew I would take it off and revert back to my old habit of just forgetting things.
It had to have a physical press-button, with a satisfying click-feel. I want to know for sure if the button is pressed and my thought is captured.
Long battery life - every time you take something off to charge, there’s a chance you’ll forget to put it back on.
Must be privacy-preserving. These are your inner thoughts. All recordings must be processed and stored on your phone. Only record when the button is pressed.
It had to be as small as a wedding band. Since it’s worn on the index finger, if it were too large or bulky, it would hit your phone while you held it in your hand.
Water resistance - must be able to wash hands, shower, and get wet.
We’ve been working on this for a while, testing new versions and making tweaks. We’re really excited to get this out into the world.
Here are a few of my favourite things about Index 01:
It does one thing really well - it helps me remember things.
It’s discreet. It’s not distracting. It doesn’t take you out of the moment.
There’s no AI friend persona and it’s not always recording.
It’s inexpensive. We hope you try it and see if you like it as well!
Available in 3 colours and 8 sizes
You can pre-order now and pick your size/colour later before your ring ships.
Cost and availability: Pre-order price is $75, rises to $99 later. Ships worldwide, beginning in March.
Works with iPhone and Android: We overcame Apple’s best efforts to make life terrible for 3rd party accessory makers and have Index 01 working well on iOS and Android.
Extremely private and secure: Your thoughts are processed by open source speech-to-text (STT) and AI models locally on your phone. You can read the code and see exactly how it works - our Pebble mobile app is open source. Higher-quality STT is available through an optional cloud service.
No charging: The battery lasts for up to two years of average use. After the end of its life, send your ring back to us for recycling.
On-ring storage: Recording works even if your phone is out of range. Up to 5 minutes of audio can be stored on-ring, then synced later.
No speaker or vibrating motor: This is an input device only. There is an RGB LED, but it’s rarely used (to save battery life and to reduce distraction).
Works great with Pebble or other smartwatches: After recording, the thought will appear on your watch, and you can check that it’s correct. You can ask questions like ‘What’s the weather today?’ and see the answer on your watch.
Raw audio playback: Very helpful if STT doesn’t work perfectly due to wind or loud background noises.
Actions: While the primary task is remembering things for you, you can also ask it to do things like ’Send a Beeper message to my wife - running late’ or answer simple questions that could be answered by searching the web. You can configure button clicks to control your music - I love using this to play/pause or skip tracks. You can also configure where to save your notes and reminders (I have it set to add to Notion).
Customizable and hackable: Configure single/double button clicks to control whatever you want (take a photo, turn on lights, Tasker, etc). Add your own voice actions via MCP. Or route the audio recordings directly to your own app or server!
99+ languages: Speech to text and local LLM support over 99 languages! Naturally, the quality of each may vary.
Let me be very clear - Index 01 is designed at its core to be a device that helps you remember things. We want it to be 100% reliable at its primary task. But we’re leaving the side door open for folks to customize, build new interactions and actions.
Here’s how I’m thinking about it - a single click-hold + voice input will be routed to the primary memory processing path. Double-click-hold + voice input would be routed to a more general purpose voice agent (think ChatGPT with web search). Responses from the agent would be presented on Pebble (eg ‘What’s the weather tomorrow?’, ‘When’s the next northbound Caltrain?’) or other smartwatches (as a notification). Maybe this could even be an input for something like ChatGPT Voice Mode, enabling you to hear the AI response from your earbuds.
The built-in actions (set reminder, create note, alarms, etc.) are actually MCPs: basically mini apps that AI agents know how to operate. They run locally in WASM within the Pebble mobile app (no cloud MCP server required). Any MCP server can be used with the system, so intrepid folks may have fun adding various actions like Beeper, Google Calendar, weather, etc that already offer MCPs.
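To give a flavor of how small such an action can be, here’s a minimal MCP server sketch using the FastMCP helper from the official MCP Python SDK. Whether Pebble’s WASM runtime accepts a server written exactly this way is an assumption; the tool name and body are purely illustrative.

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical example action; FastMCP comes from the official MCP Python SDK.
mcp = FastMCP("grocery-list")

@mcp.tool()
def add_grocery_item(item: str) -> str:
    """Add an item to the shared grocery list."""
    with open("groceries.txt", "a") as f:  # stand-in for a real backing store
        f.write(item + "\n")
    return f"Added {item} to the grocery list."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```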
Not everything will be available at launch, but this is the direction we are working towards. There will be 3 ways to customize your Index 01:
Trigger actions via button clicks - configure a single or double click to do things like take a photo, control your Home Assistant smart home, Tasker function, unlock your car. This will work better on Android since iOS Shortcuts doesn’t have an open API.
Trigger actions via voice input - write an MCP to do… basically anything? This is pretty open ended.
Route your voice recordings and/or transcriptions to your own webhook - or skip our AI processing entirely and send every recording to your own app or webapp.
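If you do point recordings at your own endpoint, the receiving side can be tiny. Here’s a minimal sketch using Flask; the payload shape (a JSON body with a "transcription" field) is an assumption for illustration, since the actual format will be whatever the Pebble app documents.

```python
from flask import Flask, request

app = Flask(__name__)

@app.post("/pebble-webhook")
def pebble_webhook():
    payload = request.get_json(force=True)
    # "transcription" is a hypothetical field name, used here for illustration.
    text = payload.get("transcription", "")
    with open("thoughts.log", "a") as f:  # append each captured thought
        f.write(text + "\n")
    return {"status": "ok"}

if __name__ == "__main__":
    app.run(port=8000)
```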
How does it work?
People usually wear it on the index finger. Inside the ring is a button, a microphone, a Bluetooth chip, memory, and a battery that lasts for years. Click the button with your thumb, talk into the mic, and it records to internal memory. When your phone is in range, the recording is streamed to the Pebble app. It’s converted to text on-device, then processed by an on-device large language model (LLM) which selects an action to take (create note, add to reminders, etc).
When do I pick my size?
You’ll be able to pick your ring size and color after placing a pre-order. If you have a 3D printer, you can print our CAD designs to try on. We’re also planning a sizing kit. You can view the measurements of the inner diameter of each ring size.
How long does the battery last?
Roughly 12 to 15 hours of recording. On average, I use it 10-20 times per day to record 3-6 second thoughts. That’s up to 2 years of usage.
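Those numbers hold up as a quick sanity check using the usage pattern described above: 15 hours of recording is 54,000 seconds; roughly 15 recordings per day at about 4.5 seconds each is around 68 seconds per day; and 54,000 ÷ 68 ≈ 790 days, or a bit over two years.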
Is it secure and private?
Yes, extremely. The connection between ring and phone is encrypted. Recordings are processed locally on your phone in the open-source Pebble app. The app works offline (no internet connection) and does not require a cloud service. An optional cloud storage system for backing up recordings is available. Our plan is for this to be optionally encrypted, but we haven’t built it yet.
What kind of battery is inside?
Why can’t it be recharged?
We considered this but decided not to for several reasons:
You’d probably lose the charger before the battery runs out!
Adding charge circuitry and including a charger would make the product larger and more expensive.
You send it back to us to recycle.
Yes. We know this sounds a bit odd, but in this particular circumstance we believe it’s the best solution to the given set of constraints. Other smart rings like Oura cost $250+ and need to be charged every few days. We didn’t want to build a device like that. Before the battery runs out, the Pebble app notifies and asks if you’d like to order another ring.
Is it always listening?
No. It only records while the button is pressed. It’s not designed to record your whole life, or meetings.
What if the speech-to-text processing misses a word or something?
You can always listen to each recording in the app.
We experimented with a touchpad, but found it too easy to accidentally swipe and press. Also, nothing beats the feedback of a real gosh darn pressable button.
Is there a speaker or vibrating motor?
No. The button has a great click-feel to indicate when you are pressing.
Does it do health tracking like Oura?
How durable and water-resistant is it?
It’s primarily made from stainless steel 316, with a liquid silicone rubber (LSR) button. It’s water-resistant to 1 meter. You can wash your hands, do dishes, and shower with it on, but we don’t recommend swimming with it.
Does it work with iPhone and Android?
I love customizing and hacking on my devices. What could I do with Index 01?
Lots of stuff! Control things with the buttons. Route raw audio or transcribed text directly to your own app via webhook. Use MCPs (also run locally on-device! No cloud server required) to add more actions.
Is this an AI friend thingy or always-recording device?
How far along is development?
We’ve been working on this in the background alongside our watch development. It helps that our Pebble Time 2 partner factory is also building Index 01! We’re currently in the DVT stage, testing pre-production samples. We’ll start a wider alpha test in January with a lot more people. Here are some shots from the pre-production assembly line:
...
Read the original on repebble.com »
Or hell, why not do it in x86 assembly?
Let’s get a few things out of the way before I go any further with this seemingly impertinent thought, because it’s nowhere near as snarky as it sounds.
First, I don’t particularly like vibe coding. I love programming, and I have loved it since I made my first tentative steps with it sometime back in the mid-to-late 90s. I love programming so much, it always feels like I’m having too much fun for it to count as real work. I’ve done it professionally, but I also do it as a hobby. Someone apparently once said, “Do what you love and you’ll never work a day in your life.” That’s how I feel about writing code. I’ve also been teaching the subject for twenty-five years, and I can honestly say I am as excited about the first day of the semester now as I was when I first started. I realize it’s a bit precious to say so, but I’ll say it anyway: Turning non-programmers into programmers is my life’s work. It is the thing of which I am most proud as a college professor.
Vibe coding makes me feel dirty in ways that I struggle to articulate precisely. It’s not just that it feels like “cheating” (though it does). I also think it takes a lot of the fun out of the whole thing. I sometimes tell people (like the aforementioned students) that programming is like doing the best crossword puzzle in the world, except that when you solve it, it actually dances and sings. Vibe coding robs me of that moment, because I don’t feel like I really did it at all. And even though to be a programmer is to live with a more-or-less permanent set of aporias (you don’t really understand what the compiler is doing, really—and even if you do, you probably don’t really understand how the virtual memory subsystem works, really), it’s satisfying to understand every inch of my code and frustrating—all the way to the borderlands of active anxiety—not quite understanding what Claude just wrote.
But this leads me to my second point, which I must make as clearly and forcefully as I can. Vibe coding actually works. It creates robust, complex systems that work. You can tell yourself (as I did) that it can’t possibly do that, but you are wrong. You can then tell yourself (as I did) that it’s good as a kind of alternative search engine for coding problems, but not much else. You are also wrong about that. Because when you start giving it little programming problems that you can’t be arsed to work out yourself (as I did), you discover (as I did) that it’s awfully good at those. And then one day you muse out loud (as I did) to an AI model something like, “I have an idea for a program…” And you are astounded. If you aren’t astounded, you either haven’t actually done it or you are at some stage of grief prior to acceptance. Perfect? Hardly. But then neither are human coders. The future? I think the question answers itself.
But to get to my impertinent question…
Early on in my love affair with programming, I read Structure and Interpretation of Computer Programs, which I now consider one of the great pedagogical masterpieces of the twentieth century. I learned a great deal about programming from that book, but among the most memorable lessons was one that appears in the second paragraph of the original preface. There, Hal Abelson and Gerald Sussman make a point that hits with the force of the obvious, and yet is very often forgotten:
[W]e want to establish the idea that a computer language is not just a way of getting a computer to perform operations but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.
I’ve been repeating some version of this to my students ever since. Computers, I remind them, do not need the code to be “readable” or “ergonomic” for humans; they only need it to be readable and ergonomic for a computer, which is a considerably lower bar.
Every programming language—including assembly language—was and is intended for the convenience of humans who need to read it and write it. If a language is innovative, it is usually not because it has allowed for automatic memory management, or concurrency, or safety, or robust error checking, but because it has made it easier for humans to express and reason about these matters. When we extol the virtues of this or that language—Rust’s safety guarantees, C++’s “no-cost abstractions,” or Go’s approach to concurrency—we are not talking about an affordance that the computer has gained, but about an affordance that we have gained as programmers of said computer. From our standpoint as programmers, object-oriented languages offer certain ways to organize our code—and, I think Abelson and Sussman would say, our thinking—that are potentially conducive to the noble treasures of maintainability, extensibility, error checking, and any number of other condign matters. From the standpoint of the computer, this little OO kink of ours seems mostly to indicate a strange affinity for heap memory. “Whatevs!” (says the computer). And pick your poison here, folks: functional programming, algebraic data types, dependent types, homoiconicity, immutable data structures, brace styles… We can debate the utility of these things, but we must understand that we are primarily talking about human problems. The set of “machine problems” to which these matters correspond is considerably smaller.
So my question is this: Why vibe code with a language that has human convenience and ergonomics in view? Or to put that another way: Wouldn’t a language designed for vibe coding naturally dispense with much of what is convenient and ergonomic for humans in favor of what is convenient and ergonomic for machines? Why not have it just write C? Or hell, why not x86 assembly?
Now, at this point, you will want to say that the need for human understanding isn’t erased entirely thereby. Some version of this argument has merit, but I would remind you that if you are really vibe coding for real you already don’t understand a great deal of what it is producing. But if you look carefully, you will notice that it doesn’t struggle with undefined behavior in C. Or with making sure that all memory is properly freed. Or with off-by-one errors. It sometimes struggles to understand what it is that you actually want, but it rarely struggles with the actual execution of the code. It’s better than you are at keeping track of those things in the same way that a compiler is better at optimizing code than you are. Perfect? No. But as I said before…
Is C the ideal language for vibe coding? I think I could mount an argument for why it is not, but surely Rust is even less ideal. To say nothing of Haskell, or OCaml, or even Python. All of these languages, after all, are for people to read, and only incidentally for machines to execute. They are practically adorable in their concern for problems that AI models do not have.
I suppose what I’m getting at, here, is that if vibe coding is the future of software development (and it is), then why bother with languages that were designed for people who are not vibe coding? Shouldn’t there be such a thing as a “vibe-oriented programming language?” VOP. You read it here first.
One possibility is that such a language truly would be executable pseudocode beyond even the most extravagant fever dreams of the most earnest Pythonistas; it shows you what it’s doing in truly pseudo code, but all the while it’s writing assembly. Or perhaps it’s something like the apotheosis of literate programming. You write a literary document “expressing ideas about methodology,” and the AI produces machine code (and a kind of literary critical practice evolves around this activity, eventually ordering itself into structuralist and post-structuralist camps. But I’m getting ahead of myself). Perhaps your job as a programmer is mostly running tests that verify this machine code (tests which have also been produced by AI). Or maybe a VOPL is really a certain kind of language that comes closer to natural language than any existing programming language, but which has a certain (easily learned) set of idioms and expressions that guide the AI more reliably and more quickly toward particular solutions. It doesn’t have goroutines. It has a “concurrency slang.”
Now obviously, the reason a large language model focused on coding is good at JavaScript and C++ is precisely because it has been trained on billions of lines of code in those languages along with countless forum posts, StackOverflow debates, and so on. Bootstrapping a VOPL presents a certain kind of difficulty, but then one also suspects that LLMs are already being trained in some future version of this language, because so many programmers are already groping their way toward a system like this by virtue of the fact that so many of them are already vibe coding production-level systems.
I don’t know how I feel about all of this (see my first and second points above). It saddens me to think of “coding by hand” becoming a kind of quaint Montessori-school stage in the education of a vibe coder—something like the contour drawings we demand from future photoshopers or the balanced equations we insist serve as a rite of passage for people who will never be without a calculator to the end of their days.
At the same time, there is something exciting about the birth of a computational paradigm. It wasn’t that long ago, in the grand scheme of things, that someone realized that rewiring the entire machine every time you wanted to do a calculation (think ENIAC, circa 1945) was a rather suboptimal way to do things. And it is worth recalling that people complained when the stored-program computer rolled around (think EDVAC, circa 1951). Why? Well, the answer should be obvious. It was less reliable. It was slower. It removed the operator from the loop. It threatened specialized labor. It was conceptually impure. I’m not kidding about any of this. No less an authority than Grace Hopper had to argue against the quite popular idea that there was no way anyone could ever trust a machine to write instructions for another machine.
Same vibe, as the kids say.
...
Read the original on stephenramsay.net »
Shares of Apple Inc. were battered earlier this year as the iPhone maker faced repeated complaints about its lack of an artificial intelligence strategy. But as the AI trade faces increasing scrutiny, that hesitance has gone from a weakness to a strength — and it’s showing up in the stock market.
Through the first six months of 2025, Apple was the second-worst performer among the Magnificent Seven tech giants, as its shares tumbled 18% through the end of June. That has reversed since then, with the stock soaring 35%, while AI darlings like Meta Platforms Inc. and Microsoft Corp. slid into the red and even Nvidia Corp. underperformed. The S&P 500 Index rose 10% in that time, and the tech-heavy Nasdaq 100 Index gained 13%.
“It is remarkable how they have kept their heads and are in control of spending, when all of their peers have gone the other direction,” said John Barr, portfolio manager of the Needham Aggressive Growth Fund, which owns Apple shares.
As a result, Apple now has a $4.1 trillion market capitalization and the second biggest weight in the S&P 500, leaping over Microsoft and closing in on Nvidia. The shift reflects the market’s questioning of the hundreds of billions of dollars Big Tech firms are throwing at AI development, as well as Apple’s positioning to eventually benefit when the technology is ready for mass use.
“While they most certainly will incorporate more AI into the phones over time, Apple has avoided the AI arms race and the massive capex that accompanies it,” said Bill Stone, chief investment officer at Glenview Trust Company, who owns the stock and views it as “a bit of an anti-AI holding.”
Of course, the rally has made Apple’s stock pricier than it has been in a long time. The shares are trading for around 33 times expected earnings over the next 12 months, a level they’ve only hit a few times in the past 15 years, with a high of 35 in September 2020. The stock’s average multiple over that time is less than 19 times. Apple is now the second most expensive stock in the Bloomberg Magnificent Seven Index, trailing only Tesla Inc.’s whopping valuation of 203 times forward earnings. Apple’s shares climbed about 0.5% in early Tuesday trading.
“It’s really hard to see how the stock can continue to compound value at a level that makes this a compelling entry point,” said Craig Moffett, co-founder of research firm MoffettNathanson. “The obvious question is, are investors overpaying for Apple’s defensiveness? We think so.”
...
Read the original on finance.yahoo.com »
Django 6.0 was released today, starting another release cycle for the loved and long-lived Python web framework (now 20 years old!). It comes with a mosaic of new features, contributed to by many, some of which I am happy to have helped with. Below is my pick of highlights from the release notes.
Upgrade with help from django-upgrade
If you’re upgrading a project from Django 5.2 or earlier, please try my tool django-upgrade. It will automatically update old Django code to use new features, fixing some deprecation warnings for you, including five fixers for Django 6.0. (One day, I’ll propose django-upgrade to become an official Django project, when energy and time permit…)
There are four headline features in Django 6.0, which we’ll cover before other notable changes, starting with this one:

The Django Template Language now supports template partials, making it easier to encapsulate and reuse small named fragments within a template file.
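To make the syntax concrete before we walk through the use cases, here’s a minimal sketch of a template that defines a partial once and renders it twice. The partial’s contents are elided; the filter_controls name comes from the discussion below.

```htmldjango
{% partialdef filter_controls %}
  {# ... shared filter controls markup ... #}
{% endpartialdef %}

{% partial filter_controls %}
{# ... other page content ... #}
{% partial filter_controls %}
```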
Partials are sections of a template marked by the new {% partialdef %} and {% endpartialdef %} tags. They can be reused within the same template or rendered in isolation. Let’s look at examples for each use case in turn.

Reuse partials within the same template

The sketch above reuses a partial called filter_controls within the same template. It’s defined once at the top of the template, then used twice later on. Using a partial allows the template to avoid repetition without pushing the content into a separate include file. Actually, we can simplify this pattern further, by using the inline option on the partialdef tag ({% partialdef filter_controls inline %}), which causes the definition to also render in place. Reach for this pattern any time you find yourself repeating template code within the same template. Because partials can use variables, you can also use them to de-duplicate when rendering similar controls with different data.

Render partials in isolation

The below template defines a view_count partial that’s intended to be re-rendered in isolation. It uses the inline option, so when the whole template is rendered, the partial is included. The page uses htmx, via my django-htmx package, to periodically refresh the view count, through the hx-* attributes. The request from htmx goes to a dedicated view that re-renders the view_count partial.

```htmldjango
{% load django_htmx %}
{# ... page markup ... #}
{% partialdef view_count inline %}
  {# ... view count markup with hx-* attributes, elided ... #}
{% endpartialdef %}
{# ... #}
{% htmx_script %}
```
The relevant code for the two views could look like the sketch below. The initial video view renders the full template video.html. The video_view_count view renders just the view_count partial, by appending #view_count to the template name. This syntax is similar to how you’d reference an HTML fragment by its ID in a URL.

htmx was the main motivation for this feature, as promoted by htmx creator Carson Gross in a cross-framework review post. Using partials definitely helps maintain “Locality of behaviour” within your templates, easing authoring, debugging, and maintenance by avoiding template file sprawl.

Django’s support for template partials was initially developed by Carlton Gibson in the django-template-partials package, which remains available for older Django versions. The integration into Django itself was done in a Google Summer of Code project this year, worked on by student Farhan Ali and mentored by Carlton, in Ticket #36410. You can read more about the development process in Farhan’s retrospective blog post. Many thanks to Farhan for authoring, Carlton for mentoring, and Natalia Bidart, Nick Pope, and Sarah Boyce for reviewing!
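Here’s a minimal sketch of those two views; the model lookup details are illustrative:

```python
from django.shortcuts import get_object_or_404, render

from .models import Video  # illustrative model


def video(request, video_id):
    # Renders the full page, including the inline view_count partial.
    video = get_object_or_404(Video, id=video_id)
    return render(request, "video.html", {"video": video})


def video_view_count(request, video_id):
    # Renders only the partial, using the "template_name#partial_name" syntax.
    video = get_object_or_404(Video, id=video_id)
    return render(request, "video.html#view_count", {"video": video})
```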
The next headline feature we’re covering:

Django now includes a built-in Tasks framework for running code outside the HTTP request–response cycle. This enables offloading work, such as sending emails or processing data, to background workers.
Basically, there’s a new API for defining and enqueuing background tasks—very cool!
Background tasks are a way of running code outside of the request-response cycle. They’re a common requirement in web applications, used for sending emails, processing images, generating reports, and more.
Historically, Django has not provided any system for background tasks, and kind of ignored the problem space altogether. Developers have instead relied on third-party packages like Celery or Django Q2. While these systems are fine, they can be complex to set up and maintain, and often don’t “go with the grain” of Django.
The new Tasks framework fills this gap by providing an interface to define background tasks, which task runner packages can then integrate with. This common ground allows third-party Django packages to define tasks in a standard way, assuming you’ll be using a compatible task runner to execute them.
Define tasks with the new @task decorator:
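(A minimal sketch; the task body and names are illustrative.)

```python
from django.core.mail import send_mail
from django.tasks import task


@task()
def send_welcome_email(to_address):
    # Runs in a background worker, outside the request-response cycle.
    send_mail(
        subject="Welcome!",
        message="Thanks for signing up.",
        from_email="hello@example.com",
        recipient_list=[to_address],
    )
```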
…and enqueue them for background execution with the Task.enqueue() method, e.g. send_welcome_email.enqueue("adam@example.com").

At this time, Django does not include a production-ready task backend, only two that are suitable for development and testing:

- ImmediateBackend - runs tasks immediately, when they’re enqueued. Useful for development, where you want tasks to execute without running a worker process.
- DummyBackend - does nothing when tasks are enqueued, but allows them to be inspected later. Useful for tests, where you can assert that tasks were enqueued without actually running them.

For production use, you’ll need to use a third-party package that implements one, for which django-tasks, the reference implementation, is the primary option. It provides DatabaseBackend for storing tasks in your SQL database, a fine solution for many projects, avoiding extra infrastructure and allowing atomic task enqueuing within database transactions. We may see this backend merged into Django in due course, or at least become an official package, to help make Django “batteries included” for background tasks.

To use django-tasks’ DatabaseBackend today: first, install the package (pip install django-tasks). Second, add its two apps, "django_tasks" and "django_tasks.backends.database", to your INSTALLED_APPS setting. Third, configure DatabaseBackend as your tasks backend in the new TASKS setting. Fourth, run migrations to create the necessary database tables (./manage.py migrate). Finally, to run the task worker process, use the package’s db_worker management command (./manage.py db_worker). This process runs indefinitely, polling for tasks and executing them, logging events as it goes. You’ll want to run db_worker in production, and also in development if you want to test background task execution. (A consolidated sketch of these steps follows below.)

It’s been a long path to get the Tasks framework into Django, and I’m super excited to see it finally available in Django 6.0. Jake Howard started on the idea for Wagtail, a Django-powered CMS, back in 2021, as they have a need for common task definitions across their package ecosystem. He upgraded the idea to target Django itself in 2024, when he proposed DEP 0014. As a member of the Steering Council at the time, I had the pleasure of helping review and accept the DEP. Since then, Jake has been leading the implementation effort, building pieces first in the separate django-tasks package before preparing them for inclusion in Django itself. This step was done under Ticket #35859, with a pull request that took nearly a year to review and land. Thanks to Jake for his perseverance here, and to all reviewers: Andreas Nüßlein, Dave Gaeddert, Eric Holscher, Jacob Walls, Jake Howard, Kamal Mustafa, @rtr1, @tcely, Oliver Haas, Ran Benita, Raphael Gaschignard, and Sarah Boyce. Read more about this feature and story in Jake’s post celebrating when it was merged.
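Pulling those setup steps together, here’s what they might look like in practice. This is a sketch based on the steps above; the "default" alias and backend path follow the django-tasks reference implementation, so double-check its docs.

```python
# settings.py
INSTALLED_APPS = [
    # ...
    "django_tasks",
    "django_tasks.backends.database",
]

TASKS = {
    "default": {
        "BACKEND": "django_tasks.backends.database.DatabaseBackend",
    },
}
```

```console
$ pip install django-tasks
$ ./manage.py migrate
$ ./manage.py db_worker
```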
Our third headline feature:

Built-in support for the Content Security Policy (CSP) standard is now available, making it easier to protect web applications against content injection attacks such as cross-site scripting (XSS). CSP allows declaring trusted sources of content by giving browsers strict rules about which scripts, styles, images, or other resources can be loaded.
I’m really excited about this, because I’m a bit of a security nerd who’s been deploying CSP for client projects for years.
CSP is a security standard that can protect your site from cross-site scripting (XSS) and other code injection attacks. You set a content-security-policy header to declare which content sources are trusted for your site, and then browsers will block content from other sources. For example, you might declare that only scripts from your domain are allowed, so an attacker who manages to inject a script tag pointing to evil.com would be thwarted, as the browser would refuse to load it.
Previously, Django had no built-in support for CSP, and developers had to rely on building their own, or using a third-party package like the very popular django-csp. But this was a little bit inconvenient, as it meant that other third-party packages couldn’t reliably integrate with CSP, as there was no common API to do so.
The new CSP support provides all the core features that django-csp did, with a slightly tidier and more Djangoey API. To get started, first add ContentSecurityPolicyMiddleware to your MIDDLEWARE setting:
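A minimal sketch of that setting (the rest of the list is illustrative):

```python
MIDDLEWARE = [
    # ...
    "django.middleware.security.SecurityMiddleware",
    "django.middleware.csp.ContentSecurityPolicyMiddleware",
    # ...
]
```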
Place it next to SecurityMiddleware, as it similarly adds security-related headers to all responses. (You do have SecurityMiddleware enabled, right?)
Second, configure your CSP policy using the new settings:

- SECURE_CSP to configure the content-security-policy header, which is your actively enforced policy.
- SECURE_CSP_REPORT_ONLY to configure the content-security-policy-report-only header, which sets a non-enforced policy for which browsers report violations to a specified endpoint. This option is useful for testing and monitoring a policy before enforcing it.
For example, to adopt the nonce-based strict CSP recommended by web.dev, you could start with the following setting:
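A sketch of such a setting, using the report-only variant to start. The exact directive spelling here is my best reading of the new API, so check the CSP how-to in the Django docs:

```python
from django.utils.csp import CSP

SECURE_CSP_REPORT_ONLY = {
    "script-src": [CSP.NONCE, CSP.STRICT_DYNAMIC],
    "object-src": [CSP.NONE],
    "base-uri": [CSP.NONE],
}
```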
The CSP enum used above provides constants for CSP directives, to help avoid typos.
This policy is quite restrictive and will break most existing sites if deployed as-is, because it requires nonces, as covered next. That’s why the example shows starting with the report-only mode header, to help track down places that need fixing before enforcing the policy. You’d later change to setting the SECURE_CSP setting to enforce the policy.
Anyway, those are the two basic steps to set up the new CSP support!

A key part of the new feature is that nonce generation is now built in to Django, when using the CSP middleware. Nonces are a security feature in CSP that allow you to mark specific <script> and <style> tags as trusted with a nonce attribute. The nonce value is randomly generated per-request, and included in the CSP header. An attacker performing content injection couldn’t guess the nonce, so browsers can trust only those tags that include the correct nonce. Because nonce generation is now part of Django, third-party packages can depend on it for their <script> and <style> tags and they’ll continue to work if you adopt CSP with nonces. Nonces are the recommended way to use CSP today, avoiding problems with previous allow-list based approaches. That’s why the above recommended policy enables them.

To adopt a nonce-based policy, you’ll need to annotate your <script> and <style> tags with the nonce value through the following steps. First, add the new csp template context processor ("django.template.context_processors.csp") to your TEMPLATES setting. Second, annotate your <script> and <style> tags with nonce="{{ csp_nonce }}". This can be tedious and error-prone, hence using the report-only mode first to monitor violations might be useful, especially on larger projects.

Anyway, deploying CSP right would be another post in itself, or even a book chapter, so we’ll stop here for now. For more info, check out that web.dev article and the MDN CSP guide.

CSP itself was proposed for browsers way back in 2004, and was first implemented in Mozilla Firefox version 4, released 2011. That same year, Django Ticket #15727 was opened, proposing adding CSP support to Django. Mozilla created django-csp from 2010, before the first public availability of CSP, using it on their own Django-powered sites. The first comment on Ticket #15727 pointed to django-csp, and the community basically rolled with it as the de facto solution.

Over the years, CSP itself evolved, as did django-csp, with Rob Hudson ending up as its maintainer. Focusing on the package motivated Rob to finally get CSP into Django itself. He made a draft PR and posted on Ticket #15727 in 2024, which I enjoyed helping review. He iterated on the PR over the next 13 months until it was finally merged for Django 6.0. Thanks to Rob for his heroic dedication here, and to all reviewers: Benjamin Balder Bach, Carlton Gibson, Collin Anderson, David Sanders, David Smith, Florian Apolloner, Harro van der Klauw, Jake Howard, Natalia Bidart, Paolo Melchiorre, Sarah Boyce, and Sébastien Corbin.
We’re now out of the headline features and onto the “minor” changes, starting with this deprecation related to the above email changes:

django.core.mail APIs now require keyword arguments for less commonly used parameters. Using positional arguments for these now emits a deprecation warning and will raise a TypeError when the deprecation period ends:

- All optional parameters (fail_silently and later) must be passed as keyword arguments to get_connection(), mail_admins(), mail_managers(), send_mail(), and send_mass_mail().
- All parameters must be passed as keyword arguments when creating an EmailMessage or EmailMultiAlternatives instance, except for the first four (subject, body, from_email, and to), which may still be passed either as positional or keyword arguments.
Previously, Django would let you pass all parameters positionally, which gets a bit silly and hard to read with long parameter lists, like:

```python
from django.core.mail import send_mail

send_mail(
    "🐼 Panda of the week",
    "This week's panda is Po Ping, sha-sha booey!",
    "updates@example.com",
    ["adam@example.com"],
    True,
)
```
The final True doesn’t provide any clue what it means without looking up the function signature. Now, using positional arguments for those less-commonly-used parameters raises a deprecation warning, nudging you to write:

```python
from django.core.mail import send_mail

send_mail(
    subject="🐼 Panda of the week",
    message="This week's panda is Po Ping, sha-sha booey!",
    from_email="updates@example.com",
    recipient_list=["adam@example.com"],
    fail_silently=True,
)
```
This change is appreciated for API clarity, and Django is generally moving towards using keyword-only arguments more often. django-upgrade can automatically fix this one for you, via its mail_api_kwargs fixer.
Thanks to Mike Edmunds, again, for making this improvement in Ticket #36163.
Next up:

Common utilities, such as django.conf.settings, are now automatically imported to the shell by default.
One of the headline features back in Django 5.2 was automatic model imports in the shell, making ./manage.py shell import all of your models automatically. Building on that DX boost, Django 6.0 now also imports other common utilities. We can find the full list by running ./manage.py shell with -v 2:

$ ./manage.py shell -v 2
6 objects imported automatically:
from django.conf import settings
from django.db import connection, models, reset_queries
from django.db.models import functions
from django.utils import timezone
So that’s:

settings, useful for checking your runtime configuration.

connection and reset_queries(), great for checking the executed queries:

In [1]: Book.objects.select_related("author")
Out[1]: <QuerySet [...]>

In [2]: connection.queries
Out[2]: [{'sql': 'SELECT "example_book"."id", "example_book"."title", "example_book"."author_id", "example_author"."id", "example_author"."name" FROM "example_book" INNER JOIN "example_author" ON ("example_book"."author_id" = "example_author"."id") LIMIT 21', 'time': '0.000'}]

models and functions, useful for advanced ORM work.

timezone, useful for using Django’s timezone-aware date and time utilities.
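As a quick sketch of those last imports in action (the values shown are illustrative):

In [3]: settings.DEBUG
Out[3]: True

In [4]: Book.objects.annotate(upper_title=functions.Upper("title"))
Out[4]: <QuerySet [...]>

In [5]: timezone.now()
Out[5]: datetime.datetime(...)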
It remains possible to extend the automatic imports with whatever you’d like, as documented on the How to customize the shell command documentation page.
Salvo Polizzi contributed the original automatic shell imports feature in Django 5.2. He has since returned to offer these extra imports for Django 6.0, in Ticket #35680. Thanks to everyone who contributed to the forum discussion agreeing on which imports to add, and to Natalia Bidart and Sarah Boyce for reviewing!
Now let’s discuss a series of ORM improvements, starting with this big one:

GeneratedFields and fields assigned expressions are now refreshed from the database after save() on backends that support the RETURNING clause (SQLite, PostgreSQL, and Oracle). On backends that don’t support it (MySQL and MariaDB), the fields are marked as deferred to trigger a refresh on subsequent accesses.
Django models support having the database generate field values for you in three cases:

The db_default field option, which lets the database generate the default value when creating an instance.

The GeneratedField field type, which is always computed by the database based on other fields in the same instance.

Assigning a database expression, such as Now(), to a regular field before saving.
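Here’s a sketch of a model using all three (the field names and expressions are illustrative):

from django.db import models
from django.db.models import F
from django.db.models.functions import Now

class Video(models.Model):
    # 1. db_default: the database fills this in on INSERT
    created = models.DateTimeField(db_default=Now())
    duration_seconds = models.IntegerField(default=0)
    # 2. GeneratedField: always computed by the database
    duration_minutes = models.GeneratedField(
        expression=F("duration_seconds") / 60.0,
        output_field=models.FloatField(),
        db_persist=True,
    )
    # 3. A regular field that can be assigned an expression like Now() before save()
    last_updated = models.DateTimeField(null=True)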
Previously, only the first method, using db_default, would refresh the field value from the database after saving. The other two methods would leave you with only the old value or the expression object, meaning you’d need to call Model.refresh_from_db() to get any updated value if necessary. This was hard to remember and cost an extra database query.
Now Django takes advantage of the RETURNING SQL clause to save the model instance and fetch updated dynamic field values in a single query, on backends that support it (SQLite, PostgreSQL, and Oracle). A save() call may now issue a query like:
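For the Video example below, that query might look something like this (the exact SQL varies by backend):

UPDATE "example_video"
SET "last_updated" = CURRENT_TIMESTAMP
WHERE "example_video"."id" = 1
RETURNING "example_video"."last_updated"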
Django puts the return value into the model field, so you can read it immediately after saving:

from django.db.models.functions import Now
from example.models import Video

video = Video.objects.get(id=1)
video.last_updated = Now()
video.save()
print(video.last_updated) # Updated value from the database
On backends that don’t support RETURNING (MySQL and MariaDB), Django now marks the dynamic fields as deferred after saving. That way, the later access, as in the above example, will automatically call Model.refresh_from_db(). This ensures that you always read the updated value, even if it costs an extra query.

This feature was proposed in Ticket #27222 way back in 2016, by Anssi Kääriäinen. It sat dormant for most of the nine years since, but ORM boss Simon Charette picked it up earlier this year, found an implementation, and pushed it through to completion. Thanks to Simon for continuing to push the ORM forward, and to all reviewers: David Sanders, Jacob Walls, Mariusz Felisiak, nessita, Paolo Melchiorre, Simon Charette, and Tim Graham.
The next ORM change:

The new StringAgg aggregate returns the input values concatenated into a string, separated by the delimiter string. This aggregate was previously supported only for PostgreSQL.
This aggregate is often used for making comma-separated lists of related items, among other things. Previously, it was only supported on PostgreSQL, as part of django.contrib.postgres:

from django.contrib.postgres.aggregates import StringAgg
from example.models import Video

videos = Video.objects.annotate(
    chapter_ids=StringAgg("chapter", delimiter=","),
)
for video in videos:
    print(f"Video {video.id} has chapters: {video.chapter_ids}")
…which might give you output like this (chapter IDs illustrative):

Video 1 has chapters: 1,2,3
Video 2 has chapters: 4,5
Now this aggregate is available on all database backends supported by Django, imported from django.db.models:

from django.db.models import StringAgg, Value
from example.models import Video

videos = Video.objects.annotate(
    chapter_ids=StringAgg("chapter", delimiter=Value(",")),
)
for video in videos:
    print(f"Video {video.id} has chapters: {video.chapter_ids}")
Note the delimiter argument now requires a Value() expression wrapper for literal strings, as above. This change allows you to use database functions or fields as the delimiter if desired.
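For instance, a sketch pulling the delimiter from a hypothetical separator field on the model:

from django.db.models import F, StringAgg

videos = Video.objects.annotate(
    chapter_ids=StringAgg("chapter", delimiter=F("separator")),
)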
While most Django projects stick to PostgreSQL, having this aggregate available on all backends is a nice improvement for cross-database compatibility, and it means third-party packages can use it without affecting their database support.

The PostgreSQL-specific StringAgg was added way back in Django 1.9 (2015) by Andriy Sokolovskiy, in Ticket #24301. In Ticket #35444, Chris Muthig proposed adding the Aggregate.order_by option, something used by StringAgg to specify the ordering of concatenated elements, and as a side effect this made it possible to generalize StringAgg to all backends.

Thanks to Chris for proposing and implementing this change, and to all reviewers: Paolo Melchiorre, Sarah Boyce, and Simon Charette.
Next up: the default values of the DEFAULT_AUTO_FIELD setting and the AppConfig.default_auto_field attribute have changed to BigAutoField.
Django 3.2 (2021) introduced the DEFAULT_AUTO_FIELD setting for changing the default primary key type used in models. Django uses this setting to add a primary key field called id to models that don’t explicitly define a primary key field. For example, if you define a model like this:
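(A minimal sketch; the max_length is illustrative:)

from django.db import models

class Book(models.Model):
    title = models.CharField(max_length=100)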
…then it will have two fields: id and title, where id uses the type defined by DEFAULT_AUTO_FIELD.
The setting can also be overridden on a per-app basis by defining AppConfig.default_auto_field in the app’s apps.py file:
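(A sketch, with an illustrative app name:)

# example/apps.py
from django.apps import AppConfig

class ExampleConfig(AppConfig):
    default_auto_field = "django.db.models.BigAutoField"
    name = "example"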
A key motivation for adding the setting was to allow projects to switch from AutoField (a 32-bit integer) to BigAutoField (a 64-bit integer) for primary keys, without needing changes to every model. AutoField can store values up to about 2.1 billion, which sounds large but it becomes easy to hit at scale. BigAutoField can store values up to about 9.2 quintillion, which is “more than enough” for every practical purpose.
If a model using AutoField hits its maximum value, it can no longer accept new rows, a problem known as primary key exhaustion. The table is effectively blocked, requiring an urgent fix to switch the model from AutoField to BigAutoField via a locking database migration on a large table. For a great watch on how Kraken is fixing this problem, see Tim Bell’s DjangoCon Europe 2025 talk, detailing some clever techniques to proactively migrate large tables with minimal downtime.
To stop this problem arising for new projects, Django 3.2 made new projects created with startproject set DEFAULT_AUTO_FIELD to BigAutoField, and new apps created with startapp set their AppConfig.default_auto_field to BigAutoField. It also added a system check to ensure that projects set DEFAULT_AUTO_FIELD explicitly, to ensure users were aware of the feature and could make an informed choice.
Now Django 6.0 changes the actual default values of the setting and app config attribute to BigAutoField. Projects using BigAutoField can remove the setting:
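That is, a line like this in settings.py can now be deleted:

DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"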
The default startproject and startapp templates also no longer set these values. This change reduces the amount of boilerplate in new projects, and the problem of primary key exhaustion can fade into history, becoming something that most Django users no longer need to think about.

The addition of DEFAULT_AUTO_FIELD in Django 3.2 was proposed by Caio Ariede and implemented by Tom Forbes, in Ticket #31007. This new change in Django 6.0 was proposed and implemented by ex-Fellow Tim Graham, in Ticket #36564. Thanks to Tim for spotting that this cleanup was now possible, and to Jacob Walls and Clifford Gama for reviewing!
Moving on to templates, let’s start with this nice little addition:

The new variable forloop.length is now available within a for loop.
This small extension makes it possible to write a template loop like this:
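(A minimal sketch; the geese variable matches the example below:)

{% for goose in geese %}
    Goose {{ forloop.counter }} of {{ forloop.length }}
{% endfor %}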
Previously, you’d need to refer to the length in another way, like {{ geese|length }}, which is a bit less flexible.
Thanks to Jonathan Ströbele for contributing this idea and implementation in Ticket #36186, and to David Smith, Paolo Melchiorre, and Sarah Boyce for reviewing.
There are two extensions to the querystring template tag, which was added in Django 5.1 to help with building links that modify the current request’s query parameters.

First: The querystring template tag now consistently prefixes the returned query string with a ?, ensuring reliable link generation behavior.

This small change improves how the tag behaves when an empty mapping of query parameters is provided. Say you had a template like this (the link text is illustrative):

<a href="{% querystring params %}">Reset filters</a>

…where params is a dictionary that may sometimes be empty. Previously, if params was empty, the output would be:

<a href="">Reset filters</a>

Browsers treat this as a link to the same URL including the query parameters, so it would not clear the query parameters as intended. Now, with this change, the output will be:

<a href="?">Reset filters</a>

Browsers treat ? as a link to the same URL without any query parameters, clearing them as the user would expect. Thanks to Django Fellow Sarah Boyce for spotting this improvement and implementing the fix in Ticket #36268, and to Django Fellow Natalia Bidart for reviewing!

Second: The querystring template tag now accepts multiple positional arguments, which must be mappings, such as QueryDict or dict.

This enhancement allows the tag to merge multiple sources of query parameters when building the output. For example, you might have a template like this (the variable names are illustrative):

<a href="{% querystring search_params super_search_params %}">Super search</a>

…where super_search_params is a dictionary of extra parameters to add to make the current search “super”. The tag merges the two mappings, with later mappings taking precedence for duplicate keys. Thanks again to Sarah Boyce for proposing this improvement in Ticket #35529, to Giannis Terzopoulos for implementing it, and to Natalia Bidart, Sarah Boyce, and Tom Carrick for reviewing!
...
Read the original on adamj.eu »