10 interesting stories served every morning and every evening.




1 2,654 shares, 142 trendiness

Hacker News

...

Read the original on dosaygo-studio.github.io »

2 666 shares, 41 trendiness

10 Years of Let's Encrypt Certificates

On September 14, 2015, our first publicly-trusted certificate went live. We were proud that we had issued a certificate that a significant majority of clients could accept, and had done it using automated software. Of course, in retrospect this was just the first of billions of certificates. Today, Let’s Encrypt is the largest certificate authority in the world in terms of certificates issued, the ACME protocol we helped create and standardize is integrated throughout the server ecosystem, and we’ve become a household name among system administrators. We’re closing in on protecting one billion web sites.

In 2023, we marked the tenth anniversary of the creation of our nonprofit, Internet Security Research Group, which continues to host Let’s Encrypt and other public benefit infrastructure projects. Now, in honor of the tenth anniversary of Let’s Encrypt’s public certificate issuance and the start of the general availability of our services, we’re looking back at a few milestones and factors that contributed to our success.

A conspicuous part of Let’s Encrypt’s history is how thoroughly our vision of scalability through automation has succeeded.

In March 2016, we issued our one millionth certificate. Just two years later, in September 2018, we were issuing a million certificates every day. In 2020 we reached a billion total certificates issued, and as of late 2025 we’re frequently issuing ten million certificates per day. We’re now on track to reach a billion active sites, probably sometime in the coming year. (The “certificates issued” and “certificates active” metrics are quite different because our certificates regularly expire and get replaced.)
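Those two metrics are linked by certificate lifetime: at steady state, the active count is roughly the issuance rate times how long each certificate lives. A back-of-the-envelope sketch in Python, assuming the standard 90-day Let’s Encrypt certificate lifetime (the lifetime figure is my assumption, not stated in this post):

```python
# Rough steady-state estimate: active certificates ≈ issuance rate × lifetime.
# The 90-day lifetime is an assumed figure, not taken from this post.
issued_per_day = 10_000_000   # "ten million certificates per day" (late 2025)
lifetime_days = 90            # assumed standard certificate lifetime

active_estimate = issued_per_day * lifetime_days
print(f"~{active_estimate:,} active certificates at steady state")
```

That lands near the billion-active mark described above, while total issuance over a decade is far larger, because every expiring certificate gets replaced.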

The steady growth of our issuance volume shows the strength of our architecture, the validity of our vision, and the great efforts of our engineering team to scale up our own infrastructure. It also reminds us of the confidence that the Internet community is placing in us, making the use of a Let’s Encrypt certificate a normal and, dare we say, boring choice. But I often point out that our ever-growing issuance volumes are only an indirect measure of value. What ultimately matters is improving the security of people’s use of the web, which, as far as Let’s Encrypt’s contribution goes, is not measured by issuance volumes so much as by the prevalence of HTTPS encryption. For that reason, we’ve always emphasized the graph of the percentage of encrypted connections that web users make (here represented by statistics from Firefox).

(These graphs are snapshots as of the date of this post; a dynamically updated version is found on our stats page.) Our biggest goal was to make a concrete, measurable security impact on the web by getting HTTPS connection prevalence to increase—and it’s worked. It took five years or so to get the global percentage from below 30% to around 80%, where it’s remained ever since. In the U.S. it has been close to 95% for a while now.

A good amount of the remaining unencrypted traffic probably comes from internal or private organizational sites (intranets), but other than that we don’t know much about it; this would be a great topic for Internet security researchers to look into.

We believe our present growth in certificate issuance volume is essentially coming from growth in the web as a whole. In other words, if we protect 20% more sites over some time period, it’s because the web itself grew by 20%.

We’ve blogged about most of Let’s Encrypt’s most significant milestones as they’ve happened, and I invite everyone in our community to look over those blog posts to see how far we’ve come. We’ve also published annual reports for the past seven years, which offer elegant and concise summaries of our work.

As I personally think back on the past decade, just a few of the many events that come to mind include:

Telling the world about the project in November 2014

Our one millionth certificate in March 2016, then our 100 millionth certificate in June 2017, and then our billionth certificate in 2020

Along the way, first issuing one million certificates in a single day (in September 2018), driven in significant part by the Squarespace and Shopify Let’s Encrypt integrations

Issuing more than ten million certificates in a day for the first time, just at the end of September 2025

We’ve also periodically rolled out new features such as internationalized domain name support (2016), wildcard support (2018), and short-lived and IP address (2025) certificates. We’re always working on more new features for the future.

There are many technical milestones, like our database server upgrades in 2021, where we found we needed a serious server infrastructure boost because of the tremendous volumes of data we were dealing with. Similarly, our original infrastructure used Gigabit Ethernet internally, and, with the growth of our issuance volume and logging, we found that our Gigabit Ethernet network eventually became too slow to synchronize database instances! (Today we’re using 25-gig Ethernet.) More recently, we’ve experimented with architectural upgrades to our ever-growing Certificate Transparency logs, and decided to go ahead with deploying those upgrades—to help us not just keep up with, but get ahead of, our continuing growth.

These kinds of growing pains, and our successful responses to them, are nice to remember because they point to the inexorable increase in demands on our infrastructure as we’ve become a more and more essential part of the Internet. I’m proud of our technical teams, which have handled those increased demands capably and professionally.

I also recall the ongoing work involved in making sure our certificates would be as widely accepted as possible, which has meant managing the original cross-signature from IdenTrust, and subsequently creating and propagating our own root CA certificates. This process has required PKI engineering, key ceremonies, root program interactions, documentation, and community support associated with certificate migrations. Most users never have reason to look behind the scenes at our chains of trust, but our engineers update them as root and intermediate certificates are replaced. We’ve engaged at the CA/B Forum, IETF, and in other venues with the browser root programs to help shape the web PKI as a technical leader.

As I wrote in 2020, our ideal of complete automation of the web PKI aims at a world where most site owners wouldn’t even need to think about certificates at all. We continue to get closer and closer to that world, which creates a risk that people will take us and our services for granted, as the details of certificate renewal occupy less of site operators’ mental energy. As I said at the time,

When your strategy as a nonprofit is to get out of the way, to offer services that people don’t need to think about, you’re running a real risk that you’ll eventually be taken for granted. There is a tension between wanting your work to be invisible and the need for recognition of its value. If people aren’t aware of how valuable our services are then we may not get the support we need to continue providing them.

I’m also grateful to our communications and fundraising staff, who help make clear what we’re doing every day and how we’re making the Internet safer.

Our community continually recognizes our work in tangible ways by using our certificates—now by the tens of millions per day—and by sponsoring us.

We were honored to be recognized with awards including the 2022 Levchin Prize for Real-World Cryptography and the 2019 O’Reilly Open Source Award. In October of this year some of the individuals who got Let’s Encrypt started were honored to receive the IEEE Cybersecurity Award for Practice.

We documented the history, design, and goals of the project in an academic paper at the ACM CCS ’19 conference, which has subsequently been cited hundreds of times in academic research.

Ten years later, I’m still deeply grateful to the five initial sponsors that got Let’s Encrypt off the ground: Mozilla, EFF, Cisco, Akamai, and IdenTrust. When they committed significant resources to the project, it was just an ambitious idea. They saw the potential and believed in our team, and because of that we were able to build the service we operate today.

I’d like to particularly recognize IdenTrust, a PKI company that worked as a partner from the outset and enabled us to issue publicly-trusted certificates via a cross-signature from one of their roots. We would simply not have been able to launch our publicly-trusted certificate service without them. Back when I first told them that we were starting a new nonprofit certificate authority that would give away millions of certificates for free, there wasn’t any precedent for this arrangement, and there wasn’t necessarily much reason for IdenTrust to pay attention to our proposal. But the company really understood what we were trying to do and was willing to engage from the beginning. Ultimately, IdenTrust’s support made our original issuance model a reality.

I’m proud of what we have achieved with our staff, partners, and donors over the past ten years. I hope to be even more proud of the next ten years, as we use our strong footing to continue to pursue our mission to protect Internet users by lowering monetary, technological, and informational barriers to a more secure and privacy-respecting Internet.

Let’s Encrypt is a project of the Internet Security Research Group, a 501(c)(3) nonprofit. You can help us make the next ten years great as well by donating or becoming a sponsor.

...

Read the original on letsencrypt.org »

3 612 shares, 28 trendiness

Introducing: Devstral 2 and Mistral Vibe CLI.

Today, we’re releasing Devstral 2—our next-generation coding model family, available in two sizes: Devstral 2 (123B) and Devstral Small 2 (24B). Devstral 2 ships under a modified MIT license, while Devstral Small 2 uses Apache 2.0. Both are open-source and permissively licensed to accelerate distributed intelligence.

Devstral 2 is currently free to use via our API.

We are also introducing Mistral Vibe, a native CLI built for Devstral that enables end-to-end code automation.

Devstral 2: SOTA open model for code agents, with a fraction of the parameters of its competitors, achieving 72.2% on SWE-bench Verified.

Up to 7x more cost-efficient than Claude Sonnet at real-world tasks.

Devstral Small 2: 24B-parameter model available via API or deployable locally on consumer hardware.

Devstral 2 is a 123B-parameter dense transformer supporting a 256K context window. It reaches 72.2% on SWE-bench Verified—establishing it as one of the best open-weight models while remaining highly cost efficient. Released under a modified MIT license, Devstral sets the open state of the art for code agents.

Devstral Small 2 scores 68.0% on SWE-bench Verified, placing firmly among models up to five times its size while being capable of running locally on consumer hardware.

Devstral 2 (123B) and Devstral Small 2 (24B) are 5x and 28x smaller than DeepSeek V3.2, and 8x and 41x smaller than Kimi K2—proving that compact models can match or exceed the performance of much larger competitors. Their reduced size makes deployment practical on limited hardware, lowering barriers for developers, small businesses, and hobbyists.

Devstral 2 supports exploring codebases and orchestrating changes across multiple files while maintaining architecture-level context. It tracks framework dependencies, detects failures, and retries with corrections—solving challenges like bug fixing and modernizing legacy systems.

The model can be fine-tuned to prioritize specific languages or optimize for large enterprise codebases.

We evaluated Devstral 2 against DeepSeek V3.2 and Claude Sonnet 4.5 using human evaluations conducted by an independent annotation provider, with tasks scaffolded through Cline. Devstral 2 shows a clear advantage over DeepSeek V3.2, with a 42.8% win rate versus a 28.6% loss rate. However, Claude Sonnet 4.5 remains significantly preferred, indicating that a gap with closed-source models persists.

“Devstral 2 is at the frontier of open-source coding models. In Cline, it delivers a tool-calling success rate on par with the best closed models; it’s a remarkably smooth driver. This is a massive contribution to the open-source ecosystem.” — Cline.

“Devstral 2 was one of our most successful stealth launches yet, surpassing 17B tokens in the first 24 hours. Mistral AI is moving at Kilo Speed with a cost-efficient model that truly works at scale.” — Kilo Code.

Devstral Small 2, a 24B-parameter model with the same 256K context window and released under Apache 2.0, brings these capabilities to a compact, locally deployable form. Its size enables fast inference, tight feedback loops, and easy customization—with a fully private, on-device runtime. It also supports image inputs and can power multimodal agents.

Mistral Vibe CLI is an open-source command-line coding assistant powered by Devstral. It explores, modifies, and executes changes across your codebase using natural language—in your terminal or integrated into your preferred IDE via the Agent Communication Protocol. It is released under the Apache 2.0 license.

Vibe CLI provides an interactive chat interface with tools for file manipulation, code searching, version control, and command execution. Key features:

Project-aware context: Automatically scans your file structure and Git status to provide relevant context

Smart references: Reference files with @ autocomplete, execute shell commands with !, and use slash commands for configuration changes

Multi-file orchestration: Understands your entire codebase—not just the file you’re editing—enabling architecture-level reasoning that can halve your PR cycle time

You can run Vibe CLI programmatically for scripting, toggle auto-approval for tool execution, configure local models and providers through a simple config.toml, and control tool permissions to match your workflow.

Devstral 2 is currently offered free via our API. After the free period, API pricing will be $0.40/$2.00 per million tokens (input/output) for Devstral 2 and $0.10/$0.30 for Devstral Small 2.
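As a worked example of those list prices (the token counts below are made up for illustration):

```python
# Published API prices, USD per million tokens: (input, output).
PRICES = {
    "Devstral 2": (0.40, 2.00),
    "Devstral Small 2": (0.10, 0.30),
}

def job_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single job at the listed per-token prices."""
    price_in, price_out = PRICES[model]
    cost = (input_tokens * price_in + output_tokens * price_out) / 1_000_000
    return round(cost, 6)

# A hypothetical agent session: 5M input tokens, 1M output tokens.
print(job_cost("Devstral 2", 5_000_000, 1_000_000))        # 4.0
print(job_cost("Devstral Small 2", 5_000_000, 1_000_000))  # 0.8
```

The same session is five times cheaper on the small model, which is the trade-off the two sizes are meant to offer.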

We’ve partnered with leading open agent tools Kilo Code and Cline to bring Devstral 2 to where you already build.

Mistral Vibe CLI is available as an extension in Zed, so you can use it directly inside your IDE.

Devstral 2 is optimized for data center GPUs and requires a minimum of 4 H100-class GPUs for deployment. You can try it today on build.nvidia.com. Devstral Small 2 is built for single-GPU operation and runs across a broad range of NVIDIA systems, including DGX Spark and GeForce RTX. NVIDIA NIM support will be available soon.

Devstral Small also runs on consumer-grade GPUs as well as CPU-only configurations, with no dedicated GPU required.

For optimal performance, we recommend a temperature of 0.2 and following the best practices defined for Mistral Vibe CLI.
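In practice that recommendation just means setting the temperature field in your request. A minimal sketch of assembling such a request payload in Python; the model identifier "devstral-2" and the exact payload shape are my assumptions for illustration, not documented API details:

```python
import json

def build_request(prompt: str) -> dict:
    """Assemble a chat-style request honoring the recommended temperature of 0.2."""
    return {
        "model": "devstral-2",  # hypothetical model id; check the API docs
        "temperature": 0.2,     # recommended sampling temperature
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Fix the failing test in utils/parser.py")
print(json.dumps(payload, indent=2))
```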

We’re excited to see what you will build with Devstral 2, Devstral Small 2, and Vibe CLI!

Share your projects, questions, or discoveries with us on X/Twitter, Discord, or GitHub.

If you’re interested in shaping open-source research and building world-class interfaces that bring truly open, frontier AI to users, we welcome you to apply to join our team.

...

Read the original on mistral.ai »

4 597 shares, 30 trendiness

Bruno Simon

My name is Bruno Simon, and I’m a creative developer (mostly for the web).

This is my portfolio. Please drive around to learn more about me and discover the many secrets of this world.


Thank you for visiting my portfolio!

If you are curious about the stack and how I built it, here’s everything you need to know.

Three.js is the library I’m using to render this 3D world.

It was created by mr.doob (X, GitHub) and followed by hundreds of awesome developers, one of whom is Sunag (X, GitHub), who added TSL, enabling the use of both WebGL and WebGPU and making this portfolio possible.

If you want to learn Three.js, I’ve got you covered with this huge course.

It contains everything you need to start building awesome stuff with Three.js (and much more).

I’ve been making devlogs since the very start of this portfolio, and you can find them on my YouTube channel.

Even though the portfolio is out, I’m still working on the last videos so that the series is complete.

The code is available on GitHub under an MIT license. Even the Blender files are there, so have fun!

For security reasons, I’m not sharing the server code, but the portfolio works without it.

The music you hear was made especially for this portfolio by the awesome Kounine (Linktree).

The tracks are now under a CC0 license, meaning you can do whatever you want with them!

Download them here.


Come hang out with the community, show us your projects, and ask us anything.

Contact me directly.

I have to warn you: I try to answer everyone, but it might take a while.

...

Read the original on bruno-simon.com »

5 550 shares, 74 trendiness

The (successful) end of the kernel Rust experiment

The topic of the Rust experiment was just discussed at the annual Maintainers Summit. The consensus among the assembled developers is that Rust in the kernel is no longer experimental—it is now a core part of the kernel and is here to stay. So the “experimental” tag will be coming off. Congratulations are in order for all of the Rust for Linux team.

(Stay tuned for details in our Maintainers Summit coverage.)

Copyright © 2025, Eklektix, Inc.

Comments and public postings are copyrighted by their creators.

Linux is a registered trademark of Linus Torvalds

...

Read the original on lwn.net »

6 546 shares, 30 trendiness

PeerTube

PeerTube is a tool for hosting, managing, and sharing videos or live streams.

The following repositories were submitted by the solution and included in our evaluation; any repositories, add-ons, or features not included here were not reviewed by us. Notable deployments include the French Ministry of National Education (~100K videos), Italy’s National Research Council, a few French alternative media outlets, the Weißensee Kunsthochschule and the Universität der Künste in Berlin, a few universities worldwide, the Blender and Debian projects, and various activist groups. (This information is self-reported and updated annually.) Learn how this product has met the requirements of the DPG Standard by exploring the indicators below. Ricardo Torres (L2 Reviewer) submitted their review of PeerTube (152) and found it to be a DPG.

...

Read the original on www.digitalpublicgoods.net »

7 496 shares, 18 trendiness

External Memory For Your Brain

Catch your best ideas be­fore they slip through your fin­gers

Do you ever have flashes of insight or an idea worth remembering? This happens to me 5-10 times every day. If I don’t write down the thought immediately, it slips out of my mind. Worst of all, I remember that I’ve forgotten something and spend the next 10 minutes trying to remember what it is. So I invented external memory for my brain.

Introducing Pebble Index 01 - a small ring with a button and microphone. Hold the button, whisper your thought, and it’s sent to your phone. It’s added to your notes, set as a reminder, or saved for later review.

Index 01 is designed to become muscle memory, since it’s always with you. It’s private by design (no recording until you press the button) and requires no internet connection or paid subscription. It’s as small as a wedding band and comes in 3 colours. It’s made from durable stainless steel and is water-resistant. Like all Pebble products, it’s extremely customizable and built with open source software.

Here’s the best part: the battery lasts for years. You never need to charge it.

Pre-order today for $75. After worldwide shipping begins in March 2026, the price will go up to $99.

Now that I’ve worn my Index 01 for several months, I can safely say that it has changed my life - just like with Pebble, I couldn’t go back to a world without this. There are so many situations each day where my hands are full (while biking or driving, washing dishes, wrangling my kids, etc.) and I need to remember something. A random sampling of my recent recordings:

Set a timer for 3pm to go pick up the kids

Remind me to phone the pharmacy at 11am

Peter is coming by tomorrow at 11:30am, add that to my calendar

Before, I would take my phone out of my pocket to jot these down, but I couldn’t always do that (e.g., while bicycling). I also wanted to start using my phone less, especially in front of my kids.

Initially, we experimented by building this as an app on Pebble, since it has a mic and I’m always wearing one. But I realized quickly that this was suboptimal - it required me to use my other hand to press the button to start recording (lift-to-wake gestures and wake words are too unreliable). This was tough to use while bicycling or carrying stuff.

Then a genius electrical engineer friend of mine came up with an idea to fit everything into a tiny ring. It is the perfect form factor! Honestly, I’m still amazed that it all fits.

The design needed to satisfy several critical conditions:

Must work reliably 100% of the time. If it didn’t work or failed to record a thought, I knew I would take it off and revert back to my old habit of just forgetting things.

It had to have a physical press-button, with a satisfying click-feel. I want to know for sure if the button is pressed and my thought is captured.

Long battery life - every time you take something off to charge, there’s a chance you’ll forget to put it back on.

Must be privacy-preserving. These are your inner thoughts. All recordings must be processed and stored on your phone. Only record when the button is pressed.

It had to be as small as a wedding band. Since it’s worn on the index finger, if it were too large or bulky, it would hit your phone while you held it in your hand.

Water resistance - must be able to wash hands, shower, and get wet.

We’ve been working on this for a while, testing new versions and making tweaks. We’re really excited to get this out into the world.

Here are a few of my favourite things about Index 01:

It does one thing really well - it helps me remember things.

It’s discreet. It’s not distracting. It doesn’t take you out of the moment.

There’s no AI friend persona, and it’s not always recording.

It’s inexpensive. We hope you try it and see if you like it as well!

Available in 3 colours and 8 sizes

You can pre-order now and pick your size/colour later, before your ring ships.

Cost and availability: Pre-order price is $75, rising to $99 later. Ships worldwide, beginning in March.

Works with iPhone and Android: We overcame Apple’s best efforts to make life terrible for 3rd-party accessory makers and have Index 01 working well on iOS and Android.

Extremely private and secure: Your thoughts are processed by open source speech-to-text (STT) and AI models locally on your phone. You can read the code and see exactly how it works - our Pebble mobile app is open source. Higher-quality STT is available through an optional cloud service.

No charging: The battery lasts for years of average use. After the end of its life, send your ring back to us for recycling.

On-ring storage: Recording works even if your phone is out of range. Up to 5 minutes of audio can be stored on-ring, then synced later.

No speaker or vibrating motor: This is an input device only. There is an RGB LED, but it’s rarely used (to save battery life and to reduce distraction).

Works great with Pebble or other smartwatches: After recording, the thought will appear on your watch, and you can check that it’s correct. You can ask questions like ‘What’s the weather today?’ and see the answer on your watch.

Raw audio playback: Very helpful if STT doesn’t work perfectly due to wind or loud background noises.

Actions: While the primary task is remembering things for you, you can also ask it to do things like ‘Send a Beeper message to my wife - running late’ or answer simple questions that could be answered by searching the web. You can configure button clicks to control your music - I love using this to play/pause or skip tracks. You can also configure where to save your notes and reminders (I have it set to add to Notion).

Customizable and hackable: Configure single/double button clicks to control whatever you want (take a photo, turn on lights, Tasker, etc.). Add your own voice actions via MCP. Or route the audio recordings directly to your own app or server!

99+ languages: Speech-to-text and the local LLM support over 99 languages! Naturally, the quality of each may vary.

Let me be very clear - Index 01 is designed at its core to be a device that helps you remember things. We want it to be 100% reliable at its primary task. But we’re leaving the side door open for folks to customize and build new interactions and actions.

Here’s how I’m thinking about it - a single click-hold + voice input will be routed to the primary memory processing path. Double-click-hold + voice input would be routed to a more general-purpose voice agent (think ChatGPT with web search). Responses from the agent would be presented on Pebble (e.g. ‘What’s the weather tomorrow?’, ‘When’s the next northbound Caltrain?’) or other smartwatches (as a notification). Maybe this could even be an input for something like ChatGPT Voice Mode, enabling you to hear the AI response from your earbuds.

The built-in actions - set reminder, create note, alarms, etc. - are actually MCPs: basically mini apps that AI agents know how to operate. They run locally in WASM within the Pebble mobile app (no cloud MCP server required). Basically any MCP server can be used with the system, so intrepid folks may have fun adding various actions like Beeper, Google Calendar, weather, etc. that already offer MCPs.

Not everything will be available at launch, but this is the direction we are working towards. There will be 3 ways to customize your Index 01:

Trigger actions via button clicks - configure a single or double click to do things like take a photo, control your Home Assistant smart home, run a Tasker function, or unlock your car. This will work better on Android, since iOS Shortcuts doesn’t have an open API.

Trigger actions via voice input - write an MCP to do… basically anything? This is pretty open ended.

Route your voice recordings and/or transcriptions to your own webhook - or skip our AI processing entirely and send every recording to your own app or webapp.
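That third route, a webhook, only needs a tiny receiver on your side. A purely illustrative Python sketch; the payload shape (a JSON body with a "transcription" field) is my assumption, not a documented Pebble format:

```python
import json

def handle_webhook(body: bytes) -> str:
    """Extract the transcription from a hypothetical webhook payload."""
    event = json.loads(body)
    text = event.get("transcription", "").strip()
    if not text:
        raise ValueError("payload contained no transcription")
    # From here, route it anywhere: append to a notes file, post to a
    # chat channel, create a task in your own tracker, etc.
    return text

sample = json.dumps({"transcription": "Remind me to phone the pharmacy at 11am"})
print(handle_webhook(sample.encode()))
```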

How does it work?

People usually wear it on the index finger. Inside the ring is a button, a microphone, a Bluetooth chip, memory, and a battery that lasts for years. Click the button with your thumb, talk into the mic, and it records to internal memory. When your phone is in range, the recording is streamed to the Pebble app. It’s converted to text on-device, then processed by an on-device large language model (LLM) which selects an action to take (create note, add to reminders, etc.).
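The final step of that pipeline, the LLM picking an action, can be caricatured in a few lines of Python. The keyword heuristic below is only a stand-in for illustration; the real system uses an on-device LLM with MCP-based actions:

```python
def choose_action(transcription: str) -> str:
    """Toy stand-in for the on-device LLM's action selection."""
    text = transcription.lower()
    if "remind" in text or "timer" in text or "alarm" in text:
        return "create_reminder"
    if "calendar" in text:
        return "add_calendar_event"
    return "create_note"  # default: just save the thought

print(choose_action("Remind me to phone the pharmacy at 11am"))  # create_reminder
print(choose_action("Idea: external memory for your brain"))     # create_note
```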

When do I pick my size?

You’ll be able to pick your ring size and color after placing a pre-order. If you have a 3D printer, you can print our CAD designs to try on. We’re also planning a sizing kit. You can view the measurements of the inner diameter of each ring size.

How long does the battery last?

Roughly 12 to 15 hours of recording. On average, I use it 10-20 times per day to record 3-6 second thoughts. That’s up to 2 years of usage.
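Those numbers check out with simple arithmetic, using the low end of the recording budget and a mid-range daily load:

```python
# Battery budget vs. daily use, from the figures quoted above.
recording_budget_s = 12 * 3600  # 12 hours of total recording (lower bound)
daily_use_s = 10 * 6            # 10 recordings/day × 6-second thoughts

days = recording_budget_s / daily_use_s
print(days, days / 365)  # 720.0 days, roughly two years
```

Heavier use (20 recordings a day) halves that; lighter use stretches it well past two years.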

Is it secure and private?

Yes, extremely. The connection between ring and phone is encrypted. Recordings are processed locally on your phone in the open-source Pebble app. The app works offline (no internet connection) and does not require a cloud service. An optional cloud storage system for backing up recordings is available. Our plan is for this to be optionally encrypted, but we haven’t built it yet.

What kind of battery is inside?

Why can’t it be recharged?

We considered this but decided not to for several reasons:

You’d probably lose the charger before the battery runs out!

Adding charge circuitry and including a charger would make the product larger and more expensive.

You send it back to us to recycle.

Yes. We know this sounds a bit odd, but in this particular circumstance we believe it’s the best solution to the given set of constraints. Other smart rings like Oura cost $250+ and need to be charged every few days. We didn’t want to build a device like that. Before the battery runs out, the Pebble app notifies you and asks if you’d like to order another ring.

Is it always listening?

No. It only records while the button is pressed. It’s not designed to record your whole life, or meetings.

What if the speech-to-text processing misses a word or something?

You can always listen to each recording in the app.

We experimented with a touchpad, but found it too easy to accidentally swipe and press. Also, nothing beats the feedback of a real gosh darn pressable button.

Is there a speaker or vi­brat­ing mo­tor?

No. The but­ton has a great click-feel to in­di­cate when you are press­ing.

Does it do health track­ing like Oura?

How durable and wa­ter-re­sis­tant is it?

It’s pri­mar­ily made from stain­less steel 316, with a liq­uid sil­i­cone rub­ber (LSR) but­ton. It’s wa­ter-re­sis­tant to 1 me­ter. You can wash your hands, do dishes, and shower with it on, but we don’t rec­om­mend swim­ming with it.

Does it work with iPhone and Android?

I love cus­tomiz­ing and hack­ing on my de­vices. What could I do with Index 01?

Lots of stuff! Control things with the but­tons. Route raw au­dio or tran­scribed text di­rectly to your own app via web­hook. Use MCPs (also run lo­cally on-de­vice! No cloud server re­quired) to add more ac­tions.

Is this an AI friend thingy or al­ways-record­ing de­vice?

How far along is de­vel­op­ment?

We’ve been work­ing on this in the back­ground to watch de­vel­op­ment. It helps that our Pebble Time 2 part­ner fac­tory is also build­ing Index 01! We’re cur­rently in the DVT stage, test­ing pre-pro­duc­tion sam­ples. We’ll start a wider al­pha test in January with a lot more peo­ple. Here’s some shots from the pre-pro­duc­tion as­sem­bly line:



...

Read the original on repebble.com »

8 482 shares, 26 trendiness

If You’re Going to Vibe Code, Why Not Do It in C?

Or hell, why not do it in x86 as­sem­bly?

Let’s get a few things out of the way be­fore I go any fur­ther with this seem­ingly im­per­ti­nent thought, be­cause it’s nowhere near as snarky as it sounds.

First, I don’t par­tic­u­larly like vibe cod­ing. I love pro­gram­ming, and I have loved it since I made my first ten­ta­tive steps with it some­time back in the mid-to-late 90s. I love pro­gram­ming so much, it al­ways feels like I’m hav­ing too much fun for it to count as real work. I’ve done it pro­fes­sion­ally, but I also do it as a hobby. Someone ap­par­ently once said, Do what you love and you’ll never work a day in your life.” That’s how I feel about writ­ing code. I’ve also been teach­ing the sub­ject for twenty-five years, and I can hon­estly say I am as ex­cited about the first day of the se­mes­ter now as I was when I first started. I re­al­ize it’s a bit pre­cious to say so, but I’ll say it any­way: Turning non-pro­gram­mers into pro­gram­mers is my life’s work. It is the thing of which I am most proud as a col­lege pro­fes­sor.

Vibe coding makes me feel dirty in ways that I struggle to articulate precisely. It’s not just that it feels like “cheating” (though it does). I also think it takes a lot of the fun out of the whole thing. I sometimes tell people (like the aforementioned students) that programming is like doing the best crossword puzzle in the world, except that when you solve it, it actually dances and sings. Vibe coding robs me of that moment, because I don’t feel like I really did it at all. And even though to be a programmer is to live with a more-or-less permanent set of aporias (you don’t really understand what the compiler is doing, really—and even if you do, you probably don’t really understand how the virtual memory subsystem works, really), it’s satisfying to understand every inch of my code and frustrating—all the way to the borderlands of active anxiety—not quite understanding what Claude just wrote.

But this leads me to my second point, which I must make as clearly and forcefully as I can. Vibe coding actually works. It creates robust, complex systems that work. You can tell yourself (as I did) that it can’t possibly do that, but you are wrong. You can then tell yourself (as I did) that it’s good as a kind of alternative search engine for coding problems, but not much else. You are also wrong about that. Because when you start giving it little programming problems that you can’t be arsed to work out yourself (as I did), you discover (as I did) that it’s awfully good at those. And then one day you muse out loud (as I did) to an AI model something like, “I have an idea for a program…” And you are astounded. If you aren’t astounded, you either haven’t actually done it or you are at some stage of grief prior to acceptance. Perfect? Hardly. But then neither are human coders. The future? I think the question answers itself.

But to get to my im­per­ti­nent ques­tion…

Early on in my love affair with programming, I read Structure and Interpretation of Computer Programs, which I now consider one of the great pedagogical masterpieces of the twentieth century. I learned a great deal about programming from that book, but among the most memorable lessons was one that appears in the second paragraph of the original preface. There, Hal Abelson and Gerald Sussman make a point that hits with the force of the obvious, and yet is very often forgotten:

[W]e want to es­tab­lish the idea that a com­puter lan­guage is not just a way of get­ting a com­puter to per­form op­er­a­tions but rather that it is a novel for­mal medium for ex­press­ing ideas about method­ol­ogy. Thus, pro­grams must be writ­ten for peo­ple to read, and only in­ci­den­tally for ma­chines to ex­e­cute.

I’ve been re­peat­ing some ver­sion of this to my stu­dents ever since. Computers, I re­mind them, do not need the code to be readable” or ergonomic” for hu­mans; they only need it to be read­able and er­gonomic for a com­puter, which is a con­sid­er­ably lower bar.

Every programming language—including assembly language—was and is intended for the convenience of humans who need to read it and write it. If a language is innovative, it is usually not because it has allowed for automatic memory management, or concurrency, or safety, or robust error checking, but because it has made it easier for humans to express and reason about these matters. When we extol the virtues of this or that language—Rust’s safety guarantees, C++’s “no-cost abstractions,” or Go’s approach to concurrency—we are not talking about an affordance that the computer has gained, but about an affordance that we have gained as programmers of said computer. From our standpoint as programmers, object-oriented languages offer certain ways to organize our code—and, I think Abelson and Sussman would say, our thinking—that are potentially conducive to the noble treasures of maintainability, extensibility, error checking, and any number of other condign matters. From the standpoint of the computer, this little OO kink of ours seems mostly to indicate a strange affinity for heap memory. “Whatevs!” (says the computer). And pick your poison here, folks: functional programming, algebraic data types, dependent types, homoiconicity, immutable data structures, brace styles… We can debate the utility of these things, but we must understand that we are primarily talking about human problems. The set of “machine problems” to which these matters correspond is considerably smaller.

So my ques­tion is this: Why vibe code with a lan­guage that has hu­man con­ve­nience and er­gonom­ics in view? Or to put that an­other way: Wouldn’t a lan­guage de­signed for vibe cod­ing nat­u­rally dis­pense with much of what is con­ve­nient and er­gonomic for hu­mans in fa­vor of what is con­ve­nient and er­gonomic for ma­chines? Why not have it just write C? Or hell, why not x86 as­sem­bly?

Now, at this point, you will want to say that the need for hu­man un­der­stand­ing is­n’t erased en­tirely thereby. Some ver­sion of this ar­gu­ment has merit, but I would re­mind you that if you are re­ally vibe cod­ing for real you al­ready don’t un­der­stand a great deal of what it is pro­duc­ing. But if you look care­fully, you will no­tice that it does­n’t strug­gle with un­de­fined be­hav­ior in C. Or with mak­ing sure that all mem­ory is prop­erly freed. Or with off-by-one er­rors. It some­times strug­gles to un­der­stand what it is that you ac­tu­ally want, but it rarely strug­gles with the ac­tual ex­e­cu­tion of the code. It’s bet­ter than you are at keep­ing track of those things in the same way that a com­piler is bet­ter at op­ti­miz­ing code than you are. Perfect? No. But as I said be­fore…

Is C the ideal lan­guage for vibe cod­ing? I think I could mount an ar­gu­ment for why it is not, but surely Rust is even less ideal. To say noth­ing of Haskell, or OCaml, or even Python. All of these lan­guages, af­ter all, are for peo­ple to read, and only in­ci­den­tally for ma­chines to ex­e­cute. They are prac­ti­cally adorable in their con­cern for prob­lems that AI mod­els do not have.

I suppose what I’m getting at, here, is that if vibe coding is the future of software development (and it is), then why bother with languages that were designed for people who are not vibe coding? Shouldn’t there be such a thing as a “vibe-oriented programming language?” VOP. You read it here first.

One possibility is that such a language truly would be executable pseudocode beyond even the most extravagant fever dreams of the most earnest Pythonistas; it shows you what it’s doing in truly pseudo code, but all the while it’s writing assembly. Or perhaps it’s something like the apotheosis of literate programming. You write a literary document “expressing ideas about methodology,” and the AI produces machine code (and a kind of literary critical practice evolves around this activity, eventually ordering itself into structuralist and post-structuralist camps. But I’m getting ahead of myself). Perhaps your job as a programmer is mostly running tests that verify this machine code (tests which have also been produced by AI). Or maybe a VOPL is really a certain kind of language that comes closer to natural language than any existing programming language, but which has a certain (easily learned) set of idioms and expressions that guide the AI more reliably and more quickly toward particular solutions. It doesn’t have goroutines. It has a “concurrency slang.”

Now ob­vi­ously, the rea­son a large lan­guage model fo­cused on cod­ing is good at Javascript and C++ is pre­cisely be­cause it has been trained on bil­lions of lines of code in those lan­guages along with count­less fo­rum posts, StackOverflow de­bates, and so on. Bootstrapping a VOPL pre­sents a cer­tain kind of dif­fi­culty, but then one also sus­pects that LLMs are al­ready be­ing trained in some fu­ture ver­sion of this lan­guage, be­cause so many pro­gram­mers are al­ready grop­ing their way to­ward a sys­tem like this by virtue of the fact that so many of them are al­ready vibe cod­ing pro­duc­tion-level sys­tems.

I don’t know how I feel about all of this (see my first and sec­ond points above). It sad­dens me to think of coding by hand” be­com­ing a kind of quaint Montessori-school stage in the ed­u­ca­tion of a vibe coder—some­thing like the con­tour draw­ings we de­mand from fu­ture pho­to­shop­ers or the bal­anced equa­tions we in­sist serve as a rite of pas­sage for peo­ple who will never be with­out a cal­cu­la­tor to the end of their days.

At the same time, there is some­thing ex­cit­ing about the birth of a com­pu­ta­tional par­a­digm. It was­n’t that long ago, in the grand scheme of things, that some­one re­al­ized that rewiring the en­tire ma­chine every time you wanted to do a cal­cu­la­tion (think ENIAC, circa 1945) was a rather sub­op­ti­mal way to do things. And it is worth re­call­ing that peo­ple com­plained when the stored-pro­gram com­puter rolled around (think EDVAC, circa 1951). Why? Well, the an­swer should be ob­vi­ous. It was less re­li­able. It was slower. It re­moved the op­er­a­tor from the loop. It threat­ened spe­cial­ized la­bor. It was con­cep­tu­ally im­pure. I’m not kid­ding about any of this. No less an au­thor­ity than Grace Hopper had to ar­gue against the quite pop­u­lar idea that there was no way any­one could ever trust a ma­chine to write in­struc­tions for an­other ma­chine.

Same vibe, as the kids say.

...

Read the original on stephenramsay.net »

9 310 shares, 15 trendiness

Apple’s Slow AI Pace Becomes a Strength as Market Grows Weary of Spending

Shares of Apple Inc. were bat­tered ear­lier this year as the iPhone maker faced re­peated com­plaints about its lack of an ar­ti­fi­cial in­tel­li­gence strat­egy. But as the AI trade faces in­creas­ing scrutiny, that hes­i­tance has gone from a weak­ness to a strength — and it’s show­ing up in the stock mar­ket.

Through the first six months of 2025, Apple was the sec­ond-worst per­former among the Magnificent Seven tech gi­ants, as its shares tum­bled 18% through the end of June. That has re­versed since then, with the stock soar­ing 35%, while AI dar­lings like Meta Platforms Inc. and Microsoft Corp. slid into the red and even Nvidia Corp. un­der­per­formed. The S&P 500 Index rose 10% in that time, and the tech-heavy Nasdaq 100 Index gained 13%.

“It is remarkable how they have kept their heads and are in control of spending, when all of their peers have gone the other direction,” said John Barr, portfolio manager of the Needham Aggressive Growth Fund, which owns Apple shares.

As a re­sult, Apple now has a $4.1 tril­lion mar­ket cap­i­tal­iza­tion and the sec­ond biggest weight in the S&P 500, leap­ing over Microsoft and clos­ing in on Nvidia. The shift re­flects the mar­ket’s ques­tion­ing of the hun­dreds of bil­lions of dol­lars Big Tech firms are throw­ing at AI de­vel­op­ment, as well as Apple’s po­si­tion­ing to even­tu­ally ben­e­fit when the tech­nol­ogy is ready for mass use.

“While they most certainly will incorporate more AI into the phones over time, Apple has avoided the AI arms race and the massive capex that accompanies it,” said Bill Stone, chief investment officer at Glenview Trust Company, who owns the stock and views it as “a bit of an anti-AI holding.”

Of course, the rally has made Apple’s stock pricier than it has been in a long time. The shares are trad­ing for around 33 times ex­pected earn­ings over the next 12 months, a level they’ve only hit a few times in the past 15 years, with a high of 35 in September 2020. The stock’s av­er­age mul­ti­ple over that time is less than 19 times. Apple is now the sec­ond most ex­pen­sive stock in the Bloomberg Magnificent Seven Index, trail­ing only Tesla Inc.’s whop­ping val­u­a­tion of 203 times for­ward earn­ings. Apple’s shares climbed about 0.5% in early Tuesday trad­ing.

It’s re­ally hard to see how the stock can con­tinue to com­pound value at a level that makes this a com­pelling en­try point,” said Craig Moffett, co-founder of re­search firm MoffettNathanson. The ob­vi­ous ques­tion is, are in­vestors over­pay­ing for Apple’s de­fen­sive­ness? We think so.”

...

Read the original on finance.yahoo.com »

10 292 shares, 19 trendiness

Django: what’s new in 6.0

Django 6.0 was re­leased to­day, start­ing an­other re­lease cy­cle for the loved and long-lived Python web frame­work (now 20 years old!). It comes with a mo­saic of new fea­tures, con­tributed to by many, some of which I am happy to have helped with. Below is my pick of high­lights from the re­lease notes.

Upgrade with help from django-up­grade

If you’re up­grad­ing a pro­ject from Django 5.2 or ear­lier, please try my tool django-up­grade. It will au­to­mat­i­cally up­date old Django code to use new fea­tures, fix­ing some dep­re­ca­tion warn­ings for you, in­clud­ing five fix­ers for Django 6.0. (One day, I’ll pro­pose django-up­grade to be­come an of­fi­cial Django pro­ject, when en­ergy and time per­mit…)

There are four headline features in Django 6.0, which we’ll cover before other notable changes, starting with this one:

The Django Template Language now supports template partials, making it easier to encapsulate and reuse small named fragments within a template file.

Partials are sections of a template marked by the new {% partialdef %} and {% endpartialdef %} tags. They can be reused within the same template or rendered in isolation. Let’s look at examples for each use case in turn.

Reuse partials within the same template

The below template reuses a partial called filter_controls within the same template. It’s defined once at the top of the template, then used twice later on. Using a partial allows the template to avoid repetition without pushing the content into a separate include file.

Actually, we can simplify this pattern further, by using the inline option on the partialdef tag, which causes the definition to also render in place:

Reach for this pattern any time you find yourself repeating template code within the same template. Because partials can use variables, you can also use them to de-duplicate when rendering similar controls with different data.

The below template defines a view_count partial that’s intended to be re-rendered in isolation. It uses the inline option, so when the whole template is rendered, the partial is included.

The page uses htmx, via my django-htmx package, to periodically refresh the view count, through the hx-* attributes. The request from htmx goes to a dedicated view that re-renders the view_count partial.

{% load django_htmx %}


{% par­tialdef view_­count in­line %}

{% end­par­tialdef %}

{% ht­mx_script %}

The relevant code for the two views could look like this: the initial video view renders the full template video.html, while the video_view_count view renders just the view_count partial, by appending #view_count to the template name. This syntax is similar to how you’d reference an HTML fragment by its ID in a URL.

htmx was the main motivation for this feature, as promoted by htmx creator Carson Gross in a cross-framework review post. Using partials definitely helps maintain “Locality of behaviour” within your templates, easing authoring, debugging, and maintenance by avoiding template file sprawl.

Django’s support for template partials was initially developed by Carlton Gibson in the django-template-partials package, which remains available for older Django versions. The integration into Django itself was done in a Google Summer of Code project this year, worked on by student Farhan Ali and mentored by Carlton, in Ticket #36410. You can read more about the development process in Farhan’s retrospective blog post. Many thanks to Farhan for authoring, Carlton for mentoring, and Natalia Bidart, Nick Pope, and Sarah Boyce for reviewing!

The next headline feature we’re covering:

Django now includes a built-in Tasks framework for running code outside the HTTP request–response cycle. This enables offloading work, such as sending emails or processing data, to background workers.

Basically, there’s a new API for defin­ing and en­queu­ing back­ground tasks—very cool!

Background tasks are a way of run­ning code out­side of the re­quest-re­sponse cy­cle. They’re a com­mon re­quire­ment in web ap­pli­ca­tions, used for send­ing emails, pro­cess­ing im­ages, gen­er­at­ing re­ports, and more.

Historically, Django has not pro­vided any sys­tem for back­ground tasks, and kind of ig­nored the prob­lem space al­to­gether. Developers have in­stead re­lied on third-party pack­ages like Celery or Django Q2. While these sys­tems are fine, they can be com­plex to set up and main­tain, and of­ten don’t go with the grain” of Django.

The new Tasks frame­work fills this gap by pro­vid­ing an in­ter­face to de­fine back­ground tasks, which task run­ner pack­ages can then in­te­grate with. This com­mon ground al­lows third-party Django pack­ages to de­fine tasks in a stan­dard way, as­sum­ing you’ll be us­ing a com­pat­i­ble task run­ner to ex­e­cute them.

Define tasks with the new @task decorator:

from django.tasks import task

@task
def process_data(data_id):
    ...

…and enqueue them for background execution with the Task.enqueue() method:

result = process_data.enqueue(data_id=42)

At this time, Django does not include a production-ready task backend, only two that are suitable for development and testing:

- ImmediateBackend - runs tasks immediately, in the same process, when they are enqueued.
- DummyBackend - does nothing when tasks are enqueued, but allows them to be inspected later. Useful for tests, where you can assert that tasks were enqueued without actually running them.

For production use, you’ll need to use a third-party package that implements one, for which django-tasks, the reference implementation, is the primary option. It provides DatabaseBackend for storing tasks in your SQL database, a fine solution for many projects, avoiding extra infrastructure and allowing atomic task enqueuing within database transactions. We may see this backend merged into Django in due course, or at least become an official package, to help make Django “batteries included” for background tasks.

To use django-tasks’ DatabaseBackend today, first install the package:

pip install django-tasks

Second, add its two apps to your INSTALLED_APPS setting. Third, configure DatabaseBackend as your tasks backend in the new TASKS setting. Fourth, run migrations to create the necessary database tables:

./manage.py migrate

Finally, to run the task worker process, use the package’s db_worker management command:

./manage.py db_worker

This process runs indefinitely, polling for tasks and executing them, logging events as it goes. You’ll want to run db_worker in production, and also in development if you want to test background task execution.

It’s been a long path to get the Tasks framework into Django, and I’m super excited to see it finally available in Django 6.0. Jake Howard started on the idea for Wagtail, a Django-powered CMS, back in 2021, as they have a need for common task definitions across their package ecosystem. He upgraded the idea to target Django itself in 2024, when he proposed DEP 0014. As a member of the Steering Council at the time, I had the pleasure of helping review and accept the DEP.

Since then, Jake has been leading the implementation effort, building pieces first in the separate django-tasks package before preparing them for inclusion in Django itself. This step was done under Ticket #35859, with a pull request that took nearly a year to review and land. Thanks to Jake for his perseverance here, and to all reviewers: Andreas Nüßlein, Dave Gaeddert, Eric Holscher, Jacob Walls, Jake Howard, Kamal Mustafa, @rtr1, @tcely, Oliver Haas, Ran Benita, Raphael Gaschignard, and Sarah Boyce.

Read more about this feature and story in Jake’s post celebrating when it was merged.
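Pulling the configuration steps above together, a settings.py sketch might look like this — based on the django-tasks README, so the app and backend paths should be checked against its documentation:

```python
# Sketch of the django-tasks configuration steps described above,
# based on the django-tasks README; verify names against its docs.
INSTALLED_APPS = [
    # ... your existing apps ...
    "django_tasks",                    # the task framework integration
    "django_tasks.backends.database",  # the DatabaseBackend's models
]

TASKS = {
    "default": {
        "BACKEND": "django_tasks.backends.database.DatabaseBackend",
    }
}
```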

Our third headline feature:

Built-in support for the Content Security Policy (CSP) standard is now available, making it easier to protect web applications against content injection attacks such as cross-site scripting (XSS). CSP allows declaring trusted sources of content by giving browsers strict rules about which scripts, styles, images, or other resources can be loaded.

I’m re­ally ex­cited about this, be­cause I’m a bit of a se­cu­rity nerd who’s been de­ploy­ing CSP for client pro­jects for years.

CSP is a security standard that can protect your site from cross-site scripting (XSS) and other code injection attacks. You set a content-security-policy header to declare which content sources are trusted for your site, and then browsers will block content from other sources. For example, you might declare that only scripts from your domain are allowed, so an attacker who manages to inject a <script> tag pointing to evil.com would be thwarted, as the browser would refuse to load it.

Previously, Django had no built-in sup­port for CSP, and de­vel­op­ers had to rely on build­ing their own, or us­ing a third-party pack­age like the very pop­u­lar django-csp. But this was a lit­tle bit in­con­ve­nient, as it meant that other third-party pack­ages could­n’t re­li­ably in­te­grate with CSP, as there was no com­mon API to do so.

The new CSP sup­port pro­vides all the core fea­tures that django-csp did, with a slightly ti­dier and more Djangoey API. To get started, first add ContentSecurityPolicyMiddleware to your MIDDLEWARE set­ting:

Place it next to SecurityMiddleware, as it sim­i­larly adds se­cu­rity-re­lated head­ers to all re­sponses. (You do have SecurityMiddleware en­abled, right?)

Second, configure your CSP policy using the new settings:

- SECURE_CSP to configure the content-security-policy header, which is your actively enforced policy.
- SECURE_CSP_REPORT_ONLY to configure the content-security-policy-report-only header, which sets a non-enforced policy for which browsers report violations to a specified endpoint. This option is useful for testing and monitoring a policy before enforcing it.

For ex­am­ple, to adopt the nonce-based strict CSP rec­om­mended by web.dev, you could start with the fol­low­ing set­ting:
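A report-only starting point could look something like this sketch — the directive names follow web.dev’s strict CSP, but the exact setting shape and the CSP enum’s import path (django.utils.csp here) are assumptions to verify against the Django 6.0 CSP docs:

```python
# Hedged sketch of a nonce-based, report-only strict CSP.
# The CSP constants (NONCE, NONE) stand in for the quoted source values;
# check the exact structure against the Django 6.0 documentation.
from django.utils.csp import CSP

SECURE_CSP_REPORT_ONLY = {
    "script-src": [CSP.NONCE, "'strict-dynamic'"],
    "object-src": [CSP.NONE],
    "base-uri": [CSP.NONE],
}
```

Once violation reports come back clean, the same dict can move to the SECURE_CSP setting to be enforced.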

The CSP enum used above pro­vides con­stants for CSP di­rec­tives, to help avoid ty­pos.

This policy is quite restrictive and will break most existing sites if deployed as-is, because it requires nonces, as covered next. That’s why the example shows starting with the report-only mode header, to help track down places that need fixing before enforcing the policy. You’d later move the policy to the SECURE_CSP setting to enforce it.

Anyway, those are the two basic steps to set up the new CSP support!

A key part of the new feature is that nonce generation is now built-in to Django, when using the CSP middleware. Nonces are a security feature in CSP that allow you to mark specific <script> and <style> tags as trusted with a nonce attribute. The nonce value is randomly generated per-request, and included in the CSP header. An attacker performing content injection couldn’t guess the nonce, so browsers can trust only those tags that include the correct nonce. Because nonce generation is now part of Django, third-party packages can depend on it for their <script> and <style> tags, and they’ll continue to work if you adopt CSP with nonces. Nonces are the recommended way to use CSP today, avoiding problems with previous allow-list based approaches. That’s why the above recommended policy enables them.

To adopt a nonce-based policy, you’ll need to annotate your <script> and <style> tags with the nonce value through the following steps. First, add the new csp template context processor to your TEMPLATES setting. Second, annotate your <script> and <style> tags with nonce="{{ csp_nonce }}". This can be tedious and error-prone, hence using the report-only mode first to monitor violations might be useful, especially on larger projects.

Anyway, deploying CSP right would be another post in itself, or even a book chapter, so we’ll stop here for now. For more info, check out that web.dev article and the MDN CSP guide.

CSP itself was proposed for browsers way back in 2004, and was first implemented in Mozilla Firefox version 4, released 2011. That same year, Django Ticket #15727 was opened, proposing adding CSP support to Django. Mozilla created django-csp from 2010, before the first public availability of CSP, using it on their own Django-powered sites. The first comment on Ticket #15727 pointed to django-csp, and the community basically rolled with it as the de facto solution.

Over the years, CSP itself evolved, as did django-csp, with Rob Hudson ending up as its maintainer. Focusing on the package motivated Rob to finally get CSP into Django itself. He made a draft PR and posted on Ticket #15727 in 2024, which I enjoyed helping review. He iterated on the PR over the next 13 months until it was finally merged for Django 6.0. Thanks to Rob for his heroic dedication here, and to all reviewers: Benjamin Balder Bach, Carlton Gibson, Collin Anderson, David Sanders, David Smith, Florian Apolloner, Harro van der Klauw, Jake Howard, Natalia Bidart, Paolo Melchiorre, Sarah Boyce, and Sébastien Corbin.

We’re now out of the head­line fea­tures and onto the minor” changes, start­ing with this dep­re­ca­tion re­lated to the above email changes:django.core.mail APIs now re­quire key­word ar­gu­ments for less com­monly used pa­ra­me­ters. Using po­si­tional ar­gu­ments for these now emits a dep­re­ca­tion warn­ing and will raise a TypeError when the dep­re­ca­tion pe­riod ends:All op­tional pa­ra­me­ters (fail_silently and later) must be passed as key­word ar­gu­ments to get_­con­nec­tion(), mail_ad­mins(), mail_­man­agers(), send_­mail(), and send_­mass_­mail().All pa­ra­me­ters must be passed as key­word ar­gu­ments when cre­at­ing an EmailMessage or EmailMultiAlternatives in­stance, ex­cept for the first four (subject, body, from_e­mail, and to), which may still be passed ei­ther as po­si­tional or key­word ar­gu­ments.

Previously, Django would let you pass all parameters positionally, which gets a bit silly and hard to read with long parameter lists, like:

from django.core.mail import send_mail

send_mail(
    "🐼 Panda of the week",
    "This week's panda is Po Ping, sha-sha booey!",
    "updates@example.com",
    ["adam@example.com"],
    True,
)

The final True doesn’t provide any clue what it means without looking up the function signature. Now, using positional arguments for those less-commonly-used parameters raises a deprecation warning, nudging you to write:

from django.core.mail import send_mail

send_mail(
    subject="🐼 Panda of the week",
    message="This week's panda is Po Ping, sha-sha booey!",
    from_email="updates@example.com",
    recipient_list=["adam@example.com"],
    fail_silently=True,
)

This change is ap­pre­ci­ated for API clar­ity, and Django is gen­er­ally mov­ing to­wards us­ing key­word-only ar­gu­ments more of­ten. django-up­grade can au­to­mat­i­cally fix this one for you, via its mail_api_k­wargs fixer.
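As a plain-Python illustration of the mechanism (a sketch, not Django’s actual source), parameters after a bare * in a function signature become keyword-only:

```python
# Plain-Python illustration of keyword-only parameters: everything
# after the bare * must be passed by name, mirroring the direction of
# the new django.core.mail signatures. A sketch, not Django's code.
def send_mail(subject, message, from_email, recipient_list, *, fail_silently=False):
    return {"subject": subject, "fail_silently": fail_silently}


# Passing fail_silently by name works; passing it positionally raises
# a TypeError at call time.
ok = send_mail("Hi", "Body", "a@example.com", ["b@example.com"], fail_silently=True)
```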

Thanks to Mike Edmunds, again, for mak­ing this im­prove­ment in Ticket #36163.

Next up:

Common utilities, such as django.conf.settings, are now automatically imported to the shell by default.

One of the headline features back in Django 5.2 was automatic model imports in the shell, making ./manage.py shell import all of your models automatically. Building on that DX boost, Django 6.0 now also imports other common utilities, for which we can find the full list by running ./manage.py shell with -v 2:

$ ./manage.py shell -v 2
6 objects imported automatically:

from django.conf import settings
from django.db import connection, models, reset_queries
from django.db.models import functions
from django.utils import timezone

So that’s:set­tings, use­ful for check­ing your run­time con­fig­u­ra­tion: con­nec­tion and re­set_­queries(), great for check­ing the ex­e­cuted queries: In [1]: Book.objects.select_related(‘author’)

Out[1]:

In [2]: con­nec­tion.queries

Out[2]:

[{‘sql’: SELECT example_book”.“id”, example_book”.“title”, example_book”.“author_id”, example_author”.“id”, example_author”.“name” FROM example_book” INNER JOIN example_author” ON (“example_book”.“author_id” = example_author”.“id”) LIMIT 21’,

time’: 0.000’}]

mod­els and func­tions, use­ful for ad­vanced ORM work: time­zone, use­ful for us­ing Django’s time­zone-aware date and time util­i­ties:

It re­mains pos­si­ble to ex­tend the au­to­matic im­ports with what­ever you’d like, as doc­u­mented in How to cus­tomize the shell com­mand doc­u­men­ta­tion page.

Salvo Polizzi con­tributed the orig­i­nal au­to­matic shell im­ports fea­ture in Django 5.2. He’s then re­turned to of­fer these ex­tra im­ports for Django 6.0, in Ticket #35680. Thanks to every­one that con­tributed to the fo­rum dis­cus­sion agree­ing on which im­ports to add, and to Natalia Bidart and Sarah Boyce for re­view­ing!

Now let’s dis­cuss a se­ries of ORM im­prove­ments, start­ing with this big one:Gen­er­at­ed­Fields and fields as­signed ex­pres­sions are now re­freshed from the data­base af­ter save() on back­ends that sup­port the RETURNING clause (SQLite, PostgreSQL, and Oracle). On back­ends that don’t sup­port it (MySQL and MariaDB), the fields are marked as de­ferred to trig­ger a re­fresh on sub­se­quent ac­cesses.

Django models support having the database generate field values for you in three cases:

- The db_default field option, which lets the database generate the default value when creating an instance.
- The GeneratedField field type, which is always computed by the database based on other fields in the same instance.
- Assigning a database expression, like Now(), to a field before saving.

Previously, only the first method, using db_default, would refresh the field value from the database after saving. The other two methods would leave you with only the old value or the expression object, meaning you'd need to call Model.refresh_from_db() to get the updated value if necessary. This was hard to remember and cost an extra database query.

Now Django takes advantage of the RETURNING SQL clause to save the model instance and fetch updated dynamic field values in a single query, on backends that support it (SQLite, PostgreSQL, and Oracle).
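Conceptually, what RETURNING buys can be sketched in raw SQL with the stdlib sqlite3 module (requires SQLite 3.35+ for RETURNING; names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE video ("
    " id INTEGER PRIMARY KEY,"
    " title TEXT,"
    " last_updated TEXT DEFAULT CURRENT_TIMESTAMP)"
)
# RETURNING hands back database-generated values in the same statement,
# so no follow-up SELECT (in Django terms: no refresh_from_db()) is needed.
row = conn.execute(
    "INSERT INTO video (title) VALUES ('Intro') RETURNING id, last_updated"
).fetchone()
print(row)
```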

Django puts the return value into the model field, so you can read it immediately after saving:

```python
video = Video.objects.get(id=1)
video.last_updated = Now()
video.save()
print(video.last_updated)  # Updated value from the database
```

On backends that don't support RETURNING (MySQL and MariaDB), Django now marks the dynamic fields as deferred after saving. That way, a later access, as in the above example, will automatically call Model.refresh_from_db(). This ensures that you always read the updated value, even if it costs an extra query.

This feature was proposed in Ticket #27222 way back in 2016, by Anssi Kääriäinen. It sat dormant for most of the nine years since, but ORM boss Simon Charette picked it up earlier this year, found an implementation, and pushed it through to completion. Thanks to Simon for continuing to push the ORM forward, and to all reviewers: David Sanders, Jacob Walls, Mariusz Felisiak, nessita, Paolo Melchiorre, Simon Charette, and Tim Graham.

The next ORM change:

The new StringAgg aggregate returns the input values concatenated into a string, separated by the delimiter string. This aggregate was previously supported only for PostgreSQL.

This aggregate is often used for making comma-separated lists of related items, among other things. Previously, it was only supported on PostgreSQL, as part of django.contrib.postgres:

```python
from django.contrib.postgres.aggregates import StringAgg

from example.models import Video

videos = Video.objects.annotate(
    chapter_ids=StringAgg("chapter", delimiter=","),
)
for video in videos:
    print(f"Video {video.id} has chapters: {video.chapter_ids}")
```

…which would give you one line of output per video, listing its comma-separated chapter IDs.

Now this aggregate is available on all database backends supported by Django, imported from django.db.models:

```python
from django.db.models import StringAgg, Value

from example.models import Video

videos = Video.objects.annotate(
    chapter_ids=StringAgg("chapter", delimiter=Value(",")),
)
for video in videos:
    print(f"Video {video.id} has chapters: {video.chapter_ids}")
```

Note the delimiter argument now requires a Value() expression wrapper for literal strings, as above. This change allows you to use database functions or fields as the delimiter if desired.

While most Django projects stick to PostgreSQL, having this aggregate available on all backends is a nice improvement for cross-database compatibility, and it means third-party packages can use it without affecting their database support.

The PostgreSQL-specific StringAgg was added way back in Django 1.9 (2015) by Andriy Sokolovskiy, in Ticket #24301. In Ticket #35444, Chris Muthig proposed adding the Aggregate.order_by option, something used by StringAgg to specify the ordering of concatenated elements, and as a side effect this made it possible to generalize StringAgg to all backends. Thanks to Chris for proposing and implementing this change, and to all reviewers: Paolo Melchiorre, Sarah Boyce, and Simon Charette.
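For a feel of what the generalized aggregate does under the hood, on SQLite a StringAgg presumably compiles down to the built-in GROUP_CONCAT aggregate. A sketch with the stdlib sqlite3 module (schema illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    "CREATE TABLE chapter (id INTEGER PRIMARY KEY, video_id INTEGER);"
    "INSERT INTO chapter (id, video_id) VALUES (1, 1), (2, 1), (3, 2);"
)
# GROUP_CONCAT(X, Y) concatenates each group's X values separated by Y,
# which is the behavior StringAgg exposes at the ORM level.
rows = conn.execute(
    "SELECT video_id, GROUP_CONCAT(id, ',') FROM chapter GROUP BY video_id"
).fetchall()
print(rows)
```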

Next up:

Django 3.2 (2021) introduced the DEFAULT_AUTO_FIELD setting for changing the default primary key type used in models. Django uses this setting to add a primary key field called id to models that don't explicitly define a primary key field. For example, if you define a model like this:

…then it will have two fields: id and title, where id uses the type defined by DEFAULT_AUTO_FIELD.

The setting can also be overridden on a per-app basis by defining AppConfig.default_auto_field in the app's apps.py file:
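The elided snippet was presumably an app config like this (app name illustrative):

```python
# example/apps.py
from django.apps import AppConfig

class ExampleConfig(AppConfig):
    default_auto_field = "django.db.models.BigAutoField"
    name = "example"
```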

A key motivation for adding the setting was to allow projects to switch from AutoField (a 32-bit integer) to BigAutoField (a 64-bit integer) for primary keys, without needing changes to every model. AutoField can store values up to about 2.1 billion, which sounds large but becomes easy to hit at scale. BigAutoField can store values up to about 9.2 quintillion, which is "more than enough" for every practical purpose.
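Those figures come straight from the signed integer widths; a quick sanity check:

```python
# AutoField is a 32-bit signed integer, BigAutoField a 64-bit signed integer.
autofield_max = 2**31 - 1
bigautofield_max = 2**63 - 1
print(f"{autofield_max:,}")     # 2,147,483,647 (~2.1 billion)
print(f"{bigautofield_max:,}")  # 9,223,372,036,854,775,807 (~9.2 quintillion)
```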

If a model using AutoField hits its maximum value, it can no longer accept new rows, a problem known as primary key exhaustion. The table is effectively blocked, requiring an urgent fix to switch the model from AutoField to BigAutoField via a locking database migration on a large table. For a great watch on how Kraken is fixing this problem, see Tim Bell's DjangoCon Europe 2025 talk, which details some clever techniques to proactively migrate large tables with minimal downtime.

To stop this problem arising in new projects, Django 3.2 made new projects created with startproject set DEFAULT_AUTO_FIELD to BigAutoField, and new apps created with startapp set their AppConfig.default_auto_field to BigAutoField. It also added a system check requiring projects to set DEFAULT_AUTO_FIELD explicitly, so users were aware of the feature and could make an informed choice.

Now Django 6.0 changes the actual default values of the setting and the app config attribute to BigAutoField, so projects already using BigAutoField can simply remove the setting.
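That is, a line like this in settings.py can now be deleted, since it restates the new built-in default:

```python
# settings.py -- removable in Django 6.0, as it matches the new default:
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
```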

The default startproject and startapp templates also no longer set these values. This change reduces the amount of boilerplate in new projects, and the problem of primary key exhaustion can fade into history, becoming something most Django users no longer need to think about.

The addition of DEFAULT_AUTO_FIELD in Django 3.2 was proposed by Caio Ariede and implemented by Tom Forbes, in Ticket #31007. This new change in Django 6.0 was proposed and implemented by ex-Fellow Tim Graham, in Ticket #36564. Thanks to Tim for spotting that this cleanup was now possible, and to Jacob Walls and Clifford Gama for reviewing!

Moving on to templates, let's start with this nice little addition:

The new variable forloop.length is now available within a for loop.

This small extension makes it possible to write a template loop like this:
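The elided loop was presumably along these lines (variable names taken from the {{ geese|length }} example in the next sentence):

```django
{% for goose in geese %}
    Goose {{ forloop.counter }} of {{ forloop.length }}
{% endfor %}
```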

Previously, you'd need to refer to the length another way, like {{ geese|length }}, which is a bit less flexible.

Thanks to Jonathan Ströbele for contributing this idea and implementation in Ticket #36186, and to David Smith, Paolo Melchiorre, and Sarah Boyce for reviewing.

There are two extensions to the querystring template tag, which was added in Django 5.1 to help with building links that modify the current request's query parameters.

First:

The querystring template tag now consistently prefixes the returned query string with a ?, ensuring reliable link generation behavior.

This small change improves how the tag behaves when an empty mapping of query parameters is provided. Say you had a link built with {% querystring params %}, where params is a dictionary that may sometimes be empty. Previously, if params was empty, the tag's output would be an empty string. Browsers treat an empty link target as a link to the same URL including its query parameters, so it would not clear the query parameters as intended. Now the output will be a lone ?, which browsers treat as a link to the same URL without any query parameters, clearing them as the user would expect.

Thanks to Django Fellow Sarah Boyce for spotting this improvement and implementing the fix in Ticket #36268, and to Django Fellow Natalia Bidart for reviewing!

Second:

The querystring template tag now accepts multiple positional arguments, which must be mappings, such as QueryDict or dict.

This enhancement allows the tag to merge multiple sources of query parameters when building the output. For example, you might write {% querystring params super_search_params %}, where super_search_params is a dictionary of extra parameters to add to make the current search "super". The tag merges the mappings, with later mappings taking precedence for duplicate keys.

Thanks again to Sarah Boyce for proposing this improvement in Ticket #35529, to Giannis Terzopoulos for implementing it, and to Natalia Bidart, Sarah Boyce, and Tom Carrick for reviewing!
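Both behaviors can be sketched in plain Python (this is an illustrative approximation, not Django's actual implementation):

```python
from urllib.parse import urlencode

def querystring(request_params, *mappings, **params):
    # Start from the current request's parameters, then merge in any extra
    # mappings and keyword arguments, later sources winning on duplicate keys.
    merged = dict(request_params)
    for mapping in mappings:
        merged.update(mapping)
    merged.update(params)
    # Always prefix with "?": even an empty result then clears the
    # query parameters when used as a link target.
    return "?" + urlencode(merged)

print(querystring({}))                                    # '?'
print(querystring({"q": "cats"}, {"page": 2}))            # '?q=cats&page=2'
print(querystring({"page": 1}, {"page": 2}, sort="new"))  # '?page=2&sort=new'
```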

...

Read the original on adamj.eu »
