10 interesting stories served every morning and every evening.

Bambu Lab is abusing the open source social contract

www.jeffgeerling.com

Last year I said I’d prob­a­bly never rec­om­mend an­other Bambu Lab printer again.

I still use my P1S, but af­ter Bambu Lab started push­ing their al­ways-con­nected cloud so­lu­tion as the new de­fault:

I blocked the printer from the Internet via my OPNsense Firewall

I stopped up­dat­ing the firmware

I locked the printer into Developer mode

I deleted Bambu Studio and started us­ing OrcaSlicer

I had to do that to keep it un­der my con­trol, in­stead of Bambu’s.

But I’m weird—I ac­knowl­edge that. I’m one of those crazy ones who likes to own some­thing they pur­chased, and not have the com­pany watch every­thing I do with hard­ware I paid for.

Bambu Lab could’ve left the sta­tus quo at that, and I would­n’t be writ­ing this blog post.

But they did­n’t.

What hap­pened this time?

For con­text: OrcaSlicer is a fork of the open source pro­ject Bambu Studio, which is a fork of Prusa Slicer, which is a fork of slic3r. (They are all li­censed un­der the AGPLv3 open source li­cense).

OrcaSlicer al­ready has to dance around Bambu’s weird de­fault setup where every file you print goes through Bambu’s servers, mean­ing they can see every­thing you ever print on your printer.

That is, un­less you’re like me and you run it in Developer mode, and com­pletely block it from the Internet on old firmware.

Some peo­ple are okay with us­ing OrcaSlicer and print­ing through Bambu’s cloud. It’s con­ve­nient if you’re on the road and want to start a print on your printer at home, with­out man­ag­ing your own VPN.

I run my own WireGuard VPN, so I don’t need that, but I un­der­stand not every­one has the re­sources to man­age their own re­mote ac­cess.

Bambu saw a fork of OrcaSlicer called OrcaSlicer-bambulab, which let you use all your printer’s features without routing prints through Bambu’s cloud, and was like, “You know what? No. For the 0.1% of power users who want to run OrcaSlicer without the cloud delivery mechanism like we have in our AGPL-licensed Linux Bambu Studio code… no. You have to use our app, and only our app.”

So they threat­ened that OrcaSlicer fork’s de­vel­oper with le­gal ac­tion for things that de­vel­oper did­n’t do. For ex­am­ple, they in­di­cated the fork used an im­per­son­ation at­tack, de­spite the fork us­ing Bambu Studio’s up­stream code ver­ba­tim.

These are very se­ri­ous pub­lic ac­cu­sa­tions.

Bambu Lab did not write to me with these spe­cific pub­lic claims first. They also re­fused my re­quest to pub­lish the full cor­re­spon­dence. Instead, they pub­lished a one-sided pub­lic state­ment where I can­not re­ply di­rectly.

In prac­tice, this pre­sents me to the pub­lic as some­one by­pass­ing se­cu­rity, im­per­son­at­ing their client, and cre­at­ing a risk to their in­fra­struc­ture. I re­ject that char­ac­ter­i­za­tion.

— OrcaSlicer-bambulabs de­vel­op­er’s re­sponse

Bambu is abus­ing the open source so­cial con­tract, and us­ing their le­gal might, to sup­press a tiny num­ber of their users1, for who knows what rea­son.

It seems dumb to me, be­cause it would’ve been eas­ier (and more prof­itable) to do noth­ing at all2. Instead, they wrote a blog post blam­ing an in­di­vid­ual open source de­vel­oper for their own in­fra­struc­ture and se­cu­rity prob­lems.

This is where the ac­tual is­sue arises: the mod­i­fi­ca­tion in ques­tion worked by in­ject­ing fal­si­fied iden­tity meta­data into net­work com­mu­ni­ca­tion.

In sim­ple terms: it pre­tended to be the of­fi­cial Bambu Studio client when com­mu­ni­cat­ing with our servers.

— Bambu Lab blog post

I don’t think they un­der­stand open source cul­ture. Security ei­ther, if a pub­lic user agent string is their only pro­tec­tion against DDoS at­tacks…

Instead of find­ing so­lu­tions to ecosys­tem prob­lems and build­ing a more se­cure plat­form, Bambu is putting de­voted power users like the fork’s de­vel­oper on blast3.

When tensions flared last year, they wrote a similar blog post blaming community backlash on “unfortunate misinformation”. I imagine they meant speculation from community members (like myself) frustrated that the whole software ecosystem and ownership model was turned upside down post-purchase.

This year they’re blam­ing one de­vel­oper of a tiny slicer fork for the po­ten­tial im­pact he could have on their en­tire cloud in­fra­struc­ture.

It cre­ates struc­tural vul­ner­a­bil­ity. If this method were widely adopted or in­cor­rectly con­fig­ured, thou­sands of clients could si­mul­ta­ne­ously hit our servers while im­per­son­at­ing the of­fi­cial client. Our sys­tems would have no way to dis­tin­guish traf­fic, be­cause the re­quests would look iden­ti­cal.

— Bambu Lab blog post

I love how they frame this as a de­vel­oper try­ing to im­per­son­ate their app, when he’s lit­er­ally us­ing the same AGPL-licensed code their Linux app uses.

I find it dou­bly ironic since their own fork caused Bambu users’ teleme­try to hit Prusa’s servers back in 2022, and (to my knowl­edge) Prusa did­n’t snap back with a C&D.

They spent the rest of their blog post talk­ing about vul­ner­a­bil­i­ties, bugs, and in­sta­bil­i­ties—as if that has any­thing to do with a de­vel­oper us­ing up­stream code ver­ba­tim in his fork.

Maybe they could take a new ap­proach and just not lock down their whole ecosys­tem in the first place.

But who am I kid­ding? Nothing I say, and no amount of com­plain­ing in the com­ments be­low, seems to help Bambu see the fault in their ways.

Spending a lit­tle more for a printer from an­other com­pany just might do it, though.

Louis Rossmann posted a video say­ing he’d pledge $10,000 to help the open source dev fight Bambu’s le­gal threats. And I’d hap­pily chip in too, but that’s only use­ful if the dev wants to put him­self back in Bambu’s crosshairs.

The bet­ter play might just be to skip Bambu al­to­gether.

1. The OrcaSlicer fork in question didn’t seem to have much uptake outside of a very small subset of users prior to Bambu Lab’s cease and desist order. ↩︎

2. Maybe ask for the fork to not include “bambulabs” in the name, since that could be a reasonable trademark-related demand. ↩︎

3. The fork’s developer mentioned: “I previously helped Bambu Studio users with Linux and Wayland issues, including on Bambu Lab’s own GitHub. That makes it especially absurd to me that I am now being publicly presented as someone dangerous to their infrastructure.” ↩︎

Googlebook: Designed for Gemini Intelligence

googlebook.google

Intelligence is the new spec.

The best of Gemini meets our most ad­vanced lap­tops.

Select any­thing to ask, com­pare, or cre­ate with Gemini, in­stantly.1

Open your phone apps on your lap­top, no in­stalls needed.2

Access files from your phone as if they live on your lap­top.2

Check re­sponses. Internet con­nec­tion re­quired. 18+. Results may vary based on vi­sual matches and are for il­lus­tra­tive pur­poses only. Sequences short­ened.

Setup re­quired. Phone with Android 17 or above re­quired.

GitHub - davmlaw/they_live_adblocker: Replace Ads with They Live style slogans

github.com

They Live Adblocker

A fork of uBlock Origin Lite that, in­stead of hid­ing cos­met­i­cally-blocked ads, re­places them with white tiles bear­ing slo­gans from John Carpenter’s 1988 film They Live: OBEY, CONSUME, WATCH TV, SLEEP, SUBMIT, CONFORM, STAY ASLEEP, BUY, WORK, NO INDEPENDENT THOUGHT, DO NOT QUESTION AUTHORITY.

Each blocked ad gets a sin­gle phrase, picked at ran­dom from the list.

The idea is from a blog post I wrote in 2015 (and never got around to build­ing): They Live ad­block mode.

Screenshot

Install

Download the latest uBOLite_theylive.chromium.zip from the Releases page, extract it, then in Chromium / Chrome / Brave / Edge:

Open chrome://extensions

Toggle Developer mode on (top-right)

Click Load un­packed and se­lect the ex­tracted folder

Keep the folder around — the ex­ten­sion is loaded from that path.

Make it ac­tu­ally re­place ads

By default uBO Lite uses Basic filtering mode, which blocks ads at the network layer. Network-blocked ads never produce a DOM element, so there’s nothing to “they-live-ify” — you just get empty space, as with normal uBO Lite. To see the OBEY tiles:

Click the uBO Lite tool­bar icon → cog (⚙) → Dashboard.

Set the fil­ter­ing mode for the sites you care about to Optimal or Complete.

Reload.

Building from source

Requires Node 22.

git clone --recursive https://github.com/davmlaw/they_live_adblocker
cd they_live_adblocker/uBlock
nvm use 22                    # or otherwise ensure Node >= 22
tools/make-mv3.sh chromium    # or: firefox | edge | safari

The packaged extension lands in uBlock/dist/build/uBOLite.chromium/ — load it as an unpacked extension.

How it works

uBO Lite’s cos­metic fil­ter­ing nor­mally in­jects CSS like se­lec­tor { dis­play: none !important } to hide matched ad el­e­ments. This fork patches those in­jec­tion sites to in­stead ap­ply a white-box mask with a ::after over­lay whose con­tent is read from a data-ubol-they-live at­tribute, then walks the DOM (with a MutationObserver for late-loaded ads) to tag each matched el­e­ment with a ran­dom phrase from the list.
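The mechanism can be sketched in plain JavaScript. This is a simplified illustration, not the extension’s actual code: the data-ubol-they-live attribute name comes from the description above, but the function names and exact CSS are made up for the sketch.

```javascript
// Sketch of the replace-instead-of-hide idea. Instead of emitting
// `selector { display: none !important }`, generate a white mask whose
// ::after pseudo-element shows the phrase stored in a data attribute.
const PHRASES = [
  "OBEY", "CONSUME", "WATCH TV", "SLEEP", "SUBMIT", "CONFORM",
  "STAY ASLEEP", "BUY", "WORK", "NO INDEPENDENT THOUGHT",
  "DO NOT QUESTION AUTHORITY",
];

function theyLiveCss(selector) {
  return (
    `${selector} { background: #fff !important; color: transparent !important; position: relative; }\n` +
    `${selector}::after { content: attr(data-ubol-they-live); color: #000; ` +
    `position: absolute; inset: 0; display: grid; place-items: center; }`
  );
}

// Pick a random phrase; `rand` is injectable so the choice is testable.
function pickPhrase(rand = Math.random) {
  return PHRASES[Math.floor(rand() * PHRASES.length)];
}

// Tag every element matched by a cosmetic filter with a phrase. In the
// extension, a MutationObserver re-runs this for late-loaded ads, e.g.:
//   new MutationObserver(() => tagMatches(document, selector))
//     .observe(document.documentElement, { childList: true, subtree: true });
function tagMatches(root, selector) {
  for (const el of root.querySelectorAll(selector)) {
    if (!el.hasAttribute("data-ubol-they-live")) {
      el.setAttribute("data-ubol-they-live", pickPhrase());
    }
  }
}
```

The key design point is that the element is kept in the layout (hence the caveat below about layout shift) rather than collapsed, so the ::after overlay has a box to fill.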

Touched files in the davm­law/​uBlock sub­mod­ule:

platform/mv3/extension/js/scripting/they-live.js (new) — phrase list, CSS generator, DOM tagging

platform/mv3/extension/js/scripting/css-{specific,generic,procedural-api}.js — call sites

platform/mv3/extension/js/scripting-manager.js — registers they-live.js ahead of consumers

Caveats

Personal hobby fork; not an of­fi­cial uBlock Origin prod­uct. Don’t file uBO is­sues against this.

Forcing pre­vi­ously-hid­den el­e­ments vis­i­ble can oc­ca­sion­ally shift page lay­out where the site’s CSS as­sumed the ad slot col­lapsed.

Custom user-de­fined cos­metic fil­ters still hide nor­mally (no OBEY treat­ment).

Network-blocked ads (most of uBO Lite’s block­ing) don’t get re­placed — only cos­metic-fil­tered ones do.

License

GPL-3.0, same as up­stream uBlock Origin / uBO Lite.

Learning Software Architecture

matklad.github.io

In re­ply to an email ask­ing about learn­ing soft­ware de­sign skills as a re­searcher physi­cist:

I was attached to a bioinformatics lab early in my career, so I think I understand what you are talking about, the phenomenon of “scientific code”! My thoughts:

First meta observation is that “software design” is something best learned by doing. While I had some formal “design” courses at the University, and I was even an “architect” for our course project, that stuff was mostly make-believe, kindergarteners playing fire-fighters. What really taught me how to do stuff was an accident of my career, where my second real project (IntelliJ Rust) propelled me to a position of software leadership, and made design my problem. I did make a few mistakes in IJ Rust, but nothing too horrible, and I learned a lot. So that’s good news — software engineering is simple enough that an inquisitive mind can figure it out from first principles (and reading random blog posts).

Second meta observation, the bad news: Conway’s law is important. Software genesis repeats the social architecture of the organization producing the software. Or, as put eloquently by neugierig,

If I were to sum­ma­rize what I learned in a sin­gle sen­tence, it would be this: we talk about pro­gram­ming like it is about writ­ing code, but the code ends up be­ing less im­por­tant than the ar­chi­tec­ture, and the ar­chi­tec­ture ends up be­ing less im­por­tant than so­cial is­sues.

I suspect that the difference you perceive between industrial and scientific software is not so much about software-building knowledge, but rather about the field of incentives that compels people to produce the software. Something like “my PhD needs to publish a paper in three months” is perhaps a significant explainer?

Two things you can do here. One, at times you get a chance to de­sign or nudge an in­cen­tive struc­ture for a pro­ject. This hap­pens once in a blue moon, but is very im­pact­ful. This is the se­cret sauce be­hind TIGER_STYLE, not the set of rules per se, but the so­cial con­text that makes this set of rules a good idea.

Two, you can speedrun the four stages of grief to ac­cep­tance. Incentive struc­ture is al­most never what you want it to be, but, if you can’t change it, you can adapt to it. This is also true about most in­dus­trial soft­ware pro­jects — there’s never a time to do a thing prop­erly, you must do the best you can, given con­straints.

Let me use rust-analyzer as an example. The physical reality of the project is that it’s simultaneously very deep (it’s a compiler! Yay!) and very wide (opposite to an LLM, a classical IDE is a lot of purpose-built special features). The social reality is that “deep compiler” can attract a few brilliant dedicated contributors, and that the “breadth features” can be a good fit for an army of weekend warriors, people who learn Rust, who don’t have sustained capacity to participate in the project, but who can sink an hour or two to scratch their own itch.

My in­sis­tence that rust-an­a­lyzer does­n’t re­quire build­ing rustc, that it builds on sta­ble, that it does­n’t have any C de­pen­den­cies, and that the en­tire test suite takes sec­onds, was in the ser­vice of the goal of at­tract­ing high-im­pact con­trib­u­tors. I was wran­gling the build sys­tem to make sure peo­ple can work on the bor­row checker with­out think­ing about any­thing else.

To attract weekend warriors, the internals of rust-analyzer are split into multiple independent features, where each feature is guarded by catch_unwind at runtime. The thinking was that I explicitly don’t want to care too much about quality there, that the bar for getting a feature PR in is “happy path works & tested”. It’s fine if the code crashes, it will only attract further contributors, provided that:

the qual­ity is iso­lated to a fea­ture, and does­n’t spill over,

at run­time, the crash is in­vis­i­ble to the user (it’s cru­cial that rust-an­a­lyzer fea­tures work with an im­mutable snap­shot, and can’t poi­son the data).
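The isolation pattern described above can be sketched as follows (in JavaScript as an analogy; rust-analyzer itself uses Rust’s catch_unwind, and the function names here are invented for illustration): each feature runs against a frozen snapshot, and a crash in one feature is recorded rather than propagated.

```javascript
// Analogy sketch, not rust-analyzer code: run independent features over an
// immutable snapshot, catching each feature's crash so low quality stays
// isolated to that feature and invisible to the rest of the system.
function runFeatures(features, snapshot) {
  Object.freeze(snapshot); // features get a read-only view; they can't poison the data
  const results = {};
  for (const [name, feature] of Object.entries(features)) {
    try {
      results[name] = { ok: true, value: feature(snapshot) };
    } catch (err) {
      // The equivalent of catch_unwind: the crash is contained, not propagated.
      results[name] = { ok: false, error: String(err) };
    }
  }
  return results;
}

// A buggy "weekend warrior" feature crashes; the solid one still answers.
const results = runFeatures(
  {
    lineCount: (snap) => snap.text.split("\n").length,
    buggyHint: () => { throw new Error("unhandled edge case"); },
  },
  { text: "fn main() {\n}" },
);
```

The freeze-then-fan-out shape is what makes the “it’s fine if it crashes” policy safe: a feature can fail, but it cannot corrupt the data the other features read.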

In contrast, when working on the core spine which provided support for features, I was relatively more pedantic about quality.

A word of cau­tion about adapt­ing to, rather than fix­ing in­cen­tive struc­ture — the fu­ture is un­cer­tain, and tends to hap­pen in the least con­ve­nient man­ner. The orig­i­nal mo­ti­va­tion be­hind rust-an­a­lyzer ex­per­i­ment was to avoid the need to write a par­al­lel com­piler (the one in IntelliJ Rust), and to pro­to­type a bet­ter ar­chi­tec­ture for LSP, so that the learn­ings could be back­ported to rustc. So, even in core (especially in core), the code was very ex­per­i­men­tal. Oh well. Stuck with one more com­piler now, I guess?

I might haz­ard a guess that some­thing sim­i­lar hap­pened to uu­tils pro­ject, which started as the pri­mary des­ti­na­tion for peo­ple learn­ing Rust, and ended up as Ubuntu core­utils im­ple­men­ta­tion.

Third, now to some con­crete rec­om­men­da­tions. Sadly, I don’t know of a sin­gle book I can rec­om­mend which con­tains the truths. I sus­pect one can only find such a book in an apoc­ryphal short story by Borges: prac­tice seems to be an es­sen­tial el­e­ment here. But here are some things worth pay­ing at­ten­tion to:

Boundaries talk by Gary Bernhardt is an all-time favorite. It contains solid object-level advice, and, for me, it triggered the meta inquiry.

How to Test is some­thing I wish I had. I im­me­di­ately un­der­stood the im­por­tance of test­ing, but it took me a long time to grow ar­ro­gant enough to ad­mit that most widely-cited test­ing ad­vice is shaman­is­tic snake-oil, and to con­cep­tu­al­ize what ac­tu­ally works.

∅MQ guide and, more generally, writings by Pieter Hintjens introduced me to Conway’s Law thinking. That “feature development” architecture of rust-analyzer? — optimistic merging, applied.

Reflections on a decade of cod­ing by Jamii is ex­cel­lent, goes very meta. It is in­ten­tion­ally the first of my links.

Ted Kaminski’s blog is the closest there is to a coherent theory of software development, appropriately framed as a set of notes to a non-existing book!

As for the actual books, Software Engineering at Google and Ousterhout’s The Philosophy of Software Design are often recommended. They are good. SWE, in particular, helped me with a couple of important names. But they weren’t groundbreaking for me.

EU to crack down on TikTok, Instagram's ‘addictive design’ targeting kids on social media

www.cnbc.com

The TikTok app logo is seen in this photo il­lus­tra­tion taken in Warsaw, Poland on 18 November, 2024.

Nurphoto | Getty Images

The EU is clamping down on social media firms and plans to target “addictive design” features on TikTok and Instagram as governments worldwide look to protect children from the harms of social media.

The re­gion will take ac­tion against cer­tain fea­tures on so­cial me­dia plat­forms later in the year, EU Commission President Ursula von der Leyen said Tuesday at the European Summit on Artificial Intelligence and Children in Denmark.

CNBC has ap­proached ByteDance and Meta for com­ment.

“We are taking action against TikTok and its addictive design — endless scrolling, autoplay, and push notifications. The same applies to Meta, because we believe Instagram and Facebook are failing to enforce their own minimum age of 13,” Von der Leyen said.

“We are investigating platforms that allow children to go down ‘rabbit holes’ of harmful content — such as videos that promote eating disorders or self-harm,” she added.

The EU’s executive arm has also developed its own age verification app, which has “the highest privacy standards in the world,” according to Von der Leyen.

“Member states will soon be able to integrate it into their digital wallets, and it can easily be enforced by online platforms. No more excuses — the technology for age-verification is available,” the EU chief said.

The EU Commission could have a legal proposal prepared as soon as the summer, as it awaits the advice and findings of its Special Panel of experts on ‘Child Safety Online’.

U.S. crack­down

The EU has been cracking down on U.S. Big Tech in the past year as it enforces legislation aimed at strengthening accountability of the tech giants. A slew of fines has drawn criticism from U.S. officials who have warned the bloc risks missing out on partaking in the AI economy.

U.S. President Donald Trump is combating the penalties against U.S. businesses, which have totaled over $7 billion in the past two years.

Apple, Meta, and Google are among the companies facing fines over violations of the bloc’s antitrust and competition laws, which they have contested.

Trump signed a memorandum in February that would consider issuing tariffs to “combat digital service taxes (DSTs), fines, practices, and policies that foreign governments levy on American companies.”

Earlier this year, the EU Commission launched an in­ves­ti­ga­tion against Elon Musk’s X, for­merly known as Twitter, for the spread­ing of sex­u­ally ex­plicit non-con­sen­sual con­tent of women and chil­dren gen­er­ated by its chat­bot Grok.

The in­creased le­gal scrutiny around child safety on so­cial me­dia plat­forms comes af­ter Meta and YouTube lost a high-pro­file court rul­ing in the U.S. in March, which found that de­sign fea­tures such as in­fi­nite scrolling and au­to­play con­tributed to ad­dic­tion and men­tal health harms in teenagers.

More recently, the Commission found that Meta breached the EU’s Digital Services Act by failing to keep under-13s off its platforms, with a preliminary investigation determining that minors are easily able to bypass checks.

Meanwhile, a social media ban for under-16s is gaining traction with governments worldwide, after Australia became the first country to enforce a sweeping ban in December. Several European countries, including Spain, France, and the U.K., are proposing their own legislation to keep children off social media.

On Rendering the Sky, Sunsets, and Planets - The Blog of Maxime Heckel

blog.maximeheckel.com

There’s this photo that’s been sit­ting on my in­spi­ra­tion board for a while, of the space shut­tle Endeavour, sus­pended in space in low Earth or­bit at sun­set. It shows Earth’s up­per at­mos­phere as a back­drop, fea­tur­ing beau­ti­ful, col­or­ful lay­ers rang­ing from dark or­ange to blue be­fore fad­ing away into the deep black of space. Not only is that gra­di­ent of color aes­thet­i­cally pleas­ing, but the phe­nom­e­non be­hind those col­ors, at­mos­pheric scat­ter­ing, is even more of an in­ter­est­ing topic once you start look­ing into how it works and how to re­pro­duce it.

I wanted to build my own ver­sion of this ef­fect with shaders, ren­der­ing the sky’s dis­tinc­tive blue color and re­al­is­tic sun­sets and sun­rises di­rectly in the browser. The goal was to get as close as I could to that photo, while also mov­ing to­ward the kind of at­mos­pheric ren­der­ing of­ten seen in games and other shader-based me­dia.

Here’s a com­pi­la­tion of what came out of this month-long jour­ney, all run­ning in real time:

I didn’t originally plan on writing about this subject, but the enthusiasm around the recent Artemis II mission, combined with my own interest in all things space, made it feel worth exploring in depth. It also felt like the perfect opportunity to build an interactive experience that could make the topic more accessible. In this write-up, we’ll see how to implement an atmospheric scattering shader post-processing effect step-by-step, starting with the implementation of the different building blocks (raymarching, Rayleigh and Mie scattering, as well as ozone absorption) to render a realistic sky dome, and then adapt the result to render it as an atmospheric shell around a planet. Finally, we’ll look into Sebastian Hillaire’s LUT-based approach for a more performant result, or at least my attempt at implementing it, as this was very much the “stepping outside of my comfort zone” phase for this project.

You may have, at some point or another, tried to slap a blue gradient background behind some of your work in an attempt to give it a more “atmospheric” look and call it a day, but quickly noticed doing so never feels quite right 1.

For a more true-to-life implementation, we must treat the sky and its color as the result of light interacting with air and its constituents, while taking into account several variables, such as the altitude of the observer, the amount of dust, the time of day, etc., all of that in a volume.

With that established, our goal for this first part is to use this as a guiding principle to lay the foundation for our atmosphere shader, and get to a result that feels almost indistinguishable from a real sky, at any time of the day.

Sampling Atmospheric Density

Much like how we’d ap­proach vol­u­met­ric clouds or vol­u­met­ric light, one easy way to sam­ple the at­mos­phere is through ray­march­ing. We can cast rays from the cam­er­a’s po­si­tion into the scene and step through the trans­par­ent medium to an­swer the two fol­low­ing ques­tions:

How much light survives traveling through the atmosphere? This is the transmittance term.

How much light is redirected toward the camera at each sample? Also known as scattering.

To answer the first one, we need to accumulate the atmospheric density encountered along the ray to obtain what is known as the optical depth. We will model this using the Rayleigh density function, which tells us how much “air” there is at a given altitude h. This is important because it accounts for the atmosphere getting thinner as altitude increases.

Sampling Rayleigh den­sity and ac­cu­mu­lat­ing op­ti­cal depth

const float RAYLEIGH_SCALE_HEIGHT = 8.0;
const float ATMOSPHERE_HEIGHT = 100.0;
const float VIEW_DISTANCE = 200.0;
const int PRIMARY_STEPS = 24;
const vec3 SUN_DIRECTION = normalize(vec3(0.0, 1.0, 1.0));

float rayleighDensity(float h) {
  return exp(-max(h, 0.0) / RAYLEIGH_SCALE_HEIGHT);
}

void main() {
  vec2 p = vUv * 2.0 - 1.0;

  vec3 color = vec3(0.0);
  vec3 viewDir = normalize(vec3(p.x, p.y, 1.0));
  vec3 skyDir = normalize(vec3(viewDir.x, max(viewDir.y, 0.0), viewDir.z));

  float stepSize = VIEW_DISTANCE / float(PRIMARY_STEPS);
  float viewOpticalDepth = 0.0;

  for (int i = 0; i < PRIMARY_STEPS; i++) {
    float t = (float(i) + 0.5) * stepSize;
    float h = t * skyDir.y;

    if (h < 0.0) break;
    if (h > ATMOSPHERE_HEIGHT) break;

    float dR = rayleighDensity(h);
    viewOpticalDepth += dR * stepSize;
  }

  color = ACESFilm(color);

  fragColor = vec4(color, 1.0);
}

Then, from the op­ti­cal depth, we can com­pute the trans­mit­tance T at a given point along the ray: the frac­tion of light that sur­vives while trav­el­ing through the at­mos­phere.

T=1.0 means that there is no loss of light.

T=0.0 means that the light is totally extinguished.

If you’ve read my article on volumetric clouds 2, we’re using a formula that may look familiar: Beer’s Law:

Computing trans­mit­tance

float dR = rayleighDensity(h);
viewOpticalDepth += dR * stepSize;

vec3 transmittance = exp(-rayleighBeta * viewOpticalDepth);
scattering += dR * transmittance * stepSize;
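As a sanity check, the same march can be run outside a shader. Below is a scalar JavaScript version of the loop; the constants mirror the snippet above, but RAYLEIGH_BETA here is an assumed single-channel value for illustration, whereas the shader uses a per-color vec3.

```javascript
// Scalar re-implementation of the optical depth / transmittance march.
const RAYLEIGH_SCALE_HEIGHT = 8.0;
const ATMOSPHERE_HEIGHT = 100.0;
const VIEW_DISTANCE = 200.0;
const PRIMARY_STEPS = 24;
const RAYLEIGH_BETA = 0.0025; // assumption: single-channel scattering coefficient

function rayleighDensity(h) {
  return Math.exp(-Math.max(h, 0.0) / RAYLEIGH_SCALE_HEIGHT);
}

// Transmittance (Beer's Law) along a ray whose vertical component is skyDirY.
function transmittanceAlong(skyDirY) {
  const stepSize = VIEW_DISTANCE / PRIMARY_STEPS;
  let viewOpticalDepth = 0.0;
  for (let i = 0; i < PRIMARY_STEPS; i++) {
    const t = (i + 0.5) * stepSize; // sample at the middle of each step
    const h = t * skyDirY;          // altitude of the sample
    if (h < 0.0 || h > ATMOSPHERE_HEIGHT) break;
    viewOpticalDepth += rayleighDensity(h) * stepSize;
  }
  return Math.exp(-RAYLEIGH_BETA * viewOpticalDepth);
}
```

Looking straight up (transmittanceAlong(1.0)) yields a noticeably higher transmittance than grazing the horizon (transmittanceAlong(0.05)), since a near-horizontal ray stays in the dense lower atmosphere for its whole length — which is exactly why the sun reddens at sunset.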

Why senior developers fail to communicate their expertise

www.nair.sh

§01

A se­nior de­vel­oper is a prob­lem avoider

When I join a team there are two kinds of se­nior de­vel­op­ers I meet.

The first kind says things like:

“I found this new tool and it’s pretty cool…”

“This company <company totally unlike the one we’re in> does things this way, so…”

“Here, look at this HackerNews post that says this is best practice, we should probably…”

I don’t like this kind of se­nior de­vel­oper. A lit­tle self-pro­tec­tive, lots of time spent in the in­dus­try, prob­a­bly a good peo­ple per­son.

But not my wave­length.

Then there’s also this kind of se­nior de­vel­oper:

“Do we really need that?”

“What happens if we don’t do this?”

“Can we make do for now? Maybe come back to this later when it becomes more important?”

Ah, baby, this is my se­nior de­vel­oper. The avoider, the re­ducer, the re­cy­cler. They want to avoid de­vel­op­ment as much as they can.

Why? Because they hunt a sin­gu­lar mon­ster in pro­fes­sional soft­ware de­vel­op­ment: com­plex­ity.

Special cases, if con­di­tions, new data­base ta­bles, new com­po­nents. All yuck yucks. The se­nior de­vel­oper wants as lit­tle of this as pos­si­ble, spend­ing lots of time mak­ing sure they ab­solutely need to add more code.

Because adding to a sys­tem is risk­ing more com­plex­ity.

Yes, yes, of course this is sim­plis­tic. There are se­nior de­vel­op­ers who ex­cel at tak­ing on un­solved prob­lems and find­ing new cre­ative de­signs.

But even­tu­ally, if you’re tak­ing re­spon­si­bil­ity for a work­ing sys­tem, you’re scared of com­plex­ity.

Now, why is that? What’s the down­side of com­plex­ity? And why does­n’t any­body else get it?

§02

The rest of the busi­ness is scared of un­cer­tainty

We’re go­ing to be sim­pli­fy­ing what a busi­ness is us­ing two loops.

This is the first loop; mar­keters, sales­peo­ple, prod­uct man­agers, the CEO, they all live here:

The main goal of this loop is to try and learn. The busi­ness wants to take things to mar­ket and then get feed­back on whether they’ve got some­thing valu­able or not.

The mon­ster, for peo­ple in this loop, is un­cer­tainty.

And un­cer­tainty is cruel be­cause no strat­egy is guar­an­teed to work. When com­bined with time (compensation for mar­ket­ing/​sales, or pay­roll for founders, or data for prod­uct man­agers) it can feel like tak­ing things to mar­ket as fast as pos­si­ble is the only way to re­duce un­cer­tainty be­fore a dead­line. The more you can take to the mar­ket, the more you can get feed­back from it, the more you can (potentially) re­duce un­cer­tainty.

This loop, and all companies start with this loop, is about pure, raw speed.

But what hap­pens when a busi­ness gets cus­tomers?

§03

Senior de­vel­op­ers care a lot about sta­bil­ity

Ah, now, here’s our sec­ond loop. People pay­ing for a ser­vice.

This is the loop where a lot of senior developers find themselves. The main goal in this loop is the continuation and guarantee of service.

Keep things work­ing, keep things un­der­stand­able, keep things de­bug­gable, keep things fix­able, keep things teach­able, keep things sta­ble.

Senior de­vel­op­ers worry about sta­bil­ity be­cause they take re­spon­si­bil­ity for the busi­ness to con­tinue serv­ing cus­tomers.

And what risks all of that?

Complexity.

It makes a sys­tem less un­der­stand­able, less de­bug­gable, less fix­able, less teach­able, and ul­ti­mately, less sta­ble.

Rising com­plex­ity = low­er­ing sta­bil­ity = se­nior de­vel­oper fail­ing re­spon­si­bil­ity = bad bad not nice, pay­ments in­ter­rupted, every­body sad.

So, if the first loop’s goal was un­cer­tainty re­duc­tion, the sec­ond loop’s goal is com­plex­ity man­age­ment.

But why does this lead to com­mu­ni­ca­tion fail­ure?

Because once you have cus­tomers, both loops are run­ning si­mul­ta­ne­ously. A busi­ness needs to both ex­plore pos­si­bil­i­ties and serve cus­tomers at the same time.

Ok, now you might be able to spot my an­swer to the ques­tion in the ti­tle of this post.

Depending on which loop you spend your time in, your problem is framed differently (which is why I think developers are split in their opinions on AI; some work more in one loop than the other).

This was the story of the peo­ple in the first loop:

But this was the story of the se­nior de­vel­oper in the sec­ond loop:

The sto­ries don’t match.

The more requests to build and add to the system the senior developer gets, the more the senior developer wants to respond with “uhhh, no, complexity … maintenance costs … understandability … speed of continuing development … productivity over time …”.

But that does noth­ing to ad­dress the rest of the busi­ness’s need for re­duc­ing un­cer­tainty.

The copy­writer’s di­ag­no­sis: You can’t ex­plain away some­one else’s prob­lem us­ing your own prob­lems.

And the copy­writer’s pre­scrip­tion: You need to de­scribe your so­lu­tion as a so­lu­tion to their prob­lem as well.

Senior developers fail to communicate because they express their problems in terms of complexity management when they should be expressing their solutions in terms of uncertainty reduction.

By acknowledging that what the rest of the company is seeking is uncertainty reduction, the senior developer can use their expertise to help.

And what’s the most use­ful skill a se­nior de­vel­oper has? The re­luc­tance to build what’s not nec­es­sary; the abil­ity to spot an op­por­tu­nity to re-use some­thing al­ready built.

Need to col­lect sur­vey data? Google forms, baby.

Need to build a whole new fea­ture to test it? Have you tried putting a but­ton in the ex­ist­ing UI and see­ing if peo­ple click it?

Need new an­a­lyt­ics ser­vice? What’s the most im­por­tant de­ci­sion we need an­a­lyt­ics for? Can we start with one de­ci­sion, one chart, one met­ric?

You want to bake me a whole birth­day cake? Just put a can­dle on my sand­wich.

This is what se­nior de­vel­op­ers learn to do: they learn how to give peo­ple what they want by be­ing re­source­ful with ex­ist­ing soft­ware.

But how do you com­mu­ni­cate this with­out send­ing peo­ple whole es­says?

Copywriters love boiling down multiple signals into singular phrases. And so, here’s the magical phrase every senior developer must learn: ‘Can we try something quicker?’

The use of ‘quicker’ acknowledges what they’re really looking for; ‘something’ implies another way of achieving it; ‘try’ implies imperfection, but also the possibility of it being good enough.

It per­fectly cuts down to the re­quire­ment of the rest of the com­pany, speed to re­duce un­cer­tainty, while al­low­ing the se­nior de­vel­oper to ex­er­cise their ex­per­tise: re­duce, re-use, and if life is truly a bless­ing, avoid.

That’s it. That’s my an­swer to the ti­tle of the post: se­nior de­vel­op­ers talk in terms of com­plex­ity when every­one else is wor­ried about un­cer­tainty.

But! Big but!

AI now seems to make all of this point­less, does­n’t it? Why re­duce? Why re-use? Why avoid? The AI can build so much in so lit­tle time.

Ah, well, it can’t yet do the one thing se­nior de­vel­op­ers still do.

Take re­spon­si­bil­ity.

§04

Senior de­vel­op­ers as ed­i­tors more than writ­ers

Senior de­vel­op­ers care a lot about un­der­stand­ing the sys­tem be­cause un­der­stand­ing al­lows fix­ing it when things go wrong. It al­lows ex­tend­ing it in­tel­li­gently when the sys­tem needs to grow. It al­lows, more than any­thing, the con­tin­ued, re­li­able ser­vic­ing of pay­ing cus­tomers.

AI threat­ens this un­der­stand­abil­ity. It is in­cred­i­ble at im­prov­ing the speed of tak­ing things to the mar­ket, but it also af­fects the other loop, the one the se­nior de­vel­op­ers are re­spon­si­ble for.

If you have a bunch of AI agents, ju­nior de­vel­op­ers, non-de­vel­op­ers, and your in­vestors and their moth­ers adding code into the sys­tem, you get a sys­tem that over­com­pen­sates for speed by giv­ing up sta­bil­ity.

This was the busi­ness in two loops:

And this is how AI af­fects the two loops:

Forget main­tain­ing sta­bil­ity, AI is a down­right desta­bi­lizer. It wors­ens un­der­stand­abil­ity, fix­a­bil­ity, de­bug­ga­bil­ity, teach­a­bil­ity, guar­an­te­abil­ity, all the bloody bil­i­ties.

AI does this and takes no re­spon­si­bil­ity.

Not nice. This is the se­nior de­vel­op­er’s main worry that’s be­ing brushed away.

Luckily, se­nior de­vel­op­ers have a few tricks up their sleeve.

Namely: de­cou­pling.

For the longest time, soft­ware de­vel­op­ers were the only ones who could build soft­ware. They were re­spon­si­ble for both loops.

That’s one sys­tem sup­port­ing two goals.

What if we had two sys­tems, one for each goal?

An analogy: a fiction writer rushes to complete a first draft (often called a vomit draft) and later extracts what’s working and gets rid of what’s not. There’s an editing process after the initial rapid write. The editor’s job is to take the bits that are working well and shape them into a cohesive whole.

What if we had one sys­tem just for speed? Everyone fo­cused on bring­ing things to life could work here. AI agents, our own gen­er­ated and un­re­viewed code, ju­nior devs, mar­ket­ing etc.

We could call this the ‘Speed’ version of the system. It’s not meant to be understandable; the goal is getting things good enough to take to the market for feedback.

And then what if we had a sec­ond sys­tem fo­cused on sta­bil­ity?

We could call this the ‘Scale’ version of the system. It’s designed by senior developers to be stable, understandable, and scalable.

The ‘Speed’ version allows the rest of the business to continue learning from the market, as the senior developers build a trailing version of the system that’s well-reviewed and understandable.

Plus, the design of the ‘Scale’ version is influenced by what worked and what didn’t work in the ‘Speed’ version of the system.

Features get built on ‘Speed’ but then stabilized on ‘Scale’.

What this looks like in practice might be unclear, but the idea is a well-communicated decoupling that makes explicit the difference between going for speed and going for stability.

Imagine you get asked to build some­thing am­bi­tious, and you say:

“Sure, I’ll have the Speed version ready in 3 days. Then the Scale version in about 6 weeks.”

They get what they want, speed and mo­men­tum. You get what you want, ob­ser­va­tion and de­sign.

Maybe?

Your thoughts, se­nior soft­ware de­vel­oper?

Or should I say, se­nior soft­ware ed­i­tor?


The future of Obsidian plugins

obsidian.md

Today we’re ex­cited to launch Obsidian Community, the new di­rec­tory and de­vel­oper dash­board for Obsidian plu­g­ins and themes.

Since the Obsidian API re­lease in 2020, more than 4,000 plu­g­ins and themes have been cre­ated by our amaz­ing com­mu­nity. Incredibly, Obsidian plu­g­ins have passed 120 mil­lion to­tal down­loads!

Our goal is to make it easy and safe for any­one to build, dis­trib­ute, dis­cover, and use plu­g­ins and themes.

Today’s launch is only the start of a larger set of ini­tia­tives. We’re ex­cited to share what’s new, and what’s com­ing soon.

Community site

Developer dash­board

Automated re­views

Plugin safety

Tools for teams

Next steps

FAQ

Community site

The new Community site makes it easy to ex­plore the breadth of plu­g­ins and themes with new ways to browse, search, fil­ter, and sort.

You can browse plugins across dozens of categories such as Integrations, Bases, Charts, and many more. Sort projects by name, downloads, popularity, release date, and updated date.

Every pro­ject has its own de­tail page where you can find screen­shots, de­tails, and a safety score­card. New la­bels are pre­sent for paid plu­g­ins and of­fi­cial in­te­gra­tions.

Authors can cus­tomize their pro­file pages with spon­sor­ship op­tions and links to their web­site and so­cial me­dia.

Developer dash­board

The Obsidian Community site also hosts our new de­vel­oper dash­board. This is where au­thors can sub­mit, man­age, and track the sta­tus of their pro­jects.

All ex­ist­ing plu­g­ins, themes, and queued sub­mis­sions added via GitHub have been au­to­mat­i­cally mi­grated to the new site.

To claim your ex­ist­ing pro­jects, sign into the new Community site and con­nect your GitHub ac­count. This lets you man­age your ex­ist­ing pro­jects, sub­mit new pro­jects, and edit your pro­file page.

Automated re­views

With this tran­si­tion we are in­tro­duc­ing au­to­mated re­views for all com­mu­nity pro­jects. The new au­to­mated re­view sys­tem scans every ver­sion for se­cu­rity and code qual­ity, not just the ini­tial sub­mis­sion.

Until to­day, ini­tial sub­mis­sions were man­u­ally re­viewed and ap­proved by our small team to en­sure they fol­low the Developer Policies. However, as Obsidian has grown in pop­u­lar­ity we strug­gled to keep pace with sub­mis­sions, and sub­se­quent ver­sions were not re­viewed.

As cod­ing agents ac­cel­er­ate the cre­ation of plu­g­ins, the re­view queue was only get­ting longer. We don’t ex­pect the pace of new sub­mis­sions to slow down. With tools like Obsidian CLI we’re mak­ing it even eas­ier to cre­ate plu­g­ins.

Now when a plu­gin or theme is sub­mit­ted, the au­to­mated re­view sys­tem ver­i­fies that it ad­heres to our de­vel­oper poli­cies, that the source code fol­lows best prac­tices, and that it is free of known vul­ner­a­bil­i­ties.

Building on this new sys­tem al­lows us to scal­ably re­view com­mu­nity pro­jects go­ing for­ward. With the abil­ity to con­tin­u­ously im­prove our au­to­mated tests, we are more equipped to com­pre­hen­sively im­prove the qual­ity and safety of the Obsidian ecosys­tem.

Importantly, man­ual re­views will con­tinue. The new sys­tem al­lows us to shift our ef­forts to­wards plu­g­ins that re­quire deeper in­spec­tion such as pop­u­lar plu­g­ins, fea­tured plu­g­ins, and is­sues flagged by the com­mu­nity.

All ex­ist­ing plu­g­ins and themes have been re-re­viewed us­ing the new sys­tem. In this process we found older plu­g­ins and themes that do not meet the lat­est guide­lines. These older pro­jects have been tem­porar­ily granted an ex­cep­tion. However, all plu­g­ins and themes that do not pass the new re­view process will even­tu­ally be phased out of the of­fi­cial di­rec­tory. See FAQs be­low.

And… Yes! All queued sub­mis­sions have been re­viewed. With the new sys­tem we were able to process over 2,300 queued sub­mis­sions in the last few days. If you’ve been wait­ing on us to re­view your plu­gin, sign into the Community site to see your sub­mis­sion’s cur­rent sta­tus.

Plugin safety

The new Community site and au­to­mated re­view sys­tem in­tro­duces ma­jor en­hance­ments for the safety and se­cu­rity of the Obsidian ecosys­tem:

Automated scans. Every ver­sion is now au­to­mat­i­cally checked for code qual­ity and se­cu­rity vul­ner­a­bil­i­ties. This in­cludes mal­ware scan­ning to de­tect po­ten­tially ma­li­cious ad­di­tions to plu­g­ins. Developers can see de­tailed sug­ges­tions, warn­ings, and fail­ure flags for every pro­ject in the de­vel­oper dash­board.

Scorecards. Users and de­vel­op­ers can see the sta­tus of au­to­mated checks with score­cards on every pro­ject. These score­cards will con­tinue to im­prove as we in­cor­po­rate dis­clo­sures, pri­vacy la­bels, ar­ti­fact at­tes­ta­tion, man­ual re­view re­sults, and adop­tion of app ca­pa­bil­i­ties.

Over the com­ing months, we will fur­ther in­crease trans­parency about plu­g­ins and their au­thors:

Disclosures. Plugins will de­clare what they ac­cess: net­work, file sys­tem, clip­board, and other ca­pa­bil­i­ties. Users will be able to see these dis­clo­sures be­fore in­stalling plu­g­ins.

Verified au­thors. Labels will be added for trusted de­vel­op­ers that have passed ad­di­tional ver­i­fi­ca­tion steps and are in good stand­ing.

As a mem­ber of the Obsidian com­mu­nity you play a part in keep­ing the ecosys­tem safe. Users can al­ways flag se­cu­rity is­sues di­rectly to the Obsidian team.

Tools for teams

Teams that use Obsidian can al­ready de­ploy safety con­trols for their users. In the com­ing months we will make it eas­ier for teams to man­age which com­mu­nity plu­g­ins are al­lowed, and dis­trib­ute pri­vate plu­g­ins to team mem­bers.

Teams that pub­lish of­fi­cial Obsidian plu­g­ins can now ap­ply for the Official badge in the Community di­rec­tory. Reach out to us if your plu­gin qual­i­fies.

Next steps

As you can tell, there are many mov­ing parts! Along with im­prove­ments to the Community di­rec­tory and au­to­mated re­view sys­tem, we will also make changes to the Obsidian app and API to im­prove dis­cov­ery and safety.

The com­mu­nity ecosys­tem is one of the most fun and pow­er­ful as­pects of Obsidian. We’re ex­cited to give it the foun­da­tion needed to con­tinue flour­ish­ing.

We’d love for you to ex­plore the new Obsidian Community and share your feed­back with us!

FAQ

Whew! That was a lot of information, but you might still have some questions. If your questions are not answered below, please reach out to us via the #plugin-dev and #theme-dev channels on the official Obsidian Discord server.

As a user how does this af­fect me?

You can use the new Community site to explore plugins and themes. If you’ve installed early-access plugins manually, you might not need to anymore because the review time has been cut down dramatically.

I found an error in a scorecard. What should I do?

Scorecards are new and can con­tain er­rors. You may find false pos­i­tives and false neg­a­tives. If you no­tice some­thing in­ac­cu­rate con­tact us in the #plugin-dev chan­nel on the Obsidian Discord server.

A plugin has incorrect tags or labels (e.g. official, paid, free). What should I do?

Developers can up­date a pro­jec­t’s tags and pric­ing us­ing the de­vel­oper dash­board. Only Obsidian staff can up­date the Official la­bel. If you see any is­sues con­tact us on the Obsidian Discord server.

How do I sub­mit a new plu­gin or theme?

The process is sig­nif­i­cantly eas­ier and faster than be­fore:

Sign into the Community site to ac­cess the new de­vel­oper dash­board.

Connect your GitHub ac­count and choose a repo to sub­mit.

Complete the steps in the dash­board.

Upon sub­mis­sion your pro­ject will be im­me­di­ately re­viewed. Typically, you will see the re­sults of your re­view within a few min­utes.

If your pro­ject passes, it will be avail­able to search and down­load in the app within 24 hours.

How do I claim my ex­ist­ing plu­g­ins and themes?

Sign into the Community site to ac­cess the new de­vel­oper dash­board. This lets you con­nect your GitHub ac­count and claim your plu­g­ins. Once signed in, you can up­date the ti­tle, de­scrip­tion, and screen­shots.

Why does my pro­ject not ap­pear in the di­rec­tory?

If your pro­ject does not ap­pear in search it is likely be­cause it has er­rors and can­not be down­loaded by users. Sign into the Community site, claim the plu­gin, and re­solve any er­rors.

Can I still up­date my plu­gin/​theme with­out us­ing the de­vel­oper dash­board?

Yes. You can con­tinue to re­lease new ver­sions via GitHub with­out us­ing the new de­vel­oper dash­board. New re­leases are au­to­mat­i­cally re­viewed. However if your up­date fails to pass re­view, you will need to use the de­vel­oper dash­board to see all the de­tails.

What hap­pens to pro­jects that are no longer main­tained?

Our ex­ist­ing pol­icy re­mains un­changed. When sub­mit­ting plu­g­ins and themes, de­vel­op­ers agree to con­tinue main­tain­ing their pro­jects. If the pro­ject is no longer main­tained, no longer func­tions with newer ver­sions of Obsidian, and has not been trans­ferred to a new owner, it will even­tu­ally be re­moved from the Community di­rec­tory per the Developer Policies.

As we con­tinue im­prov­ing dis­cov­ery tools, it will be­come eas­ier to find plu­g­ins that are ac­tively main­tained and up to date with Obsidian’s lat­est ca­pa­bil­i­ties.

What hap­pens to plu­g­ins and themes that fail the au­to­mated re­view sys­tem?

All new plu­g­ins and themes must pass au­to­mated re­view be­fore they are added to the di­rec­tory and avail­able via search. Each new ver­sion is scanned, and if it fails to pass re­view, the plu­gin is re­moved from search within 24 hours. You can test changes be­fore re­leas­ing them us­ing new tools listed be­low.

All plu­g­ins and themes that were pre­vi­ously ap­proved will con­tinue to be avail­able for now, even if they fail the au­to­mated re­view. Eventually, we will re­quire older plu­g­ins to meet the new stan­dards. We have not set a dead­line for this yet and will be work­ing closely with com­mu­nity de­vel­op­ers to de­fine that tran­si­tion.

Can I run the au­to­mated re­view with­out sub­mit­ting a re­lease?

Yes. Two op­tions:

Use our es­lint plu­gin to check your Obsidian plu­gin against the of­fi­cial de­vel­oper guide­lines lo­cally.

Use the de­vel­oper dash­board to run a pre­view scan on any branch, tag, or com­mit.

My plu­gin has co-main­tain­ers or be­longs to an or­ga­ni­za­tion. How can I give mul­ti­ple users ac­cess to the plu­gin in the de­vel­oper dash­board?

Currently only the owner of a GitHub repo can edit it in Obsidian Community. Organization re­pos can be claimed and edited if you have a pub­lic mem­ber­ship to the or­ga­ni­za­tion. In the near fu­ture we will add sup­port for mul­ti­ple col­lab­o­ra­tors.

Can closed source plu­g­ins be added to the new di­rec­tory?

For now, we are not ac­cept­ing new closed source plu­g­ins into the di­rec­tory. Existing closed source plu­g­ins will con­tinue to be avail­able un­til fur­ther no­tice. In the fu­ture we will con­sider how the new re­view sys­tem can be adapted for closed source plu­g­ins.

Does the de­vel­oper dash­board re­quire an Obsidian ac­count?

Yes. You must have an Obsidian ac­count to ac­cess the new de­vel­oper dash­board.

Does the new site re­quire us­ing GitHub?

For now, yes. In the fu­ture we will con­sider adding other soft­ware host­ing plat­forms.

What is shared with Obsidian when log­ging in via GitHub?

Logging in via GitHub shares your user­name and list of pub­lic repos­i­to­ries. It is only used to ver­ify own­er­ship of your repos­i­tory.

What does it mean for a plu­gin to be Paid or have Optional Payments?

Obsidian Community is not a store, and does not of­fer any built-in pay­ment so­lu­tions.

Developers can con­tinue to use ex­ter­nal pay­ment mech­a­nisms such as li­cense keys, API keys, and lo­gin gates. Developers must ac­cu­rately la­bel plu­g­ins un­der one of these three cat­e­gories:

Free means the plugin does not have any form of payment and is not tied to any paid services whatsoever. Donation links and sponsorship links are acceptable for Free plugins.

Optional pay­ments means users may op­tion­ally pay to un­lock ad­di­tional fea­tures or the plu­gin con­nects to paid ser­vices. If a plu­gin con­nects to a paid ser­vice or API, it must be la­beled as hav­ing Optional pay­ments, even if the ser­vice has a free tier.

Paid means users must pay to use its pri­mary fea­tures, even if it of­fers a free trial.

These la­bels de­ter­mine what users should ex­pect to pay, not whether the de­vel­oper of the plu­gin col­lects pay­ments.
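The three labels above reduce to a simple decision rule. A sketch (the function name and flags are illustrative, not part of any Obsidian API):

```python
def payment_label(paid_primary_features: bool,
                  optional_paid_features: bool,
                  connects_to_paid_service: bool) -> str:
    """Classify a plugin into one of the three labels described above."""
    if paid_primary_features:
        # Users must pay to use the primary features, even if there is a free trial.
        return "Paid"
    if optional_paid_features or connects_to_paid_service:
        # Connecting to a paid service counts, even if the service has a free tier.
        return "Optional payments"
    # Donation and sponsorship links alone still count as Free.
    return "Free"
```

Note the rule classifies what users should expect to pay, not how the developer collects payment.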

What should I do if I run into prob­lems with the new Community di­rec­tory and de­vel­oper dash­board?

Our team is here to help to make sure every­thing goes smoothly. If you have any ques­tions or con­cerns, reach out to us in the #plugin-dev chan­nel of the Obsidian Discord server.

GitHub - cactus-compute/needle: 26m function call model that runs on incredibly small devices

github.com

We distilled Gemini 3.1 into a 26M-parameter “Simple Attention Network” that you can even finetune locally on your Mac/PC. In production, Needle runs on Cactus at 6,000 tok/sec prefill and 1,200 tok/sec decode. Weights are fully open on Cactus-Compute/needle, as is the dataset generation.

Architecture (d=512, 8 heads / 4 KV heads, BPE vocab 8192):

- Encoder × 12: ZCRMSNorm → Self-Attn (GQA + RoPE) → Gated Residual; no FFN. Input: the text query, through the shared embedding.
- Decoder × 8: ZCRMSNorm → Masked Self-Attn + RoPE → Gated Residual, then ZCRMSNorm → Cross-Attn over the encoder output → Gated Residual. Input: [EOS]<tool_call> + answer, through the shared embedding.
- Output head: ZCRMSNorm → Linear (tied to the embedding) → Softmax → tool call.
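A back-of-the-envelope check that these dimensions account for the advertised 26M parameters, assuming every attention block (encoder self, decoder self, and cross) shares the same 8-head/4-KV-head GQA shape and ignoring the small norm/gate parameters:

```python
# Rough parameter count from the stated dimensions.
# Assumption: all attention blocks use the same GQA shape; norms/gates ignored.
d_model, n_heads, n_kv_heads, vocab = 512, 8, 4, 8192
head_dim = d_model // n_heads    # 64
kv_dim = n_kv_heads * head_dim   # 256

# Q and O project d_model -> d_model; K and V project d_model -> kv_dim.
attn_block = 2 * d_model * d_model + 2 * d_model * kv_dim  # 786,432

encoder = 12 * attn_block        # self-attention only, no FFN
decoder = 8 * 2 * attn_block     # masked self-attn + cross-attn per layer
embedding = vocab * d_model      # shared between stacks and tied with the head

total = encoder + decoder + embedding
print(f"{total:,}")              # 26,214,400 ≈ 26M
```

Dropping the FFN is what keeps the layer cost to attention projections alone, which is why the total lands so far below a comparably sized transformer.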

Pretrained on 16 TPU v6e for 200B to­kens (27hrs).

Post-trained on 2B to­kens of sin­gle-shot func­tion call dataset (45mins).

Needle is an experimental run for Simple Attention Networks, geared at redefining tiny AI for consumer devices (phones, watches, glasses…). So while it beats FunctionGemma-270m, Qwen-0.6B, Granite-350m, and LFM2.5-350m on single-shot function calls for personal AI, those models have more scope/capacity and excel in conversational settings. Also, small models can be finicky. Please use the UI in the next section to test on your own tools, and finetune accordingly at the click of a button.

Quickstart

git clone https://github.com/cactus-compute/needle.git
cd needle && source ./setup
needle playground

Opens a web UI at http://​127.0.0.1:7860 where you can test and fine­tune on your own tools. Weights are auto-down­loaded.

Usage (Python)

from needle import SimpleAttentionNetwork, load_checkpoint, generate, get_tokenizer

params, config = load_checkpoint("checkpoints/needle.pkl")
model = SimpleAttentionNetwork(config)
tokenizer = get_tokenizer()

result = generate(
    model, params, tokenizer,
    query="What's the weather in San Francisco?",
    tools='[{"name":"get_weather","parameters":{"location":"string"}}]',
    stream=False,
)
print(result)  # [{"name":"get_weather","arguments":{"location":"San Francisco"}}]

Finetuning

# Playground (generates data via Gemini, trains, evaluates, bundles result)
needle playground

# CLI (auto-downloads weights if not local)
needle finetune data.jsonl
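The README excerpt doesn't show the data.jsonl schema. As a hedged illustration only, a plausible single-shot record mirroring the fields of generate() above (the exact keys `needle finetune` expects are an assumption, not documented here):

```python
import json

# Hypothetical record shape, mirroring generate()'s query/tools/output fields.
examples = [
    {
        "query": "What's the weather in San Francisco?",
        "tools": [{"name": "get_weather", "parameters": {"location": "string"}}],
        "answer": [{"name": "get_weather", "arguments": {"location": "San Francisco"}}],
    },
]

# JSON Lines: one JSON object per line.
with open("data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Check the repository's dataset-generation code for the authoritative field names before finetuning.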

CLI

needle playground                 Test and finetune via web UI
needle finetune <data.jsonl>      Finetune on your own data
needle run --query "…" --tools    Single inference
needle train                      Full training run
needle pretrain                   Pretrain on PleIAs/SYNTH
needle eval --checkpoint <path>   Evaluate a checkpoint
needle tokenize                   Tokenize dataset
needle generate-data              Synthesize training data via Gemini
needle tpu <action>               TPU management (see docs/tpu.md)

@misc{ndubuaku2026needle,
  title={Needle},
  author={Henry Ndubuaku and Jakub Mroz and Karen Mosoyan and Roman Shemet and Parkirat Sandhu and Satyajit Kumar and Noah Cylich and Justin H. Lee},
  year={2026},
  url={https://github.com/cactus-compute/needle}
}
