10 interesting stories served every morning and every evening.

Googlebook: Designed for Gemini Intelligence

googlebook.google


Intelligence is the new spec.


The best of Gemini meets our most ad­vanced lap­tops.

Select any­thing to ask, com­pare, or cre­ate with Gemini, in­stantly.1

Open your phone apps on your lap­top, no in­stalls needed.2

Access files from your phone as if they live on your lap­top.2


Check re­sponses. Internet con­nec­tion re­quired. 18+. Results may vary based on vi­sual matches and are for il­lus­tra­tive pur­poses only. Sequences short­ened.

Setup re­quired. Phone with Android 17 or above re­quired.

Why senior developers fail to communicate their expertise

www.nair.sh

§01

A se­nior de­vel­oper is a prob­lem avoider

When I join a team there are two kinds of se­nior de­vel­op­ers I meet.

The first kind says things like:

“I found this new tool and it’s pretty cool …”

“This company <company totally unlike the one we’re in> does things this way, so …”

“Here, look at this HackerNews post that says this is best practice, we should probably …”

I don’t like this kind of se­nior de­vel­oper. A lit­tle self-pro­tec­tive, lots of time spent in the in­dus­try, prob­a­bly a good peo­ple per­son.

But not my wave­length.

Then there’s also this kind of se­nior de­vel­oper:

“Do we really need that?”

“What happens if we don’t do this?”

“Can we make do for now? Maybe come back to this later when it becomes more important?”

Ah, baby, this is my se­nior de­vel­oper. The avoider, the re­ducer, the re­cy­cler. They want to avoid de­vel­op­ment as much as they can.

Why? Because they hunt a sin­gu­lar mon­ster in pro­fes­sional soft­ware de­vel­op­ment: com­plex­ity.

Special cases, if con­di­tions, new data­base ta­bles, new com­po­nents. All yuck yucks. The se­nior de­vel­oper wants as lit­tle of this as pos­si­ble, spend­ing lots of time mak­ing sure they ab­solutely need to add more code.

Because adding to a sys­tem is risk­ing more com­plex­ity.

Yes, yes, of course this is sim­plis­tic. There are se­nior de­vel­op­ers who ex­cel at tak­ing on un­solved prob­lems and find­ing new cre­ative de­signs.

But even­tu­ally, if you’re tak­ing re­spon­si­bil­ity for a work­ing sys­tem, you’re scared of com­plex­ity.

Now, why is that? What’s the down­side of com­plex­ity? And why does­n’t any­body else get it?

§02

The rest of the busi­ness is scared of un­cer­tainty

We’re go­ing to be sim­pli­fy­ing what a busi­ness is us­ing two loops.

This is the first loop; mar­keters, sales­peo­ple, prod­uct man­agers, the CEO, they all live here:

The main goal of this loop is to try and learn. The busi­ness wants to take things to mar­ket and then get feed­back on whether they’ve got some­thing valu­able or not.

The mon­ster, for peo­ple in this loop, is un­cer­tainty.

And un­cer­tainty is cruel be­cause no strat­egy is guar­an­teed to work. When com­bined with time (compensation for mar­ket­ing/​sales, or pay­roll for founders, or data for prod­uct man­agers) it can feel like tak­ing things to mar­ket as fast as pos­si­ble is the only way to re­duce un­cer­tainty be­fore a dead­line. The more you can take to the mar­ket, the more you can get feed­back from it, the more you can (potentially) re­duce un­cer­tainty.

This loop, and all companies start with this loop, is about pure, raw speed.

But what hap­pens when a busi­ness gets cus­tomers?

§03

Senior de­vel­op­ers care a lot about sta­bil­ity

Ah, now, here’s our sec­ond loop. People pay­ing for a ser­vice.

This is the loop where a lot of senior developers find themselves. The main goal in this loop is the continuation and guarantee of service.

Keep things work­ing, keep things un­der­stand­able, keep things de­bug­gable, keep things fix­able, keep things teach­able, keep things sta­ble.

Senior de­vel­op­ers worry about sta­bil­ity be­cause they take re­spon­si­bil­ity for the busi­ness to con­tinue serv­ing cus­tomers.

And what risks all of that?

Complexity.

It makes a sys­tem less un­der­stand­able, less de­bug­gable, less fix­able, less teach­able, and ul­ti­mately, less sta­ble.

Rising com­plex­ity = low­er­ing sta­bil­ity = se­nior de­vel­oper fail­ing re­spon­si­bil­ity = bad bad not nice, pay­ments in­ter­rupted, every­body sad.

So, if the first loop’s goal was un­cer­tainty re­duc­tion, the sec­ond loop’s goal is com­plex­ity man­age­ment.

But why does this lead to com­mu­ni­ca­tion fail­ure?

Because once you have cus­tomers, both loops are run­ning si­mul­ta­ne­ously. A busi­ness needs to both ex­plore pos­si­bil­i­ties and serve cus­tomers at the same time.

Ok, now you might be able to spot my an­swer to the ques­tion in the ti­tle of this post.

Depending on which loop you spend your time on, your problem is framed differently (which is why I think developers get split in their opinions on AI: some work more on one loop than the other).

This was the story of the peo­ple in the first loop:

But this was the story of the se­nior de­vel­oper in the sec­ond loop:

The sto­ries don’t match.

The more requests to build and add to the system the senior developer gets, the more the senior developer wants to respond with “uhhh, no … complexity … maintenance costs … understandability … speed of continuing development … productivity over time …”.

But that does noth­ing to ad­dress the rest of the busi­ness’s need for re­duc­ing un­cer­tainty.

The copy­writer’s di­ag­no­sis: You can’t ex­plain away some­one else’s prob­lem us­ing your own prob­lems.

And the copy­writer’s pre­scrip­tion: You need to de­scribe your so­lu­tion as a so­lu­tion to their prob­lem as well.

Senior developers fail to communicate because they express their problems in terms of complexity management when they should be expressing their solutions in terms of uncertainty reduction.

By acknowledging that what the rest of the company is seeking is uncertainty reduction, the senior developer can use their expertise to help.

And what’s the most use­ful skill a se­nior de­vel­oper has? The re­luc­tance to build what’s not nec­es­sary; the abil­ity to spot an op­por­tu­nity to re-use some­thing al­ready built.

Need to col­lect sur­vey data? Google forms, baby.

Need to build a whole new fea­ture to test it? Have you tried putting a but­ton in the ex­ist­ing UI and see­ing if peo­ple click it?

Need new an­a­lyt­ics ser­vice? What’s the most im­por­tant de­ci­sion we need an­a­lyt­ics for? Can we start with one de­ci­sion, one chart, one met­ric?

You want to bake me a whole birth­day cake? Just put a can­dle on my sand­wich.

This is what se­nior de­vel­op­ers learn to do: they learn how to give peo­ple what they want by be­ing re­source­ful with ex­ist­ing soft­ware.

But how do you com­mu­ni­cate this with­out send­ing peo­ple whole es­says?

Copywriters love boiling down multiple signals into singular phrases. And so, here’s the magical phrase every senior developer must learn: ‘Can we try something quicker?’

The use of ‘quicker’ acknowledges what they’re really looking for; ‘something’ implies another way of achieving it; ‘try’ implies imperfection, but also the possibility of it being good enough.

It per­fectly cuts down to the re­quire­ment of the rest of the com­pany, speed to re­duce un­cer­tainty, while al­low­ing the se­nior de­vel­oper to ex­er­cise their ex­per­tise: re­duce, re-use, and if life is truly a bless­ing, avoid.

That’s it. That’s my an­swer to the ti­tle of the post: se­nior de­vel­op­ers talk in terms of com­plex­ity when every­one else is wor­ried about un­cer­tainty.

But! Big but!

AI now seems to make all of this point­less, does­n’t it? Why re­duce? Why re-use? Why avoid? The AI can build so much in so lit­tle time.

Ah, well, it can’t yet do the one thing se­nior de­vel­op­ers still do.

Take re­spon­si­bil­ity.

§04

Senior de­vel­op­ers as ed­i­tors more than writ­ers

Senior de­vel­op­ers care a lot about un­der­stand­ing the sys­tem be­cause un­der­stand­ing al­lows fix­ing it when things go wrong. It al­lows ex­tend­ing it in­tel­li­gently when the sys­tem needs to grow. It al­lows, more than any­thing, the con­tin­ued, re­li­able ser­vic­ing of pay­ing cus­tomers.

AI threat­ens this un­der­stand­abil­ity. It is in­cred­i­ble at im­prov­ing the speed of tak­ing things to the mar­ket, but it also af­fects the other loop, the one the se­nior de­vel­op­ers are re­spon­si­ble for.

If you have a bunch of AI agents, ju­nior de­vel­op­ers, non-de­vel­op­ers, and your in­vestors and their moth­ers adding code into the sys­tem, you get a sys­tem that over­com­pen­sates for speed by giv­ing up sta­bil­ity.

This was the busi­ness in two loops:

And this is how AI af­fects the two loops:

Forget main­tain­ing sta­bil­ity, AI is a down­right desta­bi­lizer. It wors­ens un­der­stand­abil­ity, fix­a­bil­ity, de­bug­ga­bil­ity, teach­a­bil­ity, guar­an­te­abil­ity, all the bloody bil­i­ties.

AI does this and takes no re­spon­si­bil­ity.

Not nice. This is the se­nior de­vel­op­er’s main worry that’s be­ing brushed away.

Luckily, se­nior de­vel­op­ers have a few tricks up their sleeve.

Namely: de­cou­pling.

For the longest time, soft­ware de­vel­op­ers were the only ones who could build soft­ware. They were re­spon­si­ble for both loops.

That’s one sys­tem sup­port­ing two goals.

What if we had two sys­tems, one for each goal?

An analogy: a fiction writer rushes to complete a first draft (often called a vomit draft) and later extracts what’s working and gets rid of what’s not. There’s an editing process after the initial rapid write. The editor’s job is to take the bits that are working well and shape them into a cohesive whole.

What if we had one sys­tem just for speed? Everyone fo­cused on bring­ing things to life could work here. AI agents, our own gen­er­ated and un­re­viewed code, ju­nior devs, mar­ket­ing etc.

We could call this the ‘Speed’ version of the system. It’s not meant to be understandable; the goal is getting things good enough to take to the market for feedback.

And then what if we had a sec­ond sys­tem fo­cused on sta­bil­ity?

We could call this the ‘Scale’ version of the system. It’s designed by senior developers to be stable, understandable, and scalable.

The ‘Speed’ version allows the rest of the business to continue learning from the market, as the senior developers build a trailing version of the system that’s well-reviewed and understandable.

Plus, the design of the ‘Scale’ version is influenced by what worked and what didn’t work in the ‘Speed’ version of the system.

Features get built on ‘Speed’ but then stabilized on ‘Scale’.

What this looks like in prac­tice might be un­clear, but the idea is to have a well-com­mu­ni­cated de-cou­pling that ex­plains that there’s a dif­fer­ence be­tween go­ing for speed and go­ing for sta­bil­ity.

Imagine you get asked to build some­thing am­bi­tious, and you say:

“Sure, I’ll have the Speed version ready in 3 days. Then the Scale version in about 6 weeks.”

They get what they want, speed and mo­men­tum. You get what you want, ob­ser­va­tion and de­sign.

Maybe?

Your thoughts, se­nior soft­ware de­vel­oper?

Or should I say, se­nior soft­ware ed­i­tor?

GitHub - FULU-Foundation/OrcaSlicer-bambulab

github.com

This ver­sion of OrcaSlicer re­stores full BambuNetwork sup­port for Bambu Lab print­ers.

You are not lim­ited to LAN only. It works over the in­ter­net just like be­fore, through BambuNetwork, with full func­tion­al­ity for nor­mal use and print­ing.

Installation

Windows

Windows re­quires WSL 2.

Before first launch, open Command Prompt or PowerShell as Administrator and run:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

Restart Windows, then launch Orca Studio.

Linux

On Linux, a nor­mal in­stal­la­tion is enough.

ma­cOS

Work in progress.

BMCU

I also en­cour­age you to use BMCU.

You can find BMCU firmware in my repos­i­to­ries.

GitHub - cactus-compute/needle: 26m function call model that runs on incredibly small devices

github.com

We distilled Gemini 3.1 into a 26m parameter “Simple Attention Network” that you can even finetune locally on your Mac/PC. In production, Needle runs on Cactus at 6000 toks/sec prefill and 1200 toks/sec decode. Weights are fully open on Cactus-Compute/needle, as well as the dataset generation.

[Architecture diagram: d=512, 8 heads / 4 KV heads, BPE vocab 8192. An Encoder x 12 stack (ZCRMSNorm, self-attention with GQA + RoPE, gated residuals, no FFN) reads the text query through a shared embedding and feeds cross-attention in a Decoder x 8 stack (ZCRMSNorm, masked self-attention + RoPE, gated residual; ZCRMSNorm, cross-attention, gated residual), which consumes [EOS]<tool_call> + answer and ends in ZCRMSNorm, a tied linear head, and a softmax producing the tool call.]

Pretrained on 16 TPU v6e for 200B to­kens (27hrs).

Post-trained on 2B to­kens of sin­gle-shot func­tion call dataset (45mins).

Needle is an experimental run for Simple Attention Networks, geared at redefining tiny AI for consumer devices (phones, watches, glasses…). So while it beats FunctionGemma-270m, Qwen-0.6B, Granite-350m, and LFM2.5-350m on single-shot function calling for personal AI, those models have more scope/capacity and excel in conversational settings. Also, small models can be finicky. Please use the UI in the next section to test on your own tools, and finetune accordingly, at the click of a button.

Quickstart

git clone https://github.com/cactus-compute/needle.git
cd needle && source ./setup
needle playground

Opens a web UI at http://​127.0.0.1:7860 where you can test and fine­tune on your own tools. Weights are auto-down­loaded.

Usage (Python)

from needle import SimpleAttentionNetwork, load_checkpoint, generate, get_tokenizer

params, config = load_checkpoint("checkpoints/needle.pkl")
model = SimpleAttentionNetwork(config)
tokenizer = get_tokenizer()

result = generate(
    model, params, tokenizer,
    query="What's the weather in San Francisco?",
    tools='[{"name":"get_weather","parameters":{"location":"string"}}]',
    stream=False,
)
print(result)
# [{"name":"get_weather","arguments":{"location":"San Francisco"}}]

Finetuning

# Playground (generates data via Gemini, trains, evaluates, bundles result)
needle playground

# CLI (auto-downloads weights if not local)
needle finetune data.jsonl

CLI

needle playground                 Test and finetune via web UI
needle finetune <data.jsonl>      Finetune on your own data
needle run --query "…" --tools    Single inference
needle train                      Full training run
needle pretrain                   Pretrain on PleIAs/SYNTH
needle eval --checkpoint <path>   Evaluate a checkpoint
needle tokenize                   Tokenize dataset
needle generate-data              Synthesize training data via Gemini
needle tpu <action>               TPU management (see docs/tpu.md)

@misc{ndubuaku2026needle,
  title={Needle},
  author={Henry Ndubuaku and Jakub Mroz and Karen Mosoyan and Roman Shemet and Parkirat Sandhu and Satyajit Kumar and Noah Cylich and Justin H. Lee},
  year={2026},
  url={https://github.com/cactus-compute/needle}
}

How I Moved My Digital Stack to Europe

monokai.com

On dig­i­tal sov­er­eignty, and why European cloud is bet­ter than you think

April 29, 2026 · 10 min. · Digital Sovereignty · Digital Infrastructure · Digital Autonomy · European Cloud · Europe

There’s a ver­sion of this post that starts with a spread­sheet and ends with a quiet sense of sat­is­fac­tion. That’s mostly how it went. But un­der­neath the prac­ti­cal ex­er­cise of swap­ping one SaaS tool for an­other was some­thing that felt more ur­gent, a grow­ing dis­com­fort with how much of my dig­i­tal in­fra­struc­ture sat on servers I did­n’t con­trol, in a ju­ris­dic­tion in­creas­ingly prone to un­pre­dictabil­ity, op­er­ated by com­pa­nies whose in­cen­tives don’t al­ways align with mine.

Digital sov­er­eignty sounds like a buzz­word un­til you think care­fully about what it means. It means know­ing where your data lives. It means not be­ing one pol­icy change, one ac­qui­si­tion, or one ex­ec­u­tive’s bad mood away from los­ing ac­cess to tools your busi­ness de­pends on. It means choos­ing in­fra­struc­ture based on val­ues, not just con­ve­nience.

So I started mi­grat­ing.

Analytics

Google Analytics was the ob­vi­ous first tar­get. It’s the canon­i­cal ex­am­ple of a ser­vice that’s free be­cause you are the prod­uct, your vis­i­tors’ be­hav­ior fun­neled back into Google’s ad­ver­tis­ing ma­chin­ery.

Self-hosting Matomo solved this cleanly. The data stays on my own server, and I’m fully GDPR-compliant with­out the cookie con­sent the­ater that Google Analytics typ­i­cally re­quires. The re­port­ing is com­pre­hen­sive, the in­ter­face is fa­mil­iar enough, and I own every­thing.

The main down­side is main­te­nance over­head. You’re now re­spon­si­ble for up­dates, back­ups, and keep­ing the server healthy. For most se­tups this is low-fric­tion, but it’s not zero fric­tion.
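For a sense of what that self-hosting looks like, here is a minimal sketch using the official Matomo and MariaDB Docker images. The container names, port, and password below are placeholder assumptions, not the author’s actual setup:

```shell
# Minimal self-hosted Matomo sketch (placeholder names and credentials).
docker network create matomo-net

# Database backend for Matomo.
docker run -d --name matomo-db --network matomo-net \
  -e MYSQL_ROOT_PASSWORD=change-me \
  -e MYSQL_DATABASE=matomo \
  mariadb:11

# Matomo itself, reachable on port 8080.
docker run -d --name matomo --network matomo-net \
  -p 8080:80 \
  matomo:latest
# Finish the web installer at http://localhost:8080,
# pointing Matomo at the database host "matomo-db".
```

You would still want volumes for the database and Matomo config so upgrades and backups survive container restarts, which is exactly the maintenance overhead mentioned below.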

Email

Proton Mail is based in Switzerland, not EU ter­ri­tory, but Swiss pri­vacy law is closely aligned with GDPR and ar­guably stronger in some re­spects. Proton builds its busi­ness model around pri­vacy rather than ad­ver­tis­ing, and end-to-end en­cryp­tion is baked in at the pro­to­col level rather than bolted on. The email client is solid, the cal­en­dar works well, and for any­one mov­ing away from US-based ser­vices, it sits com­fort­ably in the same spirit as the rest of this stack.

One ad­just­ment is get­ting used to Proton’s fil­ter sys­tem, which is a bit more lim­ited than Gmail’s. Gmail lets you write fil­ters against vir­tu­ally any­thing, in­clud­ing the full body of the mes­sage. Proton does­n’t sup­port fil­ter­ing on email con­tent at all. So if you’ve built a work­flow around catch­ing spe­cific phrases or key­words in mes­sage bod­ies, you’ll have to re­think it. For most peo­ple this won’t be a deal­breaker, but it’s worth know­ing be­fore you mi­grate.

There’s also a prac­ti­cal lim­i­ta­tion worth flag­ging: Proton caps cus­tom do­mains at three, even on the Duo plan. If you run sev­eral do­mains, like sep­a­rate ad­dresses for dif­fer­ent pro­jects or busi­nesses, you’ll hit that ceil­ing quickly and need to re­think how you route and send mail. I ended up con­sol­i­dat­ing, which was prob­a­bly over­due any­way, but it was­n’t a choice I made en­tirely freely.

Proton is­n’t free and charges a sub­stan­tial fee com­pared to other op­tions. You’ll get ac­cess to a whole suite of Proton apps though.

Password Management

Once I was in the Proton ecosys­tem, mov­ing pass­word man­age­ment there as well made sense. Proton Pass is end-to-end en­crypted, open source, and ben­e­fits from the same Swiss ju­ris­dic­tion as the rest of Proton’s stack.

1Password is a gen­uinely great prod­uct and this was a lat­eral move more than an up­grade. The in­ter­face is sim­ple, the browser ex­ten­sion works re­li­ably, and hav­ing pass­words, email, and cal­en­dar un­der one en­crypted roof has a cer­tain sat­is­fy­ing co­her­ence to it.

Compute

DigitalOcean has earned its rep­u­ta­tion by do­ing one thing ex­cep­tion­ally well: get­ting out of your way. The UI is clean, the men­tal model is sim­ple, and spin­ning up in­fra­struc­ture never feels like a chore. It’s the plat­form that proved de­vel­oper ex­pe­ri­ence could be a com­pet­i­tive moat.

Scaleway was a pleas­ant sur­prise. I ex­pected a ca­pa­ble-but-rough European al­ter­na­tive, but what I found was a plat­form that’s gen­uinely well thought out. Servers spun up quickly in­side a pri­vate net­work of my own con­fig­u­ra­tion, the con­trol panel is clean, and the op­tions avail­able matched every­thing I ac­tu­ally needed. Scaleway dis­plays pro­jected CO₂ emis­sions along­side server lo­ca­tion choices, a nice touch.

Object Storage

Scaleway’s object storage is S3-compatible, which makes migration mechanical rather than painful: update your endpoint and credentials, and existing code works unchanged.

I used a tool called rclone to sync my old AWS S3 stor­age buck­ets to the new Scaleway S3 buck­ets. This took a lit­tle more than a week of con­stant sync­ing, as these buck­ets were quite large.
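A sync like that can be sketched with a single rclone command. The remote names and buckets here are hypothetical placeholders, assumed to be configured beforehand with `rclone config`:

```shell
# One-way sync from an AWS S3 bucket to a Scaleway bucket.
# "aws" and "scaleway" are rclone remotes configured in advance;
# the bucket names are placeholders.
rclone sync aws:my-old-bucket scaleway:my-new-bucket \
  --progress \
  --checksum \
  --transfers 8
# Re-running the command is incremental, so a week-long
# sync can be interrupted and resumed safely.
```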

Offsite Backups

OVH is the largest European cloud provider and brings the re­li­a­bil­ity and pric­ing you’d ex­pect at that scale. Their ob­ject stor­age works well as a backup des­ti­na­tion and ends up cheaper than Backblaze B2 once you con­fig­ure life­cy­cle rules to move older back­ups to the cold stor­age class.

Getting there, how­ever, re­quires some pa­tience. The OVHcloud con­trol panel is a labyrinth: the life­cy­cle rule con­fig­u­ra­tion is buried some­where in the doc­u­men­ta­tion, and it in­volves some work in the ter­mi­nal. Once it’s set up, it works re­li­ably and the cost dif­fer­ence is mean­ing­ful.
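As a sketch of that terminal work, a lifecycle rule can be pushed over the S3 API with the AWS CLI. The bucket name, endpoint, and the “COLD_ARCHIVE” storage-class identifier are assumptions here; OVH’s documentation has the authoritative values:

```shell
# Move backups older than 30 days to a colder storage class.
# Bucket, endpoint, and storage-class name are placeholders --
# check OVH's docs for the identifiers your account accepts.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [ { "Days": 30, "StorageClass": "COLD_ARCHIVE" } ]
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backups \
  --endpoint-url https://s3.gra.io.cloud.ovh.net \
  --lifecycle-configuration file://lifecycle.json
```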

Transactional Emails

Lettermint is a European trans­ac­tional email ser­vice that does the job with­out the bloat. Deliverability is solid, the API is clean, and it has straight­for­ward pric­ing.

Compared to SendGrid, the an­a­lyt­ics are leaner and the ecosys­tem in­te­gra­tions are fewer. SendGrid has years of tool­ing, doc­u­men­ta­tion, and com­mu­nity an­swers be­hind it. Lettermint is newer and smaller. For most trans­ac­tional send­ing use cases (password re­sets, no­ti­fi­ca­tions, re­ceipts) that does­n’t mat­ter much. But if you’re do­ing com­plex multi-stream email in­fra­struc­ture, you’ll want to au­dit the fea­ture set care­fully first.

Error Tracking

Bugsink is a self-hosted error tracking tool that accepts Sentry’s SDK, which means the migration path is almost frictionless: change one line of configuration and you’re done.

To be hon­est: Bugsink is bare-bones. There’s no per­for­mance mon­i­tor­ing, no ses­sion re­plays, no ad­vanced alert­ing. It’s not a Sentry re­place­ment for teams that use Sentry prop­erly. For me, it’s a sim­ple re­mote er­ror log, when some­thing breaks in pro­duc­tion I get a stack trace and that’s enough. Sentry’s cloud prod­uct is gen­uinely ex­cel­lent if you need the full fea­ture set, and for larger en­gi­neer­ing teams the breadth al­most cer­tainly jus­ti­fies the cost. But if your use case is tell me when some­thing broke and show me the stack trace”, self-hosted Bugsink does ex­actly that with no data leav­ing your in­fra­struc­ture.
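That one line of configuration is typically just the DSN. As a sketch (the host and key are hypothetical placeholders for your own deployment), most Sentry SDKs can be redirected through their standard environment variable:

```shell
# Point any Sentry-SDK app at a self-hosted Bugsink instance instead of
# sentry.io. The host and key below are hypothetical placeholders.
export SENTRY_DSN="https://abc123@errors.example.com/1"
# Most Sentry SDKs read SENTRY_DSN automatically when initialized
# without an explicit dsn argument, so no code change is needed.
```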

AI API in­te­gra­tion

For my AI API in­te­gra­tions, I switched from OpenAI to Mistral. It worked out per­fectly as I was mostly us­ing sim­pler mod­els any­way.

Mistral is head­quar­tered in Paris and has pub­lished com­pelling open-weight mod­els along­side its API of­fer­ing. The API is clean, the mod­els are fast and ca­pa­ble, and there’s some­thing co­her­ent about a European AI provider that leans into open­ness rather than away from it. For my in­fer­ence work­loads, the switch was lat­eral in qual­ity and mean­ing­fully bet­ter in terms of where the money goes.

CDN

Exception № 1

Not every­thing moved. Cloudflare is a US com­pany, I still use it, and I’m at peace with that.

Here’s the rea­son­ing: Cloudflare sits in front of my pub­lic-fac­ing web­sites. Its job is to cache, pro­tect against DDoS at­tacks, and make con­tent load fast for vis­i­tors around the world. The data flow­ing through it is al­ready pub­lic by de­f­i­n­i­tion. I’m not rout­ing pri­vate com­mu­ni­ca­tions or sen­si­tive ap­pli­ca­tion data through Cloudflare; I’m us­ing it to serve pages that any­one on the in­ter­net can read. The sov­er­eignty cal­cu­lus is dif­fer­ent when the thing you’re pro­tect­ing is al­ready pub­lic.

I did try Bunny CDN, which is European-based and has a great rep­u­ta­tion. For straight­for­ward CDN use it’s ex­cel­lent. But Cloudflare’s fea­ture set (security rules, Workers plat­form, breadth of con­fig­u­ra­tion op­tions) was­n’t matched closely enough to jus­tify the switch for my spe­cific needs. Sometimes the prag­matic an­swer wins.

Payments

Exception № 2

Stripe is one of the few ser­vices I haven’t moved yet, even though pay­ment in­fra­struc­ture is ex­actly the kind of thing I care about hav­ing in a ju­ris­dic­tion I trust. Mollie is a Dutch pay­ment proces­sor with full EU in­cor­po­ra­tion, strong GDPR com­pli­ance by de­sign, and a prod­uct that has ma­tured con­sid­er­ably in re­cent years. The API has con­verged to­ward par­ity for most com­mon pay­ment flows, and for a European busi­ness the re­gional pay­ment method cov­er­age (iDEAL, Bancontact, SEPA) is ar­guably bet­ter.

The migration is on the list. It’s just not a trivial one. Payment integrations touch billing logic, webhooks, tax invoicing, and customer-facing flows in ways that require careful testing and a good moment to cut over. It’s also more expensive than Stripe for my use case.

AI Code as­sis­tance

Exception № 3

This one felt over­due. OpenAI works fine, but the com­pa­ny’s tra­jec­tory does­n’t align with my own views any­more. After a pe­riod of de­lib­er­ate drift, I felt the need to switch. Ideally I wanted to use Mistral Vibe here, but it just did­n’t make the cut as it could­n’t com­pete with Claude.

Claude Code is now my day-to-day AI as­sis­tant for cod­ing. The rea­son­ing qual­ity is strong, the con­text han­dling is gen­uinely im­pres­sive, and Anthropic’s ap­proach to safety and trans­parency feels more struc­turally grounded.

Anthropic is a US com­pany, so this does­n’t sat­isfy the ju­ris­dic­tional cri­te­rion I ap­plied else­where. But it sat­is­fies some­thing else, the sense that the or­ga­ni­za­tion build­ing the thing has given se­ri­ous thought to what it’s build­ing and why.

It’s also worth not­ing that lo­cal mod­els are be­com­ing in­creas­ingly vi­able. Qwen, Alibaba’s open-weight model fam­ily, is a strong ex­am­ple: ca­pa­ble enough for many real work­loads, run­ning en­tirely on your own hard­ware, with no data leav­ing your ma­chine. The gap be­tween fron­tier API mod­els and what you can run lo­cally is nar­row­ing faster than most peo­ple re­al­ize.

Not every­thing is ideal. Most data cen­ters still sit out­side Europe, and open” means dif­fer­ent things to dif­fer­ent or­ga­ni­za­tions. But the di­rec­tion is right. A world where ca­pa­ble AI runs on your own hard­ware, with pub­lished weights and trans­par­ent train­ing, is a much bet­ter world for dig­i­tal au­ton­omy than one where all in­fer­ence routes through a hand­ful of closed API providers. We’re not there yet, but the tra­jec­tory is en­cour­ag­ing.

Git Version Control

Exception № 4

GitLab also re­mains for now. GitLab is head­quar­tered in the US but of­fers self-hosted op­tions, and the com­pany has long had a strong com­mit­ment to trans­parency and open source. A self-hosted in­stance is on the roadmap, but mov­ing source con­trol is a more sig­nif­i­cant un­der­tak­ing than most of these mi­gra­tions.

GitHub stays in the pic­ture for one spe­cific pur­pose: pub­lic-fac­ing NPM pack­ages and is­sue track­ing for open source soft­ware. When you pub­lish a pack­age or main­tain pub­lic tool­ing, GitHub is where de­vel­op­ers ex­pect to find it. The net­work ef­fects are real, it’s where the forks, stars, and is­sue re­ports come from. For the pub­lic-fac­ing sur­face of open source work, there’s no mean­ing­ful sov­er­eignty con­cern and a lot of prac­ti­cal up­side.

Was it worth it?

The prac­ti­cal fric­tion was real but man­age­able. Most mi­gra­tions were an af­ter­noon of work: up­date a cre­den­tial here, point a DNS record there, ex­port and im­port some data. A few took longer. None were cat­a­strophic. All in all it took longer than ex­pected, but most time was spent in re­search­ing and plan­ning when to do what. Two months in, every­thing is run­ning with­out in­ci­dent. No fires, no re­grets.

Digital sov­er­eignty is­n’t about para­noia. It’s about be­ing con­scious about your in­fra­struc­ture, where you de­cide who holds your data, who can reach it, and what hap­pens when pol­i­tics shift. The tools are there. The ecosys­tem is mostly ma­ture. The only thing that was stop­ping me was in­er­tia. It’s en­tirely pos­si­ble to run a re­li­able, ca­pa­ble, pro­fes­sional dig­i­tal stack mostly from European in­fra­struc­ture. This mi­gra­tion was proof of that.

How To Make Your Text Look Futuristic

typesetinthefuture.com

We’ve al­ready seen how Eurostile Bold Extended is spec­tac­u­larly ef­fec­tive at es­tab­lish­ing a movie’s time­frame. But if Eurostile is­n’t enough, there’s more you can do to clar­ify your movie’s time­frame. I’d like to in­tro­duce you to six easy rules that are pretty much guar­an­teed to po­si­tion your text firmly in the FU­TURE.

We’ll start with some sim­ple sans-serif text, such as this ran­domly cho­sen word in Eu­ros­tile Bold. So far, so 2016:

Rule 1: First, let’s add an italic slant. We want it to look like the text is stretch­ing to­wards 2020:

Hmm. It’s still a lit­tle bor­ing. Rule 2: What if we make things a bit more curvy in some places, and a bit more an­gu­lar in oth­ers? I hear that’s all the ty­po­graphic rage in 2035:

That’s much bet­ter! There’s still more we can do, mind. Rule 3: How about adding some con­sum­mate Vs to a few of the let­ters? Yeah! That’d be cool!

Hello, 2050:

There’s still some­thing miss­ing, how­ever — we’ve for­got­ten to take into ac­count the dev­as­tat­ing Kern Wars of 2067. Rule 4: Let’s com­bine a few let­ters into one, to make sure we’re not vi­o­lat­ing the Kern Tithe:

Now we’re talking! Let’s end with Rule 5: Remove an entirely pointless and arbitrary segment of the text. In this case, we’ll remove a horizontal line from the majority of the word:

WOAH. That looks amaz­ing! Who knew 2092 was so easy to reach?

D’you know what – I think we need a Rule 6 too. Let’s add a noise texture, some shamelessly steel-brushed metal, and a bit of moody blue lighting:

Finally, let’s em­boss it to within an inch of its life:

…and add a god-damn star field:

BOOM. Welcome to the FUTURE!

Here’s how it looks if you put the whole thing to­gether:

Now. Various per­mu­ta­tions of these six rules have been ap­plied in many dif­fer­ent movies. Perhaps the Ur Example is Ridley Scott’s Blade Runner, which I may have slightly copied in my ex­am­ple above:

Blade Runner is far from the only ex­am­ple, how­ever. The lo­go­type for 2003’s Battlestar Galactica minis­eries fol­lows pretty much every rule to the let­ter (and adds some ex­truded Eurostile Bold Extended for good mea­sure):

Transformers is sim­i­larly all-en­com­pass­ing, tak­ing the brushed metal ef­fect to the ex­treme:

Guardians of the Galaxy also uses pretty much every trick apart from the ital­ics:

…whereas RoboCop is all about those con­sum­mate Vs, plus the world’s most ex­treme em­boss­ment:

Star Wars, of course, takes Rule 4 and runs with it all the way to the bank:

…whereas The Amazing Spider-Man follows nearly all of the rules (and takes Rule 2 to extremes), although it will be receiving a visit from the Kern Tithe Police for Opportunities Missed:

Captain America: The Winter Soldier re­ally likes ap­ply­ing Rules 2 and 3, plus some of the best Rule 6 you’ll ever see:

Alien vs. Predator is ridicu­lously italic and metal­lic:

G.I. Joe: Retaliation uses every trick apart from the kern­ing:

And WALL·E is all about Rule 2:

Finally: if you have any lin­ger­ing doubts that these six rules spell FUTURE, here are Rules 1, 2, and 4 in ac­tion for the iconic lo­go­type for none other than Back To The Future itself:

Hello, the FUTURE. It’s good to be back.

UPDATE: Several peo­ple have noted that I missed pretty much the text­book­ing-est text­book ex­am­ple of this trope — namely, Star Trek: The Next Generation:

I mean se­ri­ously. It even has a God-Damn Star Field in the back­ground. (I swear I didn’t have this lo­go­type in mind when I wrote the ar­ti­cle, but boy, does it prove the point.)

FUN FACT: An ex­panded ver­sion of this ar­ti­cle ap­pears in the Typeset in the Future book, avail­able on December 11 2018. You can pre-or­der it now on Amazon.

The future of Obsidian plugins

obsidian.md

Today we’re ex­cited to launch Obsidian Community, the new di­rec­tory and de­vel­oper dash­board for Obsidian plu­g­ins and themes.

Since the Obsidian API re­lease in 2020, more than 4,000 plu­g­ins and themes have been cre­ated by our amaz­ing com­mu­nity. Incredibly, Obsidian plu­g­ins have passed 120 mil­lion to­tal down­loads!

Our goal is to make it easy and safe for any­one to build, dis­trib­ute, dis­cover, and use plu­g­ins and themes.

Today’s launch is only the start of a larger set of ini­tia­tives. We’re ex­cited to share what’s new, and what’s com­ing soon.

Community site

Developer dash­board

Automated re­views

Plugin safety

Tools for teams

Next steps

FAQ

Community site

The new Community site makes it easy to ex­plore the breadth of plu­g­ins and themes with new ways to browse, search, fil­ter, and sort.

You can browse plugins across dozens of categories, such as Integrations, Bases, and Charts. Sort projects by name, downloads, popularity, release date, or date updated.

Every pro­ject has its own de­tail page where you can find screen­shots, de­tails, and a safety score­card. New la­bels are pre­sent for paid plu­g­ins and of­fi­cial in­te­gra­tions.

Authors can cus­tomize their pro­file pages with spon­sor­ship op­tions and links to their web­site and so­cial me­dia.

Developer dash­board

The Obsidian Community site also hosts our new de­vel­oper dash­board. This is where au­thors can sub­mit, man­age, and track the sta­tus of their pro­jects.

All ex­ist­ing plu­g­ins, themes, and queued sub­mis­sions added via GitHub have been au­to­mat­i­cally mi­grated to the new site.

To claim your ex­ist­ing pro­jects, sign into the new Community site and con­nect your GitHub ac­count. This lets you man­age your ex­ist­ing pro­jects, sub­mit new pro­jects, and edit your pro­file page.

Automated re­views

With this tran­si­tion we are in­tro­duc­ing au­to­mated re­views for all com­mu­nity pro­jects. The new au­to­mated re­view sys­tem scans every ver­sion for se­cu­rity and code qual­ity, not just the ini­tial sub­mis­sion.

Until today, initial submissions were manually reviewed and approved by our small team to ensure they follow the Developer Policies. However, as Obsidian has grown in popularity we have struggled to keep pace with submissions, and subsequent versions went unreviewed.

As coding agents accelerate the creation of plugins, the review queue has only grown longer. We don’t expect the pace of new submissions to slow down. With tools like Obsidian CLI we’re making it even easier to create plugins.

Now when a plu­gin or theme is sub­mit­ted, the au­to­mated re­view sys­tem ver­i­fies that it ad­heres to our de­vel­oper poli­cies, that the source code fol­lows best prac­tices, and that it is free of known vul­ner­a­bil­i­ties.

Building on this new sys­tem al­lows us to scal­ably re­view com­mu­nity pro­jects go­ing for­ward. With the abil­ity to con­tin­u­ously im­prove our au­to­mated tests, we are more equipped to com­pre­hen­sively im­prove the qual­ity and safety of the Obsidian ecosys­tem.

Importantly, man­ual re­views will con­tinue. The new sys­tem al­lows us to shift our ef­forts to­wards plu­g­ins that re­quire deeper in­spec­tion such as pop­u­lar plu­g­ins, fea­tured plu­g­ins, and is­sues flagged by the com­mu­nity.

All ex­ist­ing plu­g­ins and themes have been re-re­viewed us­ing the new sys­tem. In this process we found older plu­g­ins and themes that do not meet the lat­est guide­lines. These older pro­jects have been tem­porar­ily granted an ex­cep­tion. However, all plu­g­ins and themes that do not pass the new re­view process will even­tu­ally be phased out of the of­fi­cial di­rec­tory. See FAQs be­low.

And… Yes! All queued sub­mis­sions have been re­viewed. With the new sys­tem we were able to process over 2,300 queued sub­mis­sions in the last few days. If you’ve been wait­ing on us to re­view your plu­gin, sign into the Community site to see your sub­mis­sion’s cur­rent sta­tus.

Plugin safety

The new Community site and automated review system introduce major enhancements to the safety and security of the Obsidian ecosystem:

Automated scans. Every ver­sion is now au­to­mat­i­cally checked for code qual­ity and se­cu­rity vul­ner­a­bil­i­ties. This in­cludes mal­ware scan­ning to de­tect po­ten­tially ma­li­cious ad­di­tions to plu­g­ins. Developers can see de­tailed sug­ges­tions, warn­ings, and fail­ure flags for every pro­ject in the de­vel­oper dash­board.

Scorecards. Users and de­vel­op­ers can see the sta­tus of au­to­mated checks with score­cards on every pro­ject. These score­cards will con­tinue to im­prove as we in­cor­po­rate dis­clo­sures, pri­vacy la­bels, ar­ti­fact at­tes­ta­tion, man­ual re­view re­sults, and adop­tion of app ca­pa­bil­i­ties.

Over the com­ing months, we will fur­ther in­crease trans­parency about plu­g­ins and their au­thors:

Disclosures. Plugins will de­clare what they ac­cess: net­work, file sys­tem, clip­board, and other ca­pa­bil­i­ties. Users will be able to see these dis­clo­sures be­fore in­stalling plu­g­ins.

Verified au­thors. Labels will be added for trusted de­vel­op­ers that have passed ad­di­tional ver­i­fi­ca­tion steps and are in good stand­ing.

As a mem­ber of the Obsidian com­mu­nity you play a part in keep­ing the ecosys­tem safe. Users can al­ways flag se­cu­rity is­sues di­rectly to the Obsidian team.

Tools for teams

Teams that use Obsidian can al­ready de­ploy safety con­trols for their users. In the com­ing months we will make it eas­ier for teams to man­age which com­mu­nity plu­g­ins are al­lowed, and dis­trib­ute pri­vate plu­g­ins to team mem­bers.

Teams that pub­lish of­fi­cial Obsidian plu­g­ins can now ap­ply for the Official badge in the Community di­rec­tory. Reach out to us if your plu­gin qual­i­fies.

Next steps

As you can tell, there are many mov­ing parts! Along with im­prove­ments to the Community di­rec­tory and au­to­mated re­view sys­tem, we will also make changes to the Obsidian app and API to im­prove dis­cov­ery and safety.

The com­mu­nity ecosys­tem is one of the most fun and pow­er­ful as­pects of Obsidian. We’re ex­cited to give it the foun­da­tion needed to con­tinue flour­ish­ing.

We’d love for you to ex­plore the new Obsidian Community and share your feed­back with us!

FAQ

Whew! That was a lot of information, but you might still have some questions. If your questions are not answered below, please reach out to us via the #plugin-dev and #theme-dev channels on the official Obsidian Discord server.

As a user how does this af­fect me?

You can use the new Community site to explore plugins and themes. If you’ve installed early-access plugins manually, you might not need to anymore because review times have been cut down dramatically.

I found an error in a scorecard. What should I do?

Scorecards are new and can con­tain er­rors. You may find false pos­i­tives and false neg­a­tives. If you no­tice some­thing in­ac­cu­rate con­tact us in the #plugin-dev chan­nel on the Obsidian Discord server.

A plugin has incorrect tags or labels (e.g. official, paid, free). What should I do?

Developers can up­date a pro­jec­t’s tags and pric­ing us­ing the de­vel­oper dash­board. Only Obsidian staff can up­date the Official la­bel. If you see any is­sues con­tact us on the Obsidian Discord server.

How do I sub­mit a new plu­gin or theme?

The process is sig­nif­i­cantly eas­ier and faster than be­fore:

Sign into the Community site to ac­cess the new de­vel­oper dash­board.

Connect your GitHub ac­count and choose a repo to sub­mit.

Complete the steps in the dash­board.

Upon sub­mis­sion your pro­ject will be im­me­di­ately re­viewed. Typically, you will see the re­sults of your re­view within a few min­utes.

If your pro­ject passes, it will be avail­able to search and down­load in the app within 24 hours.

How do I claim my ex­ist­ing plu­g­ins and themes?

Sign into the Community site to ac­cess the new de­vel­oper dash­board. This lets you con­nect your GitHub ac­count and claim your plu­g­ins. Once signed in, you can up­date the ti­tle, de­scrip­tion, and screen­shots.

Why does my pro­ject not ap­pear in the di­rec­tory?

If your pro­ject does not ap­pear in search it is likely be­cause it has er­rors and can­not be down­loaded by users. Sign into the Community site, claim the plu­gin, and re­solve any er­rors.

Can I still up­date my plu­gin/​theme with­out us­ing the de­vel­oper dash­board?

Yes. You can continue to release new versions via GitHub without using the new developer dashboard. New releases are automatically reviewed. However, if your update fails to pass review, you will need to use the developer dashboard to see the details.

What hap­pens to pro­jects that are no longer main­tained?

Our ex­ist­ing pol­icy re­mains un­changed. When sub­mit­ting plu­g­ins and themes, de­vel­op­ers agree to con­tinue main­tain­ing their pro­jects. If the pro­ject is no longer main­tained, no longer func­tions with newer ver­sions of Obsidian, and has not been trans­ferred to a new owner, it will even­tu­ally be re­moved from the Community di­rec­tory per the Developer Policies.

As we con­tinue im­prov­ing dis­cov­ery tools, it will be­come eas­ier to find plu­g­ins that are ac­tively main­tained and up to date with Obsidian’s lat­est ca­pa­bil­i­ties.

What hap­pens to plu­g­ins and themes that fail the au­to­mated re­view sys­tem?

All new plu­g­ins and themes must pass au­to­mated re­view be­fore they are added to the di­rec­tory and avail­able via search. Each new ver­sion is scanned, and if it fails to pass re­view, the plu­gin is re­moved from search within 24 hours. You can test changes be­fore re­leas­ing them us­ing new tools listed be­low.

All plu­g­ins and themes that were pre­vi­ously ap­proved will con­tinue to be avail­able for now, even if they fail the au­to­mated re­view. Eventually, we will re­quire older plu­g­ins to meet the new stan­dards. We have not set a dead­line for this yet and will be work­ing closely with com­mu­nity de­vel­op­ers to de­fine that tran­si­tion.

Can I run the au­to­mated re­view with­out sub­mit­ting a re­lease?

Yes. Two op­tions:

Use our es­lint plu­gin to check your Obsidian plu­gin against the of­fi­cial de­vel­oper guide­lines lo­cally.

Use the de­vel­oper dash­board to run a pre­view scan on any branch, tag, or com­mit.
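The first option can be sketched as an ESLint flat config. This is a minimal sketch under stated assumptions, not the official setup: the package name `eslint-plugin-obsidianmd` and its `recommended` config export are assumptions here, so check the plugin’s README for the exact names.

```javascript
// eslint.config.mjs — hedged sketch of a local guideline check.
// ASSUMPTIONS: the official plugin is published as
// "eslint-plugin-obsidianmd" and exports a recommended flat config;
// verify both names against the plugin's README before relying on this.
import obsidianmd from "eslint-plugin-obsidianmd";

export default [
  // Apply the plugin's recommended Obsidian guideline rules.
  ...obsidianmd.configs.recommended,
  {
    // Lint the TypeScript sources of the plugin repository.
    files: ["src/**/*.ts"],
  },
];
```

With a config like this in place, running `npx eslint .` in the plugin repository surfaces the same class of guideline issues locally, before you submit a release for automated review.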

My plu­gin has co-main­tain­ers or be­longs to an or­ga­ni­za­tion. How can I give mul­ti­ple users ac­cess to the plu­gin in the de­vel­oper dash­board?

Currently only the owner of a GitHub repo can edit it in Obsidian Community. Organization re­pos can be claimed and edited if you have a pub­lic mem­ber­ship to the or­ga­ni­za­tion. In the near fu­ture we will add sup­port for mul­ti­ple col­lab­o­ra­tors.

Can closed source plu­g­ins be added to the new di­rec­tory?

For now, we are not ac­cept­ing new closed source plu­g­ins into the di­rec­tory. Existing closed source plu­g­ins will con­tinue to be avail­able un­til fur­ther no­tice. In the fu­ture we will con­sider how the new re­view sys­tem can be adapted for closed source plu­g­ins.

Does the de­vel­oper dash­board re­quire an Obsidian ac­count?

Yes. You must have an Obsidian ac­count to ac­cess the new de­vel­oper dash­board.

Does the new site re­quire us­ing GitHub?

For now, yes. In the fu­ture we will con­sider adding other soft­ware host­ing plat­forms.

What is shared with Obsidian when log­ging in via GitHub?

Logging in via GitHub shares your user­name and list of pub­lic repos­i­to­ries. It is only used to ver­ify own­er­ship of your repos­i­tory.

What does it mean for a plu­gin to be Paid or have Optional Payments?

Obsidian Community is not a store, and does not of­fer any built-in pay­ment so­lu­tions.

Developers can con­tinue to use ex­ter­nal pay­ment mech­a­nisms such as li­cense keys, API keys, and lo­gin gates. Developers must ac­cu­rately la­bel plu­g­ins un­der one of these three cat­e­gories:

Free means the plugin does not have any form of payment and is not tied to any paid services whatsoever. Donation links and sponsorship links are acceptable for Free plugins.

Optional pay­ments means users may op­tion­ally pay to un­lock ad­di­tional fea­tures or the plu­gin con­nects to paid ser­vices. If a plu­gin con­nects to a paid ser­vice or API, it must be la­beled as hav­ing Optional pay­ments, even if the ser­vice has a free tier.

Paid means users must pay to use its pri­mary fea­tures, even if it of­fers a free trial.

These la­bels de­ter­mine what users should ex­pect to pay, not whether the de­vel­oper of the plu­gin col­lects pay­ments.

What should I do if I run into prob­lems with the new Community di­rec­tory and de­vel­oper dash­board?

Our team is here to help make sure everything goes smoothly. If you have any questions or concerns, reach out to us in the #plugin-dev channel of the Obsidian Discord server.

Tell New York Times, The Atlantic, and USA Today to keep the crucial work of journalists in the Wayback Machine!

www.savethearchive.com

Petition Text

Dear lead­ers of ma­jor me­dia out­lets,

The free­dom of jour­nal­ists is­n’t only the free­dom to write, it’s also the free­dom to have your work read and re­mem­bered for gen­er­a­tions to come. 2026 is the first World Press Freedom Day in 30 years that jour­nal­ists’ work at ma­jor me­dia out­lets in­clud­ing New York Times, The Atlantic, and USA Today is not be­ing pre­served by the in­de­pen­dent, non­profit Internet Archive. We are call­ing on you and on all news out­lets to pub­licly com­mit to work­ing with the Internet Archive to keep the news in the Wayback Machine.

Since February of this year, the New York Times has told the Internet Archive to stop its Wayback Machine from pre­serv­ing the work of its jour­nal­ists. Meanwhile, Wired re­cently re­ported how USA Today is pub­lish­ing pow­er­ful re­port­ing that re­lies on the Wayback Machine, while iron­i­cally block­ing it from archiv­ing that same re­port­ing. And when over 100 jour­nal­ists de­liv­ered a let­ter cel­e­brat­ing the Internet Archive for their re­spect­ful preser­va­tion of jour­nal­ism, gen­er­at­ing a wave of tech-vi­ral angst, the CEO of The Atlantic weighed in but did­n’t com­mit to find­ing a so­lu­tion. The con­cerns about AI that these pub­li­ca­tions cited as a rea­son to ban the Wayback Machine are wholly hy­po­thet­i­cal. Journalists, and this non­profit pub­lic good that they rely on, de­serve bet­ter.

Though other websites use the word “archive” and try to style themselves as similar to the Internet Archive, the Wayback Machine isn’t a flash-in-the-pan service that skips over paywalls. It has been preserving the news longer than many people who sign this petition have been alive. Generative AI is the worst excuse to hide principled reporting from fact-checkers. If anything, AI is the top reason why the Wayback Machine is more crucial than ever. The truth is that AI companies can easily do what knockoff archiving sites are doing: ignore the rules and grab the news off publishers’ websites without their consent. There is little to stop them. There’s only one reason the Internet Archive isn’t doing what most of Silicon Valley is: integrity. That integrity shows the Internet Archive is trustworthy and aims to operate for a very long time.

It should. Censorship and au­thor­i­tar­i­an­ism are grow­ing, along with pres­sure to al­ter re­port­ing and erase facts. Journalists fre­quently face death threats, and many have died across the past year for their work. The least we can do out of re­spect dur­ing these hor­rors is to shore up the Wayback Machine’s neu­tral third party preser­va­tion ef­forts so these brave jour­nal­ists’ work is not lost. Their re­port­ing must re­main ac­ces­si­ble not only to their col­leagues and loved ones, but to the eyes of his­tory.

The Wayback Machine makes every on­line news out­let it archives more re­silient against pres­sure to re­move sto­ries that threaten the pow­er­ful. It is in the in­ter­est of any news out­let that still does real jour­nal­ism to cham­pion such an ally in times like these. It should­n’t be this hard to find a way to in­de­pen­dently pre­serve the news. We call on the lead­er­ship of ma­jor me­dia out­lets to com­mit to work­ing with the Internet Archive and get­ting all the news in the Wayback Machine now!

Sincerely,

The Undersigned


Canada’s Bill C-22 Is a Repackaged Version of Last Year’s Surveillance Nightmare

www.eff.org

Last year, the Canadian government pushed Bill C-2, which would have eroded Canadian digital rights in the name of “border security.” The bill was so bad it didn’t even make it to committee because of the backlash from the privacy community. Now the spring’s worst sequel, Bill C-22, aka the Lawful Access Act, is trying again.

As with most se­quels, Bill C-22 makes some tweaks to prob­lem­atic el­e­ments, but largely re­tains the same prob­lems. The bill forces dig­i­tal ser­vices, which could in­clude tele­coms, mes­sag­ing apps, and more, to record and re­tain meta­data for a full year, and ex­pands in­for­ma­tion shar­ing with for­eign gov­ern­ments, in­clud­ing the United States. Metadata can re­veal a lot about who you com­mu­ni­cate with, where you go, and when you do so. Expanding the col­lec­tion of meta­data would re­quire com­pa­nies to store even more in­for­ma­tion about their users than they al­ready do, pro­vid­ing an in­cen­tive for bad ac­tors to ac­cess that in­for­ma­tion.

Worst of all, Bill C-22 erodes the privacy of millions by providing a mechanism for the Minister of Public Safety to demand that companies create a backdoor to their services to provide law enforcement access to data, as long as these mandates don’t introduce a “systemic vulnerability.” These widespread surveillance backdoors would likely facilitate even more data breaches than we see already. The bill also bans companies from even revealing the existence of these orders publicly.

The definitions of both “systemic vulnerabilities” and “encryption” are not clear enough in C-22, leaving wiggle room for the government to demand that companies circumvent encryption. And the overbroad definitions in the bill can cover apps as well as operating systems. Canadian officials have made it clear they believe it’s possible to add surveillance without introducing systemic vulnerabilities, which is simply not true. Surveillance of encrypted communications is fundamentally a systemic vulnerability.

This resembles what happened in the UK last year, when the government demanded that Apple implement this type of backdoor into its optional Advanced Data Protection feature, which forced Apple to revoke the feature for its UK users instead of complying with the request. To this day, UK users still do not have access to this powerful, privacy-protective feature that provides stronger protections for data stored in iCloud. Both Meta and Apple are concerned that C-22 would give the Canadian government similar powers, and both companies have come out against the bill. The U.S. House Judiciary and Foreign Affairs committees also sent a joint letter to Canada’s Minister of Public Safety highlighting the concern around backdoors into encrypted systems.

The dan­gers of these sorts of back­doors are not the­o­ret­i­cal. In 2024, the Salt Typhoon hack took ad­van­tage of a sys­tem built by Internet Service Providers to give law en­force­ment ac­cess to user data. When you build these sys­tems, hack­ers will come.

Canadians de­serve strong pri­vacy pro­tec­tions, trans­parency into how com­pa­nies han­dle user data, and clear safe­guards around en­crypted data. Bill C-22 pro­vides none of that, in­stead reach­ing fur­ther into the dig­i­tal pock­ets of tech com­pa­nies to build broad law­ful ac­cess mech­a­nisms.

Further read­ing

Full text of C-22

Canadian Civil Liberties Association state­ment and let­ter

Open Media blog on C-22

EFF’s blog on Bill C-2
