10 interesting stories served every morning and every evening.




1 812 shares, 48 trendiness

How I run multiple $10K MRR companies on a $20/month tech stack

Last night, I was rejected from yet another pitch night. It was just the pre-interview, and the problem wasn’t my product. I already have MRR. I already have users who depend on it every day.

The feedback was simply: “What do you even need funding for?”

I hear this time and time again when I try to grow my ideas. Running lean is in my DNA. I’ve built tools you might have used, like websequencediagrams.com, and niche products you probably haven’t, like eh-trade.ca. That obsession with efficiency leads to successful bootstrapping, and honestly, a lot of VCs hate that.

Keeping costs near zero gives you the exact same runway as getting a million dollars in funding with a massive burn rate. It’s less stressful, it keeps your architecture incredibly simple, and it gives you adequate time to find product-market fit without the pressure of a board breathing down your neck.

If you are tired of the modern “Enterprise” boilerplate, here is the exact playbook for how I build my companies to run on nearly nothing.

The naive way to launch a web app in 2026 is to fire up AWS, provision an EKS cluster, set up an RDS instance, configure a NAT Gateway, and accidentally spend $300 a month before a single user has even looked at your landing page.

The smart way is to rent a single Virtual Private Server (VPS).

The first thing I do is get a cheap, reliable box. Forget AWS. You aren’t going to need it, and their control panel is a labyrinth designed to extract billing upgrades. I use Linode or DigitalOcean. Pay no more than $5 to $10 a month.

1GB of RAM sounds terrifying to modern web developers, but it is plenty if you know what you are doing. If you need a little breathing room, just use a swapfile.
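Setting up swap on a fresh VPS is a one-time chore. A minimal sketch, assuming a Debian/Ubuntu-style box and root access (the 2 GB size is an arbitrary choice, not the author’s):

```shell
# Create and enable a 2 GB swapfile (run as root).
fallocate -l 2G /swapfile
chmod 600 /swapfile        # swap must not be world-readable
mkswap /swapfile           # format the file as swap space
swapon /swapfile           # enable it immediately
# Persist across reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```

With swap in place, an occasional memory spike swaps out instead of invoking the OOM killer.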

The goal is to serve requests, not to maintain infrastructure. When you have one server, you know exactly where the logs are, exactly why it crashed, and exactly how to restart it.

Now you have constraints. You only have a gigabyte of memory. You could run Python or Ruby as your main backend language, but why would you? You’ll spend half your RAM just booting the interpreter and managing gunicorn workers.

I write my backends in Go.

Go is infinitely more performant for web tasks, it’s strictly typed, and, crucially for 2026, it is incredibly easy for LLMs to reason about. But the real magic of Go is the deployment process. There is no pip install dependency hell. There is no virtual environment. You compile your entire application into a single, statically linked binary on your laptop, scp it to your $5 server, and run it.

Here is what a complete, production-ready web server looks like in Go. No bloated frameworks required:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hello, your MRR is safe here.")
    })

    // This will comfortably handle 10,000s of requests per second
    // on a potato.
    http.ListenAndServe(":8080", nil)
}
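The compile-scp-run deploy step described above is correspondingly short. A hedged sketch, assuming a linux/amd64 VPS reachable as `user@vps` and a systemd unit named `app` (both hypothetical names, not from the original):

```shell
# Cross-compile a fully static binary (CGO off = no libc dependency).
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o app .
# Ship it and restart the service.
scp app user@vps:/srv/app.new
ssh user@vps 'mv /srv/app.new /srv/app && systemctl restart app'
```

The atomic `mv` avoids overwriting a binary that is currently executing.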

If you have a graphics card sitting somewhere in your house, you already have unlimited AI credits.

When I was building eh-trade.ca, I had a specific problem: I needed to perform deep, qualitative stock market research on thousands of companies, summarizing massive quarterly reports. The naive solution is to throw all of this at the OpenAI API. I could have paid hundreds of dollars in API credits, only to find a logic bug in my prompt loop that required me to run the whole batch over again.

Instead, I’m running vLLM on a dusty $900 graphics card (an RTX 3090 with 24GB of VRAM) I bought off Facebook Marketplace. It’s an upfront investment, sure, but I never have to pay a toll to an AI provider for batch processing again.

For local AI, you have a distinct upgrade path:

* Start with Ollama. It sets up in one command (ollama run qwen3:32b) and lets you try out dozens of models instantly. It’s the perfect environment for iterating on prompts.

* Move to vLLM for production. Once you have a system that works, Ollama becomes a bottleneck for concurrent requests. vLLM locks your GPU to one model, but it is drastically faster because it uses PagedAttention. Structure your system so you send 8 or 16 async requests simultaneously. vLLM will batch them together in GPU memory, and all 16 will finish in roughly the same time it takes to process one.

* Use Transformer Lab for anything more advanced. If you need to do any model pre-training or fine-tuning, Transformer Lab makes it easy on local hardware.

To manage all this, I built laconic, an agentic researcher specifically optimized for running in a constrained 8K context window. It manages the LLM context like an operating system’s virtual memory manager: it “pages out” the irrelevant baggage of a conversation, keeping only the absolute most critical facts in the active LLM context window.

I also use llmhub, which abstracts any LLM into a simple provider/endpoint/apikey combo, gracefully handling both text and image IO whether the model is running under my desk or in the cloud.

You can’t do everything locally. Sometimes you need the absolute cutting-edge reasoning of Claude 3.5 Sonnet or GPT-4o for user-facing, low-latency chat interactions.

Instead of juggling billing accounts, API keys, and rate limits for Anthropic, Google, and OpenAI, I just use OpenRouter. You write one OpenAI-compatible integration in your code, and you instantly get access to every major frontier model.

More importantly, it allows for seamless fallback routing. If Anthropic’s API goes down on a Tuesday afternoon (which happens), my app automatically falls back to an equivalent OpenAI model. My users never see an error screen, and I don’t have to write complex retry logic.

New, insanely expensive models are being released every week. I constantly hear about developers dropping hundreds of dollars a month on Cursor subscriptions and Anthropic API keys just to have an AI write their boilerplate.

Meanwhile, I’m using Claude Opus 4.6 all day and my bill barely touches $60 a month. My secret? I exploit Microsoft’s pricing model.

I bought a GitHub Copilot subscription in 2023, plugged it into standard VS Code, and never left. I tried Cursor and the other fancy forks when they briefly surpassed it with agentic coding, but Copilot Chat always catches up.

Here is the trick that you might have missed: somehow, Microsoft is able to charge per request, not per token. And a “request” is simply what I type into the chat box. Even if the agent spends the next 30 minutes chewing through my entire codebase, mapping dependencies, and changing hundreds of files, I still pay roughly $0.04.

The optimal strategy is simple: write brutally detailed prompts with strict success criteria (which is best practice anyway), tell the agent to “keep going until all errors are fixed,” hit enter, and go make a coffee while Satya Nadella subsidizes your compute costs.

I always start a new venture using sqlite3 as the main database. Hear me out; this is not as insane as you think.

The enterprise mindset dictates that you need an out-of-process database server. But the truth is, a local SQLite file communicating over the C interface or memory is orders of magnitude faster than making a TCP network hop to a remote Postgres server.

“But what about concurrency?” you ask. Many people think SQLite locks the whole database on every write. They are wrong. You just need to turn on Write-Ahead Logging (WAL). Execute these pragmas once when you open the database:

PRAGMA journal_mode=WAL;
PRAGMA synchronous=NORMAL;

Boom. Readers no longer block writers. Writers no longer block readers. You can now easily handle thousands of concurrent users off a single .db file on an NVMe drive.

Since implementing user authentication is usually the most annoying part of starting a new SQLite-based project, I built a library: smhanov/auth. It integrates directly with whatever database you are using and manages user signups, sessions, and password resets. It even lets users sign in with Google, Facebook, X, or their own company-specific SAML provider. No bloated dependencies, just simple, auditable code.

The tech industry wants you to believe that building a real business requires complex orchestration, massive monthly AWS bills, and millions in venture capital.

By utilizing a single VPS, statically compiled binaries, local GPU hardware for batch AI tasks, and the raw speed of SQLite, you can bootstrap a highly scalable startup that costs less than the price of a few coffees a month. You add infinite runway to your project, giving yourself the time to actually solve your users’ problems instead of sweating your burn rate.

If you are interested in running lean, check out my auth library and agent implementations on my GitHub. I’ll be hanging around the comments; let me know how you keep your server costs down, or tell me why I’m completely wrong.

...

Read the original on stevehanov.ca »

2 511 shares, 38 trendiness

[BUG] Pro Max 5x Quota Exhausted in 1.5 Hours Despite Moderate Usage · Issue #45756 · anthropics/claude-code


...

Read the original on github.com »

3 462 shares, 42 trendiness

Seven countries now generate 100% of their electricity from renewable energy

Seven countries now generate nearly all of their electricity from renewable energy sources, according to newly compiled figures.

Albania, Bhutan, Nepal, Paraguay, Iceland, Ethiopia and the Democratic Republic of Congo produced more than 99.7 per cent of the electricity they consumed using geothermal, hydro, solar or wind power.

Data from the International Energy Agency (IEA) and International Renewable Energy Agency (IRENA) also revealed that a further 40 countries generated at least 50 per cent of the electricity they consumed from renewable energy technologies in 2021 and 2022, including 11 European countries.

“We don’t need miracle technologies,” said Stanford University Professor Mark Jacobson, who published the data.

“We need to stop emissions by electrifying everything and providing the electricity with Wind, Water and Solar (WWS), which includes onshore wind, solar photovoltaics, concentrated solar power, geothermal electricity, small hydroelectricity, and large hydroelectricity.”

Professor Jacobson also noted that other countries like Germany were also capable of running off 100 per cent renewable-generated electricity for short periods of time.

Figures released by the IEA in January show that the UK generated 41.5 per cent of its electricity from renewable sources in 2022, up 10.5 per cent from the year before.

In Scotland, renewable energy technologies generated the equivalent of 113 per cent of the country’s overall electricity consumption in 2022.

“These record-breaking figures are a major milestone on Scotland’s journey to net-zero, clearly demonstrating the enormous potential of our world-class renewable energy resources,” Claire Mack, chief executive of Scottish Renewables, said at the time.

While Scotland’s electricity generation was dominated by wind power, researchers predict that solar will come to dominate global electricity supplies over the coming decades.

There has been significant progress in recent years with improving efficiency rates for solar cells, primarily boosted by the so-called ‘miracle material’ perovskite.

Commercial costs have also fallen, which led scientists at the University of Exeter and University College London to claim last year that solar energy has reached an “irreversible tipping point” that will see it become the world’s main source of energy by 2050.

Their 2023 paper, published in the journal Nature Communications, found that technological and economic advances meant the transition to clean energy is not just reachable, but inevitable.

“Due to technological trajectories set in motion by past policy, a global irreversible solar tipping point may have passed where solar energy gradually comes to dominate global electricity markets, without any further climate policies,” the researchers wrote in the study.

“Solar energy is the most widely available energy resource on Earth, and its economic attractiveness is improving fast in a cycle of increasing investments.”

...

Read the original on www.independent.co.uk »

4 461 shares, 26 trendiness

Cache TTL silently regressed from 1h to 5m around early March 2026, causing quota and cost inflation · Issue #46829 · anthropics/claude-code


...

Read the original on github.com »

5 433 shares, 41 trendiness

Bring Back Idiomatic Design

I’m part of the desktop software generation. From Windows 95 to Windows 7, I grew up using mostly-offline software on computers operated via mouse and keyboard, well before tablets and smartphones. Recently, I’ve been missing one particular part of that era: its consistency in design. I want to tell you about idiomatic design, emphasize the importance of homogeneous interfaces, and suggest that we’ve lost something important.

Suppose you’re logging into a website, and it asks: “do you want to stay logged in?” There are many ways in which the website could ask you for an answer: for example, a text field where you can enter “Yes” or “No”, or a dropdown where you can select “Keep me logged in” or “Log me out when I close this window.” But in reality, it’ll always be a checkbox. Why is that?

The checkbox is a design idiom: it’s such a common design that as a user, you know how to use it without thinking about it, and if you were making a website and had to ask this question, you would also put in a checkbox without thinking about it. To builders and users alike, it is a standard design pattern that everyone can rely on.

A checkbox is also part of an interface. You’re using it to interact with a system by inputting data. Interfaces are better the less thinking they require: whether the interface is a steering wheel or an online form, if you have to spend any amount of time figuring out how to use it, that’s bad. As you interact with many things, you want homogeneous interfaces that give you consistent experiences. If you learn that Command + C is the keyboard shortcut for copy, you want that to work everywhere. You don’t want to have to remember to use CTRL + Shift + C in certain circumstances or right-click → copy in others; that’d be annoying.

But that’s where we’ve ended up. Software is on the internet now, and the interfaces aren’t homogeneous at all. There are hundreds of ways that different websites ask you to pick dates, enter your credit card number, or do any number of common tasks. Keyboard shortcuts differ from app to app. There are so many different ways of interacting that you can’t remember or learn them all. Using web applications in 2023 is an exercise of “where do I find what I want to do?” over and over again.

By contrast, one of the strengths of the desktop software era was high consistency across interfaces through the use of design idioms. Look at this picture from Windows 2000:

The visuals feel a little ugly and dated: it’s blocky, the font isn’t great, and the colors are dull. But the interface gets a couple of things really right:

* The File, Edit, View… menu structure was standard. No matter whether you were in Adobe Photoshop or Microsoft Excel, you knew that save is under File, redo is under Edit, full-screen is under View, etc.

* The menu is navigable by keyboard: there’s a little underline in each of the menu items, e.g. under the F in File and the N in New. These indicate keyboard shortcuts. You can enter ALT+F to open the File menu, then hit N to open a new file. This caters to power users while making the shortcuts easy to learn.

* The status bar at the bottom tells you everything about the current state: page, column, word count, whether you’re tracking changes, in insert mode, etc.

* Menu items are clearly labeled. Words, not icons, are the primary interface to actions. Icons are used only where they are most obvious. The entire interface leaves little to the imagination. In the picture above, there’s no “I wonder what this does?” You know how to use it, even if you’ve never used it before.

Crucially, these design idioms were used not just in Microsoft Word, but throughout the entire Windows ecosystem. Take a look at this Windows XP logout screen:

Every single button is clearly, visually a button and says exactly what it does. And each one has a little underline to indicate its keyboard shortcut. Isn’t that nice?

The desktop software era was one of homogeneous interfaces, perhaps because the operating system and its GUI libraries dictated broad swaths of design, and those constraints guided developers toward conforming patterns.

The browser software era is one of heterogeneous interfaces. Take a look at these screenshots from two of my favorite web applications: Figma and Linear.

These are probably the two best pieces of enterprise software available today. And though they have many of the same features (team settings, abstract item hierarchies, collaborative comments, etc.), they don’t share a single icon. They have no design idioms in common. They have different keyboard shortcuts. Both are very well-designed from first principles, but neither conforms to the other interfaces the user might be familiar with.

We are in an era of individually well-designed, useful web applications, and they’re all unique. Even in products by the same company, the experiences are heterogeneous: using GMail is nothing like using GSuites is nothing like using Google Docs. In aggregate, this is very frustrating. The lack of homogeneous interfaces means that I spend most of my digital time not in a state of productive flow, but hunting and pecking all over the screen, asking myself “Can I click that? Does this open in a new tab? Will the browser’s back button let me go back?” Awful!

This lack of homogeneity has two causes:

All the patterns for designing mouse-and-keyboard applications had to be reinvented with the advent of the touchscreen. Most web applications have to enable both a mobile and a desktop experience, and those interaction forms are very different. Therefore, most user experiences have since been stuck in an awkward middle, e.g. hamburger menus intended for mobile apps also used for desktop apps. Consequently, there are a ton of bad design patterns everywhere you look. Modern frontend development has a culture of copying and re-using modular components, so it’s easy to copy-paste bad design patterns and perpetuate the issue. After 10+ years of this, there has been a generationally corrosive effect on the quality of UI/UX design.

If everyone were to follow the same design idioms, then interfaces would look pretty consistent. In the early days of the internet, there were strong design idioms: hyperlinks to other pages were underlined blue, and purple if you had already visited them. Great! Today, every website presents its own guessing game on how elements of the interface are styled. Is that a link? Maybe!

It may be surprising that modern web design is so unidiomatic, because the HTML/CSS standards are very prescriptive. The issue is that even though there are standards for writing HTML, no one writes HTML anymore. People write React in TypeScript or the latest framework. They import countless npm packages. All that goes through a complex build process to output something that runs in the browser.

Frontend developers aren’t wrong to do this. Browsers today are extremely powerful and offer general-purpose APIs that can let you do pretty much anything if you are creative about it. For example, Figma doesn’t follow any HTML design idioms because there is no HTML. It’s written in WebAssembly; they are on the cutting edge of implementing desktop-style software in the browser. Of course that breaks the HTML-webpage-as-document model. The browser’s back button, keyboard shortcuts, etc. fall by the wayside while a human-computer interaction paradigm is rebuilt.

In short, there are few web design idioms because front-end development is moving too quickly. Engineers are concerned with what is possible more than with questions of polish, and rightfully so. Multi-user, real-time collaboration is much more valuable than power-user keyboard shortcuts. There are both endless frontend packages and interaction formats to deploy them into, so instituting one-size-fits-all idioms on a space so large is very difficult. It will take time for the cutting edge to cool down, and for the most successful patterns to become apparent and eventually idiomatic.

And yet, some of the most successful product organizations of today aggressively pursue their own design idioms and achieve some homogeneity in their interfaces.

Apple is a great example. We’ve talked about the Microsoft of the past, but Apple today drives a highly opinionated design system. Apple’s general library of fonts, buttons, colors, etc., and its consistency across all of Apple’s native applications and devices, have created a powerful conforming effect for third-party applications. Even when using a third-party app on your iPhone, interacting via the keyboard, pinch-to-zoom, etc. is all controlled by iOS. This is a big part of Apple’s it-just-works effect. Strong, tasteful, idiomatic design is at the core of Apple’s success.

What’s interesting about the it-just-works effect is that it makes users trust the defaults and avoid customization. You see a similar dynamic on platforms like Substack, where as an author I don’t have any ability to select the font or even underline text. But the constraining defaults are tastefully set, and it works great. Substack’s and Apple’s design principles gain adoption as those products succeed, since designers look to them as successful examples. Those designs eventually become idioms by (1) people converging on them as good designs and (2) frequency of use in the community.

As a product builder, you want to follow design idioms as closely as practically possible, because doing so makes your software easier to use and maximizes compatibility across devices and browsers. I follow these rules of thumb and break them only rarely:

* Study and follow HTML/CSS idioms whenever possible. For example, a link should be underlined, colorful, show a pointer on mouseover, and be written as an <a> tag. Avoid JavaScript reimplementations of HTML basics, e.g. React Button components instead of styled <button> elements.

* Study and follow browser idioms whenever possible. The back button should always work. Copy-pasting the URL should bring you to the same interface. CTRL-clicking a navigational element should open it in a new tab.

* If you deviate from general idioms, make sure that your designs are fully internally consistent and at least “idiomatic” within your organization.

* Prefer words to icons. Use only icons that are universally understood.

* If in doubt, make visual elements look obvious. There should never be confusion about whether something is a button or a tab.

* If at an impasse, refer to two types of resource to assist your judgment: the best-designed websites that you know, and books on interface design from decades past. Most interface design problems of today are not new, but repeats of history, with solved analogies in the past.

I dream of the day when every datepicker or credit card form on the internet is exactly the same: when, after thirty years of iterative development and millions of attempts, we’ve finally converged on the best one. I dream of a future where in every web app, CTRL-Click opens the thing in a new tab. It would be nice…

...

Read the original on essays.johnloeber.com »

6 324 shares, 20 trendiness

AI Will Be Met With Violence, and Nothing Good Will Come of It

Sorry to bother you on Saturday. Thought this was important to share.

The first thing you learn about a loom is that it’s easy to break.

The shuttle runs along a track that warps with humidity. The heddles hang from cords that fray. The reed is a row of thin metal strips, bent by hand, that bend back just as easily. The warp beam cracks if you over-tighten it. The treadles loosen at the joints. The breast beam, the cloth roller, the ratchet and pawl, the lease sticks, the castle; the whole contraption is wood and string held together by tension. It’s a piece of ingenuity and craftsmanship, but one as delicate as the clothes it manifests out of wild plant fibers. It is, also, the foundational tool of an entire industry, textiles, that has kept its relevance to our days of heavy machinery, factories, energy facilities, and datacenters.

It is not nearly as easy to break a datacenter.

It is made of concrete and steel and copper and it’s on the bigger side. It has interchangeable servers, and biometric locks and tall electrified fences and heavily armed guards and redundancy upon redundancy: every component duplicated so that no single failure brings the whole thing down. There is no treadle to loosen or reed to bend back.

But say you managed to bypass the guards, jump the fences, open the locks, and locate all the servers. Then you’d face the algorithm. The datacenter was never your goal; the algorithm lurking inside is. It doesn’t run on that rack, or any rack for that matter. It is a digital pattern distributed across millions of chips, mirrored across continents; it could be reconstituted elsewhere, and it’s trained to addict you at a glance, like a modern Medusa.

But say you managed to elude the stare, stop the replication, and break the patterns. Then you’d face superintelligence. The algorithm was also not your goal; the vibrant, ethereal, latent superintelligence lurking inside is. Well, there’s nothing you can do here: it always “gets out of the box” and, suddenly, you are inside the box, like a chimp being played by a human with a banana. It’s just so tasty…

There’s another solution to break a datacenter: you can bomb it, like one hammers down the loom.

Some have argued that this is the way to ensure a rogue superintelligence doesn’t get out of the box. A different rogue creature took the proposal seriously: last month, Iran’s Revolutionary Guard released satellite footage of OpenAI’s Stargate campus in Abu Dhabi and promised its “complete and utter annihilation.”

But you probably don’t have a rogue nation handy to fulfill your wishes. Maybe you will end up bombed instead, and we don’t want that to happen. That’s what happens with rogue intelligences: you can’t predict them.

And yet. Two hundred years of increasingly impenetrable technology, from looms to datacenters, have not changed the first thing about the people who live alongside it. The evolution of technology is a feature of the world just as much as the permanent fragility of the human body.

And so, more and more, it is people who are the weaker link in this chain of inevitable doom. And it is people who will be targeted.

April of 1812. A mill owner named William Horsfall was riding home on his beautiful white stallion back from the Cloth Hall market in Huddersfield, UK. He had spent weeks boasting that he would ride up to his saddle in Luddite blood (a precious substance that served as fuel for the mills).

A few yards later, at Crosland Moor, a man named George Mellor, twenty-two years old, shot him. The bullet hit Horsfall in the groin, and he, nominative-deterministically, fell from his horse. People gathered, reproaching him for having been the oppressor of the poor. Naturally, loyal to his principles in death as he was in life, he couldn’t hear them. He died one day later in an inn. Mellor was hanged.

April of 2026. A datacenter owner named Samuel Altman was driving home in his beautiful white Koenigsegg Regera back from Market Street in San Francisco, US. He had spent weeks boasting that he would scrape and steal our blog posts (a precious substance that serves as fuel for the datacenters).

A few hours later, at Russian Hill, a man named Daniel Alejandro Moreno-Gama, twenty years old, allegedly threw a Molotov cocktail at his house. It hit an exterior gate. Altman and his family were asleep, but they’re fine. Moreno-Gama is in custody.

This kind of violence must be condemned. This is not the way. It’s horrible that it is happening at all. And yet, for some reason, it keeps happening.

Last week, the house of Ron Gibson, a councilman from Indianapolis, was shot at thirteen times. The bullet holes are still there. The shooter left a message on his doorstep: “NO DATA CENTERS.” Gibson supports a datacenter project in the Martindale-Brightwood neighborhood. He and his son were unharmed.

In November 2025, a 27-year-old anti-AI activist threatened to murder people at OpenAI’s SF offices, prompting a lockdown. He had expressed a desire to buy weapons.

Increasingly, as the objects of people’s anger and frustration and desperation become unreachable behind fences and guards, or abstracted away in ones and zeros, or elevated above the clouds, the mob will turn their unassailable emotions toward human targets.

I don’t want to trivialize the grievances of the people who fear for their futures. I don’t want to defend Altman’s decisions. But this is not the way. This is how things devolve into chaos.

And I wonder: how desperate can people be before these isolated events become a snowball of violence that will be resisted by neither datacenters nor rich people’s houses?

Every time I hear from Amodei or Altman that I could lose my job, I don't think "oh, ok, then allow me to pay you $20/month so that I can adapt to these uncertain times that have fallen upon my destiny by chance." I think: "you, for fuck's sake, you are doing this." And I consider myself a pretty levelheaded guy, so imagine what not-so-levelheaded people think.

There’s a lot of fric­tion to es­ca­lat­ing vi­o­lence, but that fric­tion dis­solves the mo­ment this sen­ti­ment starts to be com­mon. Normally, it just fades away any­way, but there’s one sce­nario where I see it in­evitably es­ca­lat­ing:

If peo­ple feel that they have no place in the fu­ture.

If they feel expelled from the system—they're unable to buy stuff, their skills become obsolete, their chance at earning a living is replaced by a swarm of AI agents, they think we are truly going to die (so far, the violence has been tied mostly to AI safety movements)—then they will feel they have nothing to lose.

And then, and I’m sorry to be so blunt, then it’s die or kill.

Perhaps the most serious mistake the AI industry made, after creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition, was making that explicit, through constant messaging that amounts to: "we are creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition."

And, to top it off, they add "careful down there."

The difference between AI and, say, looms, is that this has been broadcast to the entire globe, and it has been treated in a sort of self-conscious way. The AI leaders know the problems that will emerge, so they cannot help but talk about them constantly; they are letting us know, and it makes them look like psychopaths. How do you guys think people will react to this? You should be much less self-conscious and much more self-aware: realize what you sound like!

People hate AI so much that they are prone to at­tribute to it every­thing that’s go­ing wrong in their lives, re­gard­less of the truth. That’s why they mix real ar­gu­ments, like data theft, with fake ones, like the wa­ter stuff. Employers do it, too. Most lay­offs are not caused by AI, but it’s the per­fect ex­cuse to do some­thing that’s oth­er­wise so­cially rep­re­hen­si­ble.

AI has be­come the per­fect scape­goat. It does­n’t help that the en­tire AI in­dus­try has de­cided that throw­ing rocks at its own roof is its best sell­ing point: If AI is so pow­er­ful and so dan­ger­ous and soon to be so ubiq­ui­tous, then what is so un­ex­pected about peo­ple blam­ing every­thing on it?

Nothing that Altman could say jus­ti­fies vi­o­lence against him. This is an un­de­ni­able truth. But un­for­tu­nately, vi­o­lence might still en­sue. I hope not, but I guess we are see­ing what ap­pears to be the first cases.

I just hope that, con­trary to the cases of ChatGPT-induced psy­chosis, chat­bot ad­dic­tion, AI-blamed job lay­offs, and a grow­ing trend of il­lit­er­acy, it stops.

...

Read the original on www.thealgorithmicbridge.com »

7 323 shares, 19 trendiness

Apple update turns Czech mate for locked-out iPhone user

A uni­ver­sity stu­dent in the US is in data limbo af­ter Apple re­moved a char­ac­ter from its Czech key­board, pre­vent­ing him from en­ter­ing his iPhone pass­code.

Connor Byrne, 21, adopts the un­com­mon but se­cu­rity-minded ap­proach to iPhone pass­codes, us­ing an al­phanu­meric string in­stead of the stan­dard four-num­ber pass­code.

He up­dated his iPhone 13 from iOS 18 to iOS 26.4 on April 5, but in do­ing so lost the abil­ity to en­ter his pass­code. He has been locked out of the de­vice ever since.

This is be­cause iOS 18 was the last op­er­at­ing sys­tem ver­sion that al­lowed iPhone users to en­ter the spe­cial char­ac­ter — in this case, the caron/​háček (ˇ) — us­ing the old key­board on the lock screen.

It has left Byrne with­out ac­cess to his de­vice, which, given its age and chipped screen, does not hold much value, un­like the old pho­tos stored on it, which carry sen­ti­men­tal im­por­tance.

The stu­dent has not backed up the files to iCloud ei­ther, so they can­not be re­trieved via a sep­a­rate de­vice. Apple sup­port staff have sug­gested the only way to re­gain ac­cess to the iPhone 13 is by restor­ing it, which would erase the files of value.

Byrne was hop­ing that the next up­date, 26.4.1, would in­tro­duce a fix for this, but its re­lease this week has not helped.

"The phone's very cracked, so, at this point, the photos contained in it are more valuable than the ability to use the phone itself," he told The Register. "They're the main data that I care about and haven't backed up."

"I don't anticipate a bespoke solution being provided, but I'm hopeful that the issue will be resolved in the next iOS update."

When the háček could still be used in the iPhone’s pass­code, it sat on the bot­tom row of the key­board, while just above it was an acute ac­cent mark.

Post-update, when entering the passcode, the keyboard now displays an identical accent mark in the háček's place, a feature Byrne described as "pointless; they're encoded the same."

"I've bought a cheap Android phone to use while I wait for a fix," he added. "I'll give it a month or two and will buy a nicer Android phone if the dust settles without a fix."

Given that iOS 18 was re­leased in 2024, and Apple has not rein­tro­duced the háček since, it seems un­likely Cupertino will make good on the stu­den­t’s hopes, es­pe­cially con­sid­er­ing that he is not the only user to en­counter the same is­sue in re­cent weeks.

During in-house test­ing, which in­volved tak­ing an iPhone 16 from iOS 18.5 to iOS 26.4.1, The Register found that Apple has kept the háček in the Czech key­board, but re­moved the abil­ity to use it in a cus­tom al­phanu­meric pass­code. The OS will not al­low users to in­put the háček as a char­ac­ter. The key’s an­i­ma­tion trig­gers, as does the key­board’s key-tap sound, but the char­ac­ter is not en­tered into the string.

If the stu­dent were able to get into his iPhone 13, he would find the háček in his key­board as it used to be be­fore he up­dated it. It is only the lock-screen key­board that re­places it with a sec­ond acute ac­cent mark.

Alas, Byrne has gone to great lengths to tin­ker and tease iOS into ac­cept­ing or find­ing the háček, or to find tricky ways of by­pass­ing it.

He tried en­ter­ing the same ac­cent mark that re­placed the háček, in the hope that it was sim­ply dis­play­ing in­cor­rectly. He also re­searched down­grad­ing to iOS 26.3.1, with a view to chang­ing the pass­code to one that’s com­pat­i­ble with the new key­board, to no avail.

Long-pressing every key to reveal a hidden háček did not work, nor did writing the passcode on paper (and also with a computer word processor, to account for handwriting errors) and using AutoFill to scan it in. In this case, he said, the háček was only read as a quotation mark or degree sign.
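A quick way to see why these look-alike marks confound both the lock-screen keyboard and AutoFill is to compare their Unicode codepoints. The following Python sketch (an illustration, not part of Apple's stack) shows that the caron, the acute accent, the degree sign, and the plain quotation mark are all distinct characters:

```python
# Each of these visually similar marks is a distinct Unicode codepoint,
# so a passcode containing a caron cannot be satisfied by any of the others.
import unicodedata

for ch in ["\u02c7", "\u00b4", "\u00b0", "\u0022"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+02C7  CARON
# U+00B4  ACUTE ACCENT
# U+00B0  DEGREE SIGN
# U+0022  QUOTATION MARK
```

This is presumably also why entering the replacement acute accent could not unlock the phone: the keyboard may draw a near-identical glyph, but the passcode check operates on the underlying codepoints.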

Apple Support arranged for Byrne to at­tend a Genius Bar ap­point­ment, where the staffer be­hind the desk made no progress and even started restor­ing the phone with­out seek­ing the stu­den­t’s con­sent.

"He provided no recommendations before doing so," he said.

And if you're wondering "why not enable Face ID in the first place? Biometrics are pretty secure," well, it's not secure enough for this user, and it wouldn't have mattered even if it did meet his standards.

"I don't consider Face ID secure enough because it provides no protection in cases where someone has control of both you and the phone — police or customs, for example."

"It wouldn't have helped anyway, since you have to enter the passcode once after updating to enable Face ID."

For the same rea­son, plug­ging in an ex­ter­nal key­board is also a no-go since freshly up­dated iPhones are placed in what’s known as a Before First Unlock state, which pre­vents wired ac­ces­sories from work­ing un­til the pass­code is en­tered.

The Register con­tacted Apple mul­ti­ple times to get its side of things, but it did not re­spond. ®

...

Read the original on www.theregister.com »

8 292 shares, 27 trendiness

Mount Lebanon

...

Read the original on maps.apple.com »

9 277 shares, 64 trendiness

The peril of laziness lost

In his classic Programming Perl — affectionately known to a generation of technologists as the "Camel Book" — Larry Wall famously wrote of the three virtues of a programmer as laziness, impatience, and hubris:

If we’re go­ing to talk about good soft­ware de­sign, we have to talk about Laziness, Impatience, and Hubris, the ba­sis of good soft­ware de­sign. We’ve all fallen into the trap of us­ing cut-and-paste when we should have de­fined a higher-level ab­strac­tion, if only just a loop or sub­rou­tine. To be sure, some folks have gone to the op­po­site ex­treme of defin­ing ever-grow­ing mounds of higher level ab­strac­tions when they should have used cut-and-paste. Generally, though, most of us need to think about us­ing more ab­strac­tion rather than less.

Of these virtues, I have al­ways found lazi­ness to be the most pro­found: packed within its tongue-in-cheek self-dep­re­ca­tion is a com­men­tary on not just the need for ab­strac­tion, but the aes­thet­ics of it. Laziness dri­ves us to make the sys­tem as sim­ple as pos­si­ble (but no sim­pler!) — to de­velop the pow­er­ful ab­strac­tions that then al­low us to do much more, much more eas­ily.
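Wall's loop-or-subroutine point can be made concrete with a small sketch (mine, not the Camel Book's, and in Python rather than Perl): the industrious version pastes the same check for every field, while the lazy version pays once for an abstraction and reuses it.

```python
# The cut-and-paste version: the same check, duplicated per field.
def validate_user(user):
    if not user.get("name") or len(user["name"]) > 64:
        raise ValueError("bad name")
    if not user.get("email") or len(user["email"]) > 64:
        raise ValueError("bad email")

# The lazy version: define the abstraction once, then loop over it.
def require(user, field, max_len=64):
    value = user.get(field)
    if not value or len(value) > max_len:
        raise ValueError(f"bad {field}")

def validate_user_lazy(user):
    for field in ("name", "email"):
        require(user, field)
```

Adding a third field to the first version means another pasted block; in the second it is one more entry in the tuple, which is exactly the future time Wall's laziness is optimizing for.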

Of course, the implicit wink here is that it takes a lot of work to be lazy: when programmers are engaged in the seeming laziness of hammock-driven development, we are in fact turning the problem over and over in our heads. We undertake the hard intellectual work of developing these abstractions in part because we are optimizing the hypothetical time of our future selves, even if at the expense of our current one. When we get this calculus right, it is glorious, as the abstraction serves not just ourselves, but all who come after us. That is, our laziness serves to make software easier to write, and systems easier to compose — to allow more people to write more of it.

Ideally, you would want those that benefit from abstractions to pay the virtue of laziness forward — to use their new-found power to themselves labor on the abstractions they make. But a consequence of the broadening of software creation over the past two decades is that it includes more and more people who are unlikely to call themselves programmers — and for whom the virtue of laziness would lose its intended meaning.

Worse, the extraordinary productivity allowed by modern abstractions has given rise to an emphasis on a kind of false industriousness. Pejoratively, this was the rise of the brogrammer, with the virtue of ironic laziness and hammock-driven development displaced by hustle porn about crushing code.

Onto this dry tin­der has struck the light­ning bolt of LLMs. Whatever one’s dis­po­si­tion is to soft­ware cre­ation, LLMs al­low that to be ap­plied with (much) greater force, so it should be of lit­tle sur­prise that LLMs have served as an­a­bolic steroids for the bro­gram­mer set.

Elated with their new-found bulk, they can't seem to shut up about it. Take, for example, brogrammer-of-note Garry Tan, who has been particularly insufferable about his LLM use, bragging about his rate of thirty-seven thousand lines of code per day ("and still speeding up"):

If lazi­ness is a virtue of a pro­gram­mer, think­ing about soft­ware this way is clearly a vice. And like as­sess­ing lit­er­a­ture by the pound, its fal­lacy is clear even to novice pro­gram­mers.

As for the artifact that Tan was building with such frenetic energy, I was broadly ignoring it. Polish software engineer Gregorein, however, took it apart, and the results are at once predictable, hilarious and instructive: a single load of Tan's "newsletter-blog-thingy" included multiple test harnesses (!), the Hello World Rails app (?!), a stowaway text editor, and then eight different variants of the same logo — one of them zero bytes.

The prob­lem here is­n’t these is­sues per se (which are all fix­able!), and it is­n’t even the be­lief that the method­ol­ogy that cre­ated them rep­re­sents the fu­ture of soft­ware en­gi­neer­ing (though that is cer­tainly an­noy­ing!).

The problem is that LLMs inherently lack the virtue of laziness. Work costs nothing to an LLM. LLMs do not feel a need to optimize for their own (or anyone's) future time, and will happily dump more and more onto a layercake of garbage. Left unchecked, LLMs will make systems larger, not better — appealing to perverse vanity metrics, perhaps, but at the cost of everything that matters. As such, LLMs highlight how essential our human laziness is: our finite time forces us to develop crisp abstractions in part because we don't want to waste our (human!) time on the consequences of clunky ones. The best engineering is always borne of constraints, and the constraint of our time places limits on the cognitive load of the system that we're willing to accept. This is what drives us to make the system simpler, despite its essential complexity. As I expanded on in my talk The Complexity of Simplicity, this is a significant undertaking — and we cannot expect LLMs that do not operate under constraints of time or load to undertake it of their own volition.

This is not to say, of course, that LLMs won't play an important role in our future: they are an extraordinary tool for software engineering, but — as outlined in our guidelines for LLM use at Oxide — they are but a tool. We can put them to use tackling the non-ironic (and non-virtuous!) aspects of programmer laziness — helping us take on thorny problems like technical debt — or use them to promote our engineering rigor, but it must be in service of our own virtuous laziness: to yield a simpler, more powerful system that serves not just ourselves, but the generations of software engineers to come after us.

...

Read the original on bcantrill.dtrace.org »

10 243 shares, 62 trendiness

Serenity Forge (@serenityforge.com)

A statement regarding the removal of DDLC from the Google Play Store:

...

Read the original on bsky.app »
