10 interesting stories served every morning and every evening.




1 846 shares, 37 trendiness

📝 Is Mozilla trying hard to kill itself?

It may be just me, but I read this as “I don’t want to 😜 😜 but I’ll kill AdBlockers in Firefox for buckerinos 😂”. This disappoints and saddens me a lot, and I hope I’m wrong.

...

Read the original on infosec.press »

2 818 shares, 60 trendiness

Gemini 3 Flash: frontier intelligence built for speed

Gemini 3 Flash is our lat­est model with fron­tier in­tel­li­gence built for speed that helps every­one learn, build, and plan any­thing — faster.

Senior Director, Product Management, on be­half of the Gemini team

Google is re­leas­ing Gemini 3 Flash, a fast and cost-ef­fec­tive model built for speed. You can now ac­cess Gemini 3 Flash through the Gemini app and AI Mode in Search. Developers can ac­cess it via the Gemini API in Google AI Studio, Google Antigravity, Gemini CLI, Android Studio, Vertex AI and Gemini Enterprise.

Summaries were gen­er­ated by Google AI. Generative AI is ex­per­i­men­tal.

It’s great for cod­ing, com­plex analy­sis, and quick an­swers in in­ter­ac­tive apps.

Gemini 3 Flash is now the de­fault model in the Gemini app and AI Mode in Search.

Developers and every­day users can ac­cess Gemini 3 Flash via var­i­ous Google plat­forms.


...

Read the original on blog.google »

3 793 shares, 72 trendiness

AWS CEO Explains 3 Reasons AI Can’t Replace Junior Devs

AWS CEO Matt Garman outlined 3 solid reasons why companies should not focus on cutting junior developer roles, noting that they are actually “the most experienced with the AI tools”.

In a tech world ob­sessed with AI re­plac­ing hu­man work­ers, Matt Garman, CEO of Amazon Web Services (AWS), is push­ing back against one of the in­dus­try’s most pop­u­lar cost-cut­ting ideas.

Speaking on WIRED’s The Big Interview podcast, Garman delivered a bold message for companies racing to cut costs with AI.

He was asked to explain why he once called replacing junior employees with AI “one of the dumbest ideas” he’d ever heard, and to expand on how he believes agentic AI will actually change the workplace in the coming years.

First, ju­nior em­ploy­ees are of­ten bet­ter with AI tools than se­nior staff.

Fresh grads have grown up with new tech­nol­ogy, so they can adapt quickly. Many of them learn AI-powered tools while study­ing or dur­ing in­tern­ships. They tend to ex­plore new fea­tures, find quick meth­ods to write code, and fig­ure out how to get the best re­sults from AI agents.

According to the 2025 Stack Overflow Developer Survey, 55.5% of early-ca­reer de­vel­op­ers re­ported us­ing AI tools daily in their de­vel­op­ment process, higher than for the ex­pe­ri­enced folks.

This comfort with new tools allows them to work more efficiently. In contrast, senior developers have established workflows and may take more time to adopt new tools. Recent research shows that over half of Gen Z employees are actually helping senior colleagues upskill in AI.

Second, ju­nior staff are usu­ally the least ex­pen­sive em­ploy­ees.

Junior em­ploy­ees usu­ally get much less in salary and ben­e­fits, so re­mov­ing them does not de­liver huge sav­ings. If a com­pany is try­ing to save money, it does­n’t make that much fi­nan­cial sense.

So, when companies talk about increasing profit margins, junior employees should not be the default or only target. Real cost-cutting means looking at the whole company, because there are plenty of other places where expenses can be trimmed.

In fact, 30% of com­pa­nies that laid off work­ers ex­pect­ing sav­ings ended up in­creas­ing ex­penses, and many had to re­hire later.

Think of a com­pany like a sports team. If you only keep vet­eran play­ers and never re­cruit rook­ies, what hap­pens when those vet­er­ans re­tire? You are left with no one who knows how to play the game.

Also, hiring people straight out of college brings new ways of thinking into the workplace. They have fresh ideas shaped by the latest trends and the motivation to innovate.

More im­por­tantly, they form the foun­da­tion of a com­pa­ny’s fu­ture work­force. If a com­pany de­cides to stop hir­ing ju­nior em­ploy­ees al­to­gether, it cuts off its own tal­ent pipeline. Over time, that leads to fewer lead­ers to pro­mote from within.

A Deloitte report also notes that the tech workforce is expected to grow at roughly twice the rate of the overall U.S. workforce, highlighting the demand for tech talent. Without a strong pipeline of junior developers coming in, companies might face a tech talent shortage.

When there are not enough ju­nior hires be­ing trained to­day, teams strug­gle to fill roles to­mor­row, es­pe­cially as pro­jects scale.

This isn’t just corporate talk. As the leader of one of the world’s largest cloud computing platforms, serving everyone from Netflix to the U.S. intelligence agencies, Garman has a front-row seat to how companies are actually using AI.

And what he is see­ing makes him wor­ried that short-term think­ing could dam­age busi­nesses for years to come. Garman’s point is grounded in long-term strat­egy. A com­pany that re­lies solely on AI to han­dle tasks with­out train­ing new tal­ent could find it­self short of peo­ple.

Still, Garman admits the next few years will be bumpy. “Your job is going to change,” he said. He believes AI will make both companies and their employees more productive.

When tech­nol­ogy makes some­thing eas­ier, peo­ple want more of it. AI en­ables the cre­ation of soft­ware faster, al­low­ing com­pa­nies to de­velop more prod­ucts, en­ter new mar­kets, and serve more cus­tomers.

Developers will be re­spon­si­ble for more than just writ­ing code, with faster adap­ta­tion to new tech­nolo­gies be­com­ing es­sen­tial. But he has a hope­ful mes­sage in the end.

Geoffrey Hinton has similarly advised that computer science degrees remain essential, which directly supports Garman’s point. Fresh talent with a strong understanding of core fundamentals becomes crucial for filling these higher-value roles of the future.

“I’m very confident in the medium to longer term that AI will definitely create more jobs than it removes at first,” Garman said.

...

Read the original on www.finalroundai.com »

4 288 shares, 21 trendiness

Hardened Images for Everyone

Containers are the uni­ver­sal path to pro­duc­tion for most de­vel­op­ers, and Docker has al­ways been the stew­ard of the ecosys­tem. Docker Hub has over 20 bil­lion monthly pulls, with nearly 90% of or­ga­ni­za­tions now re­ly­ing on con­tain­ers in their soft­ware de­liv­ery work­flows. That gives us a re­spon­si­bil­ity: to help se­cure the soft­ware sup­ply chain for the world.

Why? Supply-chain at­tacks are ex­plod­ing. In 2025, they caused more than $60 bil­lion in dam­age, tripling from 2021. No one is safe. Every lan­guage, every ecosys­tem, every build and dis­tri­b­u­tion step is a tar­get.

For this reason, we launched Docker Hardened Images (DHI), a secure, minimal, production-ready set of images, in May 2025, and since then have hardened over 1,000 images and Helm charts in our catalog. Today, we are establishing a new industry standard by making DHI freely available and open source to everyone who builds software: all 26 million+ developers in the container ecosystem. DHI is fully open and free to use, share, and build on with no licensing surprises, backed by an Apache 2.0 license. DHI now gives the world a secure, minimal, production-ready foundation from the very first pull.

If it sounds too good to be true, here’s the bottom line up front: every developer and every application can (and should!) use DHI without restrictions. DHI has commercial offerings for when you need continuous security patching applied in under 7 days, images for regulated industries (e.g., FIPS, FedRAMP), customized images built on our secure build infrastructure, or security patches beyond end-of-life. Simple.

Since the in­tro­duc­tion of DHI, en­ter­prises like Adobe and Qualcomm have bet on Docker for se­cur­ing their en­tire en­ter­prise to achieve the most strin­gent lev­els of com­pli­ance, while star­tups like Attentive and Octopus Deploy have ac­cel­er­ated their abil­ity to get com­pli­ance and sell to larger busi­nesses.

Now every­one and every ap­pli­ca­tion can build se­curely from the first docker build. Unlike other opaque or pro­pri­etary hard­ened im­ages, DHI is com­pat­i­ble with Alpine and Debian, trusted and fa­mil­iar open source foun­da­tions teams al­ready know and can adopt with min­i­mal change. And while some ven­dors sup­press CVEs in their feed to main­tain a green scan­ner, Docker is al­ways trans­par­ent, even when we’re still work­ing on patches, be­cause we fun­da­men­tally be­lieve you should al­ways know what your se­cu­rity pos­ture is. The re­sult: dra­mat­i­cally re­duced CVEs (guaranteed near zero in DHI Enterprise), im­ages up to 95 per­cent smaller, and se­cure de­faults with­out ever com­pro­mis­ing trans­parency or trust.

There’s more. We’ve already built Hardened Helm Charts to leverage DHI images in Kubernetes environments; those are open source too. And today, we’re expanding that foundation with Hardened MCP Servers. We’re bringing DHI’s security principles to the MCP interface layer, the backbone of every agentic app. And starting now, you can run hardened versions of the MCP servers developers rely on most: Mongo, Grafana, GitHub, and more. And this is just the beginning. In the coming months, we will extend this hardened foundation across the entire software stack with hardened libraries, hardened system packages, and other secure components everyone depends on. The goal is simple: to secure your application from main() down.

Base im­ages de­fine your ap­pli­ca­tion’s se­cu­rity from the very first layer, so it’s crit­i­cal to know ex­actly what goes into them. Here’s how we ap­proach it.

First: to­tal trans­parency in every part of our min­i­mal, opin­ion­ated, se­cure im­ages.

DHI uses a dis­tro­less run­time to shrink the at­tack sur­face while keep­ing the tools de­vel­op­ers rely on. But se­cu­rity is more than min­i­mal­ism; it re­quires full trans­parency. Too many ven­dors blur the truth with pro­pri­etary CVE scor­ing, down­graded vul­ner­a­bil­i­ties, or vague promises about reach­ing SLSA Build Level 3.

DHI takes a dif­fer­ent path. Every im­age in­cludes a com­plete and ver­i­fi­able SBOM. Every build pro­vides SLSA Build Level 3 prove­nance. Every vul­ner­a­bil­ity is as­sessed us­ing trans­par­ent pub­lic CVE data; we won’t hide vul­ner­a­bil­i­ties when we haven’t fixed them. Every im­age comes with proof of au­then­tic­ity. The re­sult: a se­cure foun­da­tion you can trust, built with clar­ity, ver­i­fied with ev­i­dence, and de­liv­ered with­out com­pro­mise.

Second: migrating to secure images takes real work, and no one should pretend otherwise. But as you’d expect from Docker, we’ve focused on making the DX incredibly easy. As we mentioned before, DHI is built on the open source foundations the world already trusts, Debian and Alpine, so teams can adopt it with minimal friction. We’re reducing that friction even more: Docker’s AI assistant can scan your existing containers and recommend or even apply equivalent hardened images; the feature is experimental as this is day one, but we’ll quickly GA it as we learn from real-world migrations.

Lastly: we think about the most ag­gres­sive SLAs and longest sup­port times and make cer­tain that every piece of DHI can sup­port that when you need it.

DHI Enterprise, the commercial offering of DHI, includes a 7-day commitment for critical CVE remediation, with a roadmap toward one day or less. For regulated industries and mission-critical systems, this level of trust is mandatory. Achieving it is hard. It demands deep test automation and the ability to maintain patches that diverge from upstream until they are accepted. That is why most organizations cannot do this on their own. In addition, DHI Enterprise allows organizations to easily customize DHI images, leveraging Docker’s build infrastructure, which takes care of full image lifecycle management for you and ensures that build provenance and compliance are maintained. For example, organizations typically need to add certificates and keys, system packages, scripts, and so on. DHI’s build service makes this trivial.

Because our patch­ing SLAs and our build ser­vice carry real op­er­a­tional cost, DHI has his­tor­i­cally been one com­mer­cial of­fer­ing. But our vi­sion has al­ways been broader. This level of se­cu­rity should be avail­able to every­one, and the tim­ing mat­ters. Now that the ev­i­dence, in­fra­struc­ture, and in­dus­try part­ner­ships are in place, we are de­liv­er­ing on that vi­sion. That is why to­day we are mak­ing Docker Hardened Images free and open source.

This move car­ries the same spirit that de­fined Docker Official Images over a decade ago. We made them free, kept them free, and backed them with clear docs, best prac­tices, and con­sis­tent main­te­nance. That foun­da­tion be­came the start­ing point for mil­lions of de­vel­op­ers and part­ners.

Now we’re do­ing it again. DHI be­ing free is pow­ered by a rapidly grow­ing ecosys­tem of part­ners, from Google, MongoDB, and the CNCF de­liv­er­ing hard­ened im­ages to se­cu­rity plat­forms like Snyk and JFrog Xray in­te­grat­ing DHI di­rectly into their scan­ners. Together, we are build­ing a uni­fied, end-to-end sup­ply chain that raises the se­cu­rity bar for the en­tire in­dus­try.

Everyone now has a se­cure foun­da­tion to start from with DHI. But busi­nesses of all shapes and sizes of­ten need more. Compliance re­quire­ments and risk tol­er­ance may de­mand CVE patches ahead of up­stream the mo­ment the source be­comes avail­able. Companies op­er­at­ing in en­ter­prise or gov­ern­ment sec­tors must meet strict stan­dards such as FIPS or STIG. And be­cause pro­duc­tion can never stop, many or­ga­ni­za­tions need se­cu­rity patch­ing to con­tinue even af­ter up­stream sup­port ends.

That is why we now of­fer three DHI op­tions, each built for a dif­fer­ent se­cu­rity re­al­ity.

Docker Hardened Images: Free for Everyone. DHI is the foun­da­tion mod­ern soft­ware de­serves: min­i­mal hard­ened im­ages, easy mi­gra­tion, full trans­parency, and an open ecosys­tem built on Alpine and Debian.

Docker Hardened Images (DHI) Enterprise: DHI Enterprise de­liv­ers the guar­an­tees that or­ga­ni­za­tions, gov­ern­ments, and in­sti­tu­tions with strict se­cu­rity or reg­u­la­tory de­mands rely on. FIPS-enabled and STIG-ready im­ages. Compliance with CIS bench­marks. SLA-backed re­me­di­a­tions they can trust for crit­i­cal CVEs in un­der 7 days. And those SLAs keep get­ting shorter as we push to­ward one-day (or less) crit­i­cal fixes.

For teams that need more con­trol, DHI Enterprise de­liv­ers. Change your im­ages. Configure run­times. Install tools like curl. Add cer­tifi­cates. DHI Enterprise gives you un­lim­ited cus­tomiza­tion, full cat­a­log ac­cess, and the abil­ity to shape your im­ages on your terms while stay­ing se­cure.

DHI Extended Lifecycle Support (ELS): ELS is a paid add-on to DHI Enterprise, built to solve one of soft­ware’s hard­est prob­lems. When up­stream sup­port ends, patches stop but vul­ner­a­bil­i­ties don’t. Scanners light up, au­di­tors de­mand an­swers, and com­pli­ance frame­works ex­pect ver­i­fied fixes. ELS ends that cy­cle with up to five ad­di­tional years of se­cu­rity cov­er­age, con­tin­u­ous CVE patches, up­dated SBOMs and prove­nance, and on­go­ing sign­ing and au­ditabil­ity for com­pli­ance.

You can learn more about these op­tions here.

Securing the con­tainer ecosys­tem is some­thing we do to­gether. Today, we’re giv­ing the world a stronger foun­da­tion to build on. Now we want every de­vel­oper, every open source pro­ject, every soft­ware ven­dor, and every plat­form to make Docker Hardened Images the de­fault.

* Join our launch we­bi­nar to get hands-on and learn what’s new.

* Explore the docs and bring DHI into your work­flows

* Join our part­ner pro­gram and help raise the se­cu­rity bar for every­one.

Lastly, we are just get­ting started, and if you’re read­ing this and want to help build the fu­ture of con­tainer se­cu­rity, we’d love to meet you. Join us.

Today’s announcement marks a watershed moment for our industry. Docker is fundamentally changing how applications are built: secure by default for every developer, every organization, and every open-source project.

This mo­ment fills me with pride as it rep­re­sents the cul­mi­na­tion of years of work: from the early days at Atomist build­ing an event-dri­ven SBOM and vul­ner­a­bil­ity man­age­ment sys­tem, the foun­da­tion that still un­der­pins Docker Scout to­day, to un­veil­ing DHI ear­lier this year, and now mak­ing it freely avail­able to all. I am deeply grate­ful to my in­cred­i­ble col­leagues and friends at Docker who made this vi­sion a re­al­ity, and to our part­ners and cus­tomers who be­lieved in us from day one and shaped this jour­ney with their guid­ance and feed­back.

Yet while this is an im­por­tant mile­stone, it re­mains just that, a mile­stone. We are far from done, with many more in­no­va­tions on the hori­zon. In fact, we are al­ready work­ing on what comes next.

Security is a team sport, and to­day Docker opened the field to every­one. Let’s play.

I joined Docker to pos­i­tively im­pact as many de­vel­op­ers as pos­si­ble. This launch gives every de­vel­oper the right to se­cure their ap­pli­ca­tions with­out adding toil to their work­load. It rep­re­sents a mon­u­men­tal shift in the con­tainer ecosys­tem and the dig­i­tal ex­pe­ri­ences we use every day.

I’m ex­tremely proud of the prod­uct we’ve built and the cus­tomers we serve every day. I’ve had the time of my life build­ing this with our stel­lar team and I’m more ex­cited than ever for what’s to come next.

...

Read the original on www.docker.com »

5 245 shares, 23 trendiness

How SQLite Is Tested

The re­li­a­bil­ity and ro­bust­ness of SQLite is achieved in part by thor­ough and care­ful test­ing.

As of version 3.42.0 (2023-05-16), the SQLite library consists of approximately 155.8 KSLOC of C code. (KSLOC means thousands of “Source Lines Of Code” or, in other words, lines of code excluding blank lines and comments.) By comparison, the project has 590 times as much test code and test scripts - 92053.1 KSLOC.

Extensive use of as­sert() and run-time checks

There are four in­de­pen­dent test har­nesses used for test­ing the core SQLite li­brary. Each test har­ness is de­signed, main­tained, and man­aged sep­a­rately from the oth­ers.

The TCL Tests are the original tests for SQLite. They are contained in the same source tree as the SQLite core and like the SQLite core are in the public domain. The TCL tests are the primary tests used during development. The TCL tests are written using the TCL scripting language. The TCL test harness itself consists of 27.2 KSLOC of C code used to create the TCL interface. The test scripts are contained in 1390 files totaling 23.2MB in size. There are 51445 distinct test cases, but many of the test cases are parameterized and run multiple times (with different parameters) so that on a full test run millions of separate tests are performed.

The TH3 test har­ness is a set of pro­pri­etary tests, writ­ten in C that pro­vide 100% branch test cov­er­age (and 100% MC/DC test cov­er­age) to the core SQLite li­brary. The TH3 tests are de­signed to run on em­bed­ded and spe­cial­ized plat­forms that would not eas­ily sup­port TCL or other work­sta­tion ser­vices. TH3 tests use only the pub­lished SQLite in­ter­faces. TH3 con­sists of about 76.9 MB or 1055.4 KSLOC of C code im­ple­ment­ing 50362 dis­tinct test cases. TH3 tests are heav­ily pa­ra­me­ter­ized, though, so a full-cov­er­age test runs about 2.4 mil­lion dif­fer­ent test in­stances. The cases that pro­vide 100% branch test cov­er­age con­sti­tute a sub­set of the to­tal TH3 test suite. A soak test prior to re­lease does about 248.5 mil­lion tests. Additional in­for­ma­tion on TH3 is avail­able sep­a­rately.

The SQL Logic Test, or SLT, test harness is used to run huge numbers of SQL statements against both SQLite and several other SQL database engines and verify that they all get the same answers. SLT currently compares SQLite against PostgreSQL, MySQL, Microsoft SQL Server, and Oracle 10g. SLT runs 7.2 million queries comprising 1.12GB of test data.

The dbsqlfuzz engine is a proprietary fuzz tester. Other fuzzers for SQLite mutate either the SQL inputs or the database file. Dbsqlfuzz mutates both the SQL and the database file at the same time, and is thus able to reach new error states. Dbsqlfuzz is built using the libFuzzer framework of LLVM with a custom mutator. There are 336 seed files. The dbsqlfuzz fuzzer runs about one billion test mutations per day. Dbsqlfuzz helps ensure that SQLite is robust against attack via malicious SQL or database inputs.

In ad­di­tion to the four main test har­nesses, there are many other small pro­grams that im­ple­ment spe­cial­ized tests. Here are a few ex­am­ples:

* The “speedtest1.c” program estimates the performance of SQLite under a typical workload.

* The “mptester.c” program is a stress test for multiple processes concurrently reading and writing a single database.

* The “threadtest3.c” program is a stress test for multiple threads using SQLite simultaneously.

* The “fuzzershell.c” program is used to run some fuzz tests.

* The “jfuzz” program is a libFuzzer-based fuzzer for JSONB inputs to the JSON SQL functions.

All of the tests above must run suc­cess­fully, on mul­ti­ple plat­forms and un­der mul­ti­ple com­pile-time con­fig­u­ra­tions, be­fore each re­lease of SQLite.

Prior to each check-in to the SQLite source tree, developers typically run a subset (called “veryquick”) of the TCL tests consisting of about 304.7 thousand test cases. The veryquick tests include most tests other than the anomaly, fuzz, and soak tests. The idea behind the veryquick tests is that they are sufficient to catch most errors, but also run in only a few minutes instead of a few hours.

Anomaly tests are tests de­signed to ver­ify the cor­rect be­hav­ior of SQLite when some­thing goes wrong. It is (relatively) easy to build an SQL data­base en­gine that be­haves cor­rectly on well-formed in­puts on a fully func­tional com­puter. It is more dif­fi­cult to build a sys­tem that re­sponds sanely to in­valid in­puts and con­tin­ues to func­tion fol­low­ing sys­tem mal­func­tions. The anom­aly tests are de­signed to ver­ify the lat­ter be­hav­ior.

SQLite, like all SQL database engines, makes extensive use of malloc(). (See the separate report on dynamic memory allocation in SQLite for additional detail.) On servers and workstations, malloc() never fails in practice and so correct handling of out-of-memory (OOM) errors is not particularly important. But on embedded devices, OOM errors are frighteningly common, and since SQLite is frequently used on embedded devices, it is important that SQLite be able to gracefully handle OOM errors.

OOM test­ing is ac­com­plished by sim­u­lat­ing OOM er­rors. SQLite al­lows an ap­pli­ca­tion to sub­sti­tute an al­ter­na­tive mal­loc() im­ple­men­ta­tion us­ing the sqlite3_­con­fig(SQLITE_­CON­FIG_­MAL­LOC,…) in­ter­face. The TCL and TH3 test har­nesses are both ca­pa­ble of in­sert­ing a mod­i­fied ver­sion of mal­loc() that can be rigged to fail af­ter a cer­tain num­ber of al­lo­ca­tions. These in­stru­mented mal­locs can be set to fail only once and then start work­ing again, or to con­tinue fail­ing af­ter the first fail­ure. OOM tests are done in a loop. On the first it­er­a­tion of the loop, the in­stru­mented mal­loc is rigged to fail on the first al­lo­ca­tion. Then some SQLite op­er­a­tion is car­ried out and checks are done to make sure SQLite han­dled the OOM er­ror cor­rectly. Then the time-to-fail­ure counter on the in­stru­mented mal­loc is in­creased by one and the test is re­peated. The loop con­tin­ues un­til the en­tire op­er­a­tion runs to com­ple­tion with­out ever en­coun­ter­ing a sim­u­lated OOM fail­ure. Tests like this are run twice, once with the in­stru­mented mal­loc set to fail only once, and again with the in­stru­mented mal­loc set to fail con­tin­u­ously af­ter the first fail­ure.
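The fail-on-the-Nth-allocation loop described above can be sketched in a few lines. This is a minimal illustration of the technique, not SQLite's actual harness; the names `InstrumentedAlloc`, `operation`, and `oom_loop` are invented for the demo, and the two modes mirror the "fail once" and "fail continuously" settings described in the text:

```python
class InstrumentedAlloc:
    """Allocator rigged to fail on the Nth request, once or persistently."""
    def __init__(self, fail_at, persistent):
        self.count = 0
        self.fail_at = fail_at
        self.persistent = persistent
        self.failed = False

    def alloc(self):
        if self.failed and self.persistent:
            raise MemoryError          # keep failing after the first failure
        self.count += 1
        if self.count == self.fail_at:
            self.failed = True
            raise MemoryError          # simulated OOM
        return bytearray(16)

def operation(allocator):
    """Toy operation under test: needs exactly 3 allocations to finish."""
    held = [allocator.alloc() for _ in range(3)]
    return len(held)

def oom_loop(persistent):
    """Advance the failure point until the operation completes cleanly."""
    fail_at = 1
    while True:
        a = InstrumentedAlloc(fail_at, persistent)
        try:
            operation(a)
            return fail_at   # first failure point the operation never reaches
        except MemoryError:
            pass             # real tests verify clean OOM handling here
        fail_at += 1

# Three failing runs (fail at allocation 1, 2, 3), then one clean run.
assert oom_loop(persistent=False) == 4
assert oom_loop(persistent=True) == 4
```

The real harnesses additionally check, on every failing iteration, that SQLite returned SQLITE_NOMEM and leaked nothing; the sketch only shows the loop structure.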

I/O er­ror test­ing seeks to ver­ify that SQLite re­sponds sanely to failed I/O op­er­a­tions. I/O er­rors might re­sult from a full disk drive, mal­func­tion­ing disk hard­ware, net­work out­ages when us­ing a net­work file sys­tem, sys­tem con­fig­u­ra­tion or per­mis­sion changes that oc­cur in the mid­dle of an SQL op­er­a­tion, or other hard­ware or op­er­at­ing sys­tem mal­func­tions. Whatever the cause, it is im­por­tant that SQLite be able to re­spond cor­rectly to these er­rors and I/O er­ror test­ing seeks to ver­ify that it does.

I/O error testing is similar in concept to OOM testing; I/O errors are simulated and checks are made to verify that SQLite responds correctly to the simulated errors. I/O errors are simulated in both the TCL and TH3 test harnesses by inserting a new Virtual File System object that is specially rigged to simulate an I/O error after a set number of I/O operations. As with OOM error testing, the I/O error simulators can be set to fail just once, or to fail continuously after the first failure. Tests are run in a loop, slowly increasing the point of failure until the test case runs to completion without error. The loop is run twice, once with the I/O error simulator set to simulate only a single failure and a second time with it set to fail all I/O operations after the first failure.
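The same advance-the-failure-point loop applies to I/O. Below is an illustrative sketch, not SQLite's VFS layer: `FlakyWriter` and `save_records` are invented stand-ins for a rigged file object and the operation under test:

```python
import io

class FlakyWriter:
    """Write target that raises a simulated I/O error on the Nth write."""
    def __init__(self, fail_at):
        self.buf = io.BytesIO()
        self.ops = 0
        self.fail_at = fail_at

    def write(self, data):
        self.ops += 1
        if self.ops == self.fail_at:
            raise OSError("simulated I/O error")
        self.buf.write(data)

def save_records(w, records):
    """Toy operation under test: performs one write per record."""
    for r in records:
        w.write(r)

records = [b"one;", b"two;", b"three;"]
fail_at = 1
while True:
    w = FlakyWriter(fail_at)
    try:
        save_records(w, records)
        break            # ran to completion without hitting a simulated error
    except OSError:
        fail_at += 1     # real tests would also verify sane error recovery

assert fail_at == 4                            # failures at writes 1, 2, 3
assert w.buf.getvalue() == b"one;two;three;"   # final run wrote everything
```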

In I/O error tests, after the I/O error simulation failure mechanism is disabled, the database is examined using PRAGMA integrity_check to make sure that the I/O error has not introduced database corruption.
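PRAGMA integrity_check is easy to try from any SQLite binding; for example, with Python's built-in sqlite3 module (which embeds a real SQLite library), the pragma returns the single row 'ok' when no corruption is found, or a list of problems otherwise:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])

# Walks every table, index, and page of the database looking for corruption.
rows = conn.execute("PRAGMA integrity_check").fetchall()
assert rows == [("ok",)]
```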

Crash testing seeks to demonstrate that an SQLite database will not go corrupt if the application or operating system crashes or if there is a power failure in the middle of a database update. A separate white-paper titled Atomic Commit in SQLite describes the defensive measures SQLite takes to prevent database corruption following a crash. Crash tests strive to verify that those defensive measures are working correctly.

It is impractical to do crash testing using real power failures, of course, and so crash testing is done in simulation. An alternative Virtual File System is inserted that allows the test harness to simulate the state of the database file following a crash.

In the TCL test harness, the crash simulation is done in a separate process. The main testing process spawns a child process which runs some SQLite operation and randomly crashes somewhere in the middle of a write operation. A special VFS randomly reorders and corrupts the unsynchronized write operations to simulate the effect of buffered filesystems. After the child dies, the original test process opens and reads the test database and verifies that the changes attempted by the child either completed successfully or else were completely rolled back. The integrity_check PRAGMA is used to make sure no database corruption occurs.

The TH3 test har­ness needs to run on em­bed­ded sys­tems that do not nec­es­sar­ily have the abil­ity to spawn child processes, so it uses an in-mem­ory VFS to sim­u­late crashes. The in-mem­ory VFS can be rigged to make a snap­shot of the en­tire filesys­tem af­ter a set num­ber of I/O op­er­a­tions. Crash tests run in a loop. On each it­er­a­tion of the loop, the point at which a snap­shot is made is ad­vanced un­til the SQLite op­er­a­tions be­ing tested run to com­ple­tion with­out ever hit­ting a snap­shot. Within the loop, af­ter the SQLite op­er­a­tion un­der test has com­pleted, the filesys­tem is re­verted to the snap­shot and ran­dom file dam­age is in­tro­duced that is char­ac­ter­is­tic of the kinds of dam­age one ex­pects to see fol­low­ing a power loss. Then the data­base is opened and checks are made to en­sure that it is well-formed and that the trans­ac­tion ei­ther ran to com­ple­tion or was com­pletely rolled back. The in­te­rior of the loop is re­peated mul­ti­ple times for each snap­shot with dif­fer­ent ran­dom dam­age each time.
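The snapshot-and-recover loop can be illustrated with a toy rollback journal over an in-memory "disk". This is a simplified sketch of the invariant being checked (after recovery from any crash point, the data is either the old value or the new value, never garbage), not SQLite's VFS, and it omits the random-damage step:

```python
import copy

def journaled_update(disk, new_data, trace):
    """Toy atomic update: write journal, update data, delete journal.
    A deep-copy snapshot of the disk is taken after each 'I/O' step,
    standing in for the TH3 in-memory VFS snapshots."""
    def step():
        trace.append(copy.deepcopy(disk))
    disk["journal"] = disk["data"]; step()   # 1) save old data to journal
    disk["data"] = new_data;        step()   # 2) overwrite data in place
    del disk["journal"];            step()   # 3) commit: remove the journal

def recover(disk):
    """Crash recovery: a surviving journal means the update never
    committed, so roll the data back from it."""
    if "journal" in disk:
        disk["data"] = disk["journal"]
        del disk["journal"]

disk = {"data": b"old"}
trace = [copy.deepcopy(disk)]                # snapshot of the initial state
journaled_update(disk, b"new", trace)

# Crash-test loop: "crash" at every snapshot point, run recovery, and
# check that the transaction either committed fully or rolled back fully.
for snapshot in trace:
    crashed = copy.deepcopy(snapshot)
    recover(crashed)
    assert crashed["data"] in (b"old", b"new")
    assert "journal" not in crashed

assert disk == {"data": b"new"}   # the uninterrupted run committed
assert len(trace) == 4
```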

The test suites for SQLite also ex­plore the re­sult of stack­ing mul­ti­ple fail­ures. For ex­am­ple, tests are run to en­sure cor­rect be­hav­ior when an I/O er­ror or OOM fault oc­curs while try­ing to re­cover from a prior crash.

Fuzz testing seeks to establish that SQLite responds correctly to invalid, out-of-range, or malformed inputs.

SQL fuzz testing consists of creating syntactically correct yet wildly nonsensical SQL statements and feeding them to SQLite to see what it will do with them. Usually some kind of error is returned (such as “no such table”). Sometimes, purely by chance, the SQL statement also happens to be semantically correct. In that case, the resulting prepared statement is run to make sure it gives a reasonable result.
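The idea of "syntactically SELECT-shaped but mostly nonsensical" statements can be reproduced against a real SQLite library via Python's built-in sqlite3 module. This is an illustrative toy, not SQLite's actual fuzzer; the tiny grammar and all names in it are invented for the demo:

```python
import random
import sqlite3

random.seed(1)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1(a INTEGER, b TEXT)")
conn.execute("INSERT INTO t1 VALUES (1, 'x')")

# Tokens are drawn from one pool, so tables, columns, and literals get
# mixed up freely: every statement parses as a SELECT shape, but most
# reference things that do not exist.
names = ["t1", "t2", "a", "b", "c", "17", "'q'", "NULL"]

def random_select():
    cols = ", ".join(random.choice(names) for _ in range(random.randint(1, 3)))
    return (f"SELECT {cols} FROM {random.choice(names)} "
            f"WHERE {random.choice(names)} = {random.choice(names)}")

errors = successes = 0
for _ in range(200):
    try:
        conn.execute(random_select()).fetchall()  # if it prepares, run it
        successes += 1
    except sqlite3.Error:                         # e.g. "no such table"
        errors += 1

# Every statement either ran or raised a clean SQL error; the library
# itself never crashed.
assert errors + successes == 200
```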

The concept of fuzz testing has been around for decades, but fuzz testing was not an effective way to find bugs until 2014 when Michal Zalewski invented the first practical profile-guided fuzzer, American Fuzzy Lop or AFL. Unlike prior fuzzers that blindly generate random inputs, AFL instruments the program being tested (by modifying the assembly-language output from the C compiler) and uses that instrumentation to detect when an input causes the program to do something different - to follow a new control path or loop a different number of times. Inputs that provoke new behavior are retained and further mutated. In this way, AFL is able to “discover” new behaviors of the program under test, including behaviors that were never envisioned by the designers.

AFL proved adept at find­ing ar­cane bugs in SQLite. Most of the find­ings have been as­sert() state­ments where the con­di­tional was false un­der ob­scure cir­cum­stances. But AFL has also found a fair num­ber of crash bugs in SQLite, and even a few cases where SQLite com­puted in­cor­rect re­sults.

Because of its past suc­cess, AFL be­came a stan­dard part of the test­ing strat­egy for SQLite be­gin­ning with ver­sion 3.8.10 (2015-05-07) un­til it was su­per­seded by bet­ter fuzzers in ver­sion 3.29.0 (2019-07-10).

Beginning in 2016, a team of engineers at Google started the OSS Fuzz project. OSS Fuzz uses an AFL-style guided fuzzer running on Google’s infrastructure. The fuzzer automatically downloads the latest check-ins for participating projects, fuzzes them, and sends email to the developers reporting any problems. When a fix is checked in, the fuzzer automatically detects this and emails a confirmation to the developers.

SQLite is one of many open-source projects that OSS Fuzz tests. The test/ossfuzz.c source file in the SQLite repository is SQLite’s interface to OSS Fuzz.

OSS Fuzz no longer finds historical bugs in SQLite. But it is still running and does occasionally find issues in new development check-ins. Examples: [1] [2] [3].

Beginning in late 2018, SQLite has been fuzzed using a proprietary fuzzer called "dbsqlfuzz". Dbsqlfuzz is built using the libFuzzer framework of LLVM.

The dbsqlfuzz fuzzer mutates both the SQL input and the database file at the same time. Dbsqlfuzz uses a custom Structure-Aware Mutator on a specialized input file that defines both an input database and SQL text to be run against that database. Because it mutates both the input database and the input SQL at the same time, dbsqlfuzz has been able to find some obscure faults in SQLite that were missed by prior fuzzers that mutated only SQL inputs or only the database file. The SQLite developers keep dbsqlfuzz running against trunk on about 16 cores at all times. Each instance of the dbsqlfuzz program evaluates about 400 test cases per second, meaning that about 500 million cases are checked every day.
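The shape of such a harness can be sketched with Python's stdlib sqlite3 module. The input layout below (a 2-byte SQL length, then the SQL text, then raw database-file bytes) is invented for illustration; dbsqlfuzz's real input format and its libFuzzer glue are different:

```python
import os
import sqlite3
import struct
import tempfile

def run_case(blob: bytes) -> str:
    # One fuzz case carries BOTH the SQL to run and the database file bytes,
    # so a mutator can scramble either half (or both) in a single step.
    if len(blob) < 2:
        return "short"
    n = struct.unpack(">H", blob[:2])[0]
    sql = blob[2:2 + n].decode("utf-8", "replace")
    dbbytes = blob[2 + n:]
    path = os.path.join(tempfile.mkdtemp(), "fuzz.db")
    with open(path, "wb") as f:
        f.write(dbbytes)                  # the (possibly corrupt) database
    try:
        con = sqlite3.connect(path)
        con.executescript(sql)
        con.close()
        return "ok"
    except sqlite3.Error:
        return "error"                    # clean errors are fine; crashes are bugs

sql = b"CREATE TABLE t(a); INSERT INTO t VALUES (1);"
case = struct.pack(">H", len(sql)) + sql  # empty database file + valid SQL
```

Mutating the database bytes and the SQL together in one step is what lets a combined fuzzer reach faults that SQL-only or file-only fuzzers miss.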

The db­sql­fuzz fuzzer has been very suc­cess­ful at hard­en­ing the SQLite code base against ma­li­cious at­tack. Since db­sql­fuzz has been added to the SQLite in­ter­nal test suite, bug re­ports from ex­ter­nal fuzzers such as OSSFuzz have all but stopped.

Note that dbsqlfuzz is not the Protobuf-based structure-aware fuzzer for SQLite that is used by Chromium and described in the Structure-Aware Mutator article. There is no connection between these two fuzzers, other than the fact that they are both based on libFuzzer. The Protobuf fuzzer for SQLite is written and maintained by the Chromium team at Google, whereas dbsqlfuzz is written and maintained by the original SQLite developers. Having multiple independently-developed fuzzers for SQLite is good, as it means that obscure issues are more likely to be uncovered.

Near the end of January 2024, a sec­ond lib­Fuzzer-based tool called jfuzz” came into use. Jfuzz gen­er­ates cor­rupt JSONB blobs and feeds them into the JSON SQL func­tions to ver­ify that the JSON func­tions are able to safely and ef­fi­ciently deal with cor­rupt bi­nary in­puts.

SQLite seems to be a popular target for third-parties to fuzz. The developers hear about many attempts to fuzz SQLite and they do occasionally get bug reports found by independent fuzzers. All such reports are promptly fixed, so the product is improved and the entire SQLite user community benefits. This mechanism of having many independent testers is similar to Linus's law: "given enough eyeballs, all bugs are shallow".

One fuzzing researcher of particular note is Manuel Rigger. Most fuzzers only look for assertion faults, crashes, undefined behavior (UB), or other easily detected anomalies. Dr. Rigger's fuzzers, on the other hand, are able to find cases where SQLite computes an incorrect answer. Rigger has found many such cases. Most of these finds are obscure corner cases involving type conversions and affinity transformations, and a good number of the finds are against unreleased features. Nevertheless, his finds are still important as they are real bugs, and the SQLite developers are grateful to be able to identify and fix the underlying problems.

Historical test cases from AFL, OSS Fuzz, and db­sql­fuzz are col­lected in a set of data­base files in the main SQLite source tree and then re­run by the fuzzcheck” util­ity pro­gram when­ever one runs make test”. Fuzzcheck only runs a few thou­sand interesting” cases out of the bil­lions of cases that the var­i­ous fuzzers have ex­am­ined over the years. Interesting” cases are cases that ex­hibit pre­vi­ously un­seen be­hav­ior. Actual bugs found by fuzzers are al­ways in­cluded among the in­ter­est­ing test cases, but most of the cases run by fuz­zcheck were never ac­tual bugs.

Fuzz test­ing and 100% MC/DC test­ing are in ten­sion with one an­other. That is to say, code tested to 100% MC/DC will tend to be more vul­ner­a­ble to prob­lems found by fuzzing and code that per­forms well dur­ing fuzz test­ing will tend to have (much) less than 100% MC/DC. This is be­cause MC/DC test­ing dis­cour­ages de­fen­sive code with un­reach­able branches, but with­out de­fen­sive code, a fuzzer is more likely to find a path that causes prob­lems. MC/DC test­ing seems to work well for build­ing code that is ro­bust dur­ing nor­mal use, whereas fuzz test­ing is good for build­ing code that is ro­bust against ma­li­cious at­tack.

Of course, users would pre­fer code that is both ro­bust in nor­mal use and re­sis­tant to ma­li­cious at­tack. The SQLite de­vel­op­ers are ded­i­cated to pro­vid­ing that. The pur­pose of this sec­tion is merely to point out that do­ing both at the same time is dif­fi­cult.

For much of its his­tory SQLite has been fo­cused on 100% MC/DC test­ing. Resistance to fuzzing at­tacks only be­came a con­cern with the in­tro­duc­tion of AFL in 2014. For a while there, fuzzers were find­ing many prob­lems in SQLite. In more re­cent years, the test­ing strat­egy of SQLite has evolved to place more em­pha­sis on fuzz test­ing. We still main­tain 100% MC/DC of the core SQLite code, but most test­ing CPU cy­cles are now de­voted to fuzzing.

While fuzz testing and 100% MC/DC testing are in tension, they are not completely at cross-purposes. The fact that the SQLite test suite does test to 100% MC/DC means that when fuzzers do find problems, those problems can be fixed quickly and with little risk of introducing new errors.

There are numerous test cases that verify that SQLite is able to deal with malformed database files. These tests first build a well-formed database file, then add corruption by changing one or more bytes in the file by some means other than SQLite. Then SQLite is used to read the database. In some cases, the byte changes are in the middle of data. This causes the content of the database to change while keeping the database well-formed. In other cases, unused bytes of the file are modified, which has no effect on the integrity of the database. The interesting cases are when bytes of the file that define database structure get changed. The malformed database tests verify that SQLite finds the file format errors and reports them using the SQLITE_CORRUPT return code without overflowing buffers, dereferencing NULL pointers, or performing other unwholesome actions.
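A toy version of this style of test can be written against Python's stdlib sqlite3 bindings (the real tests live in the TCL and TH3 harnesses): build a well-formed database, corrupt it by a means other than SQLite, and check that reading it produces a clean error rather than a crash.

```python
import os
import sqlite3
import tempfile

# Build a well-formed database file.
path = os.path.join(tempfile.mkdtemp(), "t.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t(x)")
con.execute("INSERT INTO t VALUES (1)")
con.commit()
con.close()

# Corrupt it by means other than SQLite: zero the 16-byte header magic.
with open(path, "r+b") as f:
    f.write(b"\x00" * 16)

# SQLite must report a clean error, not crash or misbehave.
got_clean_error = False
try:
    con = sqlite3.connect(path)
    con.execute("SELECT * FROM t").fetchall()
except sqlite3.DatabaseError:        # e.g. "file is not a database"
    got_clean_error = True
```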

The db­sql­fuzz fuzzer also does an ex­cel­lent job of ver­i­fy­ing that SQLite re­sponds sanely to mal­formed data­base files.

SQLite defines certain limits on its operation, such as the maximum number of columns in a table, the maximum length of an SQL statement, or the maximum value of an integer. The TCL and TH3 test suites both contain numerous tests that push SQLite right to the edge of its defined limits and verify that it performs correctly for all allowed values. Additional tests go beyond the defined limits and verify that SQLite correctly returns errors. The source code contains testcase macros to verify that both sides of each boundary have been tested.

Whenever a bug is re­ported against SQLite, that bug is not con­sid­ered fixed un­til new test cases that would ex­hibit the bug have been added to ei­ther the TCL or TH3 test suites. Over the years, this has re­sulted in thou­sands and thou­sands of new tests. These re­gres­sion tests en­sure that bugs that have been fixed in the past are not rein­tro­duced into fu­ture ver­sions of SQLite.

A resource leak occurs when system resources are allocated and never freed. The most troublesome resource leaks in many applications are memory leaks - when memory is allocated using malloc() but never released using free(). But other kinds of resources can also be leaked: file descriptors, threads, mutexes, etc.

Both the TCL and TH3 test har­nesses au­to­mat­i­cally track sys­tem re­sources and re­port re­source leaks on every test run. No spe­cial con­fig­u­ra­tion or setup is re­quired. The test har­nesses are es­pe­cially vig­i­lant with re­gard to mem­ory leaks. If a change causes a mem­ory leak, the test har­nesses will rec­og­nize this quickly. SQLite is de­signed to never leak mem­ory, even af­ter an ex­cep­tion such as an OOM er­ror or disk I/O er­ror. The test har­nesses are zeal­ous to en­force this.
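The bookkeeping behind that kind of harness is simple in principle. A generic sketch (this is the general pattern, not TCL's or TH3's actual implementation): count every allocation and every release, and fail any run whose balance is nonzero at the end.

```python
class LeakTracker:
    """Counts live resources; a nonzero balance at end-of-run is a leak."""

    def __init__(self):
        self.live = 0

    def alloc(self):
        self.live += 1          # stand-in for malloc(), open(), etc.
        return object()

    def free(self, handle):
        self.live -= 1          # stand-in for free(), close(), etc.

    def check(self):
        # Run automatically at the end of every test run, no setup required.
        assert self.live == 0, f"leaked {self.live} resource(s)"

ok = LeakTracker()
h = ok.alloc()
ok.free(h)
ok.check()                      # passes: everything was released

leaky = LeakTracker()
leaky.alloc()                   # never freed
caught = False
try:
    leaky.check()
except AssertionError:
    caught = True               # the harness flags the leak automatically
```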

The SQLite core, including the unix VFS, has 100% branch test coverage under TH3 in its default configuration as measured by gcov. Extensions such as FTS3 and RTree are excluded from this analysis.

There are many ways to measure test coverage. The most popular metric is "statement coverage". When you hear someone say that their program has "XX% test coverage" without further explanation, they usually mean statement coverage. Statement coverage measures what percentage of lines of code are executed at least once by the test suite.

Branch coverage is more rigorous than statement coverage. Branch coverage measures the number of machine-code branch instructions that are evaluated at least once in both directions.

To illustrate the difference between statement coverage and branch coverage, consider the following hypothetical line of C code:

if( a>b && c!=25 ){ d++; }

Such a line of C code might gen­er­ate a dozen sep­a­rate ma­chine code in­struc­tions. If any one of those in­struc­tions is ever eval­u­ated, then we say that the state­ment has been tested. So, for ex­am­ple, it might be the case that the con­di­tional ex­pres­sion is al­ways false and the d” vari­able is never in­cre­mented. Even so, state­ment cov­er­age counts this line of code as hav­ing been tested.

Branch coverage is more strict. With branch coverage, each test and each subblock within the statement is considered separately. In order to achieve 100% branch coverage in the example above, there must be at least three test cases:

a>b && c!=25
a<=b
a>b && c==25

Any one of the above test cases would provide 100% statement coverage but all three are required for 100% branch coverage. Generally speaking, 100% branch coverage implies 100% statement coverage, but the converse is not true. To reemphasize, the TH3 test harness for SQLite provides the stronger form of test coverage - 100% branch test coverage.
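The same distinction can be demonstrated in miniature. The function below mirrors a short-circuited conditional of the kind discussed above: a single input executes every line (100% statement coverage), but three inputs are needed to take every branch in both directions.

```python
def bump(a, b, c, d=0):
    # Python analog of a C statement like: if( a>b && c!=25 ){ d++; }
    if a > b and c != 25:
        d += 1
    return d

cases = [
    (1, 0, 7),    # a>b true,  c!=25 true  -> d incremented
    (0, 1, 7),    # a>b false              -> second test short-circuited
    (1, 0, 25),   # a>b true,  c!=25 false -> d unchanged
]
results = [bump(a, b, c) for a, b, c in cases]
```

Running only the first case already touches every statement in the function, which is exactly why statement-coverage numbers can overstate how thoroughly code has been exercised.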

A well-writ­ten C pro­gram will typ­i­cally con­tain some de­fen­sive con­di­tion­als which in prac­tice are al­ways true or al­ways false. This leads to a pro­gram­ming dilemma: Does one re­move de­fen­sive code in or­der to ob­tain 100% branch cov­er­age?

In SQLite, the answer to the previous question is "no". For testing purposes, the SQLite source code defines macros called ALWAYS() and NEVER(). The ALWAYS() macro surrounds conditions which are expected to always evaluate as true and NEVER() surrounds conditions that are expected to always evaluate as false. These macros serve as comments to indicate that the conditions are defensive code. In release builds, these macros are pass-throughs:

#define ALWAYS(X) (X)
#define NEVER(X) (X)

During most test­ing, how­ever, these macros will throw an as­ser­tion fault if their ar­gu­ment does not have the ex­pected truth value. This alerts the de­vel­op­ers quickly to in­cor­rect de­sign as­sump­tions.

When measuring test coverage, these macros are defined to be constant truth values so that they do not generate assembly language branch instructions, and hence do not come into play when calculating the branch coverage:

#define ALWAYS(X) (1)
#define NEVER(X) (0)

The test suite is de­signed to be run three times, once for each of the ALWAYS() and NEVER() de­f­i­n­i­tions shown above. All three test runs should yield ex­actly the same re­sult. There is a run-time test us­ing the sqlite3_test_­con­trol(SQLITE_TESTC­TR­L_AL­WAYS, …) in­ter­face that can be used to ver­ify that the macros are cor­rectly set to the first form (the pass-through form) for de­ploy­ment.
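The three build configurations can be mimicked in Python to show why the scheme works. This is an analogy only; the real macros are C preprocessor definitions selected at compile time.

```python
BUILD = "test"   # one of "release", "test", "coverage"

def ALWAYS(x):
    if BUILD == "test":
        assert x, "ALWAYS() condition was false"   # flags a broken design assumption
        return True
    if BUILD == "coverage":
        return True       # constant: generates no branch to cover
    return bool(x)        # release: plain pass-through

def NEVER(x):
    if BUILD == "test":
        assert not x, "NEVER() condition was true"
        return False
    if BUILD == "coverage":
        return False
    return bool(x)

def safe_div(a, b):
    # Defensive guard: "b is never zero" per the design, but guarded anyway.
    if NEVER(b == 0):
        return 0
    return a / b
```

Running the suite once per configuration checks that the defensive guards never fire, without the unreachable branches dragging down the coverage numbers.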

Another macro used in conjunction with test coverage measurement is the testcase() macro. The argument to testcase() is a condition for which we want test cases that evaluate to both true and false. In non-coverage builds (that is to say, in release builds) the testcase() macro is a no-op:

#define testcase(X)

But in a coverage measuring build, the testcase() macro generates code that evaluates the conditional expression in its argument. Then during analysis, a check is made to ensure tests exist that evaluate the conditional to both true and false. testcase() macros are used, for example, to help verify that boundary values are tested. For example:

...

Read the original on sqlite.org »

6 220 shares, 28 trendiness

I got hacked, my server started mining Monero this morning.

Edit: A few peo­ple on HN have pointed out that this ar­ti­cle sounds a lit­tle LLM gen­er­ated. That’s be­cause it’s largely a tran­script of me pan­ick­ing and talk­ing to Claude. Sorry if it reads poorly, the in­ci­dent re­ally hap­pened though!

Or: How I learned that I don’t use Next.js” does­n’t mean your de­pen­den­cies don’t use Next.js

I woke up to this beauty from Hetzner:

We have in­di­ca­tions that there was an at­tack from your server. Please take all nec­es­sary mea­sures to avoid this in the fu­ture and to solve the is­sue.

We also re­quest that you send a short re­sponse to us. This re­sponse should con­tain in­for­ma­tion about how this could have hap­pened and what you in­tend to do about it. In the event that the fol­low­ing steps are not com­pleted suc­cess­fully, your server can be blocked at any time af­ter the 2025-12-17 12:46:15 +0100.

Attached was ev­i­dence of net­work scan­ning from my server to some IP range in Thailand. Great. Nothing says good morn­ing” like an abuse re­port and the threat of get­ting your in­fra­struc­ture shut down in 4 hours.

Background: I run a Hetzner server with Coolify. It runs all my stuff, like my lit­tle cor­ner of the in­ter­net:

First thing I did was SSH in and check the load av­er­age:

For con­text, my load av­er­age is nor­mally around 0.5-1.0. Fifteen is something is very wrong.”

I ran ps aux to see what was eat­ing my CPU:

819% CPU us­age. On a process called javae run­ning from /tmp/.XIN-unix/. And mul­ti­ple xm­rig processes - that’s lit­er­ally cryp­tocur­rency min­ing soft­ware (Monero, specif­i­cally).

I’d been min­ing cryp­tocur­rency for some­one since December 7th. For ten days. Brilliant.

My first thought was I’m com­pletely fucked.” Cryptominers on the host, run­ning for over a week - time to nuke every­thing from or­bit and re­build, right?

But then I no­ticed some­thing in­ter­est­ing. All these processes were run­ning as user 1001. Not root. Not a sys­tem user. UID 1001.

Let me check what’s ac­tu­ally run­ning:

I’ve got about 20 con­tain­ers run­ning via Coolify (my self-hosted PaaS). Inventronix (my IoT plat­form), some mon­i­tor­ing stuff, Grafana, a few ex­per­i­ments.

And Umami - a pri­vacy-fo­cused an­a­lyt­ics tool I’d re-de­ployed 9 days ago to track traf­fic on my blog.

Let me check which con­tainer has user 1001:

There it is. Container a42f72cb1bc5 - that’s my Umami analytics container. And it’s got a whole xmrig-6.24.0 directory sitting in what should be Next.js server internals.

The min­ing com­mand in the process list con­firmed it:

Someone had ex­ploited my an­a­lyt­ics con­tainer and was min­ing Monero us­ing my CPU. Nice.

Here’s the kicker. A few days ago I saw a Reddit post about a critical Next.js/Puppeteer RCE vulnerability (CVE-2025-66478). My immediate reaction was “lol who cares, I don’t run Next.js.”

Except… Umami is built with Next.js. I did not know this, nor did I bother look­ing. Oops.

The vul­ner­a­bil­ity (CVE-2025-66478) was in Next.js’s React Server Components de­se­ri­al­iza­tion. The Flight” pro­to­col that RSC uses to se­ri­al­ize/​de­se­ri­al­ize data be­tween client and server had an un­safe de­se­ri­al­iza­tion flaw. An at­tacker could send a spe­cially crafted HTTP re­quest with a ma­li­cious pay­load to any App Router end­point, and when de­se­ri­al­ized, it would ex­e­cute ar­bi­trary code on the server.

No Puppeteer in­volved - just bro­ken de­se­ri­al­iza­tion in the RSC pro­to­col it­self. The at­tack flow:

So much for I don’t use Next.js.”
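The underlying bug class - an unsafe deserializer executing logic chosen by whoever controls the bytes - is easiest to see in Python’s pickle, which has this property by design. A deliberately harmless demonstration (none of this is Next.js or Umami code, just the same class of flaw):

```python
import pickle

executed = []

def record(msg):
    # Stand-in for "arbitrary code": in a real attack this would spawn a shell
    # or download a miner.
    executed.append(msg)
    return msg

class Payload:
    def __reduce__(self):
        # Tells the deserializer: "reconstruct me by calling record(...)".
        # An attacker controls both the callable reference and its arguments.
        return (record, ("code ran during deserialization",))

wire = pickle.dumps(Payload())   # the bytes the attacker sends over the network
obj = pickle.loads(wire)         # the naive server-side deserialization step
```

The lesson is the same everywhere: never feed attacker-controlled bytes to a deserializer that can resolve and call arbitrary functions.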

The Panic: Has It Escaped the Container?

This is where I started to prop­erly panic. Looking at that process list:

That path - /tmp/.XIN-unix/javae - looks like it’s on the host filesys­tem, not in­side a con­tainer. If the mal­ware had es­caped the con­tainer onto my ac­tual server, I’d need to:


$ crontab -l

no crontab for root

$ sys­tem­ctl list-unit-files | grep en­abled

# … all le­git­i­mate sys­tem ser­vices, noth­ing sus­pi­cious

No malicious cron jobs. No fake systemd services pretending to be nginx or apache (a common trick to blend in). That’s… good?

But I still needed to know: Did the mal­ware ac­tu­ally es­cape the con­tainer or not?

Here’s the test. If /tmp/.XIN-unix/javae ex­ists on my host, I’m fucked. If it does­n’t ex­ist, then what I’m see­ing is just Docker’s de­fault be­hav­ior of show­ing con­tainer processes in the host’s ps out­put, but they’re ac­tu­ally iso­lated.


$ ls -la /tmp/.XIN-unix/javae

ls: can­not ac­cess /tmp/.XIN-unix/javae’: No such file or di­rec­tory

The mal­ware was en­tirely con­tained within the Umami con­tainer. When you run ps aux on a Docker host, you see processes from all con­tain­ers be­cause they share the same ker­nel. But those processes are in their own mount name­space - they can’t see or touch the host filesys­tem.

Let me ver­ify what user that con­tainer was ac­tu­ally run­ning as:

This is why I’m not fucked:

The mal­ware could NOT:

Why This Matters: Dockerfiles vs. Auto-Generated Images

Here’s the thing that saved me. I write my own Dockerfiles for my ap­pli­ca­tions. I don’t use auto-gen­er­a­tion tools like Nixpacks (which Coolify sup­ports) that de­fault to USER root in con­tain­ers.

The Reddit post I’d seen ear­lier? That guy got com­pletely owned be­cause his con­tainer was run­ning as root. The mal­ware could:

Write any­where on the filesys­tem

His fix re­quired a full server re­build be­cause he could­n’t trust any­thing any­more. Mine re­quired… delet­ing a con­tainer.

What I did not do was keep track of the tooling I was using and what that tooling was using. In fact, I installed Umami from Coolify’s services screen. I didn’t even configure it.

Obviously none of this is Umami’s fault by the way. They re­leased a fix for their free soft­ware like a week ago. I just did­n’t think to do any­thing about it.


# Stop and re­move the com­pro­mised con­tainer

$ docker stop umami-bkc4kkss848c­c4k­w4gk­w8s44

$ docker rm umami-bkc4kkss848c­c4k­w4gk­w8s44

# Check CPU us­age

$ up­time

08:45:17 up 55 days, 17:43, 1 user, load av­er­age: 0.52, 1.24, 4.83

CPU back to nor­mal. All those cryp­to­min­ing processes? Gone. They only ex­isted in­side the con­tainer.

I also en­abled UFW (which I should have done ages ago):


$ sudo ufw de­fault deny in­com­ing

$ sudo ufw de­fault al­low out­go­ing

$ sudo ufw al­low ssh

$ sudo ufw al­low 80/tcp

$ sudo ufw al­low 443/tcp

$ sudo ufw en­able

This blocks all in­bound con­nec­tions ex­cept SSH, HTTP, and HTTPS. No more ex­posed PostgreSQL ports, no more RabbitMQ ports open to the in­ter­net.

The con­tainer ran as non-root user with no priv­i­leged ac­cess or host mounts, so the com­pro­mise was fully con­tained. Container has been re­moved and fire­wall hard­ened.

They closed the ticket within an hour.

1. I don’t use X” does­n’t mean your de­pen­den­cies don’t use X

I don’t write Next.js ap­pli­ca­tions. But I run third-party tools that are built with Next.js. When CVE-2025-66478 was dis­closed, I thought not my prob­lem.” Wrong.

Know what your de­pen­den­cies are ac­tu­ally built with. That simple an­a­lyt­ics tool” is a full web ap­pli­ca­tion with a com­plex stack.

This could have been so much worse. If that con­tainer had been run­ning as root, or had vol­ume mounts to sen­si­tive di­rec­to­ries, or had ac­cess to the Docker socket, I’d be writ­ing a very dif­fer­ent blog post about re­build­ing my en­tire in­fra­struc­ture.

Instead, I deleted one con­tainer and moved on with my day.

Write your own Dockerfiles. Understand what user your processes run as. Avoid USER root un­less you have a very good rea­son. Don’t mount vol­umes you don’t need. Don’t give con­tain­ers –privileged ac­cess.
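In Dockerfile terms the pattern is small. The image and app names below are placeholders for illustration, not the author’s actual setup:

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY --chown=1001:1001 . .
RUN npm ci --omit=dev

# Drop root before the process ever starts; 1001 is an arbitrary unprivileged UID.
USER 1001
EXPOSE 3000
CMD ["node", "server.js"]
```

With USER set, anything that compromises the process lands as an unprivileged user inside the container, which is exactly the containment that limited this incident.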

This malware wasn’t like the bots that auto-poll for /wpadmin every time I make a DNS change. This was spicy.

Used process names that blend in (javae, runnv)

According to other re­ports, even had killer scripts” to mur­der com­pet­ing min­ers

But it was still lim­ited by con­tainer iso­la­tion. Good se­cu­rity prac­tices beat so­phis­ti­cated mal­ware.

Even though the con­tainer iso­la­tion held, I still should have:

Had a fire­wall en­abled from day one (not I’ll do it later”)

...

Read the original on blog.jakesaunders.dev »

7 219 shares, 12 trendiness

consuming, not creating · Mike San Román

Everyone’s us­ing AI wrong. Including me, un­til last month.

We ask AI to write emails, gen­er­ate re­ports, cre­ate con­tent. But that’s like us­ing a su­per­com­puter as a type­writer. The real break­through hap­pened when I flipped my en­tire ap­proach.

Here’s how most peo­ple use AI:

Makes sense. These tasks save time. But they’re think­ing too small.

My Obsidian vault contains:
→ 3 years of daily engineering notes
→ 500+ meeting reflections
→ Thousands of fleeting observations about building software
→ Every book highlight and conference insight I’ve captured

No hu­man could read all of this in a life­time. AI con­sumes it in sec­onds.

Last month I con­nected my Obsidian vault to AI. The ques­tions changed com­pletely:

Instead of Write me some­thing new” I ask What have I al­ready dis­cov­ered?”

What pat­terns emerge from my last 50 one-on-ones?”

AI found that per­for­mance is­sues al­ways pre­ceded tool com­plaints by 2-3 weeks. I’d never con­nected those dots.

How has my think­ing about tech­ni­cal debt evolved?”

Turns out I went from see­ing it as things to fix” to information about sys­tem evo­lu­tion” around March 2023. Forgotten par­a­digm shift.

Find con­nec­tions be­tween Buffer’s API de­sign and my car­peta.app ar­chi­tec­ture”

Surfaced 12 de­sign de­ci­sions I’m un­con­sciously re­peat­ing. Some good. Some I need to re­think.

Every meet­ing, every shower thought, every de­bug­ging ses­sion teaches you some­thing. But that knowl­edge is worth­less if you can’t re­trieve it.

Traditional search fails be­cause you need to re­mem­ber ex­act words. Your brain fails be­cause it was­n’t de­signed to store every­thing.

AI changes the retrieval game:
→ Query by concept, not keywords
→ Find patterns across years, not just documents
→ Connect ideas that were separated by time and context

The con­straint was never writ­ing. Humans are al­ready good at cre­at­ing when they have the right in­puts.

The con­straint was al­ways con­sump­tion. Reading every­thing. Remembering every­thing. Connecting every­thing.

Everything goes into Obsidian (meetings, thoughts, re­flec­tions)

AI has ac­cess to the en­tire vault

I query my past self like a re­search as­sis­tant

But the magic is­n’t in the tools. It’s in the mind­set shift.

Stop think­ing of AI as a cre­ator. Start think­ing of it as the ul­ti­mate reader of your ex­pe­ri­ence.

Every note be­comes a fu­ture in­sight. Every re­flec­tion be­comes search­able wis­dom. Every ran­dom ob­ser­va­tion might be the miss­ing piece for to­mor­row’s prob­lem.

After two months of this ap­proach:

→ I solve problems faster by finding similar past situations
→ I make better decisions by accessing forgotten context
→ I see patterns that were invisible when scattered across time

Your ex­pe­ri­ence is your com­pet­i­tive ad­van­tage. But only if you can ac­cess it.

Most peo­ple are sit­ting on gold­mines of in­sight, locked away in note­books, ran­dom files, and fad­ing mem­o­ries. AI turns that locked vault into a queryable data­base of your own ex­per­tise.

We’re still think­ing about AI like it’s 2023. Writing as­sis­tants. Code gen­er­a­tors. Content cre­ators.

The real rev­o­lu­tion is AI as the reader of every­thing you’ve ever thought.

And that changes every­thing about how we should cap­ture knowl­edge to­day.

Start doc­u­ment­ing. Not for oth­ers. For your fu­ture self and the AI that will help you re­mem­ber what you’ve for­got­ten you know.

This piece orig­i­nally ap­peared in my weekly newslet­ter. Subscribe for in­sights on think­ing dif­fer­ently about work, tech­nol­ogy, and what’s ac­tu­ally pos­si­ble.

...

Read the original on msanroman.io »

8 216 shares, 19 trendiness

Hack Reveals the a16z-Backed Phone Farm Flooding TikTok With AI Influencers

Doublespeed, a startup backed by Andreessen Horowitz (a16z) that uses a phone farm to manage at least hundreds of AI-generated social media accounts and promote products, has been hacked. The hack reveals what products the AI-generated accounts are promoting, often without the required disclosure that these are advertisements, and allowed the hacker to take control of more than 1,000 smartphones that power the company.

The hacker, who asked for anonymity be­cause he feared re­tal­i­a­tion from the com­pany, said he re­ported the vul­ner­a­bil­ity to Doublespeed on October 31. At the time of writ­ing, the hacker said he still has ac­cess to the com­pa­ny’s back­end, in­clud­ing the phone farm it­self. Doublespeed did not re­spond to a re­quest for com­ment.

...

Read the original on www.404media.co »

9 161 shares, 47 trendiness

Gut Bacteria from Amphibians and Reptiles Achieve Complete Tumor Elimination

Demonstration that nat­ural bac­te­ria iso­lated from am­phib­ian and rep­tile in­testines achieve com­plete tu­mor elim­i­na­tion with sin­gle ad­min­is­tra­tion

Combines di­rect bac­te­r­ial killing of can­cer cells with im­mune sys­tem ac­ti­va­tion for com­pre­hen­sive tu­mor de­struc­tion

Outperforms ex­ist­ing chemother­apy and im­munother­apy with no ad­verse ef­fects on nor­mal tis­sues

Expected ap­pli­ca­tions across di­verse solid tu­mor types, open­ing new av­enues for can­cer treat­ment

A research team led by Prof. Eijiro Miyako at the Japan Advanced Institute of Science and Technology (JAIST) has discovered that the bacterium Ewingella americana, isolated from the intestines of Japanese tree frogs (Dryophytes japonicus), possesses remarkably potent anticancer activity. This groundbreaking research has been published in the international journal Gut Microbes.

While the relationship between gut microbiota and cancer has attracted considerable attention in recent years, most approaches have focused on indirect methods such as microbiome modulation or fecal microbiota transplantation. In contrast, this study takes a completely different approach: isolating, culturing, and directly administering individual bacterial strains intravenously to attack tumors - representing an innovative therapeutic strategy.

The re­search team iso­lated a to­tal of 45 bac­te­r­ial strains from the in­testines of Japanese tree frogs, Japanese fire belly newts (Cynops pyrrhogaster), and Japanese grass lizards (Takydromus tachy­dro­moides). Through sys­tem­atic screen­ing, nine strains demon­strated an­ti­tu­mor ef­fects, with E. amer­i­cana ex­hibit­ing the most ex­cep­tional ther­a­peu­tic ef­fi­cacy.

Remarkable Therapeutic Efficacy

In a mouse col­orec­tal can­cer model, a sin­gle in­tra­venous ad­min­is­tra­tion of E. amer­i­cana achieved com­plete tu­mor elim­i­na­tion with a 100% com­plete re­sponse (CR) rate. This dra­mat­i­cally sur­passes the ther­a­peu­tic ef­fi­cacy of cur­rent stan­dard treat­ments, in­clud­ing im­mune check­point in­hibitors (anti-PD-L1 an­ti­body) and li­po­so­mal dox­oru­bicin (chemotherapy agent) (Figure 1).

Figure 1. Anticancer ef­fi­cacy: Ewingella amer­i­cana ver­sus con­ven­tional ther­a­pies. Tumor re­sponse: sin­gle i.v. dose of E. amer­i­cana (200 µL, 5 × 10⁹ CFU/mL); four doses of dox­oru­bicin or anti-PD-L1 (200 µL, 2.5 mg/​kg per dose); PBS as con­trol. Data: mean ± SEM (n = 5). ****, p < 0.0001 (Student’s two-sided t-test).

Direct Cytotoxic Effect: As a fac­ul­ta­tive anaer­o­bic bac­terium, E. amer­i­cana se­lec­tively ac­cu­mu­lates in the hy­poxic tu­mor mi­croen­vi­ron­ment and di­rectly de­stroys can­cer cells. Bacterial counts within tu­mors in­crease ap­prox­i­mately 3,000-fold within 24 hours post-ad­min­is­tra­tion, ef­fi­ciently at­tack­ing tu­mor tis­sue.

Immune Activation Effect: The bac­te­r­ial pres­ence pow­er­fully stim­u­lates the im­mune sys­tem, re­cruit­ing T cells, B cells, and neu­trophils to the tu­mor site. Pro-inflammatory cy­tokines (TNF-α, IFN-γ) pro­duced by these im­mune cells fur­ther am­plify im­mune re­sponses and in­duce can­cer cell apop­to­sis.

E. amer­i­cana se­lec­tively ac­cu­mu­lates in tu­mor tis­sues with zero col­o­niza­tion in nor­mal or­gans. This re­mark­able tu­mor speci­ficity arises from mul­ti­ple syn­er­gis­tic mech­a­nisms:

Zero bac­te­r­ial col­o­niza­tion in nor­mal or­gans in­clud­ing liver, spleen, lung, kid­ney, and heart

This re­search has es­tab­lished proof-of-con­cept for a novel can­cer ther­apy us­ing nat­ural bac­te­ria. Future re­search and de­vel­op­ment will fo­cus on:

Expansion to Other Cancer Types: Efficacy val­i­da­tion in breast can­cer, pan­cre­atic can­cer, melanoma, and other ma­lig­nan­cies

Optimization of Administration Methods: Development of safer and more ef­fec­tive de­liv­ery ap­proaches in­clud­ing dose frac­tion­a­tion and in­tra­tu­moral in­jec­tion

Combination Therapy Development: Investigation of syn­er­gis­tic ef­fects with ex­ist­ing im­munother­apy and chemother­apy

This re­search demon­strates that un­ex­plored bio­di­ver­sity rep­re­sents a trea­sure trove for novel med­ical tech­nol­ogy de­vel­op­ment and holds promise for pro­vid­ing new ther­a­peu­tic op­tions for pa­tients with re­frac­tory can­cers.

Facultative Anaerobic Bacteria: Bacteria ca­pa­ble of grow­ing in both oxy­gen-rich and oxy­gen-de­pleted en­vi­ron­ments, en­abling se­lec­tive pro­lif­er­a­tion in hy­poxic tu­mor re­gions.

Complete Response (CR): Complete tu­mor elim­i­na­tion con­firmed by di­ag­nos­tic ex­am­i­na­tion fol­low­ing treat­ment.

Immune Checkpoint Inhibitor: Drugs that re­lease can­cer cell-me­di­ated im­mune sup­pres­sion, en­abling T cells to at­tack can­cer cells.

CD47: A cell sur­face pro­tein that emits don’t eat me” sig­nals; can­cer cells over­ex­press this to evade im­mune at­tack.

Discovery and char­ac­ter­i­za­tion of an­ti­tu­mor gut mi­cro­biota from am­phib­ians and rep­tiles: Ewingella amer­i­cana as a novel ther­a­peu­tic agent with dual cy­to­toxic and im­munomod­u­la­tory prop­er­ties

This re­search was sup­ported by:

Japan Society for the Promotion of Science (JSPS) KAKENHI Grant-in-Aid for Scientific Research (A) (Grant No. 23H00551)

Japan Science and Technology Agency (JST) Program for Co-creating Startup Ecosystem (Grant No. JPMJSF2318)

...

Read the original on www.jaist.ac.jp »

10 160 shares, 9 trendiness

Yep, Passkeys Still Have Problems

It’s now late into 2025, and just over a year since I wrote my last post on Passkeys. The pre­vail­ing di­a­logue that I see from thought lead­ers is addressing com­mon mis­con­cep­tions” around Passkeys, the im­pli­ca­tion be­ing that you just don’t un­der­stand it cor­rectly” if you have doubts. Clearly I don’t un­der­stand Passkeys in that case.

And yet, I am here to once again say - yep, it’s 2025 and Passkeys still have all the is­sues I’ve men­tioned be­fore, and a few new ones I’ve learnt! Let’s round up the year to­gether then.

* Passkeys have flaws - learn about them and use them on your terms. Don’t write them off whole­sale based on this blog. I, the au­thor of this blog, use Passkeys!!!

* DO en­gage with and learn about Credential Managers (aka Password Managers). This is where the Passkey is stored.

* DO use a Credential Manager you con­trol and can backup. I rec­om­mend Bitwarden or Vaultwarden which al­low back­ups to be taken eas­ily.

* AVOID us­ing a plat­form (Apple, Google) Credential Manager as your only Passkey repos­i­tory - these can’t eas­ily be backed up and you CAN be locked out per­ma­nently.

IF you use a plat­form Passkey man­ager, fre­quently sync it with FIDO Credential Exchange to an ex­ter­nal Credential Manager you can backup/​con­trol.

OR use both the plat­form Passkey man­ager AND a Credential Manager you con­trol in par­al­lel.

* For high value ac­counts such as email which are on the ac­count re­cov­ery path

DO use Yubikeys for your email ac­count as the Passkey store.

DO keep strong ma­chine gen­er­ated pass­words + TOTP in your Credential Managers as al­ter­na­tives to Passkeys for your email ac­counts.

* DO a thought ex­per­i­ment - if I lost ac­cess to my Credential Manager what is the re­cov­ery path? Ensure you can re­build from dis­as­ter.

The ma­jor change in the last 12 months has been the in­tro­duc­tion of the FIDO Credential Exchange Specification.

Most peo­ple within the tech com­mu­nity who have dis­missed my claim that Passkeys are a form of ven­dor lockin” are now point­ing at this spec­i­fi­ca­tion as proof that this claim is now wrong.

See! Look! You can ex­port your cre­den­tials to an­other Passkey provider if you want! We aren’t lock­ing you in!!!”

I have to agree - this is great if you want to change which walled-gar­den you live in­side. However it does­n’t as­sist with the day to day us­age of Passkeys when you have de­vices from dif­fer­ent ven­dor ecosys­tems. Nor does it make it eas­ier for me to use a Passkey provider out­side of my ven­dor’s plat­form.

Example: Let’s say that I have a Windows Desktop and a Macbook Pro - I can sign up a Passkey on the Macbook Pro but I can’t then use it on the Windows Desktop.

FIDO Credential Exchange lets me copy from Apple’s Keychain to what­ever provider I use on the Windows ma­chine. But now I have to do that ex­change every time I en­rol a new Passkey. Similarly, I would need to do the re­verse from Windows to Mac every time that I sign up on the Windows ma­chine.

So day to day, this changes very lit­tle - but if I want to go from “all in on Apple” to “all in on Google” then I can do a big-bang mi­gra­tion and jump from one gar­den to the next. But if you have mixed de­vice ecosys­tems (like uhhh … you know. Most of the world does) then very lit­tle will change for you with this.

But if I use my own Credential Manager (e.g. Vaultwarden) then I can hap­pily work be­tween mul­ti­ple ecosys­tems.

Today I saw this ex­cel­lent quote in the con­text of why Passkeys are bet­ter than Password+TOTP in a Password Manager:

Individuals hav­ing to learn to use pass­word man­age­ment soft­ware and be vig­i­lant against phish­ing is an in­dus­try fail­ure, not a per­sonal suc­cess.

Even giv­ing as much ben­e­fit of the doubt to this state­ment, and grant­ing that the “and” might be load bear­ing, we have to ask - Where are passkeys stored?

So we still have to teach in­di­vid­u­als about pass­word (credential) man­agers, and how Passkeys work so that peo­ple trust them. That fun­da­men­tal truth has­n’t changed.

But not only this - if a per­son is choos­ing a pass­word+TOTP over a Passkey, we have to ask “why is that”? Do we think that it’s truly about ar­ro­gance? Do we think that this user be­lieves they are more im­por­tant? Or is there an un­der­ly­ing us­abil­ity is­sue at play? Why might we be rec­om­mend­ing this to oth­ers? Do we re­ally think that Passkeys come with­out a need for ed­u­ca­tion?

Maybe I’m fun­da­men­tally miss­ing the orig­i­nal point of this com­ment. Maybe I am com­pletely mis­in­ter­pret­ing it. But I still think we need to say: if a per­son chooses pass­word and TOTP over a Passkey even once they are in­formed of the choices, then Passkeys have failed that user. What could we have done bet­ter?

Perhaps one could in­ter­pret this state­ment as you don’t need to teach users about Passkeys if they are us­ing their ✨ m a g i c a l ✨ plat­form Passkey man­ager since it’s so much nicer than a pass­word and TOTP. And that leads to …

In eco­nom­ics, ven­dor lock-in, […] makes a cus­tomer de­pen­dent on a ven­dor for prod­ucts, un­able to use an­other ven­dor with­out sub­stan­tial switch­ing costs.

See, the big is­sue that the thought lead­ers seem to get wrong is that they be­lieve that if you can use FIDO Credential Exchange, then you aren’t locked in be­cause you can move be­tween Passkey providers.

But if we aren’t teach­ing our users about cre­den­tial man­age­ment, did­n’t we just silently lock them into our plat­form Passkey man­ager?

Not only that, when you try to go against the plat­form man­ager, there is con­tin­ual fric­tion at each stage of the user’s ex­pe­ri­ence. It makes the cost to switch high be­cause at each point you en­counter fric­tion if you de­vi­ate from the ven­dor’s in­tended paths.

For ex­am­ple, con­sider the Apple Passkey modal:

The ma­jor­ity of this modal is ded­i­cated to you should make a Passkey in your Apple Keychain”. If you want to use your Android phone or a Security Key, where would I click? Oh yes, Other Options.

Make but­tons easy for peo­ple to use. It’s es­sen­tial to in­clude enough space around a but­ton so that peo­ple can vi­su­ally dis­tin­guish it from sur­round­ing com­po­nents and con­tent. Giving a but­ton enough space is also crit­i­cal for help­ing peo­ple se­lect or ac­ti­vate it, re­gard­less of the method of in­put they use.

When you se­lect Other Options this is what you see - see how Touch ID is still the de­fault, de­spite the fact that I al­ready in­di­cated I don’t want to use it by se­lect­ing Other Options? At this point I would need to se­lect Security Key and then click again to use my key. Similarly for Android Phone.

And guess what - my pref­er­ences and choices are never re­mem­bered. I guess it’s true what they say.

Google Chrome has a sim­i­lar set of modals and nudges (though props to Chrome, they at least im­plic­itly ac­ti­vate your se­cu­rity key from the first modal so a power user who knows the trick can use it). So they are just as bad here IMO.

This is what I mean by “vendor lockin”. It’s not just about where the pri­vate keys are stored. It’s the con­tin­ual fric­tion at each step of the in­ter­ac­tion when you de­vi­ate from the ven­dor’s in­tended path. It’s about mak­ing it so an­noy­ing to use any­thing else that you set­tle into one ven­dor’s ecosys­tem. It’s about the lack of com­mu­ni­ca­tion about where Passkeys are stored that tricks users into set­tling into their ven­dor ecosys­tem. That’s ven­dor lock-in.

We still get re­ports of peo­ple los­ing Passkeys from Apple Keychain. We sim­i­larly get re­ports of Android phones that one day just stop cre­at­ing new Passkeys, or stop be­ing able to use ex­ist­ing ones. One ex­cep­tional story we saw re­cently was of an Android de­vice that stopped us­ing its on­board Passkeys and also stopped ac­cept­ing NFC keys. USB CTAP would still func­tion, and all the his­tor­i­cal fixes we’ve seen (such as full de­vice re­sets) would not work. So now what? I’m not sure of the out­come of this story, but my as­sump­tion is there was not a happy end­ing.

If some­one ends up locked out of their ac­counts be­cause their Passkeys got nuked silently, what are we meant to do to help them?

Dr Paris Buttfield-Addison was locked out of their Apple ac­count.

I rec­om­mend you read the post, but the side ef­fect - every Passkey they had in an Apple key­chain is now un­re­cov­er­able.

There is just as much ev­i­dence about the same prac­tices with Google / Android.

I hon­estly don’t think I have to say much else. It is ter­ri­fy­ing that every ac­count you own could be de­stroyed by a sin­gle ac­tion where you have no re­course.

We still have is­sues where ser­vices that are em­brac­ing Passkeys are com­mu­ni­cat­ing badly about them. The gold stan­dard of mis­com­mu­ni­ca­tion came to me a few months ago in fact (2025-10-29) when a com­pany emailed me this state­ment:

Passkeys use your unique fea­tures — known as bio­met­rics — like your fa­cial fea­tures, your fin­ger­print or a PIN to let us know that it’s re­ally you. They pro­vide in­creased se­cu­rity be­cause un­like a pass­word or user­name, they can’t be shared with any­one, mak­ing them phish­ing re­sis­tant.

As some­one who is deeply aware of how we­bau­thn works, I know that my fa­cial fea­tures or fin­ger­print never re­ally leave my de­vice. However, ask­ing my part­ner (context: my part­ner is a vet­eri­nary sur­geon, and so I feel jus­ti­fied in claim­ing that she is a very in­tel­li­gent and ed­u­cated woman) to read this, her in­ter­pre­ta­tion was:

So this means a Passkey sends my face or fin­ger­print over the in­ter­net for the ser­vice to ver­ify? Is that also why they be­lieve it is phish­ing re­sis­tant be­cause you can’t clone my face or my fin­ger­print?

This is a smart, ed­u­cated per­son, with the ti­tle of doc­tor, and even she is con­clud­ing that Passkeys are send­ing bio­met­rics over the in­ter­net. What are peo­ple in other dis­ci­plines go­ing to think? What about peo­ple with a cog­ni­tive im­pair­ment or who do not have ac­cess to ed­u­ca­tion about Passkeys?

This kind of mes­sag­ing that leads peo­ple to be­lieve we are send­ing per­sonal phys­i­cal fea­tures over the in­ter­net is harm­ful be­cause most peo­ple will not want to send these data to a re­mote ser­vice. This com­pletely un­der­mines the trust in Passkeys be­cause we are es­tab­lish­ing to peo­ple that they are per­son­ally in­va­sive in a way that user­name and pass­words are not!
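The cor­rect men­tal model is worth spelling out: the bio­met­ric only un­locks a pri­vate key held on the de­vice, and the only thing sent to the ser­vice is a sig­na­ture over a server-is­sued chal­lenge. Here is a min­i­mal sketch of that flow, us­ing a plain Ed25519 key­pair in place of a real au­then­ti­ca­tor - the shape of the ex­change is faith­ful, but this is an il­lus­tra­tion, not the ac­tual WebAuthn API:

```typescript
import { generateKeyPairSync, sign, verify, randomBytes } from "node:crypto";

// The "authenticator" (phone or security key) holds the private key locally.
// A biometric check merely unlocks this key; the biometric itself never
// leaves the device.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// 1. The server (relying party) sends a random challenge.
const challenge = randomBytes(32);

// 2. After the local biometric/PIN check, the authenticator signs the
//    challenge with the private key. Only this signature goes over the wire.
const signature = sign(null, challenge, privateKey);

// 3. The server verifies the signature against the public key it stored
//    at enrolment time. No biometric data exists anywhere in this exchange.
const ok = verify(null, challenge, publicKey, signature);
console.log(ok);
```

Nothing about a face or fin­ger­print ap­pears in this ex­change - and the phish­ing re­sis­tance comes from the chal­lenge and ori­gin bind­ing, not from bio­met­rics.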

And guess what - plat­form Passkey provider modals/​di­alogs don’t do any­thing to counter this in­for­ma­tion and of­ten leave users with the same feel­ing.

A past com­plaint was that I had en­coun­tered ser­vices that only ac­cepted a sin­gle Passkey as they as­sumed you would use a syn­chro­nised cloud key­chain of some kind. In 2025 I still see a hand­ful of these ser­vices, but mostly the large prob­lem sites have now fi­nally al­lowed you to en­rol mul­ti­ple Passkeys.

But that does­n’t stop sites pulling tricks on you.

I’ve en­coun­tered mul­ti­ple sites that now use au­then­ti­ca­torAt­tach­ment op­tions to force you to use a plat­form bound Passkey. In other words, they force you into Google or Apple. No pass­word man­ager, no se­cu­rity key, no choices.

I won’t claim this one as an at­tempt at “vendor lockin” by the big play­ers, but it is a re­flec­tion of what de­vel­op­ers be­lieve a Passkey to be - they be­lieve it means a pri­vate key stored in one of those ven­dors’ de­vices, and noth­ing else. So much of this comes from the con­fused his­tor­i­cal ori­gins of Passkeys and we aren’t do­ing any­thing to change it.

When I have con­fronted these sites about the mis­prac­tice, they pretty much shrugged and said well no one else has com­plained so meh”. Guess I won’t be en­rolling a Passkey with you then.

One other site that pulled this said “instead of se­lect­ing con­tinue, se­lect this other op­tion” and you get the au­then­ti­ca­torAt­tach­ment=cross-plat­form set­ting. Except that they could lit­er­ally do noth­ing with au­then­ti­ca­torAt­tach­ment and leave it up to the plat­form modals, al­low­ing me the choice (and fewer fric­tion burns) of choos­ing where I want to en­rol my Passkey.
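For con­crete­ness, the dif­fer­ence is a sin­gle field in the WebAuthn cre­ation op­tions. A small sketch - the field names (authenticatorSelection, au­then­ti­ca­torAt­tach­ment, res­i­den­tKey) are from the WebAuthn spec­i­fi­ca­tion, while the vari­able names are mine:

```typescript
// The WebAuthn authenticatorSelection criteria (field names per the spec).
type AuthenticatorSelection = {
  authenticatorAttachment?: "platform" | "cross-platform";
  residentKey?: "discouraged" | "preferred" | "required";
};

// The anti-pattern: pre-filter to platform authenticators, shutting out
// security keys and third-party credential managers entirely.
const forced: AuthenticatorSelection = { authenticatorAttachment: "platform" };

// The friendlier shape: omit authenticatorAttachment so the browser's own
// modal can offer security keys, phones and credential managers alike.
const open: AuthenticatorSelection = { residentKey: "preferred" };

// In a browser either would be passed to navigator.credentials.create(),
// e.g. { publicKey: { ...rpAndUserInfo, authenticatorSelection: open } }.
console.log("authenticatorAttachment" in forced); // true
console.log("authenticatorAttachment" in open);   // false
```

Leaving the field out costs the de­vel­oper noth­ing - the plat­form modal al­ready han­dles every au­then­ti­ca­tor type.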

Another very naughty web­site at­tempts to en­rol a Passkey on your de­vice with no prior warn­ing or con­sent when you lo­gin, which is very sur­pris­ing to any­one and seems very de­cep­tive as a prac­tice. Ironically, the same ven­dor does­n’t use your Passkey when you go to sign in again any­way.

But it’s not all doom and gloom.

Most of the is­sues are around plat­form Passkey providers like Apple or Google.

The best thing you can do as a user, and for any­one in your life you want to help, is to be ed­u­cated about Credential Managers. Regardless of Passwords, TOTP, Passkeys or any­thing else, em­pow­er­ing peo­ple to man­age and think about their on­line se­cu­rity via a Credential Manager they feel they con­trol and un­der­stand is crit­i­cal - not an industry fail­ure”.

Using a Credential Manager that you have con­trol over shields you from the ac­count lock­out and plat­form blow-up risks that ex­ist with plat­form Passkeys. Additionally most Credential Managers will al­low you to backup your cre­den­tials too. It can be a great idea to do this every few months and put the con­tent onto a USB drive in a safe lo­ca­tion.

If you do choose to use a plat­form Passkey provider, you can emulate” this backup abil­ity by us­ing the cre­den­tial ex­port func­tion to an­other Passkey provider, and then do the back­ups from there.

You can also use a Yubikey as a Credential Manager if you want - mod­ern keys (firmware ver­sion 5.7 and greater) can store up to 150 Passkeys on them, so you could con­sider skip­ping soft­ware Credential Managers en­tirely for some ac­counts.

The most crit­i­cal ac­counts you own though need some spe­cial care. Email is one of those - email gen­er­ally is the path by which all other cre­den­tial re­sets and ac­count re­cov­ery flows oc­cur. This means los­ing your email ac­cess is the most dev­as­tat­ing loss as any­thing else could po­ten­tially be re­cov­ered.

For email, this is why I rec­om­mend us­ing hard­ware se­cu­rity keys (yubikeys are the gold stan­dard here) if you want Passkeys to pro­tect your email. Always keep a strong pass­word and TOTP as an ex­tra re­cov­ery path, but don’t use it day to day since it can be phished. Ensure these de­tails are phys­i­cally se­cure and backed up - again a USB drive or even a print out on pa­per in a safe and se­cure lo­ca­tion so that you can bootstrap your ac­counts” in the case of a ma­jor fail­ure.

If you are an Apple or Google em­ployee - change your di­alogs to al­low re­mem­ber­ing choices the user has pre­vi­ously made on sites, or whole­sale al­low skip­ping some parts - for ex­am­ple I want to skip straight to Security Key, and maybe I’ll choose to go back for some­thing else. But let me make that choice. Similarly, make the choice to use dif­fer­ent Passkey providers a first-class cit­i­zen in the UI, not just a tiny text af­ter­thought.

If you are a de­vel­oper de­ploy­ing Passkeys, then don’t use any of the pre-fil­ter­ing Webauthn op­tions or javascript APIs. Just leave it to the user’s plat­form modals to let the per­son choose. If you want peo­ple to en­rol a Passkey on sign in, com­mu­ni­cate that be­fore you at­tempt the en­rol­ment. Remember kids, con­sent is para­mount.

But of course - maybe I just “don’t un­der­stand Passkeys cor­rectly”. I am but an un­der­achiev­ing white man on the in­ter­net af­ter all.

...

Read the original on fy.blackhats.net.au »
