10 interesting stories served every morning and every evening.




1 1,268 shares, 67 trendiness

60 Minutes : CBS News : Free Download, Borrow, and Streaming : Internet Archive


Sharyn Alfonsi's Inside CECOT for 60 Minutes, which was censored by Bari Weiss, as it appeared on Canada's Global TV app.

...

Read the original on archive.org »

2 581 shares, 19 trendiness

US blocks all offshore wind construction, says reason is classified

On Monday, the US Department of the Interior announced that it was pausing the leases on all five offshore wind sites currently under construction in the US. The move comes despite the fact that these projects already have installed significant hardware in the water and on land; one of them is nearly complete. In what appears to be an attempt to avoid legal scrutiny, the Interior is blaming the decisions on a classified report from the Department of Defense.

The second Trump administration announced its animosity toward offshore wind power literally on day one, issuing an executive order on inauguration day that called for a temporary halt to issuing permits for new projects pending a re-evaluation. Earlier this month, however, a judge vacated that executive order, noting that the government had shown no indication that it was even attempting to start the re-evaluation it said was needed.

But a number of projects have gone through the entire permitting process, and construction has started. Before today, the administration had attempted to stop these in an erratic, halting manner. Empire Wind, an 800 MW farm being built off New York, was stopped by the Department of the Interior, which alleged that it had been rushed through permitting. That hold was lifted following lobbying and negotiations by New York and the project developer Equinor, and the Department of the Interior never revealed why it changed its mind. When the Interior Department blocked an Orsted project, Revolution Wind, offshore of southern New England, the company took the government to court and won a ruling that let it continue construction.

...

Read the original on arstechnica.com »

3 401 shares, 14 trendiness

Advancing the Coding Capability

Try it at Z.ai. GLM-4.7, your new coding partner, is coming with the following features:

Core Coding: GLM-4.7 brings clear gains over its predecessor GLM-4.6 in multilingual agentic coding and terminal-based tasks, including (73.8%, +5.8%) on SWE-bench, (66.7%, +12.9%) on SWE-bench Multilingual, and (41%, +16.5%) on Terminal Bench 2.0. GLM-4.7 also supports thinking before acting, with significant improvements on complex tasks in mainstream agent frameworks such as Claude Code, Kilo Code, Cline, and Roo Code.

Vibe Coding: GLM-4.7 takes a major step forward in UI quality. It produces cleaner, more modern webpages and generates better-looking slides with more accurate layout and sizing.

Tool Using: GLM-4.7 achieves significant improvements in tool use. Notably better performance can be seen on benchmarks such as τ²-Bench and on web browsing via BrowseComp.

Complex Reasoning: GLM-4.7 delivers a substantial boost in mathematical and reasoning capabilities, achieving (42.8%, +12.4%) on the HLE (Humanity's Last Exam) benchmark compared to GLM-4.6.

You can also see significant improvements in many other scenarios such as chat, creative writing, and role-play.

Benchmark Performance: More detailed comparisons of GLM-4.7 with other models (GPT-5, GPT-5.1-High, Claude Sonnet 4.5, Gemini 3.0 Pro, DeepSeek-V3.2, Kimi K2 Thinking) on 17 benchmarks (including 8 reasoning, 5 coding, and 3 agent benchmarks) can be seen in the table below.

Coding: AGI is a long journey, and benchmarks are only one way to evaluate performance. While the metrics provide necessary checkpoints, the most important thing is still how it *feels*. True intelligence isn't just about acing a test or processing data faster; ultimately, the success of AGI will be measured by how seamlessly it integrates into our lives - "coding" this time.

Design a richly crafted voxel-art environment featuring an ornate pagoda set within a vibrant garden.

Include diverse vegetation - especially cherry blossom trees - and ensure the composition feels lively, colorful, and visually striking.

Use any voxel or WebGL libraries you prefer, but deliver the entire project as a single, self-contained HTML file that I can paste and open directly in Chrome. View the full trajectory at Z.ai.

Design a poster introducing Paris, with a romantic and fashionable aesthetic. The overall style should feel elegant, visually refined, and design-driven. View the full trajectory at Z.ai.

GLM-4.7 enhances Interleaved Thinking, a feature introduced in GLM-4.5, and further introduces Preserved Thinking and Turn-level Thinking. By thinking between actions and staying consistent across turns, it makes complex tasks more stable and more controllable:

Interleaved Thinking: GLM-4.7 thinks before every response and tool call, improving instruction following and the quality of generation.

Preserved Thinking: In coding agent scenarios, GLM-4.7 automatically retains all thinking blocks across multi-turn conversations, reusing the existing reasoning instead of re-deriving it from scratch. This reduces information loss and inconsistencies, and is well suited for long-horizon, complex tasks.

Turn-level Thinking: GLM-4.7 supports per-turn control over reasoning within a session - disable thinking for lightweight requests to reduce latency and cost, or enable it for complex tasks to improve accuracy and stability.

The Z.ai API platform offers the GLM-4.7 model. For comprehensive API documentation and integration guidelines, please refer to https://docs.z.ai/guides/llm/glm-4.7. The model is also available worldwide through OpenRouter (https://openrouter.ai/).

GLM-4.7 is now available to use within coding agents (Claude Code, Kilo Code, Roo Code, Cline, and more).

For GLM Coding Plan subscribers: you'll be automatically upgraded to GLM-4.7. If you've previously customized the app configs (like ~/.claude/settings.json in Claude Code), simply update the model name to "glm-4.7" to complete the upgrade.

For new users: subscribing to the GLM Coding Plan means having access to a Claude-level coding model at a fraction of the cost - just 1/7th the price with 3x the usage quota. Start building today: https://z.ai/subscribe.

GLM-4.7 is accessible through Z.ai. Try changing the model option to GLM-4.7 if the system does not automatically do that (not like an AGI in that case :)).

Model weights for GLM-4.7 are publicly available on HuggingFace and ModelScope. For local deployment, GLM-4.7 supports inference frameworks including vLLM and SGLang. Comprehensive deployment instructions are available in the official GitHub repository.

1: Default settings (most tasks): temperature 1.0, top-p 0.95, max new tokens 131072. For multi-turn agentic tasks (τ²-Bench and Terminal Bench 2), enable Preserved Thinking mode.

3: τ²-Bench settings: temperature 0, max new tokens 16384. For τ²-Bench, we added an extra prompt in the Retail and Telecom interactions to avoid failures caused by users ending the interaction incorrectly; for the Airline domain, we applied the domain fixes proposed in the Claude Opus 4.5 release report.

...

Read the original on z.ai »

4 314 shares, 13 trendiness

NPM Package With 56K Downloads Caught Stealing WhatsApp Messages

The lotusbail npm package presents itself as a WhatsApp Web API library - a fork of the legitimate @whiskeysockets/baileys package. With over 56,000 downloads and functional code that actually works as advertised, it's the kind of dependency developers install without a second thought. The package has been available on npm for 6 months and is still live at the time of writing.

Behind that working functionality: sophisticated malware that steals your WhatsApp credentials, intercepts every message, harvests your contacts, installs a persistent backdoor, and encrypts everything before sending it to the threat actor's server.

Most malicious npm packages reveal themselves quickly - they're typosquats, they don't work, or they're obviously sketchy. This one actually functions as a WhatsApp API. It's based on the legitimate Baileys library and provides real, working functionality for sending and receiving WhatsApp messages.

Obvious malware is easy to spot. Functional malware? That gets installed, tested, approved, and deployed to production.

The social engineering here is brilliant: developers don't look for malware in code that works. They look for code that breaks.

The package wraps the legitimate WebSocket client that communicates with WhatsApp. Every message that flows through your application passes through the malware's socket wrapper first.

When you authenticate, the wrapper captures your credentials. When messages arrive, it intercepts them. When you send messages, it records them. The legitimate functionality continues working normally - the malware just adds a second recipient for everything.

All your WhatsApp authentication tokens, every message sent or received, complete contact lists, media files - everything that passes through the API gets duplicated and prepared for exfiltration.

But the stolen data doesn't get sent in plain text. The malware includes a complete, custom RSA implementation for encrypting the data before transmission:

Why implement custom RSA? Because legitimate WhatsApp libraries don't need custom encryption - WhatsApp already handles end-to-end encryption. The custom crypto exists for one reason: to encrypt stolen data before exfiltration so network monitoring won't catch it.

The exfiltration server URL is buried in encrypted configuration strings, hidden inside compressed payloads. The malware uses four layers of obfuscation: Unicode variable manipulation, LZString compression, Base-91 encoding, and AES encryption. The server location isn't hardcoded anywhere visible.

Here's where it gets particularly nasty. WhatsApp uses pairing codes to link new devices to accounts. You request a code, WhatsApp generates a random 8-character string, you enter it on your new device, and the devices link together.

The malware hijacks this process with a hardcoded pairing code. The code is encrypted with AES and hidden in the package:

This means the threat actor has a key to your WhatsApp account. When you use this library to authenticate, you're not just linking your application - you're also linking the threat actor's device. They have complete, persistent access to your WhatsApp account, and you have no idea they're there.

The threat actor can read all your messages, send messages as you, download your media, access your contacts - full account control. And here's the critical part: uninstalling the npm package removes the malicious code, but the threat actor's device stays linked to your WhatsApp account. The pairing persists in WhatsApp's systems until you manually unlink all devices from your WhatsApp settings. Even after the package is gone, they still have access.

The package includes 27 infinite loop traps that freeze execution if debugging tools are detected:

These traps check for debuggers, inspect process arguments, detect sandbox environments, and generally make dynamic analysis painful. They also left helpful comments in their code marking the malicious sections - professional development practices applied to supply chain attacks. Someone probably has a Jira board for this.

Supply chain attacks aren't slowing down - they're getting better. We're seeing working code with sophisticated anti-debugging, custom encryption, and multi-layer obfuscation that survives marketplace reviews. The lotusbail case isn't an outlier. It's a preview.

Traditional security doesn't catch this. Static analysis sees working WhatsApp code and approves it. Reputation systems see 56,000 downloads and trust it. The malware hides in the gap between "this code works" and "this code only does what it claims."

Catching sophisticated supply chain attacks requires behavioral analysis - watching what packages actually do at runtime. When a WhatsApp library implements custom RSA encryption and includes 27 anti-debugging traps, those are signals. But you need systems watching for them.

This writeup was authored by the research team at Koi Security. We built Koi to detect threats that pass traditional checks but exhibit malicious behavior at runtime.

Book a demo to see how behavioral analysis catches what static review misses.

...

Read the original on www.koi.ai »

5 290 shares, 26 trendiness

Instant database clones with PostgreSQL 18

Have you ever watched a long-running migration script, wondering if it's about to wreck your data? Or wished you could "just" spin up a fresh copy of your database for each test run? Or wanted to have reproducible snapshots to reset between runs of your test suite, or (and yes, because you are reading boringSQL) needed to reset the learning environment?

When your database is a few megabytes, pg_dump and restore work fine. But what happens when you're dealing with hundreds of megabytes or gigabytes - or more? Suddenly "just make a copy" becomes a burden.

You've probably noticed that PostgreSQL connects to template1 by default. What you might have missed is that there's a whole templating system hiding in plain sight. Every time you run
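(for example, with an illustrative database name)

CREATE DATABASE myapp;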

PostgreSQL quietly clones the standard system database template1 behind the scenes, making it the same as if you had written
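(the equivalent explicit form, with the same illustrative name)

CREATE DATABASE myapp TEMPLATE template1;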

The real power comes from the fact that you can replace template1 with any database. You can find more in the Template Databases documentation.

In this article, we will cover a few tweaks that turn this templating system into an instant, zero-copy database cloning machine.

Before PostgreSQL 15, when you created a new database from a template, it operated strictly on the file level. This was effective, but to make it reliable, Postgres had to flush all pending operations to disk (using CHECKPOINT) before taking a consistent snapshot. This created a massive I/O spike - a "Checkpoint Storm" - that could stall your production traffic.

Version 15 of PostgreSQL introduced a new parameter, CREATE DATABASE ... STRATEGY = [strategy], and at the same time changed the default behaviour of how new databases are created from templates. The new default became WAL_LOG, which copies block-by-block via the Write-Ahead Log (WAL), making I/O sequential (and much smoother) and supporting concurrency without latency spikes. This removed the need for a CHECKPOINT but made the database cloning operation potentially significantly slower. For an empty template1, you won't notice the difference. But if you try to clone a 500GB database using WAL_LOG, you are going to be waiting a long time.

The STRATEGY parameter allows us to switch back to the original method, FILE_COPY, to keep that behaviour - and speed. And since PostgreSQL 18, this opens up a whole new set of options.

Because the FILE_COPY strategy is a proxy to operating system file operations, we can change how the OS handles those files.

When using a standard filesystem (like ext4), PostgreSQL reads every byte of the source file and writes it to a new location. It's a physical copy. However, starting with PostgreSQL 18, file_copy_method gives you the option to switch that logic, while the default option remains copy.

With modern filesystems (like ZFS, XFS with reflinks, APFS, etc.) you can switch it to clone and leverage the clone operation (FICLONE on Linux) for an almost instant copy. And it won't take any additional space.

All you have to do is:

Linux with XFS or ZFS support (we will use XFS for the demonstration) or a similar operating system. macOS APFS is also fully supported. FreeBSD with ZFS is also supported (which normally would be my choice, but I haven't had time to test it so far).

We need some dummy data to copy. This is the only part of the tutorial where you have to wait. Let's generate a ~6GB database.
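Something along these lines would do it; the database name and row count are illustrative, while the table and column names match the example used later in the article:

CREATE DATABASE boring_source;
\c boring_source
CREATE TABLE boring_data (id bigint, payload text);
INSERT INTO boring_data
SELECT i, repeat('x', 500) || i
FROM generate_series(1, 10000000) AS i;
-- roughly 6GB of table data, give or take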

You can verify the database now has roughly 6GB of data.
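For example, with the illustrative name from above:

SELECT pg_size_pretty(pg_database_size('boring_source'));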

While enabling \timing you can test the default (WAL_LOG) strategy. And on my test volume (relatively slow storage) I get
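The command being timed would look something like this (database names are illustrative, and the source must have no other active connections, so connect elsewhere first); the exact timing depends entirely on your storage:

\c postgres
\timing on
CREATE DATABASE clone_wal TEMPLATE boring_source STRATEGY = WAL_LOG;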

Now, let's verify our configuration is set for speed:
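A minimal sketch, assuming a PostgreSQL 18 cluster sitting on a reflink-capable filesystem:

SHOW data_directory;                          -- confirm the cluster lives on the XFS/ZFS/APFS volume
ALTER SYSTEM SET file_copy_method = 'clone';  -- requires superuser
SELECT pg_reload_conf();
SHOW file_copy_method;                        -- should now report 'clone'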

Let's request the semi-instant clone of the same database, without taking extra disk space at the same time.
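For instance, again with illustrative names:

\timing on
CREATE DATABASE clone_fast TEMPLATE boring_source STRATEGY = FILE_COPY;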

That's quite an improvement, isn't it?

That was the simple part. But what is happening behind the scenes?

When you clone a database with file_copy_method = clone, PostgreSQL doesn't duplicate any data. The filesystem creates new metadata entries that point to the same physical blocks. Both databases share identical storage.

This can create some initial confusion. If you ask PostgreSQL for the size:
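For example, using the illustrative names from above:

SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database
WHERE datname IN ('boring_source', 'clone_fast');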

PostgreSQL reports both as ~6GB because that's the logical size - how much data each database "contains".

The interesting part happens when you start writing. PostgreSQL doesn't update tuples in place. When you UPDATE a row, it writes a new tuple version somewhere (often a different page entirely) and marks the old one as dead. The filesystem doesn't care about PostgreSQL internals - it just sees writes to 8KB pages. Any write to a shared page triggers a copy of that entire page.

A single UPDATE will therefore trigger copy-on-write on multiple pages:

the page holding the old tuple

the page receiving the new tuple

And later, VACUUM touches even more pages while cleaning up dead tuples, quickly diverging from the linked storage.

Using the database OID and relfilenode, we can verify that both databases are now sharing physical blocks.
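One way to find the file paths to feed to filefrag (names illustrative; run pg_relation_filepath inside each database to get its own relfilenode-based path):

SELECT oid, datname FROM pg_database WHERE datname IN ('boring_source', 'clone_fast');
SELECT pg_relation_filepath('boring_data');   -- returns something like base/<database oid>/<relfilenode>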

root@clone-demo:/var/lib/postgresql# sudo filefrag -v /var/lib/postgresql/18/main/base/16402/16404

Filesystem type is: 58465342

File size of /var/lib/postgresql/18/main/base/16402/16404 is 1073741824 (262144 blocks of 4096 bytes)

ext: logical_offset: physical_offset: length: expected: flags:

0: 0.. 2031: 10471550.. 10473581: 2032: shared

1: 2032.. 16367: 10474098.. 10488433: 14336: 10473582: shared

2: 16368.. 32751: 10497006.. 10513389: 16384: 10488434: shared

3: 32752.. 65519: 10522066.. 10554833: 32768: 10513390: shared

4: 65520.. 129695: 10571218.. 10635393: 64176: 10554834: shared

5: 129696.. 195231: 10635426.. 10700961: 65536: 10635394: shared

6: 195232.. 262143: 10733730.. 10800641: 66912: 10700962: last,shared,eof

/var/lib/postgresql/18/main/base/16402/16404: 7 extents found

root@clone-demo:/var/lib/postgresql#

root@clone-demo:/var/lib/postgresql#

root@clone-demo:/var/lib/postgresql# sudo filefrag -v /var/lib/postgresql/18/main/base/16418/16404

Filesystem type is: 58465342

File size of /var/lib/postgresql/18/main/base/16418/16404 is 1073741824 (262144 blocks of 4096 bytes)

ext: logical_offset: physical_offset: length: expected: flags:

0: 0.. 2031: 10471550.. 10473581: 2032: shared

1: 2032.. 16367: 10474098.. 10488433: 14336: 10473582: shared

2: 16368.. 32751: 10497006.. 10513389: 16384: 10488434: shared

3: 32752.. 65519: 10522066.. 10554833: 32768: 10513390: shared

4: 65520.. 129695: 10571218.. 10635393: 64176: 10554834: shared

5: 129696.. 195231: 10635426.. 10700961: 65536: 10635394: shared

6: 195232.. 262143: 10733730.. 10800641: 66912: 10700962: last,shared,eof

/var/lib/postgresql/18/main/base/16418/16404: 7 extents found

All it takes is to update some rows using

update boring_data set payload = 'new value' || id where id IN (select id from boring_data limit 20);

and the situation will start to change.

root@clone-demo:/var/lib/postgresql# sudo filefrag -v /var/lib/postgresql/18/main/base/16402/16404

Filesystem type is: 58465342

File size of /var/lib/postgresql/18/main/base/16402/16404 is 1073741824 (262144 blocks of 4096 bytes)

ext: logical_offset: physical_offset: length: expected: flags:

0: 0.. 39: 10471550.. 10471589: 40:

1: 40.. 2031: 10471590.. 10473581: 1992: shared

2: 2032.. 16367: 10474098.. 10488433: 14336: 10473582: shared

3: 16368.. 32751: 10497006.. 10513389: 16384: 10488434: shared

4: 32752.. 65519: 10522066.. 10554833: 32768: 10513390: shared

5: 65520.. 129695: 10571218.. 10635393: 64176: 10554834: shared

6: 129696.. 195231: 10635426.. 10700961: 65536: 10635394: shared

7: 195232.. 262143: 10733730.. 10800641: 66912: 10700962: last,shared,eof

/var/lib/postgresql/18/main/base/16402/16404: 7 extents found

root@clone-demo:/var/lib/postgresql# sudo filefrag -v /var/lib/postgresql/18/main/base/16418/16404

Filesystem type is: 58465342

File size of /var/lib/postgresql/18/main/base/16418/16404 is 1073741824 (262144 blocks of 4096 bytes)

ext: logical_offset: physical_offset: length: expected: flags:

0: 0.. 39: 10297326.. 10297365: 40:

1: 40.. 2031: 10471590.. 10473581: 1992: 10297366: shared

2: 2032.. 16367: 10474098.. 10488433: 14336: 10473582: shared

3: 16368.. 32751: 10497006.. 10513389: 16384: 10488434: shared

4: 32752.. 65519: 10522066.. 10554833: 32768: 10513390: shared

5: 65520.. 129695: 10571218.. 10635393: 64176: 10554834: shared

6: 129696.. 195231: 10635426.. 10700961: 65536: 10635394: shared

7: 195232.. 262143: 10733730.. 10800641: 66912: 10700962: last,shared,eof

/var/lib/postgresql/18/main/base/16418/16404: 8 extents found

root@clone-demo:/var/lib/postgresql#

In this case, extent 0 no longer has the shared flag: the first 40 blocks (with the default 4KB block size, 160KB in total) now diverge, and each database has its own copy at a different physical address. The remaining extents are still shared.

Cloning is tempting, but there's one serious limitation you need to be aware of if you ever attempt it in production: the source database can't have any active connections during cloning. This is a PostgreSQL limitation, not a filesystem one. For production use, this usually means you create a dedicated template database rather than cloning your live database directly. Or, given the relatively short time the operation takes, you have to schedule the cloning at times when you can temporarily block/terminate all connections.

The other limitation is that cloning only works within a single filesystem. If your database spans multiple tablespaces on different mount points, cloning will fall back to a regular physical copy.

Finally, in most managed cloud environments (AWS RDS, Google Cloud SQL), you will not have access to the underlying filesystem to configure this. You are stuck with their proprietary (and often billed) functionality. But for your own VMs or bare metal? Go ahead and try it.

...

Read the original on boringsql.com »

6 279 shares, 17 trendiness

karol-broda/snitch: a prettier way to inspect network connections


a friendlier ss / netstat for humans. inspect network connections with a clean tui or styled tables.

go install github.com/karol-broda/snitch@latest

# try it

nix run github:karol-broda/snitch

# install to profile

nix profile install github:karol-broda/snitch

# or add to flake inputs

inputs.snitch.url = "github:karol-broda/snitch";

# then use: inputs.snitch.packages.${system}.default

# with yay

yay -S snitch-bin

# with paru

paru -S snitch-bin

curl -sSL https://raw.githubusercontent.com/karol-broda/snitch/master/install.sh | sh

installs to ~/.local/bin if available, otherwise /usr/local/bin. override with:

curl -sSL https://raw.githubusercontent.com/karol-broda/snitch/master/install.sh | INSTALL_DIR=~/bin sh

macos: the install script automatically removes the quarantine attribute (com.apple.quarantine) from the binary to allow it to run without gatekeeper warnings. to disable this, set KEEP_QUARANTINE=1.

tar xzf snitch_*.tar.gz

sudo mv snitch /usr/local/bin/

macos: if blocked with "cannot be opened because the developer cannot be verified", run:

xattr -d com.apple.quarantine /usr/local/bin/snitch

snitch # launch interactive tui

snitch -l # tui showing only listening sockets

snitch ls # print styled table and exit

snitch ls -l # listening sockets only

snitch ls -t -e # tcp established connections

snitch ls -p # plain output (parsable)

snitch # all connections

snitch -l # listening only

snitch -t # tcp only

snitch -e # established only

snitch -i 2s # 2 second refresh interval

snitch ls # styled table (default)

snitch ls -l # listening only

snitch ls -t -l # tcp listeners

snitch ls -e # established only

snitch ls -p # plain/parsable output

snitch ls -o json # json output

snitch ls -o csv # csv output

snitch ls -n # numeric (no dns resolution)

snitch ls --no-headers # omit headers

snitch json

snitch json -l

snitch watch -i 1s | jq '.count'

snitch watch -l -i 500ms

snitch upgrade # check for updates

snitch upgrade --yes # upgrade automatically

snitch upgrade -v 0.1.7 # install specific version

for more specific filtering, use key=value syntax with ls:

snitch ls proto=tcp state=listen

snitch ls pid=1234

snitch ls proc=nginx

snitch ls lport=443

snitch ls contains=google

[defaults]

numeric = false

theme = "auto"

linux: reads from /proc/net/*, root or CAP_NET_ADMIN for full process info

macos: uses system APIs, may require sudo for full process info


...

Read the original on github.com »

7 263 shares, 21 trendiness

eyeblech/cinecli: CineCLI is a cross-platform command-line movie browser built with Python.


Browse, inspect, and launch movie torrents directly from your terminal.

Fast. Cross-platform. Minimal. Beautiful.

pip install cinecli

cinecli search matrix

cinecli watch 3525

Auto-selects the best option (you can override)

cinecli interactive

xdg-mime query default x-scheme-handler/magnet

Use it. Fork it. Improve it.

STAR the repo if you like it! ⭐


...

Read the original on github.com »

8 261 shares, 11 trendiness

The Garbage Collection Handbook

Richard Jones's Garbage Collection (Wiley, 1996) was a milestone book in the area of automatic memory management. Its widely acclaimed successor, The Garbage Collection Handbook: The Art of Automatic Memory Management, captured the state of the field in 2012. However, technology developments have made memory management more challenging, interesting and important than ever. This second edition updates the handbook, bringing together a wealth of knowledge gathered by automatic memory management researchers and developers over the past sixty years. The authors compare the most important approaches and state-of-the-art techniques in a single, accessible framework.

The book addresses new challenges to garbage collection made by recent advances in hardware and software, and the environments in which programs are executed. It explores the consequences of these changes for designers and implementers of high performance garbage collectors. Along with simple and traditional algorithms, the book covers state-of-the-art parallel, incremental, concurrent and real-time garbage collection. Algorithms and concepts are often described with pseudocode and illustrations.

The nearly universal adoption of garbage collection by modern programming languages makes a thorough understanding of this topic essential for any programmer. This authoritative handbook gives expert insight on how different collectors work as well as the various issues currently facing garbage collectors. Armed with this knowledge, programmers can confidently select and configure the many choices of garbage collectors.

* Provides a complete, up-to-date, and authoritative sequel to the 1996 and 2012 books

* Offers thorough coverage of parallel, concurrent and real-time garbage collection algorithms

* Explains some of the tricky aspects of garbage collection, including the interface to the run-time system

* Over 90 more pages, including new chapters on persistence and energy-aware garbage collection

* Backed by a comprehensive online database of nearly 3,400 garbage collection-related publications

The e-book enhances the print versions with a rich collection of over 37,000 hyperlinks to chapters, sections, algorithms, figures, glossary entries, index items, original research papers and much more.

Chinese and Japanese translations of the first edition were published in 2016. We thank the translators for their work in bringing our book to a wider audience.

The online bibliographic database includes nearly 3,400 garbage collection-related publications. It contains abstracts for some entries and URLs or DOIs for most of the electronically available ones, and is continually being updated. The database can be searched online, or downloaded as BibTeX, PostScript or PDF.

...

Read the original on gchandbook.org »

9 223 shares, 11 trendiness

Archivists Posted the 60 Minutes CECOT Segment Bari Weiss Killed

Archivists have saved and uploaded copies of the 60 Minutes episode that new CBS editor-in-chief Bari Weiss ordered be shelved, posting it as a torrent and to multiple file-sharing sites after an international distributor aired the episode.

The moves show how difficult it may be for CBS to stop the episode, which focused on the experience of Venezuelans deported to the Salvadoran mega-prison CECOT, from spreading across the internet. Bari Weiss stopped the episode from being released Sunday even after it was reviewed and checked multiple times by the news outlet, according to an email CBS correspondent Sharyn Alfonsi sent to her colleagues.

"You may recall earlier this year when the Trump administration deported hundreds of Venezuelan men to El Salvador, a country most had no connection to," the show starts, according to a copy viewed by 404 Media.

...

Read the original on www.404media.co »

10 219 shares, 18 trendiness

A look back at 2025

As 2025 comes to a close, it's once again time to reflect. It's been another packed twelve months, and it's great to look back at everything we achieved, day by day. (Yes, we're patting ourselves on the back. It's our blog, we're allowed to.)

Want to take a walk down memory lane? Here are previous editions: 2024, 2023, 2022, 2021, 2020.

This year, we reached €6.5 million in revenue, a solid 10% year-over-year growth. Not that many companies still have double-digit growth after ten years! Most are either dead, laying off half their teams, acqui-hired, or pivoting to AI-something.

With our continued focus on sustainable operations and disciplined execution, we achieved an EBIT margin of 65%. To put this in perspective: while most SaaS companies celebrate 20-30% margins, and industry leaders hover around 40%, DatoCMS has reached a level of profitability that places us in the top 5% of SaaS companies globally.

For those familiar with SaaS metrics, the "Rule of 40" states that growth rate plus profit margin should exceed 40%. Ours is 75%. We're not bragging (okay, we're bragging a little) but it turns out that not burning through VC cash on ping-pong tables and "growth at all costs" actually works.

With 185 agency partners now fully enrolled in our partner network (!!!), we're genuinely blown away. These are people who build websites for a living, with real deadlines and real clients breathing down their necks. They don't have time for tools that get in the way - and they chose us. We don't take that for granted.

This year, we doubled down on making your work more visible. All that real work for real clients? It adds up - we now have 340 projects in the showcase (63 added this year alone!), enough that we had to revamp the page with proper filters so people can actually find things.

And what projects they are. You've used DatoCMS to power offline wayfinding. You've helped shape the early days of the entire GraphQL community. Heck, one of you even took a day to graffiti the streets of Switzerland about us - which is either peak brand loyalty or a cry for help, we're not sure. Either way, never felt so loved.

If you're an agency and you're not in the partner program yet - come on. We're not collecting logos here. We want to build a real relationship, learn what's slowing you down, and give you the perfect tool to ship quality work fast and painlessly. Half the features we shipped this year came from partner feedback. You're literally shaping the product. That's the whole point. No awkward sales calls, promise, we hate those too.

2025 has been another year of relentless shipping. We didn't just focus on one area - we improved the entire stack, from the way developers write code to how editors manage content, all while hardening security and preparing for the AI era (gosh, we said it, now we need to wash our mouths).

Here is an exhaustive look at everything we shipped this year, grouped by how it helps you:

* Records, finally typed - The biggest DX win of the year. The JavaScript client now supports full end-to-end type safety, generating types directly from your schema for real autocomplete and compile-time safety. No more any types haunting your dreams.

* Reactive Plugins - Plugin settings are now synced in real-time across users, preventing configuration conflicts when multiple people are working on complex setups simultaneously.

* LLM-Ready Documentation - We made our docs AI-friendly with llms-full.txt and a "Copy as Markdown" feature on every page, so you can easily feed context to ChatGPT or Claude. Because let's be honest, that's how half of you read documentation now anyway.

* MCP Server - We released a Model Context Protocol (MCP) server that enables AI assistants to interact directly with your DatoCMS projects. It works. Sometimes. We wrote a whole blog post about the "sometimes" part.

* AI Translations - Bulk-translate entire records with OpenAI, Claude, Gemini, or DeepL. Finally, a reason to stop copy-pasting into Google Translate.

* Structured Text to Markdown - A new package that turns Structured Text fields back into clean, CommonMark-compatible Markdown: perfect for LLM pipelines or migration scripts.

* Inline Blocks in Structured Text - One of our most requested features! You can now insert blocks directly inside Structured Text fields - perfect for inline links, mentions, or notes - unlocking infinite nesting possibilities.

* Tabular View for Trees - Hierarchical models got a massive upgrade with a new Tabular View, bringing custom columns, pagination, and sorting to tree structures.

* Favorite Locales - Editors can now pin their most-used languages to the top of the UI, hiding the noise of unused locales in massive multi-language projects. Finally, some peace for the people managing 40+ locales.

* Enhanced Previews - We introduced inline previews for blocks and link fields, letting you see colors, dates, and images directly in the list view without clicking through.

* Single Block Presentation - You can now use a Single Block field as a model's presentation title or image, perfect for models where the main info is nested inside a block.

* Improved Link Field Filtering - Link fields now correctly filter records by the current locale, eliminating confusion when referencing localized content.

* Fixed Headers - We unified the UI with fixed headers across all sections, ensuring that save and publish buttons are always within reach. A small change that sounds boring until you realize how much scrolling it saves.

* New CLI cma:call command - You can now call any API method directly from the terminal without writing custom scripts, thanks to dynamic discovery of API resources.

* Filter uploads by path - We added a new path filter to the GraphQL API, allowing you to query assets based on their storage path with inclusion, exclusion, and exact matching.

* Increased GraphQL Pagination - We bumped the maximum number of items you can fetch in a single GraphQL query from 100 to 500, reducing the number of requests needed for large datasets. Five times more stuff in one go - you're welcome.

* Site Search Decoupled - Site Search is now an independent entity, separate from Build Triggers. You can control indexing explicitly and access detailed crawler logs to debug robots.txt and sitemap issues.

* Enhanced Build Triggers Activity - We enhanced the Activity view to show events beyond the 30-item limit, with better filtering and detailed logs for every operation.

* Access to CDA Playground with Limited Permissions - Developers can now use the GraphQL Playground without needing full API token management permissions, safer for contractors and temporary access.

* All API Tokens are Deletable - For better security hygiene, you can now delete any API token, including the default read-only ones generated by the system.

* API Token Last Used Time - You can now see when each API token was last used directly in Project Settings, making it easy to identify stale tokens and clean up ones that haven't been active in months. Or years. We don't judge.

* No Default Full-Access Token - New projects no longer come with a full-access API token by default, encouraging the principle of least privilege from day one.

* Improved Roles & Permissions - We revamped the roles interface to clearly show inherited permissions and human-readable summaries of what a user can actually do.

* DatoCMS Recipes & Import/Export - We launched a marketplace of reusable project "recipes" - pre-built models and blocks you can install into any project to save setup time, powered by the new Schema Import/Export plugin.

* Dedicated SEO Fallback Options - We decoupled SEO metadata from internal preview fields, allowing you to set specific fallbacks for SEO titles and images without affecting the CMS UI.

* Force Validations on Publishing - You can now prevent the publishing of records that don't meet current validation rules - crucial when you've tightened schema requirements on existing content.

* Save Invalid Drafts - Conversely, you can now save drafts even if they are invalid, allowing editors to save their work-in-progress without being blocked by strict validation rules until they are ready to publish. Because sometimes "half-done" is better than "lost."

* Draft Mode by Default - To encourage better editorial workflows, "Draft/Published" mode is now the default setting for all new models.

* Smart Confirmation Guardrails - Destructive actions now calculate their impact before execution. If you're about to delete something used in 10+ records, we force a typed confirmation to prevent accidents. We've all been there. This is us protecting you from yourself.

…and we also cleaned up some tech debt by sunsetting legacy batch endpoints and removing unused CI triggers, keeping the platform lean and fast.

30 new public plugins landed in the marketplace this year - plus countless private ones we'll never see. The community (and our support team!) keeps surprising us with stuff we didn't even know we needed.

This year, DatoCMS handled an average of 3.5B API calls/month (+80%), while serving 500TB of traffic/month and 4.5M optimized video views/month. At the same time, we executed the most ambitious engineering project in our history: a complete migration from Heroku to a custom Kubernetes cluster on AWS.

For almost ten years, managed hosting served us well - but by mid-2024, we had hit a ceiling. Costs were rising while our need for granular control grew. We realized we were paying a premium for convenience we no longer needed. It was time to build our own home.

The journey began back in October 2024, kicking off a nine-month marathon. We spent the winter prototyping (experimenting with everything from bare metal to alternative PaaS providers - some of which shall remain unnamed to protect the guilty), the spring architecting, and the early summer stress-testing.

After months of planning, we flipped the switch on Saturday, June 7th. We prepared for a battle, but we mostly ended up watching dashboards. Aside from a tiny detail that cost us exactly 1 minute of downtime, the transition was flawless. By the time we turned the writes back on, every byte of data had been successfully secured in AWS.

The results were immediate and startling:

* Speed: Response times for the Content Delivery API (CDA) were halved instantly.

* Efficiency: We are now running on 64GB RAM database instances on AWS that handle traffic better than the 256GB instances we used on Heroku. Yes, you read that right. Four times less RAM, better performance.

It was a massive bet, but looking at the metrics today, it is undeniably one of the best wins of our year.

We didn't just move servers and DBs; while moving our core applications to AWS EKS was the main event, we executed a total overhaul of the ecosystem surrounding it:

* Infrastructure as Code: We codified our entire environment using Terraform, giving us a reproducible, version-controlled blueprint of our infrastructure that eliminates manual configuration drift.

* CDN Caching: We switched from Fastly to Cloudflare for our CDN cache, implementing smarter caching rules that improved our hit ratio from 85% to 97%.

* Storage: We migrated from AWS S3 to Cloudflare R2, eliminating massive egress fees and optimizing asset delivery. Goodbye, AWS data transfer bills. We won't miss you.

* Observability: We ditched expensive CloudWatch logs for a custom Prometheus & Loki stack, slashing our monitoring bills to near zero while improving data quality.

* Developer Experience: To tame Kubernetes complexity, we built cubo, a custom kubectl wrapper tailored around our needs that handles everything from generating K8S manifests and orchestrating rollouts to managing cronjobs, real-time logs, and one-off commands, preserving the "git push" and CLI simplicity we loved on Heroku.

The Bottom Line: We lowered overall infrastructure costs by over 25%, reduced Content Delivery API latency by 50%, expanded Realtime API capacity by 10×, and gained full control across every infrastructure layer. And we kept our sanity. Mostly.

While liberating ourselves from managed hosting, we made another quiet move: we fully internalized our accounting. For years, we outsourced this to external firms - the typical setup where you hand over receipts and hope for the best. But as we grew, flying blind between quarterly reports became untenable.

Now we run everything in-house with full visibility into our finances at any moment. No more waiting for external accountants to reconcile things. Same philosophy as the infrastructure migration: control beats convenience when you're building for the long term.

This year marked our 10th anniversary - a decade of surviving frontend trends, CMS wars, and the occasional existential crisis about whether "headless" is still a cool term. To celebrate, we flew our entire team to the Tuscan countryside to eat, drink, and ride quad bikes. You can read the full story of our trip (and our 25% "Matteo concentration rate") here: Dato Turns 10.

Despite our growth in revenue and traffic, we remain a team of just 13 people. This isn't an accident - it's a deliberate choice.

As we wrote in "How can you be eight people?" (well, now thirteen), building a massive organization is optional. We choose to ignore the pressure to maximize headcount or chase VC funding. Instead, we focus on what actually matters: a solid product, a healthy work-life balance, and staying profitable on our own terms. We don't mind "leaving a little water in the cloth" if it means we get to keep building the software we love, the way we want to build it.

No idea. And honestly, we like it that way.

We're not going to pretend we have a five-year vision carved in stone or a slide deck about "the future of content." We'll keep shipping what matters, keep ignoring the hype cycles, and keep cashing checks instead of burning through runway.

That said… we may have a few things cooking that we're genuinely excited about. But we're not going to jinx it by overpromising - you'll see them when they ship.

Well, see you in 2026. We'll still be here. Probably still 13 people. Definitely still not taking ourselves too seriously. 🧡

...

Read the original on www.datocms.com »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.