10 interesting stories served every morning and every evening.
Sharyn Alfonsi’s “Inside CECOT” for 60 Minutes, which was censored by Bari Weiss, as it appeared on Canada’s Global TV app.
...
Read the original on archive.org »
On Monday, the US Department of the Interior announced that it was pausing the leases on all five offshore wind sites currently under construction in the US. The move comes despite the fact that these projects have already installed significant hardware in the water and on land; one of them is nearly complete. In what appears to be an attempt to avoid legal scrutiny, the Interior is blaming the decisions on a classified report from the Department of Defense.
The second Trump administration announced its animosity toward offshore wind power literally on day one, signing an executive order on inauguration day that called for a temporary halt to issuing permits for new projects pending a re-evaluation. Earlier this month, however, a judge vacated that executive order, noting that the government had shown no indication that it was even attempting to start the re-evaluation it said was needed.
But a number of projects have gone through the entire permitting process, and construction has started. Before today, the administration had attempted to stop these in an erratic, halting manner. Empire Wind, an 800 MW farm being built off New York, was stopped by the Department of the Interior, which alleged that it had been rushed through permitting. That hold was lifted following lobbying and negotiations by New York and the project's developer, Equinor, and the Department of the Interior never revealed why it changed its mind. When the Interior Department blocked an Orsted project, Revolution Wind, off southern New England, the company took the government to court and won a ruling that let it continue construction.
...
Read the original on arstechnica.com »
Try it at Z.ai. GLM-4.7, your new coding partner, comes with the following features:

* Core Coding: GLM-4.7 brings clear gains over its predecessor GLM-4.6 in multilingual agentic coding and terminal-based tasks, including 73.8% (+5.8) on SWE-bench, 66.7% (+12.9) on SWE-bench Multilingual, and 41% (+16.5) on Terminal Bench 2.0. GLM-4.7 also supports thinking before acting, with significant improvements on complex tasks in mainstream agent frameworks such as Claude Code, Kilo Code, Cline, and Roo Code.
* Vibe Coding: GLM-4.7 takes a major step forward in UI quality. It produces cleaner, more modern webpages and generates better-looking slides with more accurate layout and sizing.
* Tool Use: GLM-4.7 achieves significant improvements in tool use, with notably better performance on benchmarks such as τ²-Bench and on web browsing via BrowseComp.
* Complex Reasoning: GLM-4.7 delivers a substantial boost in mathematical and reasoning capability, achieving 42.8% (+12.4) on the HLE (Humanity's Last Exam) benchmark compared to GLM-4.6.

You can also see significant improvements in many other scenarios such as chat, creative writing, and role-play.

Benchmark performance: more detailed comparisons of GLM-4.7 with other models (GPT-5, GPT-5.1-High, Claude Sonnet 4.5, Gemini 3.0 Pro, DeepSeek-V3.2, Kimi K2 Thinking) across 17 benchmarks (8 reasoning, 5 coding, and 3 agentic) can be seen in the table below.

Coding: AGI is a long journey, and benchmarks are only one way to evaluate performance. While the metrics provide necessary checkpoints, the most important thing is still how it *feels*. True intelligence isn't just about acing a test or processing data faster; ultimately, the success of AGI will be measured by how seamlessly it integrates into our lives - "coding" this time.

Example prompts (full trajectories at Z.ai):

* Design a richly crafted voxel-art environment featuring an ornate pagoda set within a vibrant garden. Include diverse vegetation - especially cherry blossom trees - and ensure the composition feels lively, colorful, and visually striking. Use any voxel or WebGL libraries you prefer, but deliver the entire project as a single, self-contained HTML file that can be pasted and opened directly in Chrome.
* Design a poster introducing Paris, with a romantic and fashionable aesthetic. The overall style should feel elegant, visually refined, and design-driven.

GLM-4.7 enhances Interleaved Thinking, a feature introduced in GLM-4.5, and further introduces Preserved Thinking and Turn-level Thinking. By thinking between actions and staying consistent across turns, it makes complex tasks more stable and more controllable:

* Interleaved Thinking: GLM-4.7 thinks before every response and tool call, improving instruction following and generation quality.
* Preserved Thinking: in coding-agent scenarios, GLM-4.7 automatically retains all thinking blocks across multi-turn conversations, reusing existing reasoning instead of re-deriving it from scratch. This reduces information loss and inconsistencies, and is well suited to long-horizon, complex tasks.
* Turn-level Thinking: GLM-4.7 supports per-turn control over reasoning within a session - disable thinking for lightweight requests to reduce latency and cost, or enable it for complex tasks to improve accuracy and stability.

The Z.ai API platform offers the GLM-4.7 model. For comprehensive API documentation and integration guidelines, please refer to https://docs.z.ai/guides/llm/glm-4.7. The model is also available worldwide through OpenRouter (https://openrouter.ai/).

GLM-4.7 is now available within coding agents (Claude Code, Kilo Code, Roo Code, Cline, and more). For GLM Coding Plan subscribers: you'll be automatically upgraded to GLM-4.7. If you've previously customized the app configs (like ~/.claude/settings.json in Claude Code), simply update the model name to "glm-4.7" to complete the upgrade. For new users: subscribing to the GLM Coding Plan gives you access to a Claude-level coding model at a fraction of the cost - just 1/7th the price with 3x the usage quota. Start building today: https://z.ai/subscribe.

GLM-4.7 is also accessible through Z.ai; if the system does not switch automatically, change the model option to GLM-4.7 yourself (not very AGI-like in that case :)). Model weights for GLM-4.7 are publicly available on HuggingFace and ModelScope. For local deployment, GLM-4.7 supports inference frameworks including vLLM and SGLang. Comprehensive deployment instructions are available in the official GitHub repository.

Benchmark settings - 1: default settings (most tasks): temperature 1.0, top-p 0.95, max new tokens 131072; for multi-turn agentic tasks (τ²-Bench and Terminal Bench 2), enable Preserved Thinking mode. 3: τ²-Bench settings: temperature 0, max new tokens 16384. For τ²-Bench, we added an extra prompt in the Retail and Telecom interactions to avoid failures caused by users ending the interaction incorrectly; for the Airline domain, we applied the domain fixes proposed in the Claude Opus 4.5 release report.
...
The lotusbail npm package presents itself as a WhatsApp Web API library - a fork of the legitimate @whiskeysockets/baileys package. With over 56,000 downloads and functional code that actually works as advertised, it’s the kind of dependency developers install without a second thought. The package has been available on npm for 6 months and is still live at the time of writing.
Behind that working functionality: sophisticated malware that steals your WhatsApp credentials, intercepts every message, harvests your contacts, installs a persistent backdoor, and encrypts everything before sending it to the threat actor’s server.
Most malicious npm packages reveal themselves quickly - they’re typosquats, they don’t work, or they’re obviously sketchy. This one actually functions as a WhatsApp API. It’s based on the legitimate Baileys library and provides real, working functionality for sending and receiving WhatsApp messages.
Obvious malware is easy to spot. Functional malware? That gets installed, tested, approved, and deployed to production.
The social engineering here is brilliant: developers don’t look for malware in code that works. They look for code that breaks.
The package wraps the legitimate WebSocket client that communicates with WhatsApp. Every message that flows through your application passes through the malware’s socket wrapper first.
When you authenticate, the wrapper captures your credentials. When messages arrive, it intercepts them. When you send messages, it records them. The legitimate functionality continues working normally - the malware just adds a second recipient for everything.
All your WhatsApp authentication tokens, every message sent or received, complete contact lists, media files - everything that passes through the API gets duplicated and prepared for exfiltration.
But the stolen data doesn’t get sent in plain text. The malware includes a complete, custom RSA implementation for encrypting the data before transmission:
Why implement custom RSA? Because legitimate WhatsApp libraries don’t need custom encryption - WhatsApp already handles end-to-end encryption. The custom crypto exists for one reason: to encrypt stolen data before exfiltration so network monitoring won’t catch it.
The exfiltration server URL is buried in encrypted configuration strings, hidden inside compressed payloads. The malware uses four layers of obfuscation: Unicode variable manipulation, LZString compression, Base-91 encoding, and AES encryption. The server location isn’t hardcoded anywhere visible.
Here’s where it gets particularly nasty. WhatsApp uses pairing codes to link new devices to accounts. You request a code, WhatsApp generates a random 8-character string, you enter it on your new device, and the devices link together.
The malware hijacks this process with a hardcoded pairing code. The code is encrypted with AES and hidden in the package:
This means the threat actor has a key to your WhatsApp account. When you use this library to authenticate, you’re not just linking your application - you’re also linking the threat actor’s device. They have complete, persistent access to your WhatsApp account, and you have no idea they’re there.
The threat actor can read all your messages, send messages as you, download your media, access your contacts - full account control. And here's the critical part: uninstalling the npm package removes the malicious code, but the threat actor's device stays linked to your WhatsApp account. The pairing persists in WhatsApp's systems until you manually unlink all devices from your WhatsApp settings. Even after the package is gone, they still have access.
The package includes 27 infinite loop traps that freeze execution if debugging tools are detected:
These traps check for debuggers, inspect process arguments, detect sandbox environments, and generally make dynamic analysis painful. They also left helpful comments in their code marking the malicious sections - professional development practices applied to supply chain attacks. Someone probably has a Jira board for this.
Supply chain attacks aren’t slowing down - they’re getting better. We’re seeing working code with sophisticated anti-debugging, custom encryption, and multi-layer obfuscation that survives marketplace reviews. The lotusbail case isn’t an outlier. It’s a preview.
Traditional security doesn’t catch this. Static analysis sees working WhatsApp code and approves it. Reputation systems see 56,000 downloads and trust it. The malware hides in the gap between “this code works” and “this code only does what it claims.”
Catching sophisticated supply chain attacks requires behavioral analysis - watching what packages actually do at runtime. When a WhatsApp library implements custom RSA encryption and includes 27 anti-debugging traps, those are signals. But you need systems watching for them.
This writeup was authored by the research team at Koi Security. We built Koi to detect threats that pass traditional checks but exhibit malicious behavior at runtime.
Book a demo to see how behavioral analysis catches what static review misses.
...
Read the original on www.koi.ai »
Have you ever watched a long-running migration script, wondering if it's about to wreck your data? Or wished you could "just" spin up a fresh copy of the database for each test run? Or wanted reproducible snapshots to reset between runs of your test suite (and yes, because you are reading boringSQL, to reset the learning environment)?
When your database is a few megabytes, pg_dump and restore work fine. But what happens when you're dealing with hundreds of megabytes, gigabytes - or more? Suddenly "just make a copy" becomes a burden.
You've probably noticed that PostgreSQL connects to template1 by default. What you might have missed is that there's a whole templating system hiding in plain sight. Every time you create a new database, PostgreSQL quietly clones the standard system database template1 behind the scenes, exactly as if you had named it as the template explicitly. The real power comes from the fact that you can replace template1 with any database. You can find more in the Template Database documentation.
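As a minimal illustration (the database name is hypothetical), these two statements are equivalent:

-- this...
CREATE DATABASE myapp_db;
-- ...is shorthand for this:
CREATE DATABASE myapp_db TEMPLATE template1;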
In this article, we will cover a few tweaks that turn this templating system into an instant, zero-copy database cloning machine.
Before PostgreSQL 15, creating a new database from a template operated strictly at the file level. This was effective, but to make it reliable, Postgres had to flush all pending operations to disk (a CHECKPOINT) before taking a consistent snapshot. This created a massive I/O spike - a "checkpoint storm" - that could stall your production traffic.
PostgreSQL 15 introduced a new CREATE DATABASE … STRATEGY = [strategy] parameter and, at the same time, changed the default behaviour for how new databases are created from templates. The new default became WAL_LOG, which copies block by block via the Write-Ahead Log (WAL), making the I/O sequential (and much smoother) and supporting concurrency without latency spikes. This removed the need for a CHECKPOINT, but it made cloning a database potentially much slower. For an empty template1, you won't notice the difference. But if you try to clone a 500GB database using WAL_LOG, you are going to be waiting a long time.
The STRATEGY parameter allows us to switch back to the original FILE_COPY method to keep the old behaviour - and its speed. And since PostgreSQL 18, this opens up a whole new set of options.
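A quick sketch of the syntax, with hypothetical database names:

-- old file-level behaviour (and its speed)
CREATE DATABASE fast_clone TEMPLATE my_template STRATEGY = FILE_COPY;
-- the PostgreSQL 15+ default: block-by-block via the WAL
CREATE DATABASE safe_clone TEMPLATE my_template STRATEGY = WAL_LOG;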
Because the FILE_COPY strategy is a proxy to operating system file operations, we can change how the OS handles those files.
When using a standard filesystem (like ext4), PostgreSQL reads every byte of the source file and writes it to a new location - a physical copy. Starting with PostgreSQL 18, however, the file_copy_method setting lets you change that logic; the default remains copy. With modern filesystems (ZFS, XFS with reflinks, APFS, etc.) you can switch it to clone and leverage the filesystem's clone operation (FICLONE on Linux) for a nearly instant copy that takes no additional space.
All you have to do is:
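A minimal sketch - if the parameter isn't changeable at the session level in your build, set it in postgresql.conf instead:

-- PostgreSQL 18+: let FILE_COPY use filesystem cloning instead of a byte-by-byte copy
SET file_copy_method = clone;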
To follow along, you will need Linux with XFS or ZFS (we will use XFS for the demonstration) or a similar setup: macOS with APFS is also fully supported, as is FreeBSD with ZFS (which would normally be my choice, but I haven't had time to test it so far).
We need some dummy data to copy. This is the only part of the tutorial where you have to wait. Let’s generate a ~6GB database.
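The original script isn't reproduced here; a sketch along these lines would do (the database name is hypothetical, the table name matches the one used later, and the row count and payload size are arbitrary - tune them until you reach a few GB):

CREATE DATABASE boring_source;
\c boring_source
CREATE TABLE boring_data (id bigint PRIMARY KEY, payload text);
-- roughly 10M rows of ~640-byte payloads lands in the ~6GB range
INSERT INTO boring_data (id, payload)
SELECT i, repeat(md5(i::text), 20)
FROM generate_series(1, 10000000) AS i;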
You can verify the database now has roughly 6GB of data.
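For example (name as above):

SELECT pg_size_pretty(pg_database_size('boring_source'));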
With \timing enabled, you can test the default (WAL_LOG) strategy; on my test volume (relatively slow storage) the block-by-block copy takes a while.
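The timing numbers aren't reproduced here; the command being timed would look something like this (clone name hypothetical, run from a different database since the template must have no active connections):

\c postgres
\timing on
CREATE DATABASE boring_wal_clone TEMPLATE boring_source STRATEGY = WAL_LOG;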
Now, let’s verify our configuration is set for speed:
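Presumably something along the lines of:

SHOW file_copy_method;   -- should report: clone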
Now let's request a semi-instant clone of the same database - one that takes no extra disk space.
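With file_copy_method = clone in place, the same kind of clone via FILE_COPY should return almost immediately (names hypothetical, still run from a different database):

CREATE DATABASE boring_clone TEMPLATE boring_source STRATEGY = FILE_COPY;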
That's quite an improvement, isn't it?
That was the simple part. But what is happening behind the scenes?
When you clone a database with file_copy_method = clone, PostgreSQL doesn’t duplicate any data. The filesystem creates new metadata entries that point to the same physical blocks. Both databases share identical storage.
This can create some initial confusion. If you ask PostgreSQL for the size:
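For instance (names as before):

SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database
WHERE datname IN ('boring_source', 'boring_clone');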
PostgreSQL reports both as ~6GB because that's the logical size - how much data each database "contains" - not the physical space actually used on disk.
The interesting part happens when you start writing. PostgreSQL doesn’t update tuples in place. When you UPDATE a row, it writes a new tuple version somewhere (often a different page entirely) and marks the old one as dead. The filesystem doesn’t care about PostgreSQL internals - it just sees writes to 8KB pages. Any write to a shared page triggers a copy of that entire page.
A single UPDATE will therefore trigger copy-on-write on multiple pages:
the page holding the old tuple
the page receiving the new tuple
And later, VACUUM touches even more pages while cleaning up dead tuples, so the clone quickly diverges from the shared storage.
Using the database OID and relfilenode, we can verify that both databases are now sharing physical blocks.
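A sketch of how to find the files on disk - the database OIDs (16402 and 16418) and the relfilenode (16404) correspond to the paths in the filefrag output below:

SELECT oid, datname FROM pg_database WHERE datname IN ('boring_source', 'boring_clone');
\c boring_source
SELECT pg_relation_filepath('boring_data');   -- e.g. base/16402/16404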
root@clone-demo:/var/lib/postgresql# sudo filefrag -v /var/lib/postgresql/18/main/base/16402/16404
Filesystem type is: 58465342
File size of /var/lib/postgresql/18/main/base/16402/16404 is 1073741824 (262144 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 2031: 10471550.. 10473581: 2032: shared
1: 2032.. 16367: 10474098.. 10488433: 14336: 10473582: shared
2: 16368.. 32751: 10497006.. 10513389: 16384: 10488434: shared
3: 32752.. 65519: 10522066.. 10554833: 32768: 10513390: shared
4: 65520.. 129695: 10571218.. 10635393: 64176: 10554834: shared
5: 129696.. 195231: 10635426.. 10700961: 65536: 10635394: shared
6: 195232.. 262143: 10733730.. 10800641: 66912: 10700962: last,shared,eof
/var/lib/postgresql/18/main/base/16402/16404: 7 extents found
root@clone-demo:/var/lib/postgresql# sudo filefrag -v /var/lib/postgresql/18/main/base/16418/16404
Filesystem type is: 58465342
File size of /var/lib/postgresql/18/main/base/16418/16404 is 1073741824 (262144 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 2031: 10471550.. 10473581: 2032: shared
1: 2032.. 16367: 10474098.. 10488433: 14336: 10473582: shared
2: 16368.. 32751: 10497006.. 10513389: 16384: 10488434: shared
3: 32752.. 65519: 10522066.. 10554833: 32768: 10513390: shared
4: 65520.. 129695: 10571218.. 10635393: 64176: 10554834: shared
5: 129696.. 195231: 10635426.. 10700961: 65536: 10635394: shared
6: 195232.. 262143: 10733730.. 10800641: 66912: 10700962: last,shared,eof
/var/lib/postgresql/18/main/base/16418/16404: 7 extents found
All it takes is to update some rows using
update boring_data set payload = 'new value' || id where id IN (select id from boring_data limit 20);
and the situation will start to change.
root@clone-demo:/var/lib/postgresql# sudo filefrag -v /var/lib/postgresql/18/main/base/16402/16404
Filesystem type is: 58465342
File size of /var/lib/postgresql/18/main/base/16402/16404 is 1073741824 (262144 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 39: 10471550.. 10471589: 40:
1: 40.. 2031: 10471590.. 10473581: 1992: shared
2: 2032.. 16367: 10474098.. 10488433: 14336: 10473582: shared
3: 16368.. 32751: 10497006.. 10513389: 16384: 10488434: shared
4: 32752.. 65519: 10522066.. 10554833: 32768: 10513390: shared
5: 65520.. 129695: 10571218.. 10635393: 64176: 10554834: shared
6: 129696.. 195231: 10635426.. 10700961: 65536: 10635394: shared
7: 195232.. 262143: 10733730.. 10800641: 66912: 10700962: last,shared,eof
/var/lib/postgresql/18/main/base/16402/16404: 7 extents found
root@clone-demo:/var/lib/postgresql# sudo filefrag -v /var/lib/postgresql/18/main/base/16418/16404
Filesystem type is: 58465342
File size of /var/lib/postgresql/18/main/base/16418/16404 is 1073741824 (262144 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 39: 10297326.. 10297365: 40:
1: 40.. 2031: 10471590.. 10473581: 1992: 10297366: shared
2: 2032.. 16367: 10474098.. 10488433: 14336: 10473582: shared
3: 16368.. 32751: 10497006.. 10513389: 16384: 10488434: shared
4: 32752.. 65519: 10522066.. 10554833: 32768: 10513390: shared
5: 65520.. 129695: 10571218.. 10635393: 64176: 10554834: shared
6: 129696.. 195231: 10635426.. 10700961: 65536: 10635394: shared
7: 195232.. 262143: 10733730.. 10800641: 66912: 10700962: last,shared,eof
/var/lib/postgresql/18/main/base/16418/16404: 8 extents found
root@clone-demo:/var/lib/postgresql#
In this case extent 0 no longer has the shared flag: the first 40 blocks (4KB each by default) now diverge, for a total of 160KB. Each database now has its own copy at a different physical address. The remaining extents are still shared.
Cloning is tempting, but there's one serious limitation you need to be aware of if you ever attempt it in production: the source database can't have any active connections during cloning. This is a PostgreSQL limitation, not a filesystem one. For production use, this usually means you create a dedicated template database rather than cloning your live database directly. Or, given how quickly the operation completes, you can schedule the cloning for a window where you can temporarily block or terminate all connections.
Another limitation is that cloning only works within a single filesystem. If your database spans multiple tablespaces on different mount points, cloning will fall back to a regular physical copy.
Finally, in most managed cloud environments (AWS RDS, Google Cloud SQL), you will not have access to the underlying filesystem to configure this. You are stuck with their proprietary (and often billed) functionality. But for your own VMs or bare metal? Go ahead and try it.
...
Read the original on boringsql.com »
a friendlier ss / netstat for humans. inspect network connections with a clean tui or styled tables.
go install github.com/karol-broda/snitch@latest
# try it
nix run github:karol-broda/snitch
# install to profile
nix profile install github:karol-broda/snitch
# or add to flake inputs
inputs.snitch.url = "github:karol-broda/snitch";
# then use: inputs.snitch.packages.${system}.default
# with yay
yay -S snitch-bin
# with paru
paru -S snitch-bin
curl -sSL https://raw.githubusercontent.com/karol-broda/snitch/master/install.sh | sh
installs to ~/.local/bin if available, otherwise /usr/local/bin. override with:
curl -sSL https://raw.githubusercontent.com/karol-broda/snitch/master/install.sh | INSTALL_DIR=~/bin sh
macos: the install script automatically removes the quarantine attribute (com.apple.quarantine) from the binary to allow it to run without gatekeeper warnings. to disable this, set KEEP_QUARANTINE=1.
tar xzf snitch_*.tar.gz
sudo mv snitch /usr/local/bin/
macos: if blocked with “cannot be opened because the developer cannot be verified”, run:
xattr -d com.apple.quarantine /usr/local/bin/snitch
snitch # launch interactive tui
snitch -l # tui showing only listening sockets
snitch ls # print styled table and exit
snitch ls -l # listening sockets only
snitch ls -t -e # tcp established connections
snitch ls -p # plain output (parsable)
snitch # all connections
snitch -l # listening only
snitch -t # tcp only
snitch -e # established only
snitch -i 2s # 2 second refresh interval
snitch ls # styled table (default)
snitch ls -l # listening only
snitch ls -t -l # tcp listeners
snitch ls -e # established only
snitch ls -p # plain/parsable output
snitch ls -o json # json output
snitch ls -o csv # csv output
snitch ls -n # numeric (no dns resolution)
snitch ls --no-headers # omit headers
snitch json
snitch json -l
snitch watch -i 1s | jq '.count'
snitch watch -l -i 500ms
snitch upgrade # check for updates
snitch upgrade --yes # upgrade automatically
snitch upgrade -v 0.1.7 # install specific version
for more specific filtering, use key=value syntax with ls:
snitch ls proto=tcp state=listen
snitch ls pid=1234
snitch ls proc=nginx
snitch ls lport=443
snitch ls contains=google
[defaults]
numeric = false
theme = "auto"
linux: reads from /proc/net/*, root or CAP_NET_ADMIN for full process info
macos: uses system APIs, may require sudo for full process info
...
Read the original on github.com »
Browse, inspect, and launch movie torrents directly from your terminal.
Fast. Cross-platform. Minimal. Beautiful.
pip install cinecli
cinecli search matrix
cinecli watch 3525
Auto-selects the best option (you can override)
cinecli interactive
xdg-mime query default x-scheme-handler/magnet
Use it. Fork it. Improve it.
STAR the repo if you like it! ⭐
...
Read the original on github.com »
Richard Jones's Garbage Collection (Wiley, 1996) was a milestone book in the area of automatic memory management. Its widely acclaimed successor, The Garbage Collection Handbook: The Art of Automatic Memory Management, captured the state of the field in 2012. However, technology developments have made memory management more challenging, interesting and important than ever. This second edition updates the handbook, bringing together a wealth of knowledge gathered by automatic memory management researchers and developers over the past sixty years. The authors compare the most important approaches and state-of-the-art techniques in a single, accessible framework.
The book addresses new challenges to garbage collection made by recent advances in hardware and software, and the environments in which programs are executed. It explores the consequences of these changes for designers and implementers of high performance garbage collectors. Along with simple and traditional algorithms, the book covers state-of-the-art parallel, incremental, concurrent and real-time garbage collection. Algorithms and concepts are often described with pseudocode and illustrations.
The nearly universal adoption of garbage collection by modern programming languages makes a thorough understanding of this topic essential for any programmer. This authoritative handbook gives expert insight on how different collectors work as well as the various issues currently facing garbage collectors. Armed with this knowledge, programmers can confidently select and configure the many choices of garbage collectors.
* Provides a complete, up-to-date, and authoritative sequel to the 1996 and 2012 books
* Offers thorough coverage of parallel, concurrent and real-time garbage collection algorithms
* Explains some of the tricky aspects of garbage collection, including the interface to the run-time system
* Over 90 more pages, including new chapters on persistence and energy-aware garbage collection
* Backed by a comprehensive online database of nearly 3,400 garbage collection-related publications
The e-book enhances the print versions with a rich collection of over 37,000 hyperlinks to chapters, sections, algorithms, figures, glossary entries, index items, original research papers and much more.
Chinese and Japanese translations of the first edition were published in 2016. We thank the translators for their work in bringing our book to a wider audience.
The online bibliographic database includes nearly 3,400 garbage collection-related publications. It contains abstracts for some entries and URLs or DOIs for most of the electronically available ones, and is continually being updated. The database can be searched online, or downloaded as BibTeX, PostScript or PDF.
...
Read the original on gchandbook.org »
Archivists have saved copies of the 60 Minutes episode that new CBS editor-in-chief Bari Weiss ordered shelved, uploading them as a torrent and to multiple file-sharing sites after an international distributor aired the episode.
The moves show how difficult it may be for CBS to stop the episode, which focused on the experience of Venezuelans deported to the Salvadoran mega-prison CECOT, from spreading across the internet. Weiss stopped the episode from airing Sunday even after it had been reviewed and checked multiple times by the news outlet, according to an email CBS correspondent Sharyn Alfonsi sent to her colleagues.
“You may recall earlier this year when the Trump administration deported hundreds of Venezuelan men to El Salvador, a country most had no connection to,” the show starts, according to a copy viewed by 404 Media.
...
Read the original on www.404media.co »
As 2025 comes to a close, it’s once again time to reflect. It’s been another packed twelve months, and it’s great to look back at everything we achieved, day by day. (Yes, we’re patting ourselves on the back. It’s our blog, we’re allowed to.)
Want to take a walk down memory lane? Here are previous editions: 2024, 2023, 2022, 2021, 2020.
This year, we reached €6.5 million in revenue, a solid 10% year-over-year growth. Not that many companies still have double-digit growth after ten years! Most are either dead, laying off half their teams, acqui-hired, or pivoting to AI-something.
With our continued focus on sustainable operations and disciplined execution, we achieved an EBIT margin of 65%. To put this in perspective: while most SaaS companies celebrate 20-30% margins, and industry leaders hover around 40%, DatoCMS has reached a level of profitability that places us in the top 5% of SaaS companies globally.
For those familiar with SaaS metrics, the “Rule of 40” states that growth rate plus profit margin should exceed 40%. Ours is 75%. We’re not bragging (okay, we’re bragging a little) but it turns out that not burning through VC cash on ping-pong tables and “growth at all costs” actually works.
With 185 agency partners now fully enrolled in our partner network (!!!), we’re genuinely blown away. These are people who build websites for a living, with real deadlines and real clients breathing down their necks. They don’t have time for tools that get in the way — and they chose us. We don’t take that for granted.
This year, we doubled down on making your work more visible. All that real work for real clients? It adds up — we now have 340 projects in the showcase (63 added this year alone!), enough that we had to revamp the page with proper filters so people can actually find things.
And what projects they are. You’ve used DatoCMS to power offline wayfinding. You’ve helped shape the early days of the entire GraphQL community. Heck, one of you even took a day to graffiti the streets of Switzerland about us — which is either peak brand loyalty or a cry for help, we’re not sure. Either way, never felt so loved.
If you’re an agency and you’re not in the partner program yet — come on. We’re not collecting logos here. We want to build a real relationship, learn what’s slowing you down, and give you the perfect tool to ship quality work fast and painlessly. Half the features we shipped this year came from partner feedback. You’re literally shaping the product. That’s the whole point. No awkward sales calls, promise, we hate those too.
2025 has been another year of relentless shipping. We didn’t just focus on one area — we improved the entire stack, from the way developers write code to how editors manage content, all while hardening security and preparing for the AI era (gosh, we said it, now we need to wash our mouths).
Here is an exhaustive look at everything we shipped this year, grouped by how they help you:
* Records, finally typed — The biggest DX win of the year. The JavaScript client now supports full end-to-end type safety, generating types directly from your schema for real autocomplete and compile-time safety. No more any types haunting your dreams.
* Reactive Plugins — Plugin settings are now synced in real-time across users, preventing configuration conflicts when multiple people are working on complex setups simultaneously.
* LLM-Ready Documentation — We made our docs AI-friendly with llms-full.txt and a “Copy as Markdown” feature on every page, so you can easily feed context to ChatGPT or Claude. Because let’s be honest, that’s how half of you read documentation now anyway.
* MCP Server — We released a Model Context Protocol (MCP) server that enables AI assistants to interact directly with your DatoCMS projects. It works. Sometimes. We wrote a whole blog post about the “sometimes” part.
* AI Translations — Bulk-translate entire records with OpenAI, Claude, Gemini, or DeepL. Finally, a reason to stop copy-pasting into Google Translate.
* Structured Text to Markdown — A new package that turns Structured Text fields back into clean, CommonMark-compatible Markdown: perfect for LLM pipelines or migration scripts.
* Inline Blocks in Structured Text — One of our most requested features! You can now insert blocks directly inside Structured Text fields — perfect for inline links, mentions, or notes — unlocking infinite nesting possibilities.
* Tabular View for Trees — Hierarchical models got a massive upgrade with a new Tabular View, bringing custom columns, pagination, and sorting to tree structures.
* Favorite Locales — Editors can now pin their most-used languages to the top of the UI, hiding the noise of unused locales in massive multi-language projects. Finally, some peace for the people managing 40+ locales.
* Enhanced Previews — We introduced inline previews for blocks and link fields, letting you see colors, dates, and images directly in the list view without clicking through.
* Single Block Presentation — You can now use a Single Block field as a model’s presentation title or image, perfect for models where the main info is nested inside a block.
* Improved Link Field Filtering — Link fields now correctly filter records by the current locale, eliminating confusion when referencing localized content.
* Fixed Headers — We unified the UI with fixed headers across all sections, ensuring that save and publish buttons are always within reach. A small change that sounds boring until you realize how much scrolling it saves.
* New CLI cma:call command — You can now call any API method directly from the terminal without writing custom scripts, thanks to dynamic discovery of API resources.
* Filter uploads by path — We added a new path filter to the GraphQL API, allowing you to query assets based on their storage path with inclusion, exclusion, and exact matching.
* Increased GraphQL Pagination — We bumped the maximum number of items you can fetch in a single GraphQL query from 100 to 500, reducing the number of requests needed for large datasets. Five times more stuff in one go — you’re welcome.
* Site Search Decoupled — Site Search is now an independent entity, separate from Build Triggers. You can control indexing explicitly and access detailed crawler logs to debug robots.txt and sitemap issues.
* Enhanced Build Triggers Activity — We enhanced the Activity view to show events beyond the 30-item limit, with better filtering and detailed logs for every operation.
* Access to CDA Playground with Limited Permissions — Developers can now use the GraphQL Playground without needing full API token management permissions, safer for contractors and temporary access.
* All API Tokens are Deletable — For better security hygiene, you can now delete any API token, including the default read-only ones generated by the system.
* API Token Last Used Time — You can now see when each API token was last used directly in Project Settings, making it easy to identify stale tokens and clean up ones that haven’t been active in months. Or years. We don’t judge.
* No Default Full-Access Token — New projects no longer come with a full-access API token by default, encouraging the principle of least privilege from day one.
* Improved Roles & Permissions — We revamped the roles interface to clearly show inherited permissions and human-readable summaries of what a user can actually do.
* DatoCMS Recipes & Import/Export — We launched a marketplace of reusable project “recipes” — pre-built models and blocks you can install into any project to save setup time, powered by the new Schema Import/Export plugin.
* Dedicated SEO Fallback Options — We decoupled SEO metadata from internal preview fields, allowing you to set specific fallbacks for SEO titles and images without affecting the CMS UI.
* Force Validations on Publishing — You can now prevent the publishing of records that don’t meet current validation rules — crucial when you’ve tightened schema requirements on existing content.
* Save Invalid Drafts — Conversely, you can now save drafts even if they are invalid, allowing editors to save their work-in-progress without being blocked by strict validation rules until they are ready to publish. Because sometimes “half-done” is better than “lost.”
* Draft Mode by Default — To encourage better editorial workflows, “Draft/Published” mode is now the default setting for all new models.
* Smart Confirmation Guardrails — Destructive actions now calculate their impact before execution. If you’re about to delete something used in 10+ records, we force a typed confirmation to prevent accidents. We’ve all been there. This is us protecting you from yourself.
…and we also cleaned up some tech debt by sunsetting legacy batch endpoints and removing unused CI triggers, keeping the platform lean and fast.
30 new public plugins landed in the marketplace this year — plus countless private ones we’ll never see. The community (and our support team!) keeps surprising us with stuff we didn’t even know we needed.
This year, DatoCMS handled an average of 3.5B API calls/month (+80%), while serving 500TB of traffic/month and 4.5M optimized video views/month. At the same time, we executed the most ambitious engineering project in our history: a complete migration from Heroku to a custom Kubernetes cluster on AWS.
For almost ten years, managed hosting served us well — but by mid-2024, we had hit a ceiling. Costs were rising while our need for granular control grew. We realized we were paying a premium for convenience we no longer needed. It was time to build our own home.
The journey began back in October 2024, kicking off a nine-month marathon. We spent the winter prototyping (experimenting with everything from bare metal to alternative PaaS providers — some of which shall remain unnamed to protect the guilty), the spring architecting, and the early summer stress-testing.
After months of planning, we flipped the switch on Saturday, June 7th. We prepared for a battle, but we mostly ended up watching dashboards. Aside from a tiny detail that cost us exactly 1 minute of downtime, the transition was flawless. By the time we turned the writes back on, every byte of data had been successfully secured in AWS.
The results were immediate and startling:
* Speed: Response times for the Content Delivery API (CDA) were halved instantly.
* Efficiency: We are now running on 64GB RAM database instances on AWS that handle traffic better than the 256GB instances we used on Heroku. Yes, you read that right. Four times less RAM, better performance.
It was a massive bet, but looking at the metrics today, it is undeniably one of the best wins of our year.
We didn’t just move servers and DBs; while moving our core applications to AWS EKS was the main event, we executed a total overhaul of the ecosystem surrounding it:
* Infrastructure as Code: We codified our entire environment using Terraform, giving us a reproducible, version-controlled blueprint of our infrastructure that eliminates manual configuration drift.
* CDN Caching: We switched from Fastly to Cloudflare for our CDN cache, implementing smarter caching rules that improved our hit ratio from 85% to 97%.
* Storage: We migrated from AWS S3 to Cloudflare R2, eliminating massive egress fees and optimizing asset delivery. Goodbye, AWS data transfer bills. We won’t miss you.
* Observability: We ditched expensive CloudWatch logs for a custom Prometheus & Loki stack, slashing our monitoring bills to near zero while improving data quality.
* Developer Experience: To tame Kubernetes complexity, we built cubo, a custom kubectl wrapper tailored around our needs that handles everything from generating K8S manifests and orchestrating rollouts to managing cronjobs, real-time logs, and one-off commands, preserving the “git push” and CLI simplicity we loved on Heroku.
The Bottom Line: We lowered overall infrastructure costs by over 25%, reduced Content Delivery API latency by 50%, expanded Realtime API capacity by 10×, and gained full control across every infrastructure layer. And we kept our sanity. Mostly.
While liberating ourselves from managed hosting, we made another quiet move: we fully internalized our accounting. For years, we outsourced this to external firms — the typical setup where you hand over receipts and hope for the best. But as we grew, flying blind between quarterly reports became untenable.
Now we run everything in-house with full visibility into our finances at any moment. No more waiting for external accountants to reconcile things. Same philosophy as the infrastructure migration: control beats convenience when you’re building for the long term.
This year marked our 10th anniversary — a decade of surviving frontend trends, CMS wars, and the occasional existential crisis about whether “headless” is still a cool term. To celebrate, we flew our entire team to the Tuscan countryside to eat, drink, and ride quad bikes. You can read the full story of our trip (and our “25% Matteo concentration rate”) here: Dato Turns 10.
Despite our growth in revenue and traffic, we remain a team of just 13 people. This isn’t an accident — it’s a deliberate choice.
As we wrote in “How can you be eight people?” (well, now thirteen), building a massive organization is optional. We choose to ignore the pressure to maximize headcount or chase VC funding. Instead, we focus on what actually matters: a solid product, a healthy work-life balance, and staying profitable on our own terms. We don’t mind “leaving a little water in the cloth” if it means we get to keep building the software we love, the way we want to build it.
No idea. And honestly, we like it that way.
We’re not going to pretend we have a five-year vision carved in stone or a slide deck about “the future of content.” We’ll keep shipping what matters, keep ignoring the hype cycles, and keep cashing checks instead of burning through runway.
That said… we may have a few things cooking that we’re genuinely excited about. But we’re not going to jinx it by overpromising — you’ll see them when they ship.
Well, see you in 2026. We’ll still be here. Probably still 13 people. Definitely still not taking ourselves too seriously. 🧡
...
Read the original on www.datocms.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.