10 interesting stories served every morning and every evening.
A few months ago, users started reporting that Ghostty was consuming absurd amounts of memory, with one user reporting 37 GB after 10 days of uptime. Today, I’m happy to say the fix has been found and merged. This post is an overview of what caused the leak, a look at some of Ghostty’s internals, and some brief descriptions of how we tracked it down.
The leak was present since at least Ghostty 1.0, but it is only recently that popular CLI applications (particularly Claude Code) started producing the correct conditions to trigger it at scale. The limited conditions that triggered the leak are what made it particularly tricky to diagnose.
The fix is merged and is available in tip/nightly releases, and will be part of the tagged 1.3 release in March.
To understand the bug, we first need to understand how Ghostty manages terminal memory. Ghostty uses a data structure called the PageList to store terminal content. The PageList is a doubly-linked list of memory pages that store the terminal content (characters, styles, hyperlinks, etc.). The underlying “pages” are not single virtual memory pages; rather, each is a contiguous block of memory aligned to page boundaries and composed of an even multiple of system pages.
These pages are allocated using mmap. mmap isn’t particularly fast, so to avoid constant syscalls, we use a memory pool. When we need a new page, we pull from the pool. When we’re done with a page, we return it to the pool for reuse.
The pool uses a standard size for pages. Think of it like buying standard-sized shipping boxes: most things people ship fit in a standard box, and having a standard box comes with various efficiencies.
But sometimes terminals need more memory than a standard page provides. If a set of lines has many emoji, styles, or hyperlinks, we need a larger page. In these cases, we allocate a non-standard page directly with mmap, bypassing the pool entirely. This is typically a rare scenario.
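To make this concrete, here is a minimal C sketch of the allocation path just described. It is only an illustration under assumed names (PAGE_STD_SIZE, pool_get, the toy pool), not Ghostty’s actual code, which is written in Zig.

#include <stddef.h>
#include <sys/mman.h>

/* Illustrative constants and a toy pool -- not Ghostty's real API. */
#define PAGE_STD_SIZE (64 * 1024)   /* assumed standard pooled page size */
#define POOL_CAP      64

typedef struct Page {
    void        *mem;   /* backing memory, from the pool or a direct mmap */
    size_t       size;  /* size recorded in the page's metadata */
    struct Page *prev, *next;
} Page;

static void *pool[POOL_CAP];
static int   pool_len = 0;

/* Pop a standard-size block from the pool, mapping a fresh one if empty. */
static void *pool_get(void) {
    if (pool_len > 0) return pool[--pool_len];
    return mmap(NULL, PAGE_STD_SIZE, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANON, -1, 0);
}

/* Give a page backing memory large enough for `needed` bytes. */
static void page_alloc(Page *p, size_t needed) {
    if (needed <= PAGE_STD_SIZE) {
        p->mem  = pool_get();                          /* common case: pooled */
        p->size = PAGE_STD_SIZE;
    } else {
        p->mem  = mmap(NULL, needed, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANON, -1, 0); /* rare: oversized page */
        p->size = needed;
    }
}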
When we “free” a page, we apply some simple logic:
* If the page is standard size: return it to the pool
* If the page is > standard size: call munmap to free it
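Continuing the same hypothetical C sketch, the free path branches purely on the size recorded in the page’s metadata; the correctness of that metadata is exactly the assumption the bug described below ends up violating.

/* Return a standard-size block to the pool, or unmap it if the pool is full. */
static void pool_put(void *mem) {
    if (pool_len < POOL_CAP) pool[pool_len++] = mem;
    else munmap(mem, PAGE_STD_SIZE);
}

/* Free a page's backing memory based on the size its metadata claims. */
static void page_free(Page *p) {
    if (p->size <= PAGE_STD_SIZE)
        pool_put(p->mem);          /* assumed to be a pooled standard page */
    else
        munmap(p->mem, p->size);   /* oversized page: hand it back to the OS */
    p->mem = NULL;
}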
This is the core background for terminal memory management in Ghostty, and the idea itself is sound. A logic bug around an optimization is what produced the leak, as we’ll see next.
There’s one more background detail we need to cover to understand the bug: scrollback pruning.
Ghostty has a scrollback-limit configuration that caps how much history is retained. When you hit this limit, we delete the oldest pages in the scrollback buffer to free up memory.
But this often happens in a super hot path (e.g. when outputting large amounts of data quickly), and allocating and freeing memory pages is expensive, even with the pool. Therefore, we have an optimization: reuse the oldest page as the newest page when we reach the limit.
This optimization works great. It requires zero allocations and uses only some quick pointer manipulations to move the page from the front to the back of the list. We do some metadata cleanup to “clear” the page but otherwise leave the previous memory intact.
It’s fast and empirically speeds up scrollback-heavy workloads significantly.
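As a rough sketch of that reuse step (continuing the same illustrative C code, and assuming the list holds at least two pages, which is always true once the scrollback limit is hit):

typedef struct {
    Page *first;   /* oldest page, front of the scrollback */
    Page *last;    /* newest page, where new output lands */
} PageList;

/* Scrollback limit reached: recycle the oldest page as the newest one. */
static void prune_reuse(PageList *l) {
    Page *p = l->first;

    /* Unlink from the front... */
    l->first = p->next;
    l->first->prev = NULL;

    /* ...and relink at the back. No allocation, just pointer surgery. */
    p->prev = l->last;
    p->next = NULL;
    l->last->next = p;
    l->last = p;

    /* "Clear" the page by resetting its metadata. In the buggy version, this
       reset also shrank the recorded size of non-standard pages back to
       PAGE_STD_SIZE without touching the underlying mapping. */
    p->size = PAGE_STD_SIZE;
}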
During the scrollback pruning optimization, we always resized our page back to standard size. But we didn’t resize the underlying memory allocation itself; we only noted the resize in the metadata. The underlying memory was still the large non-standard mmap allocation, but now the PageList thought it was standard sized.
Eventually, we’d free the page under various circumstances (e.g. when the user closes the terminal, but also other times). At that point, we’d see the page memory was within the standard size, assume it was part of the pool, and we would never call munmap on it. A classic leak.
This all seems pretty obvious, but the issue is that non-standard pages are rare by design. The goal of our design and optimizations is that standard pages are the common case and provide a fast-path. Only very specific scenarios produce non-standard pages and they’re usually not produced in large quantities.
But the rise of Claude Code changed this. For some reason, Claude Code’s CLI produces a lot of multi-codepoint grapheme outputs which force Ghostty to regularly use non-standard pages. Additionally, Claude Code uses the primary screen and produces a significant amount of scrollback output. These things combined created the perfect storm to trigger the leak in huge quantities.
The fix is conceptually simple: never reuse non-standard pages. If we encounter a non-standard page during scrollback pruning, we destroy it properly (calling munmap) and allocate a fresh standard-sized page from the pool.
The core of the fix is in the snippet below, but some extra work was needed to fix up some other bits of accounting we have:
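The real snippet is Zig and lives in the merged PR; as a stand-in, the same idea expressed in the illustrative C sketch from above looks roughly like this:

/* Prune the oldest page, reusing it only if it is a standard pooled page. */
static void prune(PageList *l) {
    Page *p = l->first;

    if (p->size <= PAGE_STD_SIZE) {
        prune_reuse(l);            /* fast path: recycle the page in place */
        return;
    }

    /* Non-standard page: unlink it from the front... */
    l->first = p->next;
    l->first->prev = NULL;

    /* ...actually release the oversized mapping... */
    munmap(p->mem, p->size);

    /* ...and link a fresh standard-size pooled page at the back instead. */
    page_alloc(p, PAGE_STD_SIZE);  /* reuses the node, not the memory */
    p->prev = l->last;
    p->next = NULL;
    l->last->next = p;
    l->last = p;
}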
We could’ve also reused the non-standard page and just retained the large memory size, but until we have data that shows otherwise, we’re still operating under the assumption that standard pages are the common case and it makes sense to reset back to a standard pooled page.
Other users have recommended more complex strategies (e.g. maintaining some metrics on how often non-standard pages are used and adjusting our assumptions accordingly), but more research is needed before making those changes. This change is simple, fixes the bug, and aligns with our current assumptions.
As part of the fix, I added support for virtual memory tags on macOS provided by the Mach kernel. This lets us tag our PageList memory allocations with a specific identifier that shows up in various tooling.
Now when debugging memory on macOS, Ghostty’s PageList memory shows up with a specific tag instead of being lumped in with everything else. This made it trivial to identify the leak, associate it with the PageList, and also verify that the fix worked by observing the tagged memory being properly freed.
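For reference, tagging anonymous mappings on macOS is done by passing a tag constant, wrapped in VM_MAKE_TAG, where the file descriptor normally goes in the mmap call. A minimal example follows; the specific tag below is just one of the generic application-specific tags, not necessarily the one Ghostty uses.

#include <stddef.h>
#include <sys/mman.h>
#include <mach/vm_statistics.h>   /* VM_MAKE_TAG, VM_MEMORY_APPLICATION_SPECIFIC_1 */

/* Map an anonymous region tagged so vmmap/footprint can attribute it to us. */
static void *tagged_map(size_t len) {
    int tag = VM_MAKE_TAG(VM_MEMORY_APPLICATION_SPECIFIC_1);  /* example tag */
    return mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANON, tag, 0);
}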
We do a lot of work in the Ghostty project to find and prevent memory leaks:
* In debug builds and unit tests, we use leak-detecting Zig allocators.
* The CI runs Valgrind on our full unit test suite on every commit to find more than just leaks, such as undefined memory usage.
* We regularly run the macOS GUI via macOS Instruments to look for leaks, particularly in the Swift codebase.
* We run every GTK-related PR using Valgrind (the full GUI) to look for leaks in the GTK codepath that isn’t unit tested.
This has worked really well to date, but unfortunately it didn’t catch this particular leak because it only triggers under very specific conditions that our tests didn’t reproduce. The merged PR includes a test that does reproduce the leak to prevent regressions in the future.
This was the largest known memory leak in Ghostty to date, and the only reported leak that has been confirmed by more than a single user. We’ll continue to monitor and address memory reports as they come in, but remember that reproduction is the key to diagnosing and fixing memory leaks!
Big thanks to @grishy who finally got me a reliable reproduction so I could analyze the issue myself. Their own analysis reached the same conclusion as mine, and the reproduction let me verify both our understandings independently.
Thanks also to everyone who reported this issue with detailed diagnostics. The community’s analysis, especially around the footprint output and VM region counting, gave me important clues that pointed toward the PageList as the culprit.
...
Read the original on mitchellh.com »
There. That’s out of the way. I recently installed Linux on my main desktop computer and work laptop, overwriting the Windows partition completely. Essentially, I deleted the primary operating system from the two computers I use the most, day in and day out, instead trusting all of my personal and work computing needs to the Open Source community. This has been a growing trend, and I hopped on the bandwagon, but for good reasons. Some of those reasons might pertain to you and convince you to finally make the jump as well. Here’s my experience.
It’s no secret that Windows 11 harvests data like a pumpkin farmer in October, and there is no easy way (and sometimes no way at all) to stop it. The operating system itself acts exactly like what was called “spyware” a decade or so ago, pulling every piece of data it can about its current user. This data includes (but is far from limited to) hardware information, specific apps and software used, usage trends, and more. With the advent of AI, Microsoft made headlines with Copilot, an artificial assistant designed to help users by capturing their data with tools like Recall. It turns out that Copilot has largely been a flop and helps Microsoft (and data thieves) more than its users.
Why are so many articles and YouTube videos lately regaling readers and watchers with the harrowing tales of techies switching from Windows to Linux? Anyone who has read one of those articles or watched one of those videos will know it boils down to two main issues: telemetry and poor software stability.
After dealing with these issues and trying to solve them with workarounds, I dual-booted a Linux partition for a few weeks. After a Windows update (that I didn’t choose to do) wiped that partition and, consequently, the Linux installation, I decided to go whole-hog: I deleted Windows 11 and used the entire drive for Linux.
The other main reason folks uninstall Windows is due to the overall poor software experience. Windows 11 has multiple settings modules to handle the same task (such as setting up networking or adding devices), and none of them seem to talk to each other. Additionally, each new update (which will eventually be forced upon you) seems to bring more bugs than fixes. Personally, I encountered 2-3 full system crashes a week when I ran Windows 11, and my hardware is fairly decent: AMD Ryzen 7 6800H, 32 GB of RAM, and a 1 TB PCIe NVMe drive. Still, a few times a week, my computer would freeze for a few seconds, the displays would go dark, and the PC would either restart or hang indefinitely.
The first question often asked of Windows refugees migrating to Linux is, “Why Linux?” It’s a good question, and one that needs to be asked before dumping Windows for anything else. Personally, I tried macOS first. The experience was smooth and easy but ultimately felt restrictive (installing from third-party developers, anyone?). Additionally, the only Apple computer I have is a 2014 MacBook Air. As such, the latest version of macOS I could actually run is 11 (Big Sur), which was released in 2020. Overall system operation was quite sluggish on the older hardware, and I knew that time would inevitably take its toll on the software experience — apps would soon be out of date and I wouldn’t be able to update them. I also tried the OpenCore Legacy Patcher to push the laptop to macOS 13. While performance improved, key features like iMessage and Continuity Camera were either buggy or flat out refused to work. It felt like my laptop was running in mud with its hands tied behind its back. Plus, I needed something for my desktop. Not wanting to drop a mortgage payment or two on new hardware, I opted for Linux.
Linux promised me the potential of what I wanted - high hardware compatibility with full software freedom. The operating system can run on pretty much anything, and it grants users a huge amount of control over their system. I tried out a few distributions, or distros, of Linux. A distro is like a “flavor” of Linux, and each one has unique factors (e.g., app/package management, bundled user interface). With most distros, these differences are largely irrelevant; most distros offer the same main packages as others.
...
Read the original on www.notebookcheck.net »
Self-deception as strategy: the best liars believe themselves.
...
Read the original on trails.pieterma.es »
The state of the art in agent isolation is a read-only sandbox. At Fly.io, we’ve been selling that story for years, and we’re calling it: ephemeral sandboxes are obsolete. Stop killing your sandboxes every time you use them.
My argument won’t make sense without showing you something new we’ve built. We’re all adults here, this is a company, we talk about what we do. Here goes.
So, I want to run some code. So what I do is, I run sprite create. While it operates, I’ll explain what’s happening behind the—
✓ Created demo-123 sprite in 1.0s
● Connecting to console…
sprite@sprite:~#
That’s a root shell on a Linux computer we now own. It came online in about the same amount of time it would take to ssh into a host that already existed. We call these things “Sprites”.
sudo apt-get install -y ffmpeg >/dev/null 2>&1
Unlike creating the Sprite in the first place, installing ffmpeg with apt-get is dog slow. Let’s try not to have to do that again:
sprite@sprite:~# sprite-env checkpoints create
{"type":"complete","data":"Checkpoint v1 created successfully","time":"2025-12-22T22:50:48.60423809Z"}
This completes instantly. Didn’t even bother to measure.
I step away to get coffee. Time passes. The Sprite, noticing my inactivity, goes to sleep. I meet an old friend from high school at the coffee shop. End up spending the day together. More time passes. Days even. Returning later:
> $ sprite console
sprite@sprite:~# ffmpeg
ffmpeg version 7.1.1-1ubuntu1.3 Copyright (c) 2000-2025 the FFmpeg developers
Use -h to get full help or, even better, run ‘man ffmpeg’
sprite@sprite:~#
Everything’s where I left it. Sprites are durable. 100GB capacity to start, no ceremony. Maybe I’ll keep it around a few more days, maybe a few months, doesn’t matter, just works.
Say I get an application up on its legs. Install more packages. Then: disaster. Maybe an ill-advised global pip3 install . Or rm -rf $HMOE/bin. Or dd if=/dev/random of=/dev/vdb. Whatever it was, everything’s broken. So:
> $ sprite checkpoint restore v1
Restoring from checkpoint v1…
Container components started successfully
Restore from v1 complete
> $ sprite console
sprite@sprite:~#
Sprites have first-class checkpoint and restore. You can’t see it in text, but that restore took about one second. It’s fast enough to use casually, interactively. Not an escape hatch. Rather: an intended part of the ordinary course of using a Sprite. Like git, but for the whole system.
If you’re asking how this is any different from an EC2 instance, good. That’s what we’re going for, except:
* I can casually create hundreds of them (without needing a Docker container), each appearing in 1-2 seconds.
* They go idle and stop metering automatically, so it’s cheap to have lots of them. I use dozens.
* They’re hooked up to our Anycast network, so I can get an HTTPS URL.
* Despite all that, they’re fully durable. They don’t die until I tell them to.
This combination of attributes isn’t common enough to already have a name, so we decided we get to name them “Sprites”. Sprites are like BIC disposable cloud computers.
That’s what we built. You can go try it yourself. We wrote another 1000 words about how they work, but I cut them out because I want to stop talking about our products now and get to my point.
For years, we’ve been trying to serve two very different users with the same abstraction. It hasn’t worked.
Professional software developers are trained to build stateless instances. Stateless deployments, where persistent data is confined to database servers, buys you simplicity, flexible scale-out, and reduced failure blast radius. It’s a good idea, so popular that most places you can run code in the cloud look like stateless containers. Fly Machines, our flagship offering, look like stateless containers.
The problem is that Claude isn’t a pro developer. Claude is a hyper-productive five-year-old savant. It’s uncannily smart, wants to stick its finger in every available electrical socket, and works best when you find a way to let it zap itself.
If you force an agent to, it’ll work around containerization and do work. But you’re not helping the agent in any way by doing that. They don’t want containers. They don’t want “sandboxes”. They want computers.
Someone asked me about this the other day and wanted to know if I was saying that agents needed sound cards and USB ports. And, maybe? I don’t know. Not today.
In a moment, I’ll explain why. But first I probably need to explain what the hell I mean by a “computer”. I think we all agree:
* A computer doesn’t necessarily vanish after a single job is completed, and
Since current agent sandboxes have neither of these, I can stop the definition right there and get back to my point.
Start here: with an actual computer, Claude doesn’t have to rebuild my entire development environment every time I pick up a PR.
This seems superficial but rebuilding stuff like node_modules is such a monumental pain in the ass that the industry is spending tens of millions of dollars figuring out how to snapshot and restore ephemeral sandboxes.
I’m not saying those problems are intractable. I’m saying they’re unnecessary. Instead of figuring them out, just use an actual computer. Work out a PR, review and push it, then just start on the next one. Without rebooting.
People will rationalize why it’s a good thing that they start from a new build environment every time they start a changeset. Stockholm Syndrome. When you start a feature branch on your own, do you create an entirely new development environment to do it?
The reason agents waste all this effort is that nobody saw them coming. Read-only ephemeral sandboxes were the only tool we had hanging on the wall to help use them sanely.
Have you ever had to set up actual infrastructure to give an agent access to realistic data? People do this. Because they know they’re dealing with a clean slate every time they prompt their agent, they arrange for S3 buckets, Redis servers, or even RDS instances outside the sandbox for their agents to talk to. They’re building infrastructure to work around the fact that they can’t just write a file and trust it to stay put. Gross.
Ephemerality means time limits. Providers design sandbox systems to handle the expected workloads agents generate. Most things agents do today don’t take much time; in fact, they’re often limited only by the rate at which frontier models can crunch tokens. Test suites run quickly. The 99th percentile sandboxed agent run probably needs less than 15 minutes.
But there are feature requests where compute and network time swamp token crunching. I built the documentation site for the Sprites API by having a Claude Sprite interact with the code and our API, building and testing examples for the API one at a time. There are APIs where the client interaction time alone would blow sandbox budgets.
You see the limits of the current approach in how people round-trip state through “plan files”, which are ostensibly prose but often really just egregiously-encoded key-value stores.
An agent running on an actual computer can exploit the whole lifecycle of the application. We saw this when Chris McCord built Phoenix.new. The agent behind a Phoenix.new app runs on a Fly Machine where it can see the app logs from the Phoenix app it generated. When users do things that generate exceptions, Phoenix.new notices and gets to work figuring out what happened.
It took way too much work for Chris to set that up, and he was able to do it in part because he wrote his own agent. You can do it with Claude today with an MCP server or some other arrangement to haul logs over. But all you really need is to just not shoot your sandbox in the head when the agent finishes writing code.
Here’s where I lose you. I know this because it’s also where I lose my team, most of whom don’t believe me about this.
The nature of software development is changing out from under us, and I think we’re kidding ourselves that it’s going to end with just a reconfiguration of how professional developers ship software.
I have kids. They have devices. I wanted some control over them. So I did what many of you would do in my situation: I vibe-coded an MDM.
I built this thing with Claude. It’s a SQLite-backed Go application running on a Sprite. The Anycast URL my Sprite exports works as an MDM registration URL. Claude also worked out all the APNS Push Certificate drama for me. It all just works.
“Editing PHP files over FTP: we weren’t wrong, just ahead of our time!”
I’ve been running this for a month now, still on a Sprite, and see no reason ever to stop. It is a piece of software that solves an important real-world problem for me. It might evolve as my needs change, and I tell Claude to change it. Or it might not. For this app, dev is prod, prod is dev.
For reasons we’ll get into when we write up how we built these things, you wouldn’t want to ship an app to millions of people on a Sprite. But most apps don’t want to serve millions of people. The most important day-to-day apps disproportionately won’t have million-person audiences. There are some important million-person apps, but most of them just destroy civil society, melt our brains, and arrange chauffeurs for individual cheeseburgers.
Applications that solve real problems for people will be owned by the people they solve problems for. And for the most part, they won’t need a professional guild of software developers to gatekeep feature development for them. They’ll just ask for things and get them.
The problem we’re all working on is bigger than safely accelerating pro software developers. Sandboxes are holding us back.
Obviously, I’m trying to sell you something here. But that doesn’t make me wrong. The argument I’m making is the reason we built the specific thing I’m selling.
It took us a long time to get here. We spent years kidding ourselves. We built a platform for horizontal-scaling production applications with micro-VMs that boot so quickly that, if you hold them in exactly the right way, you can do a pretty decent code sandbox with them. But it’s always been a square peg, round hole situation.
We have a lot to say about how Sprites work. They’re related to Fly Machines but sharply different in important ways. They have an entirely new storage stack. They’re orchestrated differently. No Dockerfiles.
But for now, I just want you to think about what I’m saying here. Whether or not you ever boot a Sprite, ask: if you could run a coding agent anywhere, would you want it to look more like a read-only sandbox in a K8s cluster in the cloud, or like an entire EC2 instance you could summon in the snap of a finger?
I think the answer is obvious. The age of sandboxes is over. The time of the disposable computer has come.
...
I love writing software, line by line. It could be said that my career was a continuous effort to create software well written, minimal, where the human touch was the fundamental feature. I also hope for a society where the last are not forgotten. Moreover, I don’t want AI to economically succeed, I don’t care if the current economic system is subverted (I could be very happy, honestly, if it goes in the direction of a massive redistribution of wealth). But, I would not respect myself and my intelligence if my idea of software and society would impair my vision: facts are facts, and AI is going to change programming forever.
In 2020 I left my job in order to write a novel about AI, universal basic income, a society that adapted to the automation of work facing many challenges. At the very end of 2024 I opened a YouTube channel focused on AI, its use in coding tasks, its potential social and economical effects. But while I recognized what was going to happen very early, I thought that we had more time before programming would be completely reshaped, at least a few years. I no longer believe this is the case. Recently, state of the art LLMs are able to complete large subtasks or medium size projects alone, almost unassisted, given a good set of hints about what the end result should be. The degree of success you’ll get is related to the kind of programming you do (the more isolated, and the more textually representable, the better: system programming is particularly apt), and to your ability to create a mental representation of the problem to communicate to the LLM. But, in general, it is now clear that for most projects, writing the code yourself is no longer sensible, if not to have fun.
In the past week, just by prompting and inspecting the code to provide guidance from time to time, I completed the following four tasks in hours instead of weeks:
1. I modified my linenoise library to support UTF-8, and created a framework for line editing testing that uses an emulated terminal that is able to report what is getting displayed in each character cell. Something that I always wanted to do, but it was hard to justify the work needed just to test a side project of mine. But if you can just describe your idea, and it materializes in the code, things are very different.
2. I fixed transient failures in the Redis test. This is very annoying work, timing related issues, TCP deadlock conditions, and so forth. Claude Code iterated for all the time needed to reproduce it, inspected the state of the processes to understand what was happening, and fixed the bugs.
3. Yesterday I wanted a pure C library that would be able to do the inference of BERT like embedding models. Claude Code created it in 5 minutes. Same output and same speed (15% slower) than PyTorch. 700 lines of code. A Python tool to convert the GTE-small model.
4. In the past weeks I made changes to Redis Streams internals. I had a design document for the work I did. I gave it to Claude Code and it reproduced my work in, like, 20 minutes or less (mostly because I’m slow at checking and authorizing the commands it needed to run).
It is simply impossible not to see the reality of what is happening. Writing code is no longer needed for the most part. It is now a lot more interesting to understand what to do, and how to do it (and, about this second part, LLMs are great partners, too). It does not matter if AI companies will not be able to get their money back and the stock market will crash. All that is irrelevant, in the long run. It does not matter if this or the other CEO of some unicorn is telling you something that is off putting, or absurd. Programming changed forever, anyway.
How do I feel, about all the code I wrote that was ingested by LLMs? I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge. LLMs are going to help us to write better software, faster, and will allow small teams to have a chance to compete with bigger companies. The same thing open source software did in the 90s.
However, this technology is far too important to be in the hands of a few companies. For now, you can do the pre-training better or not, you can do reinforcement learning in a much more effective way than others, but the open models, especially the ones produced in China, continue to compete (even if they are behind) with frontier models of closed labs. There is a sufficient democratization of AI, so far, even if imperfect. But: it is absolutely not obvious that it will be like that forever. I’m scared about the centralization. At the same time, I believe neural networks, at scale, are simply able to do incredible things, and that there is not enough “magic” inside current frontier AI for the other labs and teams not to catch up (otherwise it would be very hard to explain, for instance, why OpenAI, Anthropic and Google are so near in their results, for years now).
As a programmer, I want to write more open source than ever, now. I want to improve certain repositories of mine abandoned for time concerns. I want to apply AI to my Redis workflow. Improve the Vector Sets implementation and then other data structures, like I’m doing with Streams now.
But I’m worried for the folks that will get fired. It is not clear what the dynamic at play will be: will companies try to have more people, and to build more? Or will they try to cut salary costs, having fewer programmers that are better at prompting? And, there are other sectors where humans will become completely replaceable, I fear.
What is the social solution, then? Innovation can’t be taken back after all. I believe we should vote for governments that recognize what is happening, and are willing to support those who will remain jobless. And, the more people get fired, the more political pressure there will be to vote for those who will guarantee a certain degree of protection. But I also look forward to the good AI could bring: new progress in science, that could help lower the suffering of the human condition, which is not always happy.
Anyway, back to programming. I have a single suggestion for you, my friend. Whatever you believe about what the Right Thing should be, you can’t control it by refusing what is happening right now. Skipping AI is not going to help you or your career. Think about it. Test these new tools, with care, with weeks of work, not in a five minutes test where you can just reinforce your own beliefs. Find a way to multiply yourself, and if it does not work for you, try again every few months.
Yes, maybe you think that you worked so hard to learn coding, and now machines are doing it for you. But what was the fire inside you, when you coded till night to see your project working? It was building. And now you can build more and better, if you find your way to use AI effectively. The fun is still there, untouched.
...
Read the original on antirez.com »
PROVIDENCE, R.I. [Brown University] — Private equity firms acquired more than 500 autism therapy centers across the U.S. over the past decade, with nearly 80% of acquisitions occurring over a four-year span.
That’s according to a new study from researchers at Brown University’s Center for Advancing Health Policy through Research.
Study author Yashaswini Singh, a health economist at Brown’s School of Public Health, said the work highlights how financial firms are rapidly moving into a sensitive area of health care with little public scrutiny or data on where this is happening or why.
“The big takeaway is that there is yet another segment of health care that has emerged as potentially profitable to private equity investors, and it is very distinct from where we have traditionally known investors to go, so the potential for harm can be a lot more serious,” Singh said. “We’re also dealing with children who are largely insured by Medicaid programs, so if private equity increases the intensity of care, what we’re looking at are impacts to state Medicaid budgets down the road.”
The findings were published in JAMA Pediatrics and offer one of the first national assessments of private equity’s growing role in autism therapies and services. Autism diagnoses among U.S. children have risen sharply in recent years, nearly tripling between 2011 and 2022, and autism has been in the national spotlight amid political debate claiming links between autism and childhood vaccines.
The findings suggest that investment has been concentrated in states with higher rates of autism diagnoses among children and states that have fewer limits on insurance coverage.
The researchers identified a total of 574 autism therapy centers owned by private equity firms as of 2024, spanning 42 states. Most of those centers were acquired between 2018 and 2022, the result of 142 separate deals. The largest concentrations of centers were in California (97), Texas (81), Colorado (38), Illinois (36) and Florida (36). Sixteen states had one or no private equity-owned clinics at the end of 2024.
States in the top third for childhood autism prevalence were 24% more likely to have private equity–owned clinics than others, according to the study.
The scale and speed of acquisitions underscore the growing trend of private equity’s entry into the market, the researchers say. According to Singh, the team was prompted to investigate that trend after hearing anecdotal reports from families and health providers about changes following private equity takeovers.
The primary concern is that private equity firms may prioritize financial gains over families, said Daniel Arnold, a senior research scientist at the School of Public Health.
“It’s all about the financial incentives,” Arnold said. “I worry about the same types of revenue-generating strategies seen in other private equity-backed settings. I worry about children receiving more than the clinically appropriate amount of services and worsening disparities in terms of which children have access to services.”
To establish a baseline of where private equity firms are investing and why, the team used a mix of proprietary databases, public press releases and manual verification of archived websites to track changes in ownership. Unlike public companies, private equity firms and private practices are not required to disclose acquisitions, making data collection challenging and labor-intensive.
The team is now seeking federal funding to examine how private equity ownership affects outcomes, including changes in therapy intensity, medication use, diagnosis age or how long children stay in treatment. They seek to determine whether these investments are helping to meet real needs or are primarily a way to make money.
“Private investors making a little bit of money while expanding access is not a bad thing, per se,” Singh said. “But we need to understand how much of a bad thing this is and how much of a good thing this is. This is a first step in that direction.”
This study received funding from the National Institute on Aging (R01AG073286) and the National Institute on Mental Health (R01MH132128).
...
Read the original on www.brown.edu »
Happy New Year 2026! Once again, a lot has happened in Gentoo over the past months. New developers, more binary packages, GnuPG alternatives support, Gentoo for WSL, improved Rust bootstrap, better NGINX packaging, … As always, here we’re going to revisit all the exciting news from our favourite Linux distribution.
Gentoo currently consists of 31663 ebuilds for 19174 different packages. For amd64 (x86-64), there are 89 GBytes of binary packages available on the mirrors. Gentoo each week builds 154 distinct installation stages for different processor architectures and system configurations, with an overwhelming part of these fully up-to-date.
The number of commits to the main ::gentoo repository has remained at an overall high level in 2025, with a slight decrease from 123942 to 112927. The number of commits by external contributors was 9396, now across 377 unique external authors.
GURU, our user-curated repository with a trusted user model, as entry point for potential developers, has shown a decrease in activity. We have had 5813 commits in 2025, compared to 7517 in 2024. The number of contributors to GURU has increased, from 241 in 2024 to 264 in 2025. Please join us there and help packaging the latest and greatest software. That’s the ideal preparation for becoming a Gentoo developer!
Activity has slowed down somewhat on the Gentoo bugtracker bugs.gentoo.org, where we’ve had 20763 bug reports created in 2025, compared to 26123 in 2024. The number of resolved bugs shows the same trend, with 22395 in 2025 compared to 25946 in 2024. The current values are closer to those of 2023 - but clearly this year we fixed more than we broke!
In 2025 we have gained four new Gentoo developers. They are in chronological order:
Let’s now look at the major improvements and news of 2025 in Gentoo.
RISC-V bootable QCOW2: Same as for amd64 and arm64, also for RISC-V we now have ready-made bootable disk images in QCOW2 format available for download on our mirrors, in a console and a cloud-init variant. The disk images use the rv64gc instruction set and the lp64d ABI, and can be booted via the standard RISC-V UEFI support.
Gentoo for WSL: We now publish weekly Gentoo images for Windows Subsystem for Linux (WSL), based on the amd64 stages; see our mirrors. While these images are not present in the Microsoft store yet, that’s something we intend to fix soon.
hppa and sparc destabilized: Since we do not have hardware readily available anymore and these architectures mostly fill a retrocomputing niche, stable keywords have been dropped for both hppa (PA-RISC) and sparc. The architectures will remain supported with testing keywords.
musl with locales: Localization support via the package sys-apps/musl-locales has been added by default to the Gentoo stages based on the lightweight musl C library.
GPG alternatives: Given the unfortunate fracturing of the GnuPG / OpenPGP / LibrePGP ecosystem due to competing standards, we now provide an alternatives mechanism to choose the system gpg provider and ease compatibility testing. At the moment, the original, unmodified GnuPG, the FreePG fork/patchset as also used in many other Linux distributions (Fedora, Debian, Arch, …), and the re-implementation Sequoia-PGP with Chameleon are available. In practice, implementation details vary between the providers, and while GnuPG and FreePG are fully supported, you may still encounter difficulties when selecting Sequoia-PGP/Chameleon.
zlib-ng support: We have introduced initial support for using zlib-ng and minizip-ng in compatibility mode in place of the reference zlib libraries.
System-wide jobserver: We have created steve, an implementation of a token-accounting system-wide jobserver, and introduced experimental global jobserver support in Portage. Thanks to that, it is now possible to globally control the concurrently running build job count, correctly accounting for parallel emerge jobs, make and ninja jobs, and other clients supporting the jobserver protocol.
NGINX rework: The packaging of the NGINX web server and reverse proxy in Gentoo has undergone a major improvement, including also the splitting off of several third-party modules into separate packages.
C++ based Rust bootstrap: We have added a bootstrap path for Rust from C++ using Mutabah’s Rust compiler mrustc, which alleviates the need for pre-built binaries and makes it significantly easier to support more configurations.
Ada and D bootstrap: Similarly, Ada and D support in gcc now have clean bootstrap paths, which makes enabling these in the compiler as easy as switching the useflags on gcc and running emerge.
FlexiBLAS: Gentoo has adopted the new FlexiBLAS wrapper library as the primary way of switching implementations of the BLAS numerical algorithm library at runtime. This automatically also provides ABI stability for linking programs and bundles the specific treatment of different BLAS variants in one place.
Python: In the meantime the default Python version in Gentoo has reached Python 3.13. Additionally we have also Python 3.14 available stable - fully up to date with upstream.
KDE upgrades: As of end of 2025, in Gentoo stable we have KDE Gear 25.08.3, KDE Frameworks 6.20.0, and KDE Plasma 6.5.4. As always, Gentoo testing follows the newest upstream releases (and using the KDE overlay you can even install from git sources).
Additional build server: A second dedicated build server, hosted at Hetzner Germany, has been added to speed up the generation of installation stages, iso and qcow2 images, and binary packages.
Documentation: Documentation work has made constant progress on wiki.gentoo.org. The Gentoo Handbook had some particularly useful updates, and the documentation received lots of improvements and additions from the many active volunteers. There are currently 9,647 pages on the wiki, and there have been 766,731 edits since the project started. Please help Gentoo by contributing to documentation!
* Income: The Gentoo Foundation took in $12,066 in fiscal year 2025 (ending 2025/06/30); the dominant part (over 80%) consists of individual cash donations from the community. On the SPI side, we received $8,471 in the same period as fiscal year 2025; also here, this is all from small individual cash donations.
* Expenses: Our expenses in 2025 were: program services (e.g. hosting costs) $8,332, management & general (accounting) $1,724, fundraising $905, and non-operating (depreciation expenses) $10,075.
* Balance: We have $104,831 in the bank as of July 1, 2025 (which is when our fiscal year 2026 starts for accounting purposes). The Gentoo Foundation FY2025 financial statement is available on the Gentoo Wiki.
* Transition to SPI: The Foundation encourages donors to ensure their ongoing contributions are going to SPI - more than 40 donors had not responded to requests to move the recurring donations by the end of the year. Expenses will be moved to the SPI structure as ongoing income permits.
As every year, we would like to thank all Gentoo developers and all who have submitted contributions for their relentless everyday Gentoo work. If you are interested and would like to help, please join us to make Gentoo even better! As a volunteer project, Gentoo could not exist without its community.
...
Read the original on www.gentoo.org »
Thanks HN folks for all the comments. To clarify a bit, the cables are pulled through PVC conduits under the flooring before being buried in cement. Currently the hypothesis for why the cable disintegrated so quickly is hydrolysis. Singapore is extremely humid after all. A second possibility is that I keep the leftover wall paints (Nippon Paint Vinilex 5000) in the same room and have noticed that much of the solvents have evaporated. It is possible that the solvents in the air might have caused the cable to fail in 3 years. The other ends of the cables don’t feel as sticky and crumbly despite being out in the open exposed to the humidity. My guess is that the paint solvent got to it.
Some other learnings from this. Buried cabling should always be permanently fixed and attached to a patch panel instead of dangling in the open. That was the original plan but I figured it wouldn’t be an issue. I was wrong. Always measure exact length of buried fibre cabling as they aren’t meant to be stored in loops.
This morning I woke up and headed to my bomb shelter to grab the bike pump to inflate the tyres on my children’s bikes. The handle got slightly tangled up in the fibre optic cables so I lifted up the cables to free the pump.
Like cookie crumbs the fibre cable’s sleeve jackets crumbled in my hands.
Before I could even utter “Oh fuck no”, another section of the cable exploded outwards with thin metal wires jutting out from what seems to be like strands of white plastic threads, which I assume is the Kevlar sheath. I think I must have stood in my pseudo server room in shock for a whole minute, unable to move or process what had happened. A main component of why I was in sheer horror was the fact that I had stupidly buried all of these cables under my cement flooring in PVC trunking from my shelter to all of the rooms in the flat. If this cable fails, the connection from the server room to a specific room would be permanently severed. The room for this particular cable turned out to be my home office where my homelab MS-A2 resided.
I had purchased these cables from FS.com roughly 3.5 years ago in 2022. Because I was burying the cables underground permanently, I opted to get the MiLiTaRy GrAdE armoured fibre cables for this purpose.
The cables had been kept spooled up with a radius of around 5cm for 3 whole years, lightly tied together with hook and loop cable fasteners and hung on laundry hooks in the shelter all this time.
The destroyed cable is the only one that I had unravelled recently to patch into my UDM to enable SFP+ connection to my office space. As it turns out, armoured cables in this specific instance aren’t really meant for movement, it’s likely more of a bury and forget purpose. In hindsight I should’ve connected all of the cables to a fibre patch panel on the wall so that they would never move, then connect the patch panel to my UDM with easily replaceable LSZH cables.
But it’s too late now, all I can do is to salvage the situation. I headed out and purchased 3M self-bonding rubber electrical tape 23, and Temflex 160 vinyl electrical tape. The idea I had was to use the compression properties of the stretched rubber tape to hold the corrugated metal sheath and wire mesh in place, before wrapping a second vinyl protection layer outside with the 160.
However, the wrapping process itself requires me to slowly shift the cable around to hook onto higher ground to prevent kinks. The action itself triggered more jacket failures. Some of the failures actually forced the cable in a sharp right angle, which I am almost certain has caused kinks and cracks in the inner fibre strand. RIP.
At this point, I’m looking at rebuilding the entire sleeve jacket of anything that’s exposed and movable with electrical tape. What I had previously thought was a good idea to keep about 5-10m of slack to allow me to easily move my server rack around is now causing me more problems as good electrical tape ain’t cheap. I have to essentially repair around 10 metres of jacket without accidentally destroying parts inside trunking that I am unable to reach. This is assuming that the 4 other untouched cables wouldn’t spontaneously crumble as well. Based on how they felt in my hand, I think it is an inevitable outcome.
I’m pretty certain that datacentre technicians reading this by chance would mock my idiotic setup and I would be inclined to join in. This is not a good day.
On the dim side of things, at least it seems like fibre optic cables are pretty hardy. My MS-A2 SFP+ connection is still working and speedtest-cli is reporting around 4000/3000 Mbps up/down speeds to my ISP (10G fibre internet plan). UDM is seeing 6000/7000, so the fibre cable is definitely compromised. :(
...
Read the original on alienchow.dev »
A fast, lightweight text editor for Markdown, JSON, YAML, and TOML files. Built with Rust and egui for a native, responsive experience.
Platform Note: Ferrite has been primarily developed and tested on Windows. While it should work on Linux and macOS, these platforms have not been extensively tested. If you encounter issues, please report them.
🤖 AI Disclosure: This project is 100% AI-generated code. All Rust code, documentation, and configuration was written by Claude (Anthropic) via Cursor with MCP tools. My role is product direction, testing, and learning to orchestrate AI-assisted development effectively. The code is reviewed and tested, not blindly accepted — but I want to be transparent about the development process. This project is partly a learning exercise in exploring how far AI-assisted development can go.
* Tree Viewer - Hierarchical view for JSON/YAML/TOML with inline editing, expand/collapse, and path copying
* Syntax Highlighting - Full-file syntax highlighting for 40+ languages (Rust, Python, JavaScript, Go, etc.)
* Code Folding - Fold detection with gutter indicators (▶/▼) for headings, code blocks, and lists (text hiding deferred to v0.3.0)
* Minimap - VS Code-style navigation panel with click-to-jump and search highlights
Native rendering of 11 diagram types directly in the preview:
✨ v0.2.2 Released: Stability & CLI improvements! CJK font support, undo/redo fixes, command-line file opening (ferrite file.md), configurable log level, and default view mode setting. See CHANGELOG.md for full details.
* Export Options - Export to HTML with themed styling, or copy as HTML
* Formatting Toolbar - Quick access to bold, italic, headings, lists, links, and more
Download the latest release for your platform from GitHub Releases.
# Download the .deb file, then install with:
sudo apt install ./ferrite-editor_amd64.deb
# Or using dpkg:
sudo dpkg -i ferrite-editor_amd64.deb
Ferrite is available on the AUR:
You can install it using your AUR helper of choice.
# Release package
yay -Sy ferrite
# Binary package
yay -Sy ferrite-bin
tar -xzf ferrite-linux-x64.tar.gz
./ferrite
# Ubuntu/Debian
sudo apt install build-essential pkg-config libgtk-3-dev libxcb-shape0-dev libxcb-xfixes0-dev
# Fedora
sudo dnf install gcc pkg-config gtk3-devel libxcb-devel
# Arch
sudo pacman -S base-devel pkg-config gtk3 libxcb
xcode-select --install
# Clone the repository
git clone https://github.com/OlaProeis/Ferrite.git
cd Ferrite
# Build release version (optimized)
cargo build --release
# The binary will be at:
# Windows: target/release/ferrite.exe
# Linux/macOS: target/release/ferrite
# Run from source
cargo run --release
# Or run the binary directly
./target/release/ferrite
# Open a specific file
./target/release/ferrite path/to/file.md
# Open multiple files as tabs
./target/release/ferrite file1.md file2.md
# Open a folder as workspace
./target/release/ferrite path/to/folder/
# Show version
./target/release/ferrite --version
# Show help
./target/release/ferrite --help
Toggle between modes using the toolbar buttons or keyboard shortcuts.
Workspace settings are stored in .ferrite/ within the workspace folder.
Access settings via Ctrl+, or the gear icon. Configure:
See ROADMAP.md for planned features and known issues.
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
# Fork and clone
git clone https://github.com/YOUR_USERNAME/Ferrite.git
cd Ferrite
# Create a feature branch
git checkout -b feature/your-feature
# Make changes, then verify
cargo fmt
cargo clippy
cargo test
cargo build
# Commit and push
git commit -m "feat: your feature description"
git push origin feature/your-feature
This project is licensed under the MIT License - see the LICENSE file for details.
...
Read the original on github.com »
A video filmed by the US immigration agent who fatally shot a woman in Minneapolis on Wednesday has emerged, showing the moments before gunfire rang out. The 47-second clip, obtained by Minnesota-based conservative news outlet Alpha News, shows Renee Nicole Good sitting behind the wheel of her car and speaking to the officer. US Vice-President JD Vance shared the footage on social media, commenting that the agent had acted in self-defence. Local officials have insisted the woman posed no danger. Good’s wife has paid tribute to the 37-year-old, saying the pair had been trying to support their neighbours when she was shot. Her death has sparked protests across the US.
President Donald Trump’s administration says Good tried to run over the US Immigration and Customs Enforcement (ICE) officer in an act of “domestic terrorism” after blocking the road and impeding the agency’s work. Democratic Minneapolis Mayor Jacob Frey has described that account as “garbage” based on the video footage. The BBC has asked the homeland security department and the White House for comment on the new video that emerged on Friday. The footage starts with the officer getting out of his car and filming Good’s vehicle and registration plate while he walks around the Honda SUV. A dog is in the backseat. Good says: “That’s fine dude. I’m not mad at you.” Her wife, Becca Good, is standing on the street filming the interaction with her mobile phone. She tells the ICE agent: “That’s OK, we don’t change our plates every morning just so you know. It will be the same plate when you come talk to us later.” She adds: “You want to come at us? You want to come at us? I say go and get yourself some lunch, big boy.”
Another agent approaches Good on the driver’s side and uses an expletive as he says: “Get out of the car.” The agent filming the clip moves in front of Good’s car as she reverses. In a chaotic few seconds, she turns the wheel to the right and pulls forwards. The camera jerks up to the sky. “Woah, woah!” a voice says, as bangs are heard. In the final part of the video, the car is seen veering down the road. The ICE agent swears. Other clips previously released from the scene show the maroon SUV crashed into the side of the road after Good was shot by the agent. The officer appears to stay on his feet, and is later seen in other videos walking toward the crashed car. Federal officials say the agent was injured and treated in hospital. The FBI is investigating the incident.
The officer who fired on Good is Jonathan Ross, a veteran ICE agent who was previously injured in the line of duty when he was struck by a car. When asked about the video at the White House on Friday, Trump said: “You have agitators and we will always be protecting ICE, and we’re always going to be protecting our border patrol and our law enforcement.” Vance reposted the video on X on Friday, and defended the agent’s actions, saying: “The reality is that his life was endangered and he fired in self-defence.” White House spokeswoman Karoline Leavitt also shared the video, saying the media had smeared an ICE agent who had “properly defended himself from being run over”. Good’s wife told local media the pair had gone to the scene of immigration enforcement activity to support neighbours. “We had whistles,” Becca Good said. “They had guns.” When speaking about Good - a mother-of-three, including a six-year-old son - she said “kindness radiated out of her”. “We were raising our son to believe that no matter where you come from or what you look like, all of us deserve compassion and kindness,” she added.
Demonstrators turned out for a third night of protests on Friday over the killing of Good. The Minneapolis Police Department told BBC News that at least 30 people were detained, cited and released after protests in the downtown area. Photos showed protesters gathered outside a hotel in the city, believed to be where some ICE agents were staying. Minnesota’s Department of Public Safety said it assisted police officers with arresting people suspected of unlawful assembly, after receiving “information that demonstrations were no longer peaceful and reports of damage to property” near the Canopy Hotel in the city’s downtown. Minnesota Governor Tim Walz earlier said he had activated the state’s National Guard to help with security around the protests.
On Friday, Minnesota officials said they would open an inquiry into the shooting after saying they had been frozen out of the federal investigation. Trump was asked by a reporter whether the FBI should share its findings with Minnesota, and said: “Well normally I would, but they’re crooked officials.” The announcement by Hennepin County’s top prosecutor Mary Moriarty and Minnesota’s Democratic Attorney General Keith Ellison came a day after the Minnesota Bureau of Criminal Apprehension said the FBI had initially pledged a joint investigation, then reversed course. One federal agency that is not looking into the shooting is the US justice department’s Civil Rights Division, which has in the past investigated alleged excessive use of force by law enforcement. But prosecutors have advised its criminal section that there will be no investigation in this case, sources told the BBC’s US partner, CBS News. Walz, a Democrat, has accused the Trump administration of blocking state officials, but Vance said it was a federal matter.
...
Read the original on www.bbc.com »