10 interesting stories served every morning and every evening.
...
Read the original on eupolicy.social »
In November, I got fed up and said screw it, I’m installing Linux. Since that article was published, I have dealt with one minor catastrophe after another. None of that has anything to do with Linux, mind you. It just meant I didn’t install it on my desktop until Sunday evening.
My goal here is to see how far I can get using Linux as my main OS without spending a ton of time futzing with it — or even much time researching beforehand. I am not looking for more high-maintenance hobbies at this stage. I want to see if Linux is a wingable alternative to Microsoft’s increasingly annoying OS.
Honestly? So far it’s been fine. Many things I expected to be difficult — like getting my Nvidia graphics card working properly — were perfectly straightforward. A few things I thought would be simple weren’t. And I’ve run into one very funny issue with a gaming mouse that only works in games. But I’ve been able to use my Linux setup for work this week, I played exactly one video game, and I even printed something from my accursed printer.
I picked CachyOS rather than a better-known distro like Ubuntu because it’s optimized for modern hardware, and I had heard that it’s easy to install and set up for gaming, which is one of the reasons I’d stuck with Windows for this long. After backing up my Windows image sometime in December (close enough), I follow the installation instructions in the Cachy wiki and download the CachyOS live image to a Ventoy USB drive, plug it into my PC, reboot into the BIOS to disable Secure Boot, reboot again into the Ventoy bootloader, and launch the CachyOS disk image.
First challenge: My mouse buttons don’t work. I can move the cursor, but can’t click on anything. I try plugging in a mouse (without unplugging the first one), same deal. Not a major issue; I can get around fine with just the keyboard. Maybe this is just an issue with the live image.
I launch the installer and am thrust into analysis paralysis. An operating system needs lots of little pieces to work — stuff you don’t even think of as individual components if you use Mac or Windows. How do you boot into the OS? What runs the desktop environment? How are windows drawn? What’s the file system? Where do you get software updates? In Mac and Windows, all those decisions are made for you. But Linux is fundamentally different: The core of the OS is the kernel, and everything else is kind of up to you. A distro is just somebody’s idea of what pieces to use. Some, like Pop!_OS and Mint, aim for simplicity and make all those choices for you (though you can still change them if you want). But Cachy is based on Arch, a notoriously DIY distro, and before I do anything else, I have to pick one of four bootloaders. I pick Limine, for reasons I can’t recall.
Next, I need to figure out where to install it. On the recommendation of Will Smith from the Dual Boot Diaries podcast — from whom the “an operating system is a bunch of pieces” thing above is largely cribbed — I install Cachy on a different physical drive from Windows, since Windows updates tend not to care if they overwrite other bootloaders.
I have a 4TB storage drive with just over a terabyte of data on it, so I shrink that partition down to 2TB using the installer’s manual partitioning interface, then (following the guide) make a 2GB boot partition and a root partition using the btrfs file system. The guide says it needs at least 20GB, so I go big and make it 100GB. This will cause a minor problem later.
Then I’m on the Cachy desktop, and my mouse buttons still aren’t working. Swapping USB ports doesn’t do anything. Plugging in my trackball doesn’t fix it either. I finally try unplugging the mouse, which makes the trackball work normally. My gaming mouse is an ancient Mad Catz Cyborg RAT 7; it turns out this is a known issue. I defer editing configuration files for now and just keep the mouse unplugged.
There are lots of ways to install apps on Linux. Sometimes you can just download them from a company’s website, or you get them from your distro’s official repositories, or GitHub, or wherever. There’s no official app store for Linux, but there are at least three projects aiming to provide universal Linux apps: Flatpak, AppImage, and Snap. Neat! Commence hodgepodging.
Cachy has a one-click gaming package install that includes the Proton compatibility layer, Steam, and Heroic (a launcher for Epic, GOG, and Amazon). I figure I ought to try one game. Then I remember that my root partition is only 100GB. I reboot back into the Cachy live image and use the Parted utility to increase it to 1TB, then make a second btrfs partition in the remaining space. I reboot, log into Epic and GOG, and start downloading The Outer Worlds, a game from 2019 I’ve been playing a bit lately. It runs fine with Proton, and I can even sync my saves from the cloud. I play it for a few minutes with my trackball, remember I hate gaming on a trackball, and plug my gaming mouse back in. It works fine as long as I’m in the game, but outside the game, mouse clicks stop working again. It makes sense — the bug is on the desktop, not in games — but it’s very funny to have a gaming mouse that only works for gaming.
The biggest issue I’ve had so far is Minecraft: Bedrock Edition. For some reason, Microsoft hasn’t prioritized making a Linux version of Bedrock. Java Edition works fine in Linux, but I play Minecraft with my kids, and they’re on Bedrock Edition on their iPads. There’s supposed to be a way to run the Android app with MCPE Launcher, but I couldn’t get it to work. There’s also a project to get the Windows version running on Proton, which will be my next step.
I’m well aware this is the honeymoon phase. And using Linux for less than a week isn’t exactly a flex. Many people use Linux. And I haven’t even tried doing anything particularly difficult, or playing a game that came out this decade. But so far it’s been a much easier transition than expected, and a quieter experience overall. My OS isn’t trying to change my browser or search engine to make some shareholder happy somewhere. It’s not nudging me to try some bullshit AI feature.
Will I go crawling back to macOS or Windows the first time I have to edit a batch of photos? Possibly! I’ll definitely boot back into Windows — or pull out a Chromebook — to play Minecraft with my kids, if I can’t get it running on Linux. And I don’t think I’ll ever be able to use Linux exclusively; my job as a reviews editor means I have to stay familiar with as many operating systems as possible. (This is a good way to drive yourself nuts.)
...
Read the original on www.theverge.com »
Oh My Zsh is still getting recommended a lot. The main problem with Oh My Zsh is that it adds a lot of unnecessary bloat that affects shell startup time.
Since OMZ is written in shell scripts, every time you open a new terminal tab, it has to interpret all those scripts. Most likely, you don’t need OMZ at all.
Here are the timings from the default setup with a few plugins (git, zsh-autosuggestions, zsh-autocomplete) that are usually recommended:
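These kinds of startup timings can be gathered with a benchmarking tool; here is a sketch of how you might measure them yourself (hyperfine and the zprof hint are suggestions, not necessarily the exact tooling used for the numbers above):

```
# Benchmark how long an interactive zsh takes to start and exit.
# -i loads ~/.zshrc, so framework and plugin overhead is included.
hyperfine --warmup 3 'zsh -i -c exit'

# To see where the time goes, zsh ships a profiler:
# put `zmodload zsh/zprof` at the top of ~/.zshrc and `zprof` at the bottom.
```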
And that’s only for the prompt and a new shell instance, without actually measuring the git plugin and virtual-env plugins (which are often used for Python). Creating a new tab also takes your terminal some time. It feels like a whole second to me when opening a new tab in a folder with a git repository.
My workflows involve opening and closing up to hundreds of terminal or tmux tabs a day. I do everything from the terminal. Just imagine if opening a new tab in a text editor took half a second every time.
Once in a while, it also checks for updates, which can take up to a few seconds when you open a new tab.
I see no reason for frequent updates to my shell configuration, especially when a lot of the third-party plugins are getting updates too. Why would you want your shell to fetch updates?
My advice is to start simple and only add what you really need.
Here is the minimal Zsh configuration that works well as a starting point:
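A sketch of what such a minimal configuration can look like, based on the options described below; the history sizes shown are illustrative:

```
# ~/.zshrc -- a minimal starting point

# Persistent history
HISTFILE=~/.zsh_history
HISTSIZE=100000
SAVEHIST=100000

# Change directories without typing cd
setopt autocd

# Enable the completion system
autoload -Uz compinit
compinit
```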
It’s already a pretty good setup, with completions!
Some details about this configuration:
* HISTSIZE and SAVEHIST set the size of your history.
* autocd allows you to change directories without typing cd.
You also want to customize your prompt. For prompts, I’m using starship, which is a fast and minimal prompt packed into a single binary.
The very old way of doing this in Oh My Zsh was to use plugins and custom themes. With starship, it’s very simple and easy now. It replaces the git, virtual environment, and language-specific plugins.
Here is my config for starship:
Because cloud services are available globally, I’ve disabled them. I don’t want them to be displayed on every prompt, since this adds visual noise.
Here is what my prompt looks like now:
This project uses both Python and Rust, which are highlighted in the prompt. When you run a command, it also shows how long it took to execute.
To enable it, add the following line to your .zshrc:
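This is the standard initialization hook from starship’s documentation:

```
# Initialize the starship prompt (at the end of ~/.zshrc)
eval "$(starship init zsh)"
```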
A lot of people use the zsh-autosuggestions plugin for history search. I find it distracting, because it shows suggestions as you type.
Instead, I prefer fzf bound to Ctrl+R for searching history. It gives an interactive fuzzy search.
To enable it, add the following lines to your .zshrc:
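A sketch of the typical setup, assuming a reasonably recent fzf (older packages ship a key-bindings script to source instead):

```
# Load fzf's zsh integration: Ctrl+R fuzzy history search,
# plus Ctrl+T file search and Alt+C directory jumping.
source <(fzf --zsh)
```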
After these changes, the startup should look as follows:
For Vim users, I also suggest enabling Vim mode in Zsh. It makes editing commands much faster.
It works the same way as in Vim. By default, zle (the library that reads the shell input) uses Emacs keybindings.
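The standard way to enable it is a single line in .zshrc:

```
# Use vi-style keybindings in zle instead of the Emacs defaults
bindkey -v
```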
After switching from OMZ a year ago, it only took me a few days to get used to the new workflow. If you’re still missing some of the plugins, you can always load them manually.
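For example, a plugin can be cloned once and sourced directly from .zshrc; the install path below is just an assumption about where you keep plugins:

```
# Clone a plugin once...
git clone https://github.com/zsh-users/zsh-syntax-highlighting ~/.zsh/zsh-syntax-highlighting

# ...then source it from ~/.zshrc
source ~/.zsh/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
```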
Some people wonder why I open so many tabs. I use tmux and a terminal-based editor (Helix). In tmux, I have popups for lazygit and the yazi file manager. Every time I need to check git history or browse files, I just open them. They open on top of the current session as an overlay. You can think of them like tool windows in an IDE.
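A sketch of how such a popup can be wired up, assuming tmux 3.2 or newer and lazygit on your PATH:

```
# Open lazygit as an overlay popup on top of the current session
tmux display-popup -E -w 90% -h 90% "lazygit"

# Or bind it to prefix+g in ~/.tmux.conf:
# bind-key g display-popup -E -w 90% -h 90% "lazygit"
```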
I also use temporary splits to quickly run the code/tests and see the output. They count as separate shell sessions. I want to see code and output side by side, but I don’t need it all the time.
...
Read the original on rushter.com »
Effective in 2026, to align with our trunk stable development model and ensure platform stability for the ecosystem, we will publish source code to AOSP in Q2 and Q4. For building and contributing to AOSP, we recommend utilizing android-latest-release instead of aosp-main. The android-latest-release manifest branch will always reference the most recent release pushed to AOSP. For more information, see Changes to AOSP.
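For example, initializing a checkout against that branch follows the usual repo workflow (a sketch; the manifest URL shown is the standard AOSP one):

```
# Initialize a client tracking the latest release pushed to AOSP, then sync
repo init -u https://android.googlesource.com/platform/manifest -b android-latest-release
repo sync -c -j8
```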
Learn about the features added in Android 16 QPR2 and how to implement them on your devices.
Android feature launch flags ensure that the AOSP development branch is stable for everyone. Contributors to AOSP can use feature launch flags to make sure only tested code is executed.
Read about the Android Open Source Project (AOSP) and learn how to develop, customize, and test your devices.
Learn how to set up your environment, download the AOSP source, build Android, and make contributions.
Find out how Android incorporates industry-leading security features to keep the Android platform and ecosystem safe.
From the basic building blocks of an Android device to the more intricate advanced features, learn about all the ways you can customize the Android OS.
Ensure your users have a coherent experience as they use your device alongside other Android devices in the ecosystem.
Learn how to develop and customize the Android Automotive platform, which runs directly on in-vehicle hardware.
Read about creating configurable virtual Android devices, managing devices on a corporate network, and delivering live content to devices through Android TV.
Help protect your device by checking the latest Android Security Bulletins. Also see the Android Automotive, Chromecast, Wear OS, Pixel, and Pixel Watch bulletins.
Learn how to set up your environment, download the source, and start contributing to AOSP through this tutorial.
Learn how the pieces fit together, from the kernel to the HALs to updatable system components.
Make sure your devices work together and are connected through Bluetooth, NFC, Wi-Fi, and telephony.
...
Read the original on source.android.com »
When it comes to performance, it’s hard to find users who are completely satisfied with Windows 11. Many believe it’s filled with unnecessary features that are difficult to disable.
According to a recent test by YouTuber TrigzZolt, Windows 11 might actually be the slowest version of Windows released in the past 25 years — yes, even slower than the often-criticized Windows Vista.
Microsoft promotes Windows 11 as its fastest operating system ever, but real-world tests tell a different story. In a detailed comparison, TrigzZolt benchmarked several Windows versions: XP, Vista, 7, 8.1, 10, and 11. The results showed that the newest version, Windows 11, performed the worst overall.
The tests covered system boot time, laptop battery performance, application launch speed, memory management, and video editing capabilities. The benchmarks were run on a Lenovo ThinkPad X220 — a model not officially compatible with Windows 11 — which could have affected some results. However, since the incompatibility mainly relates to hardware security rather than performance, the impact was likely minor.
Among all the tested systems, Windows 11 was the slowest to start up, took the longest to open simple applications like Paint and File Explorer, and performed poorly when handling video editing in OpenShot. It also showed the highest RAM usage when idle. Even without active applications, Windows 11 consumes significantly more memory due to its many background services and always-on features. Recently added AI functions have also contributed to slowing it down.
The only areas where Windows 11 performed better were file transfer speed and storage space management for system applications. Overall, the conclusion was clear — Microsoft needs to lighten Windows 11, instead of filling it with more and more features that few users actually use.
...
Read the original on www.eteknix.com »
ANALYSIS From May’s cyberattack on the Legal Aid Agency to the Foreign Office breach months later, cyber incidents have become increasingly common in UK government.
The scale extends far beyond these high-profile cases: the NCSC reports that 40 percent of attacks it managed between September 2020 and August 2021 targeted the public sector, a figure expected to grow.
Given this threat landscape, why does the UK’s flagship Cyber Security and Resilience (CSR) Bill exclude both central and local government?
Sir Oliver Dowden, former digital secretary and current shadow deputy PM, led calls in the House of Commons this week urging Labour to rethink its stance on excluding central government from the Cyber Security and Resilience (CSR) Bill.
“I would just urge the minister, as this bill passes through Parliament, to look again at that point, and I think there is a case for putting more stringent requirements on the public sector in order to force ministers’ minds on that point.”
The CSR bill was announced days into Sir Keir Starmer’s tenure as Prime Minister, aiming to provide an essential refresh of the country’s heavily outdated NIS 2018 regulations.
It proposed to bring managed service providers into scope (as was scheduled in 2022 before those plans fell by the wayside), along with datacenters, among many other things.
Parallels can be drawn with the EU’s NIS2. However, the CSR bill’s scope is narrower, excluding public authorities, unlike the EU’s equivalent regulatory refresh.
Ian Murray, minister of state across two government departments and responsible, in part, for data policy and public sector reform, thanked Dowden for his suggestions and promised to take them on board.
In responding to the shadow deputy PM, Murray also pointed to the Government Cyber Action Plan, which it launched hours before the CSR bill was set for a second reading in the Commons.
This plan will ostensibly hold government departments to the same security standards as the CSR bill… just without any of the legal obligations.
Cynics may see it as a tool to quell any criticisms of the bill’s scope not extending to central government, all without making any hard security commitments.
As Dowden noted in the Commons on Tuesday, cybersecurity is a matter that is often deprioritized quickly in government. “I welcome the minister’s comments about the obligation on the public sector. However, I would caution him that, in my experience, cybersecurity is one of those things that ministers talk about but then other priorities overtake it. And the advantage of legislative requirements is that it forces ministers to think about it.”
“I do think that more pressure needs to be brought to bear on ministers in terms of their accountability for cybersecurity. I fear that if we don’t put this into primary legislation, it’s something that can slip further and further down ministers’ in-trays. Whilst [some] ministers may have a desire to address it, other, more pressing, immediate problems distract their attention.”
One could argue that if the government is serious about holding itself to the same standards as the critical service providers in scope of the CSR bill, it would just bring itself and local authorities also into scope.
Neil Brown, director at British law firm decoded.legal, told The Register: “The argument is that government departments will be held to standards equivalent to those set out in the bill, and so do not need to be included. This does not fill me with confidence.
“If the government is going to hold itself to standards equivalent to those set out in the bill, then it has nothing to fear from being included in the bill since, by definition, it will be compliant.”
Labour MP Matt Western, who also chairs the National Security Strategy joint committee, suggested that the CSR bill would not be a cure-all, but the first of many pieces of bespoke legislation the government will pass to improve national security.
This suggests the government is considering specific legislation to shore up public sector security further down the line. Perhaps this is wishful thinking.
Brown told us that “separate legislation does not sound like a terrible idea,” noting that existing UK telecoms law is already split to similar effect.
The Telecommunications (Security) Act 2021 and the Product Security and Telecommunications Infrastructure Act 2022, for example, both seek to improve security in the telco space but target different organizations. Security requirements often differ between types of organization, so reserving a public sector-specific cybersecurity bill for later could be the way to go.
Ministers’ plans also include a provision in the bill to introduce new legislative amendments as needed, to meet the demands of a rapidly shifting cybersecurity landscape, leaving behind the Brexit-related hindrances that delayed the previous NIS updates in the first place.
However, the likelihood of being able to deliver on effective legislative amendments at pace is uncertain.
Arguably, if the government wanted to do it correctly, it would carry out a comprehensive (and lengthy) industry consultation before pushing any amendments through the two Houses, another typically arduous process.
Whether this way of iterating on existing law could balance speed with comprehensiveness is unanswered.
For Brown, the approach taken by Labour — to legislate in smaller steps — seems like the smarter choice.
“My preference is to legislate little and often, iterating as needed, rather than trying to create one piece of legislation which is all things to all people,” he says. “Legislation inevitably entails compromise, and often reflects the divergent interests of numerous interested parties (including lobbying groups) — I look, for instance, at the Online Safety Act 2023. Smaller bills/acts, more targeted in scope, responding to a clearly-articulated problem statement, seems more sensible to me.
“As to whether the CSR would result in a better outcome than NIS2, I’m afraid I do not know.”
Given the scale of the cyber threat facing the UK’s public sector, failing to account for this in the CSR bill could open the government up to intense scrutiny.
The National Audit Office’s report into UK government security improvements in January 2025 laid bare the sorry state of its systems. Of the 72 most critical systems run by various departments, 58 were reviewed; auditors found a litany of security flaws across them and noted a staggeringly slow pace at which the issues were being addressed.
That is not an assessment which goes hand-in-hand with a public sector free from regular cyberattacks.
Each time a central authority, arm’s-length body, local council, or NHS trust is compromised, the government’s decision not to include the public sector within the scope of the CSR bill hands the opposition another opportunity to question its commitment to cybersecurity.
Labour does, at least, have some ammo to fire back if this scenario were ever to become reality, with the Conservatives having failed to enact the cybersecurity recommendations from their 2022 consultation, despite having had more than two years to do so.
Even with the government’s Cyber Action Plan, its reluctance to bring the public sector into the scope of its flagship cyber legislation fails to inspire any confidence that it has serious ambitions to improve security in this problem area. ®
...
Read the original on www.theregister.com »
We discovered that entire sections describing domestic U.S. intelligence facilities were deliberately removed from two published documents, while equivalent foreign facilities remained visible. The evidence exists in an unexpected place - the PDF metadata of documents published by The Intercept in 2016, and by The Intercept and the Australian Broadcasting Corporation in a 2017 collaborative investigation. To our knowledge, this is the first time this information has been revealed publicly. The removed sections reveal the operational designations and cover name structure for domestic U.S. NRO Mission Ground Stations.
Using PDF analysis tools, we found hidden text embedded in the metadata versioning of two documents published alongside investigative stories about NSA satellite surveillance facilities. These metadata artifacts prove that earlier versions of the documents contained detailed descriptions of domestic U.S. ground stations that were systematically scrubbed before publication (not just redacted with black boxes, but with text completely removed).
What was published from the Snowden documents:
What was removed from the published documents:
The facilities themselves are not unknown. “Aerospace Data Facility” at Buckley Space Force Base is publicly acknowledged as a National Reconnaissance Office (NRO) Mission Ground Station. “Classic Wizard Reporting and Testing Center” at Naval Research Laboratory is publicly acknowledged, though its designation as a Mission Ground Station is less clear. What’s NOT public (until now) is the specific operational designations used in classified networks: “Consolidated Denver Mission Ground Station (CDMGS)” and “Potomac Mission Ground Station (PMGS).” The Snowden documents prove these are deliberate cover names (not just alternative terminology) and show exactly what’s classified and what’s not.
The first PDF document titled “Menwith satellite classification guide” has two versions in the file metadata: an older one and a newer one. The removed information exists in the earlier version, and is completely removed in the second, published version. This is not standard redaction with black boxes - the text was completely deleted from the visible document while remaining embedded in the PDF’s internal version history.
Screenshot from the first version of the document, containing the hidden text (sections 5.1.5.2 - 5.1.5.6).
Screenshot from the second version of the document, where the text is removed.
5.1.5.2 (U) Facility Name: Formally identified as the Mission Support Facility (MSF), also referred to as the Classic Wizard Reporting and Testing Center (CWRTC).

5.1.5.3 (S//TK) Cover Story: The fact of a cover story is S//TK, the cover story itself is unclassified.

5.1.5.4 (U) Software development, maintenance, testing, and communications support to a world-wide Navy communications and reporting system.

5.1.5.5 (U) Associations:

1. The term Potomac Mission Ground Station (PMGS)=S//TK
2. The term Classic Wizard Reporting and Testing Center (CWRTC)=UNCLASSIFIED
3. The term Naval Research Laboratory=UNCLASSIFIED
4. The fact that CWRTC is the cover name for the PMGS=S//TK
5. The fact that CWRTC is a communications and data relay location for the US=UNCLASSIFIED (no association w/NRO)
6. The fact that PMGS is located on the NRL=S//TK
7. The fact that the NRO has a MGS located on the NRL=S//TK
8. The fact that the CWRTC is located on the NRL=UNCLASSIFIED (no association w/NRO)
9. CWRTC associated w/NRO=S//TK
10. Association of NRO, CIA, or NSA personnel with the CWRTC=S//TK
11. Association of CWRTC with other NRO MGS=S//TK
12. Association of MSF with the NRO=S//TK
13. Association of CWRTC with the ADF=UNCLASSIFIED (no association w/NRO)

5.1.5.6 (U) Visitors: CWRTC is housed within buildings 259 and 260 on the Naval Research Laboratory in Southwest Washington, DC.
The second document “NRO SIGINT Guide for Pine Gap” also has two versions in the file metadata: an older one and a newer one.
Screenshot from the first version of the document, containing the hidden text (section 5.1.2).
Screenshot from the second version of the document, where the text is removed.
5.1.2 (S//TK) Consolidated Denver Mission Ground Station (CDMGS)
5.1.2.1 (U) Facility Name: Aerospace Data Facility (ADF)
5.1.2.2 (S//TK) Cover Story: The fact of a cover story is S//TK, the cover story itself is unclassified.
In the “hidden version” of the document “Menwith satellite classification guide” section 5.1.5 describes a facility formally identified by its cover name “Mission Support Facility (MSF),” also referred to as the “Classic Wizard Reporting and Testing Center (CWRTC).” The classified operational designation is Potomac Mission Ground Station (PMGS).
Public cover story: “Software development, maintenance, testing, and communications support to a world-wide Navy communications and reporting system.”
The document explicitly states that “the fact of a cover story is S//TK” - meaning even acknowledging that CWRTC is a cover name (rather than the facility’s real identity) is classified Secret/Talent Keyhole.
The classification guide provides a detailed breakdown of what’s public and what’s secret:
The fact that CWRTC is a cover name for PMGS = S//TK (SECRET)
Association of NRO, CIA, or NSA personnel with CWRTC = S//TK (SECRET)
This layered classification allows the facility to operate under a public identity while keeping its actual intelligence designation and function secret.
In the “hidden version” of the document “NRO SIGINT Guide for Pine Gap” section 5.1.2 describes a facility identified by the cover name “Aerospace Data Facility (ADF)”. The classified operational designation is “Consolidated Denver Mission Ground Station (CDMGS).”
Public acknowledgment: Publicly known as “Aerospace Data Facility-Colorado.” For example ADF-C is openly described in Wikipedia as an NRO facility for “command and control of reconnaissance satellites.”
What’s NOT public: The operational designation “Consolidated Denver Mission Ground Station” and the fact that “Aerospace Data Facility” is a cover name for it.
As with PMGS, the classification guide shows “the fact of a cover story is S//TK” - the deception itself is classified.
Both hidden facilities appear in an organizational classification table in the Pine Gap guide. The table’s visual structure reveals which names refer to the same facilities through deliberate formatting. Thicker borders separate distinct facility groups, while thinner borders separate columns within each group (this might be a bit hard to see from the screenshot/document). Each facility appears multiple times - once under its classified operational designation, and again under its unclassified cover name(s):
For each facility, the real operational designation (CDMGS, PMGS, HMGS, AMGS) has all agency associations classified S//TK (Secret). But the cover names have NSA and NGA associations marked as unclassified (U) or For Official Use Only (U//FOUO).
This creates a classification system where you can publicly acknowledge intelligence work using sanitized facility names, while the actual operational designations remain secret.
The domestic facilities use two cover names each (ADF + FSD for Denver; MSF + CWRTC for Potomac), while foreign facilities use only one. This suggests layered operational security for facilities on U.S. soil - possibly to navigate Congressional oversight, legal restrictions on domestic intelligence operations, or different audiences requiring different levels of plausible deniability.
There is a deliberate pattern in these two classification guides: detailed sections describing specific U.S. facilities were removed, while equivalent foreign facility sections were published intact. The 2016 Menwith Hill guide had its PMGS section (5.1.5) completely removed. The 2017 Pine Gap guide had its CDMGS section (5.1.2) completely removed. Both guides retained their detailed descriptions of foreign facilities, including operational designations, cover stories, and visitor protocols.
U.S. facilities weren’t entirely absent from the published documents. The Pine Gap classification table shows CDMGS, PMGS, ADF, and other U.S. facility designations alongside foreign facilities, revealing the structure of the Mission Ground Station network. Other published documents from both investigations mention U.S. facilities. What was specifically removed were the detailed classification guide sections that would have explained these U.S. facilities the same way Menwith Hill and Pine Gap were explained.
PDF metadata provides forensic evidence of the editing process. The Pine Gap classification guide shows timestamps from July 31, 2017, three weeks before publication. Two versions were created minutes apart using Nitro Pro 8, a commercial PDF editor: version 1 at 13:48:54 (containing the CDMGS section) and version 2 at 13:50:48 (with CDMGS removed). The Intercept and ABC published identical PDFs with the same metadata artifacts, indicating the editing was done once and the same file shared between organizations.
The Intercept, as holder of the Snowden archive, likely handled technical document preparation for publications. The Menwith Hill classification guide, published solely by The Intercept in 2016, shows more thorough metadata sanitization but the same editorial pattern - domestic facility sections removed while foreign equivalents remain.
We contacted Ryan Gallagher, the journalist who led both investigations, to ask about the editorial decision to remove these sections. After more than a week, we have not received a response.
The next part will be a technical deep-dive into PDF metadata across the published Snowden documents. We found that many documents contain multiple versions in their metadata, revealing the editorial redaction process: visible NSA agents’ usernames that were later removed, screenshots that were later redacted, and surveillance data that went through multiple rounds of redaction. We’ll also document cases of failed redactions - including one where redacted text remained fully copyable, previously reported only by a Polish cybersecurity blog.
You can extract versions from a PDF file with, for example, the pdfresurrect tool (pdfresurrect -w filename.pdf).
You can download the document versions directly here:
...
Read the original on libroot.org »
Dorota Skowronska-Krawczyk sits in her office, eyes fixed on the computer monitor in front of her. “You see it move its eye,” says the UC Irvine associate professor of physiology and biophysics, pointing to an image of a Greenland shark slowly drifting through the murky Arctic Ocean. “The shark is tracking the light—it’s fascinating.”
The video shows the longest-living vertebrate in the world—long, thick, gray body; small head; and short, rounded snout—with opaque eyes that appear lifeless, except for the parasite latched to one of its eyeballs. Scientists have long suspected the large species to be functionally blind, given the frequent presence of the parasite and its exceptionally dim and obstructed visual environment.
Now, new research from Skowronska-Krawczyk on Greenland shark vision—co-authored by University of Basel, Switzerland researchers Walter Salzburger and Lily G. Fogg, who worked on the evolutionary aspect of the study—is challenging what we know about aging, vision and longevity.
Published in Nature Communications, her findings suggest that a DNA repair mechanism enables these sharks—some of which live for 400 years—to maintain their vision over centuries with no signs of retinal degeneration and that they are well adapted to extreme low-light conditions.
Skowronska-Krawczyk, who gleans insights into the molecular mechanisms of aging by studying processes that control age-related eye diseases, attributes her interest in the visual system of the Greenland shark to a 2016 research paper by John Fleng Steffensen published in the journal Science.
“One of my takeaway conclusions from the Science paper was that many Greenland sharks have parasites attached to their eyes—which could impair their vision,” she says. “Evolutionarily speaking, you don’t keep the organ that you don’t need. After watching many videos, I realized this animal was moving its eyeballs toward the light.”
This left Skowronska-Krawczyk wanting to learn more.
The Greenland sharks used in her co-study were caught between 2020 and 2024 using scientific long lines off the coast of the University of Copenhagen’s Arctic Station on Disko Island, Greenland. Steffensen, professor of marine biology at the University of Copenhagen, and colleagues Peter G. Bushnell, who teaches at Indiana University South Bend, and Richard W. Brill, who’s based at the Virginia Institute of Marine Science, dissected and preserved the eyeballs in a fixative solution for examination.
Emily Tom, a UC Irvine Ph.D. student and physician-scientist in training who works in Skowronska-Krawczyk’s lab, recalls receiving the box that held a fixed eyeball.
“I opened the package, and there was a giant, 200-year-old eyeball sitting on dry ice just staring back at me,” the 28-year-old says with a laugh. “We’re used to working with mouse eyeballs, which are the size of a papaya seed, so we had to figure out how to scale up to a baseball-sized eyeball. Luckily, Dorota is very hands-on, both in her mentoring style and in the lab—which you don’t see a lot of with professors.”
Tom then let the eyeball defrost. “The lab smelled like a fish market,” she says.
She emphasizes that it was a careful balance of not letting it thaw too much, because once tissue samples reach room temperature, they begin to degrade. Her role involved histological and vision-specific analyses of the eyeball, which found no signs of cell death and revealed that rhodopsin (a protein essential for vision in dim light) in the shark retinas remains active and is tuned to detect blue light.
“Not a lot of people are working on sharks, especially shark vision,” Tom says. “We can learn so much about vision and longevity from long-lived species like the Greenland shark, so having the funds to do research like this is very important.”
For Skowronska-Krawczyk, the findings open the door to discovering new approaches to avoiding age-related vision loss and eradicating eye diseases such as macular degeneration and glaucoma—and to more questions about how vision evolves, the mechanisms that help keep tissues alive and healthy for many years, and how to apply this knowledge to humans.
She notes that with federal research funding under threat, future support for her studies is a concern, but she believes that “we will prevail.”
“What I love about my work is that we are the first in the world to see results—at the forefront, finding new mechanisms, rules and discoveries,” Skowronska-Krawczyk says, looking over at the paused shark on the screen. “Then, being able to share this joy with students—that’s the best part of it.”
...
Read the original on phys.org »
A few months ago, users started reporting that Ghostty was consuming absurd amounts of memory, with one user reporting 37 GB after 10 days of uptime. Today, I’m happy to say the fix has been found and merged. This post is an overview of what caused the leak, a look at some of Ghostty’s internals, and some brief descriptions of how we tracked it down.
The leak was present since at least Ghostty 1.0, but it is only recently that popular CLI applications (particularly Claude Code) started producing the correct conditions to trigger it at scale. The limited conditions that triggered the leak are what made it particularly tricky to diagnose.
The fix is merged and is available in tip/nightly releases, and will be part of the tagged 1.3 release in March.
To understand the bug, we first need to understand how Ghostty manages terminal memory. Ghostty uses a data structure called the PageList to store terminal content. The PageList is a doubly-linked list of memory pages that store the terminal content (characters, styles, hyperlinks, etc.).
The underlying “pages” are not single virtual memory pages, but contiguous blocks of memory aligned to page boundaries and composed of an even multiple of system pages.
These pages are allocated using mmap. mmap isn’t particularly fast, so to avoid constant syscalls, we use a memory pool. When we need a new page, we pull from the pool. When we’re done with a page, we return it to the pool for reuse.
The pool uses a standard size for pages. Think of it like buying standard-sized shipping boxes: most things people ship fit in a standard box, and having a standard box comes with various efficiencies.
But sometimes terminals need more memory than a standard page provides. If a set of lines has many emoji, styles, or hyperlinks, we need a larger page. In these cases, we allocate a non-standard page directly with mmap, bypassing the pool entirely. This is typically a rare scenario.
When we “free” a page, we apply some simple logic:
* If the page is ≤ standard size: return it to the pool
* If the page is > standard size: call munmap to free it
This is the core background for terminal memory management in Ghostty, and the idea itself is sound. A logic bug around an optimization is what produced the leak, as we’ll see next.
There’s one more background detail we need to cover to understand the bug: scrollback pruning.
Ghostty has a scrollback-limit configuration that caps how much history is retained. When you hit this limit, we delete the oldest pages in the scrollback buffer to free up memory.
But this often happens in a super hot path (e.g. when outputting large amounts of data quickly), and allocating and freeing memory pages is expensive, even with the pool. Therefore, we have an optimization: reuse the oldest page as the newest page when we reach the limit.
This optimization works great. It requires zero allocations and uses only some quick pointer manipulations to move the page from the front to the back of the list. We do some metadata cleanup to “clear” the page but otherwise leave the previous memory intact.
It’s fast and empirically speeds up scrollback-heavy workloads significantly.
During the scrollback pruning optimization, we always resized our page back to the standard size. But we didn’t resize the underlying memory allocation itself; we only noted the resize in the metadata. The underlying memory was still the large non-standard mmap allocation, but now the PageList thought it was standard sized.
Eventually, we’d free the page under various circumstances (e.g. when the user closes the terminal, but also other times). At that point, we’d see the page memory was within the standard size, assume it was part of the pool, and we would never call munmap on it. A classic leak.
This all seems pretty obvious, but the issue is that non-standard pages are rare by design. The goal of our design and optimizations is that standard pages are the common case and provide a fast-path. Only very specific scenarios produce non-standard pages and they’re usually not produced in large quantities.
But the rise of Claude Code changed this. For some reason, Claude Code’s CLI produces a lot of multi-codepoint grapheme output, which forces Ghostty to regularly use non-standard pages. Additionally, Claude Code uses the primary screen and produces a significant amount of scrollback output. These things combined created the perfect storm to trigger the leak in huge quantities.
The fix is conceptually simple: never reuse non-standard pages. If we encounter a non-standard page during scrollback pruning, we destroy it properly (calling munmap) and allocate a fresh standard-sized page from the pool.
The core of the fix is in the snippet below, but some extra work was needed to fix up some other bits of accounting we have:
We could’ve also reused the non-standard page and just retained the large memory size, but until we have data that shows otherwise, we’re still operating under the assumption that standard pages are the common case and it makes sense to reset back to a standard pooled page.
Other users have recommended more complex strategies (e.g. maintaining some metrics on how often non-standard pages are used and adjusting our assumptions accordingly), but more research is needed before making those changes. This change is simple, fixes the bug, and aligns with our current assumptions.
As part of the fix, I added support for virtual memory tags on macOS provided by the Mach kernel. This lets us tag our PageList memory allocations with a specific identifier that shows up in various tooling.
Now when debugging memory on macOS, Ghostty’s PageList memory shows up with a specific tag instead of being lumped in with everything else. This made it trivial to identify the leak, associate it with the PageList, and also verify that the fix worked by observing the tagged memory being properly freed.
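For example, the tagged regions of a running process can be inspected from the command line with Apple’s vmmap and footprint tools; the process name used here is an assumption:

```
# Find the running process (the binary name "ghostty" is an assumption)
pid=$(pgrep -x ghostty)

# Summarize VM regions; tagged PageList allocations show up under their
# own tag instead of being lumped into generic VM_ALLOCATE regions.
vmmap --summary "$pid"

# Track overall memory footprint while reproducing a workload
footprint "$pid"
```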
We do a lot of work in the Ghostty project to find and prevent memory leaks:
* In debug builds and unit tests, we use leak-detecting Zig allocators.
* The CI runs valgrind on our full unit test suite on every commit to find more than just leaks, such as undefined memory usage.
* We regularly run the macOS GUI via macOS Instruments to look for leaks, particularly in the Swift codebase.
* We run every GTK-related PR using Valgrind (the full GUI) to look for leaks in the GTK codepath that isn’t unit tested.
This has worked really well to date, but unfortunately it didn’t catch this particular leak because it only triggers under very specific conditions that our tests didn’t reproduce. The merged PR includes a test that does reproduce the leak to prevent regressions in the future.
This was the largest known memory leak in Ghostty to date, and the only reported leak that has been confirmed by more than a single user. We’ll continue to monitor and address memory reports as they come in, but remember that reproduction is the key to diagnosing and fixing memory leaks!
Big thanks to @grishy who finally got me a reliable reproduction so I could analyze the issue myself. Their own analysis reached the same conclusion as mine, and the reproduction let me verify both our understandings independently.
Thanks also to everyone who reported this issue with detailed diagnostics. The community’s analysis, especially around the footprint output and VM region counting, gave me important clues that pointed toward the PageList as the culprit.
...
Read the original on mitchellh.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.