10 interesting stories served every morning and every evening.
From the beginning, our goal has been to build tools that radically change what it feels like to work with Python — tools that feel fast, robust, intuitive, and integrated.
Today, we’re taking a step forward in that mission by announcing that we’ve entered into an agreement to join OpenAI as part of the Codex team.
Over the past few years, our tools have grown from zero to hundreds of millions of downloads per month across Ruff, uv, and ty. The Astral toolchain has become foundational to modern Python development. The numbers — and the impact — went far beyond my most ambitious expectations at every step of the way.
Open source is at the heart of that impact and the heart of that story; it sits at the center of everything we do. In line with our philosophy and OpenAI’s own announcement, OpenAI will continue supporting our open source tools after the deal closes. We’ll keep building in the open, alongside our community — and for the broader Python ecosystem — just as we have from the start.
I view building tools as an incredibly high-leverage endeavor. As I wrote in our launch post three years ago: “If you could make the Python ecosystem even 1% more productive, imagine how that impact would compound?”
Today, AI is rapidly changing the way we build software, and the pace of that change is only accelerating. If our goal is to make programming more productive, then building at the frontier of AI and software feels like the highest-leverage thing we can do.
It is increasingly clear to me that Codex is that frontier. And by bringing Astral’s tooling and expertise to OpenAI, we’re putting ourselves in a position to push it forward. After joining the Codex team, we’ll continue building our open source tools, explore ways they can work more seamlessly with Codex, and expand our reach to think more broadly about the future of software development.
Through it all, though, our goal remains the same: to make programming more productive. To build tools that radically change what it feels like to build software.
On a personal note, I want to say thank you, first, to the Astral team, who have always put our users first and shipped some of the most beloved software in the world. You’ve pushed me to be a better leader and a better programmer. I am so excited to keep building with you.
Second, to our investors, especially Casey Aylward from Accel, who led our Seed and Series A, and Jennifer Li from Andreessen Horowitz, who led our Series B. As a first-time, technical, solo founder, you showed far more belief in me than I ever showed in myself, and I will never forget that.
And third, to our users. Our tools exist because of you. Thank you for your trust. We won’t let you down.
...
Read the original on astral.sh »
The “advanced flow” will be available before verification enforcement begins later this year.
Google is planning big changes for Android in 2026 aimed at combating malware across the entire device ecosystem. Starting in September, Google will begin restricting application sideloading with its developer verification program, but not everyone is on board. Android Ecosystem President Sameer Samat tells Ars that the company has been listening to feedback, and the result is the newly unveiled advanced flow, which will allow power users to skip app verification.
With its new limits on sideloading, Android phones will only install apps that come from verified developers. To verify, devs releasing apps outside of Google Play will have to provide identification, upload a copy of their signing keys, and pay a $25 fee. It all seems rather onerous for people who just want to make apps without Google’s intervention.
Apps that come from unverified developers won’t be installable on Android phones—unless you use the new advanced flow, which will be buried in the developer settings.
When you sideload an app today, Android alerts you to the “unknown sources” toggle in the settings, and there’s a flow to help you turn it on. The verification bypass is different and will not be revealed to users. You have to know where it is and proactively turn it on yourself, and it’s not a quick process. Here are the steps:
Enable developer options by tapping the software build number in About Phone seven times.
In Settings > System, open Developer Options and scroll down to “Allow Unverified Packages.”
Flip the toggle and tap to confirm you are not being coerced.
Return to the unverified packages menu at the end of the security delay.
Scroll past additional warnings and select either “Allow temporarily” (seven days) or “Allow indefinitely.”
Check the box confirming you understand the risks.
You can now install unverified packages on the device by tapping the “Install anyway” option in the package manager.
The actual legwork to activate this feature takes only a few seconds, but the 24-hour countdown makes it something you cannot do on the spur of the moment. But why 24 hours? According to Samat, this is designed to combat the rising use of high-pressure social engineering attacks, in which the scammer convinces the victim they have to install an app immediately to avoid severe consequences.
You’ll have to wait 24 hours to bypass verification.
“In that 24-hour period, we think it becomes much harder for attackers to persist their attack,” said Samat. “In that time, you can probably find out that your loved one isn’t really being held in jail or that your bank account isn’t really under attack.”
But people who are sure they don’t want Google’s verification system getting in the way of sideloading any old APK they come across don’t have to wait until they encounter an unverified app to get started. You only have to select the “indefinitely” option once on a phone, and you can turn dev options off again afterward.
According to Samat, Google feels a responsibility to Android users worldwide, and things are different than they used to be with more than 3 billion active devices out there.
“For a lot of people in the world, their phone is their only computer, and it stores some of their most private information,” Samat said. “Over the years, we’ve evolved the platform to keep it open while also keeping it safe. And I want to emphasize, if the platform isn’t safe, people aren’t going to use it, and that’s a lose-lose situation for everyone, including developers.”
But what does that safety look like? Google swears it’s not interested in the content of apps, and it won’t be checking proactively when developers register. This is only about identity verification—you should know when you’re installing an app that it’s not an imposter and does not come from known purveyors of malware. If a verified developer distributes malware, they’re unlikely to remain verified. And what is malware? For Samat, malware in the context of developer verification is an application package that “causes harm to the user’s device or personal data that the user did not intend.”
So a rootkit can be malware, but a rootkit you downloaded intentionally because you want root access on your phone is not malware, from Samat’s perspective. Likewise, an alternative YouTube client that bypasses Google’s ads and feature limits isn’t causing the kind of harm that would lead to issues with verification. But these are just broad strokes; Google has not commented on any specific apps.
Google says sideloading isn’t going away, but it is changing.
Google is proceeding cautiously with the verification rollout, and some details are still spotty. Privacy advocates have expressed concern that verification will create a database that puts independent developers at risk of legal action. Samat says that Google does push back on judicial orders for user data when they are improper. The company further suggests it’s not intending to create a permanent list of developer identities that would be vulnerable to legal demands. We’ve asked for more detail on what data Google retains from the verification process and for what length of time.
There is also concern that developers living in sanctioned nations might be unable to verify due to the required fee. Google notes that the verification process may vary across countries and was not created specifically to bar developers in places like Cuba or Iran. We’ve asked for details on how Google will handle these edge cases and will update if we learn more.
Rolling out in 2026 and beyond
Android users in most of the world don’t have to worry about developer verification yet, but that day is coming. In September, verification enforcement will begin in Brazil, Singapore, Indonesia, and Thailand. Impersonation and guided scams are more common in these regions, so Google is starting there before expanding verification globally next year. Google has stressed that the advanced flow will be available before the initial rollout in September.
Google stands by its assertion that users are 50 times more likely to get malware outside Google Play than in it. A big part of the gap, Samat says, is Google’s decision in 2023 to begin verifying developer identities in the Play Store. This provided a framework for universal developer verification. While there are certainly reasons Google might like the control verification gives it, the Android team has felt real pressure from regulators in areas with malware issues to address platform security.
“In a lot of countries, there is chatter about if this isn’t safer, then there may need to be regulatory action to lock down more of this stuff,” Samat told Ars Technica. “I don’t think that it’s well understood that this is a real security concern in a number of countries.”
Google has already started delivering the verifier to devices around the world—it’s integrated with Android 16.1, which launched late in 2025. Eventually, the verifier and advanced flow will be on all currently supported Android devices. However, the UI will be consistent, with Google providing all the components and scare screens. So what you see here should be similar to what appears on your phone in a few months, regardless of who made it.
...
Read the original on arstechnica.com »
...
Read the original on github.com »
“Companies — wherever they’re based — are not allowed to sell unsafe toys to children in the UK. And society has long protected youngsters from things like alcohol, smoking and gambling. The digital world should be no different,” she said.
...
Read the original on www.bbc.com »
...
Read the original on gist.github.com »
It may seem like the miles driven by Waymo (hundreds of millions) pale in comparison to the billions of miles driven in the cities where Waymo operates, or the trillions of miles driven annually in the entire United States. When comparing the rates of two populations, however, the conclusions you can draw from data are governed by what is called statistical power. The question being answered by the Safety Impact Data Hub is: are the Waymo and benchmark crash rates different? The inputs to this calculation are the number of crashes and the number of miles driven by the Waymo and benchmark populations, and the crash counts are modeled using a Poisson distribution, the most common distribution for handling count data.
An example of this problem would be to examine the number of students that do not pass an exam. In a school district, say that 300 out of 1,000 students that take the same test do not pass (3 do not pass per 10 test takers). One could ask whether a Class A of 20 students performed differently than the overall population on this test (note we are assuming passing or not passing the test is independent of being in Class A for the sake of this simplified example). Say Class A had 10 out of 20 students that did not pass the exam (5 do not pass per 10 test takers). Class A had a not-pass rate double that of the school district. When we use a Poisson confidence interval, however, the rate of not passing in the class of 20 is not statistically different from the school district average at the 95% confidence level.

If we instead compare Class A to an entire state of 100,000 students (with the same rate of 3 not passing per 10 test takers, or 30,000 out of 100,000 not passing), the 95% confidence intervals of this comparison are almost identical to those of the comparison to the district (300 out of 1,000 test takers). This means that for this comparison, the uncertainty in the small number of observations in Class A (only 20 students) far exceeds the uncertainty in the larger population.

Take another class, Class B, that had only 1 out of 20 students not pass the test (0.5 do not pass per 10 test takers). When applying the 95% confidence intervals, Class B does have a statistically different pass rate from the district average (as well as when compared to the state). This example shows that when comparing rates of events in two populations where one population is much larger than the other (measured by test takers, or miles driven), the two things that drive statistical significance are: (a) the number of observations in the smaller population (more observations = significance sooner) and (b) bigger differences in the rates of occurrence (bigger difference = significance sooner).
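To make the arithmetic concrete, here is a minimal sketch in Python of the kind of exact Poisson interval check described above, treating the larger population’s rate as effectively known (which, as the example argues, is nearly true once the benchmark is much bigger than the class). The helper name poisson_rate_ci is my own, and the Garwood chi-square construction is one standard choice; the article does not say exactly which interval the Data Hub uses.

```python
from scipy.stats import chi2

def poisson_rate_ci(count, exposure, conf=0.95):
    """Exact (Garwood) confidence interval for a Poisson rate:
    `count` events observed over `exposure` units (test takers, miles, ...)."""
    alpha = 1 - conf
    lo = chi2.ppf(alpha / 2, 2 * count) / 2 if count > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (count + 1)) / 2
    return lo / exposure, hi / exposure

district_rate = 300 / 1000   # 3 not passing per 10 test takers

# Class A: 10 of 20 did not pass. Double the district rate, but the
# interval (~0.24 to ~0.92 per test taker) still contains 0.30,
# so the difference is not statistically significant.
print(poisson_rate_ci(10, 20))

# Class B: 1 of 20 did not pass. The interval (~0.001 to ~0.28)
# excludes 0.30, so the difference is statistically significant.
print(poisson_rate_ci(1, 20))
```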
Now consider another experiment with Waymo data: keep the number of Waymo crashes with an airbag deployment in any vehicle (34) and Waymo’s VMT (71.1 million miles) constant while assuming different orders of magnitude of miles driven in the human benchmark population (a benchmark rate of 1.649 incidents per million miles with 17.8 billion miles traveled). The point estimate is that Waymo has 71% fewer of these crashes than the benchmark. The confidence intervals (also sometimes called error bars) show uncertainty for this reduction at a 95% confidence level (95% confidence is the standard in most statistical testing). If the error bars do not cross 0%, that means that from a statistical standpoint we are 95% confident the result is not due to chance, which we also refer to as statistical significance. This “simulation” shows the effect on statistical significance of varying the VMT of the benchmark population. The comparison would be statistically significant even if the benchmark population had fewer miles driven than the Waymo population (10 million miles). Furthermore, as long as the human benchmark has more than 100 million miles, there is almost no discernible difference in the confidence intervals of the comparison. This means that comparisons in large US cities (based on billions of miles) are no different from a statistical perspective than a comparison to the entire US annual driving (trillions of miles). As in the school test example, Waymo has driven enough miles (tens to hundreds of millions) and the reductions are large enough (70%-90%) that statistical significance can be achieved.
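As a rough reproduction of that simulation, the sketch below (a hypothetical helper, not Waymo’s actual code) computes an exact confidence interval for the rate ratio by conditioning on the total crash count, under which the Waymo count is binomial and a Clopper-Pearson interval applies; this is one standard method for comparing two Poisson rates.

```python
from scipy.stats import beta

def rate_ratio_ci(k1, e1, k2, e2, conf=0.95):
    """Exact CI for the rate ratio (k1/e1) / (k2/e2), assuming k1 > 0, k2 > 0.
    Conditional on n = k1 + k2, k1 ~ Binomial(n, p) with p = r*e1 / (r*e1 + e2)."""
    alpha = 1 - conf
    n = k1 + k2
    p_lo = beta.ppf(alpha / 2, k1, n - k1 + 1)        # Clopper-Pearson bounds
    p_hi = beta.ppf(1 - alpha / 2, k1 + 1, n - k1)
    def to_ratio(p):
        return (p / (1 - p)) * (e2 / e1)              # invert p back to a rate ratio
    return to_ratio(p_lo), to_ratio(p_hi)

waymo_crashes, waymo_miles = 34, 71.1e6
bench_rate = 1.649e-6                                 # crashes per mile

for bench_miles in (1e7, 1e8, 1e9, 17.8e9):
    k2 = round(bench_rate * bench_miles)              # implied benchmark crash count
    lo, hi = rate_ratio_ci(waymo_crashes, waymo_miles, k2, bench_miles)
    print(f"{bench_miles:.0e} mi: {1 - hi:.0%} to {1 - lo:.0%} fewer crashes")
```

Consistent with the article’s claim, the lower bound of the reduction stays above 0% even with only 10 million benchmark miles, and the intervals barely move once the benchmark exceeds roughly 100 million miles.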
...
Read the original on waymo.com »
I wanted to provide an update on the situation with OpenTTD on Steam and GOG, and what the Atari re-release of Transport Tycoon Deluxe means for OpenTTD. There has been a lot of speculation and, in some cases, misinformation spread about what has happened. Our initial announcement perhaps didn’t provide as much detail as we could have, but I want to reassure OpenTTD fans that we have not been “pressured” by Atari to make these changes.
Atari approached us to explain their plans for the Transport Tycoon Deluxe re-release, and what it might mean for OpenTTD. They are keen to work with us, and hope that the new release will be welcomed by the community who have been playing OpenTTD for the past 20+ years. We discussed these plans, and we understood that a compromise would be needed to balance Atari’s commercial interests (which of course they are entitled to pursue as the rights holder) against the availability of a free, well-developed evolution of the game. The decision was made that access to OpenTTD on these platforms would be conditional, for new players, on purchasing Transport Tycoon Deluxe first, while retaining the ability to download OpenTTD for free from our web site. Some have suggested that we should have chosen to remove OpenTTD from Steam and GOG entirely, but that would have caused unnecessary disruption to the many thousands of people currently enjoying the game on these platforms, and would have potentially prevented new players from discovering the game in future.
The OpenTTD project owes a lot - indeed, it owes everything - to Transport Tycoon Deluxe and to Chris Sawyer. Without TTD, there would be no OpenTTD - it’s as simple as that.
As I covered in 2024, OpenTTD started off as a pretty much perfect clone of TTD, and though the game has evolved almost beyond belief since 2004, it is still rooted in the fundamentals of Transport Tycoon Deluxe. Agreeing to collaborate with Atari on their re-release not only enables you to go back and play the original game as it was in 1995, but helps to ensure OpenTTD remains a thriving project for years to come.
Additionally, as part of the discussions we held, Atari agreed to make a contribution towards the running costs of our server infrastructure. We are also extremely grateful for the many donations that have come in over the past few days from users - your support will help keep our services going, and it is deeply appreciated.
I understand that these changes have provoked strong feelings in the community, but I feel it important to emphasise that Atari have worked collaboratively with us, and that OpenTTD as a project retains its full independence. Even after reading this, you may still not agree with the choices that we’ve made, but I would please ask you to share your views respectfully. The Transport Tycoon community has been a source of joy in my own life for well over a quarter of a century, and it would be fantastic for us to be able to continue to enjoy these brilliant games well into the future.
...
Read the original on www.openttd.org »
Wayland has been a broad misdirection and misallocation of time and developer resources at the expense of users. With more migration from other operating systems, the pressure to fix fundamental problems has become more prominent. After 17 years of development, now is a good time to reflect on some of the larger promises that have been made around the development of Wayland as a replacement for the X11 display protocol.
If you’re not in this space, hopefully it will still be interesting as an engineering post-mortem on taking on new greenfield projects. Namely: What are the issues with what exists, why can they not be fixed, what do we hope to achieve with a new project, and how long do we expect it to take?
If you’re already familiar with X11 and Wayland feel free to skip to the next part.
For people not familiar with Linux, here’s a quick rundown of the terms in this space, roughly in the order of highest-level to lowest-level:
* Applications: the things you want to run
* Desktop Environment (DE): manages things like window theming, notifications, task bars, etc.
* Compositor: layers windows on top of each other and does animations, graphical effects, etc.
* Display Server: manages the display and abstracts away some of the hardware details so all of the above work on NVidia, AMD, Intel, etc.
* Kernel / Operating System: the lower-layer thing that manages hardware resources; for us this is Linux
The above is not a complete list, but it’s enough to give some framing for understanding that X11 is a fundamental piece of most Linux environments.
X11 is still the most popular display server in the Linux ecosystem. It was developed in the mid-1980s and, as legacy projects tend to do, has accumulated functionality that makes it difficult to maintain, according to its developers.
So, in 2008, Kristian Høgsberg started the project that became known as Wayland. Wayland (in theory) replaces the display server, as well as some parts of the compositor and desktop environment with a simpler display protocol and reference implementation. The original conceit behind Wayland is to only implement what is needed for a simple Linux desktop. The original implementation was a little over 3,000 lines of code.
It’s 2026, and Wayland has reached a market share of around 40-50%, or closer to 50-60% depending on your source. I would argue that a product that has taken 17 years to gain substantial market share has issues hindering adoption. Compare the development of Wayland to a similar project for managing audio: PipeWire. Within ~8 years, PipeWire has mostly replaced every alternative. It has been the default in Ubuntu since 22.04, roughly 4 years after it first launched!
These are the most common issues I have seen from the perspective of a user, so I will try to stay light on what I consider to be mostly irrelevant technical details and instead focus on larger issues around the rollout and design of Wayland.
The reason I use Linux and other Unix-likes is that they give me the ability to do whatever I want on my system, including making mistakes! So why is my display server telling me that certain applications that I installed and chose to run aren’t allowed to talk to each other in the name of security?
There are multiple cases of this: OBS can’t screen record (it segfaults instead), I can’t copy-paste, and I can’t see window previews unless everything implements a specific extension to the core protocol.
The actual “threat model” here is baffling and doesn’t seem to reflect a need for users. Applications are not able to see each other’s windows, but they’re not able to interact in any other way that could potentially cause problems?
I also don’t care for the “security” argument when parts of the core reference implementation are written in a memory-unsafe language. To be clear, I am not saying that software written in C is bad; I’m specifically calling out that making a security argument about software while repeating decisions of the previous (40-year-old) implementation is a bad look.
Several of the design decisions that Wayland makes are claimed in the name of performance. Namely, collapsing many layers is supposed to reduce the number of copies when moving data between different components.
However, whatever the reason, these performance gains haven’t materialized, or are so anecdotal in either direction that it’s difficult to claim a clear win over X11. In fact, you can find examples showing roughly a 40% slowdown when using Wayland over X11! I’m sure there are similar benchmarks claiming Wayland wins and vice versa (happy to link them as well if provided).
The problem is that, even if Wayland was twice as fast, it doesn’t compare to improvements in hardware over the same period. It would’ve been better to just wait! The performance improvements would have to be much more substantial for this to be a reasonable argument in favor of Wayland. The fact that the question even exists as to whether one or the other is faster after a substantial period is an obvious failure.
Additionally, those performance gains don’t matter if I’m not able to make use of them. For example, if I’m using the most popular graphics card vendor in my system, I shouldn’t expect things to work out of the box.
One rebuttal I’ve heard is that it’s not an issue with Wayland, it’s an issue with the compositor/extension/application. After all, “Wayland” isn’t a piece of software, it’s a simple protocol that other software chooses to implement!
Of course, what this means in practice is that there are multiple (usually incompatible) implementations of multiple different standards. Maybe this would be fine if the concept of a desktop operating system was completely new and unknown, but users balk when discovering things like drag and drop or screen sharing are not natively supported and are essentially still in “beta” status.
Instead of providing a better way of doing something, common features are not supported at all, and instead it’s the job of everyone else in the ecosystem to agree on a standard. That’s not a stunning argument in favor of replacing something that already exists and that has already been standardized in X11!
Wayland has been around for only 17 years, while X11 has closer to 40 years of development behind it. Things are still under development and obviously will get better, so why complain about issues that will inevitably get fixed?
Because it’s been 17 years and people are still running into major issues!
I was unpleasantly surprised when using KDE Plasma that the default display server had been changed to Wayland. I noticed very quickly on startup, when I encountered enough graphical hitches to realize I was running Wayland, and quickly switched back. Anecdotal experience is not enough to say this is a broad issue, but my point is that when an average user encounters graphical issues within 60 seconds of using it, maybe it’s not ready to be made the default! It was only within the last 6 months that OBS stopped segfaulting when trying to launch on Wayland. I assume I’m in decent company when even the developer of a major compositor is still not able to use Wayland in 2026.
The number of “simple” utilities that seem partially supported or half-baked is incredible and seems to be a massive duplication of effort. The tooling around X11 that has been developed over the last 40 years seems to have been completely dropped and no alternative has been provided. Instead of providing an obvious transition path, Wayland has introduced even more fragmentation.
Older software that has a ton of “legacy cruft” has been tested and bugs have long since been fixed. I fully believe with another 20 years of development things will be better. The problem is that I am being forced to make the switch now. See: The push from KDE and RedHat to Wayland and dropping support for older technologies.
This post probably best encapsulates the developer opinion towards users trying to migrate to the next iteration of the Linux desktop:
Maybe Wayland doesn’t work for your precious use-case. More likely, it does work, and you swallowed some propaganda based on an assumption which might have been correct 7 years ago. Regardless, I simply don’t give a shit about you anymore.
We’ve sacrificed our spare time to build this for you for free. If you turn around and harass us based on some utterly nonsensical conspiracy theories, then you’re a fucking asshole.
It’s even more ironic compared to the post made a week later, expressing the same frustration with the Rust community that people have with Wayland!
Drew has since deleted this post, so I understand if he no longer stands by those opinions. However, it’s a representative slice of developer sentiment towards users that are now being forced to use unfinished software. Entitlement and bullying of open-source maintainers is not appropriate, and it’s understandable that the developers lash out after feeling beaten down by entitled users. However, to have some sympathy on the user side, it’s likely born out of frustration of being forced to use the new hotness and then encountering breaking bugs that are impossible for the average user to work around.
It is not the fault of the original developers for building what they wanted to build. I think it’s important to keep in mind that they didn’t necessarily choose for Wayland to become as popular as it has, or the foundation for the desktop of the future.
Having Wayland as a developers-only playground is fine! Have fun building stuff! But the second actual users are forced to use it, expect them to be frustrated! At this point I consider Wayland to be a fun toy built entirely to pacify developers tired of working on a finished legacy project.
Since most of this post has been overwhelmingly negative against the development of Wayland, it’s instead better to learn as much as possible and look forward towards “what would I want to be able to do”. Windowing technology is absolutely not “done”, and instead of following other operating systems, it would be fantastic if Linux could do things no other environment could do.
For example, being able to implement non-rectangular windows, exposing context actions (similar to macOS), or making it easier to automate or script parts of the desktop environment would be incredibly exciting!
It’s difficult to overstate the amount of progress in support for gaming, new (and old) hardware, as well as the amount of overall “polish”. Every developer should be proud to be a part of that!
After 17 years, Wayland is still not ready for prime time. Notable breakage is being documented, and adoption has been correspondingly slow.
For some users the switch is seamless. Others (myself included) tend to bounce off after encountering workflow-breaking issues. I think it’s obvious at this point that the trade-offs have not been worth the hassle.
My prediction is that within the next 5 years the following will be true:
Projects will drop Wayland support and go back to X11
There will be a new display protocol that displaces both X11 and Wayland
The new display protocol will be a drop-in replacement (similar to XWayland)
Fragmentation will still be an issue (this one’s a freebie)
See you in 2030 for the year of the Linux Desktop.
Included are some of the links referenced in this post as well as some additional reading.
...
Read the original on omar.yt »
Cockpit is an interactive server admin interface. It is easy to use and very lightweight. Cockpit interacts directly with the operating system from a real Linux session in a browser.
You can install Cockpit on many Linux operating systems including Debian, Fedora and RHEL.
Cockpit makes Linux discoverable, allowing sysadmins to easily perform tasks such as starting containers, storage administration, network configuration, inspecting logs and so on.
Jumping between the terminal and the web tool is no problem. A service started via Cockpit can be stopped via the terminal. Likewise, if an error occurs in the terminal, it can be seen in the Cockpit journal interface.
You can also easily add other machines that have Cockpit installed and are accessible via SSH and jump between these hosts.
...
Read the original on github.com »
Ross and Hugo Turner are genetically identical professional adventurers. By dressing one in cutting-edge technical apparel and the other in 100-year-old heritage kit on the world’s toughest expeditions, they are conducting the ultimate A/B test on modern gear.
On the vast, blinding expanse of the Greenland Ice Cap, two figures struggle against the wind. To a distant observer, it looks like a tear in the space-time continuum.
One figure is a vision of modern mountaineering: clad in streamlined, synthetic insulation, waterproof membranes, and modern plastic boots. The other looks like he just stepped out of a sepia-toned photograph from 1914: wrapped in heavy wool, gabardine cotton, and cumbersome leather boots treated with dubbin.
They are struggling up the ice at the exact same pace. They share the same parents. They share 100% of their DNA. Ross and Hugo Turner—The Turner Twins—are professional adventurers, but they are also living, breathing scientific instruments. In a world obsessed with the “cutting edge,” they’re using their unique genetic mirror to ask a fundamental question about adventure gear: exactly how far has a century of textile innovation taken us? And what have we perhaps forgotten?
Their findings are enough to make any adventurer rethink their entire layering system.
The Turner Twins’ trajectory into the world of high-stakes exploration wasn’t born from a childhood obsession with Everest; it spawned from a near-tragedy.
At 17, just before their 18th birthday, Hugo dove into the sea and hit a sandbank, fracturing his C7 vertebra. In a week when eight other people were admitted to the same hospital with similar injuries, Hugo was the only one to walk out. The brush with permanent paralysis was a profound wake-up call.
“We had a midlife crisis at 17,” Ross explains. “Life got put in perspective.”
They needed to live and test their limits. They started by rowing the Atlantic to raise funds for Spinal Research, a UK-based charity they’ve worked with for years. But the real epiphany came on a London tube train years later, reading about the centenary of Ernest Shackleton’s Imperial Trans-Antarctic Expedition. They looked at the grayed photos of men in tweed on the ice and wondered: How did they survive?
They realized they possessed the ultimate scientific tool: a perfect control subject and a perfect variable. If they went on an expedition, and Ross wore modern kit while Hugo wore historic replicas, any difference in performance—be it core temperature, calorie burn, or cognitive function—could be attributed solely to the gear, not genetics.
Now, here’s the fun bit for gear geeks like us: it’s not cosplay; it’s rigorous historical reconstruction. “There’s no point in saying, ’Oh, it looks correct,’” Hugo says. “If it’s just an adventure in traditional dress, it’s not a proper test.”
To ensure genuine data, they dive deep into the archives of carry history. Their rule is strict: materials must be 100% natural—wool, silk, cotton, fur, and leather.
The process of recreating this kit is a monumental challenge in supply chain archaeology. They track down the original manufacturers, many of whom are still operating as heritage brands in the UK, and convince them to restart production lines that have been dormant for decades.
A prime example is the footwear. For a recent expedition recreating George Mallory’s 1924 Everest attempt, they partnered with Crockett & Jones in Northampton, the actual manufacturer of Shackleton’s boots.
“Their commercial boot, the Snowdon, was the basis for the first Mallory boot,” Ross details. But early tests on Mt. Elbrus proved disastrous; the boots weren’t warm enough for high altitude. The redesign process was agonizingly complex, involving 18 months of prototyping.
Crockett & Jones had to stop their modern production line just to build these prototypes. The final product was a masterpiece of forgotten technology—a literal “sandwich” for the feet. It featured a leather inner boot, totally covered by a thick layer of yurt felt wool, then encased in yet another leather outer boot, all sitting on a massive rubber midsole for insulation.
“The Mallory shoe went through 40 pairs of hands doing little technical details,” Ross says. “It was an enormous undertaking because boots like that nowadays simply aren’t made.”
While one twin is dressed like a 1920s gentleman explorer, both are wired up with technology that would make NASA jealous.
Gone are the days of lugging heavy machinery up mountains. Today, their biometrics are tracked by ingestible sensor pills that monitor core temperature from the inside out, patches that analyze sweat composition, and smartphone-connected hygrometers taped to their skin to measure moisture buildup under layers.
“We are taking technology way outside its comfort zone,” Ross notes. They recently adapted pediatric thermometer patches—meant to track a baby’s fever via an app—hacking the software to register temperatures from 0 to 50 degrees Celsius to create thermal maps of their bodies during climbs.
They measure cognitive decline in hypoxia, gut microbiome changes under stress, and precise metabolic rates. It’s this hard data that separates their work from mere nostalgia.
So, what happens when you pit 1924 against 2024? The results confront the very foundation of the modern outdoor industry’s marketing machine.
The assumption is that a century of GORE-TEX, PrimaLoft, and ripstop nylon has exponentially increased our safety and warmth. The Twins’ data suggests the gap is much narrower than we think.
During their simulation of Mallory’s Everest expedition, the data showed that on summit night, the average body temperature difference between the twin in modern down and the twin in complicated layers of silk, wool, and gabardine was a staggering 1.8°C.
Furthermore, they found that natural materials managed moisture in ways modern synthetics struggle to replicate. While trekking across Greenland, the twin in the woollen jumper wasn’t clammy.
“There’s so much air in between the cable knit,” Ross describes. “Your back becomes just a huge field of frost. We just wipe it down and the air kind of wicks off the back. That moisture is gone.”
The historic gear worked spectacularly well, but it came with a caveat: it required immense skill to operate.
The true takeaway from the Turner Twins’ experiments isn’t that we should all burn our technical shells and start wearing tweed. It’s that technology has made us lazy with our micro-climate regulation.
Modern gear allows for a “set and forget” mentality—one big zip to regulate temperature from -20°C to 0°C. The old explorers, by contrast, were masters of active management.
They analyzed Mallory’s layering system and found genius in its complexity. He wore six layers on his torso. He placed a silk shirt over a wool jumper, trapping air in a way that mimicked the loft of modern down.
“They were extremely well-informed and educated about how their kit worked,” Ross argues. “They knew exactly what they were doing.”
The data proves that the gear of the past is capable, but it has a narrower operating window. If you stop moving in Mallory’s kit at 8,000 meters, you will freeze quickly. Modern gear buys you a safety margin if you become static.
For the Twins, the ultimate lesson for any modern adventurer is to reclaim that lost knowledge. “Find your own limitations with the kit you have,” Ross advises. “A lot of people go on projects and pack way too much because they think, ‘I need technical item X, Y, and Z.’ Understanding that old-school mentality of what you actually need is probably more efficient.”
The Turner Twins are proving that while protective gear has evolved, the human engine hasn’t. The most crucial piece of carry isn’t something you can buy; it’s the knowledge of how to use what you have, whether it was made last year, or a century ago.
You can follow the Turner Twins’ adventures here.
...
Read the original on www.carryology.com »