10 interesting stories served every morning and every evening.
Our most intelligent open models, built from Gemini 3 research and technology to maximize intelligence-per-parameter
Build autonomous agents that plan, navigate apps, and complete tasks on your behalf, with native support for function calling. Develop applications with strong audio and visual understanding, for rich multimodal support. Create multilingual experiences that go beyond translation and understand cultural context. Improve performance for specific tasks by training Gemma using your preferred frameworks and techniques. Run models on your own hardware for efficient development and deployment.
A new level of intelligence for mobile and IoT devices. Audio and vision support for real-time edge processing. These models can run completely offline with near-zero latency on edge devices like phones, Raspberry Pi, and Jetson Nano.
Advanced reasoning for IDEs, coding assistants, and agentic workflows. These models are optimized for consumer GPUs — giving students, researchers, and developers the ability to turn workstations into local-first AI servers.
Gemma 4 models undergo the same rigorous infrastructure security protocols as our proprietary models. By choosing Gemma 4, enterprises and sovereign organizations gain a trusted, transparent foundation that delivers state-of-the-art capabilities while meeting the highest standards for security and reliability.
...
Read the original on deepmind.google »
This is the first of a series of articles in which you will learn about what may be one of the silliest, most preventable, and most costly mishaps of the 21st century, where Microsoft all but lost OpenAI, its largest customer, and the trust of the US government.
I joined Azure Core on the dull Monday morning of May 1st, 2023, as a senior member of the Overlake R&D team, the folks behind the Azure Boost offload card and network accelerator.
I wasn’t new to Azure, having run what is likely the longest-running production subscription of this cloud service, which launched in February 2010 as Windows Azure.
I wasn’t new to Microsoft either: I joined the Windows team on 1/1/2013, later helped migrate SharePoint Online to Azure, and then joined the Core OS team as a kernel engineer. There I helped improve the kernel and helped invent and deliver the container platform that supports Docker, Azure Kubernetes, Azure Container Instances, Azure App Services, and Windows Sandbox, all shipping technologies that resulted in multiple granted patents.
Furthermore, I contributed to brainstorming the early Overlake cards in 2020-2021, drafting a proposal for a Host OS Accelerator Card communication protocol and network stack, when all we had was a debugger’s serial connection. I also served as a Core OS specialist, helping Azure Core engineers diagnose deep OS issues.
I rejoined in 2023 as an Azure expert on day one, having contributed to the development of some of the technologies on which Azure relies and having used the platform for more than a decade, both outside and inside Microsoft at a global scale.
As a returning employee, I skipped the New Employee Orientation; my Global Security invite to pick up my badge was for 12 noon, but my future manager asked if I could come in earlier, as the team had their monthly planning meeting that morning.
I, of course, agreed and arrived a few minutes before 10 am at the entrance of the Studio X building, not far from The Commons on the West Campus in Redmond. A man showed up in the lobby and opened the door for me. I followed him to a meeting room through a labyrinth of corridors.
The room was chock-full, with more people on a live conference call. The dev manager, the leads, the architects, the principal and senior engineers shared the space with what appeared to be new hires and junior personnel.
The screen projected a slide where I recognized a number of familiar acronyms, like COM, WMI, perf counters, VHDX, NTFS, ETW, and a dozen others, mixed with new Azure-related ones, in an imbroglio of boxes linked by arrows.
I sat quietly at the back while a man walked the room through a big plan to port their current stack to the Overlake accelerator. As I listened, it was not immediately clear what that series of boxes with Windows user-mode and kernel components had to do with that plan.
After a few minutes, I risked a question: Are you planning to port those Windows features to Overlake? The answer was yes, or at least they were looking into it. The dev manager showed some doubt, and the man replied that they could at least “ask a couple of junior devs to look into it.”
The room remained silent for an instant. I had seen the hardware specs for the SoC on the Overlake card in my previous tenure: the RAM capacity and the power budget, which was just a tiny fraction of the TDP you can expect from a regular server CPU.
The hardware folks I had spoken with told me they could only spare 4KB of dual-ported memory on the FPGA for my doorbell shared-memory communication protocol.
Everything was nimble, efficient, and power-savvy, and the team I had joined 10 minutes earlier was seriously considering porting half of Windows to that tiny, fanless, Linux-running chip the size of a fingernail.
That felt like Elon talking about colonizing Mars: just nuke the poles, then grow an atmosphere! Easier said than done, huh?
That entire 122-strong org was knee-deep in impossible ruminations involving porting Windows to Linux to support their existing VM management agents.
The man was a Principal Group Engineering Manager overseeing a chunk of the software running on each Azure node; his boss, a Partner Engineering Manager, was in the room with us, and they really contemplated porting Windows to Linux to support their current software.
At first, I questioned my understanding. Was that serious? The rest of the talk left no doubt: the plan was outlined, and the dev leads were tasked with contributing people to the effort. It was immediately clear to me that this plan would never succeed and that the org needed a lot of help.
That first hour in the new role left me with a mix of strange feelings, stupefaction, and incredulity.
The stack, I later learned, was hitting its scaling limits on a 400-watt Xeon at just a few dozen VMs per node, a far cry from the 1,024-VM limit I knew the hypervisor was capable of. It was also a noisy neighbor, consuming so many resources that it caused jitter observable from the customer VMs.
There is no dimension in the universe where this stack would fit on a tiny ARM SoC and scale up by many factors. It was not going to happen.
I have seen a lot in my decades of industry (and Microsoft) experience, but I had never seen an organization so far from reality. My day-one problem was therefore not to ramp up on new technology, but rather to convince an entire org, up to my skip-skip-level, that they were on a death march.
Deep down, I knew it was going to be a fierce uphill battle. As you will later learn, it didn’t go well.
I spent the next few days reading more about the plans, studying the current systems, and visiting old friends in Core OS, my alma mater. I was lost away from home in a bizarre territory where people made plans that didn’t make sense with the aplomb of a drunk LLM.
I notably spent more than 90 minutes chatting in person with the head of the Linux System Group, a solid scholar with a PhD from INRIA, who was among the folks who hired me on the kernel team years earlier.
His org is responsible for delivering Mariner Linux (now Azure Linux) and the trimmed-down distro running on the Overlake / Azure Boost card. He kindly answered all my questions, and I learned that they had identified 173 agents (one hundred seventy-three) as candidates for porting to Overlake.
I later researched this further and found that no one at Microsoft, not a single soul, could articulate why up to 173 agents were needed to manage an Azure node, what they all did, how they interacted with one another, what their feature set was, or even why they existed in the first place.
Azure sells VMs, networking, and storage at the core. Add observability and servicing, and you should be good. Everything else, SQL, K8s, AI workloads, and whatnot all build on VMs with xPU, networking, and storage, and the heavy lifting to make the magic happen is done by the good Core OS folks and the hypervisor.
How the Azure folks came up with 173 agents will probably remain a mystery, but it takes a serious amount of misunderstanding to get there, and this is also how disasters are built.
Now, fathom for a second that this pile of uncontrolled “stuff” is orchestrating the VMs running Anthropic’s Claude, what’s left of OpenAI’s APIs on Azure, SharePoint Online, the government clouds and other mission-critical infrastructure, and you’ll be close to understanding how a grain of sand in that fragile pileup can cause a global collapse, with serious National Security implications as well as potential business-ending consequences for Microsoft.
We are still far from the vaporized trillion in market cap, my letters to the CEO, to the Microsoft Board of Directors, and to the Cloud + AI EVP and their total silence, the quasi-loss of OpenAI, the breach of trust with the US government as publicly stated by the Secretary of Defense, the wasted engineering efforts, the Rust mandate, my stint on the OpenAI bare-metal team in Azure Core, the escort sessions from China and elsewhere, and the delayed features publicly implied as shipping since 2023, before the work even began.
If you’re running production workloads on Azure or relying on it for mission-critical systems, this story matters more than you think.
...
Read the original on isolveproblems.substack.com »
Tailscale should feel nearly invisible when it’s connecting you and all your devices together. But on some MacBooks, for a time, it could be a little too invisible. We have two fixes for it: one small and slightly quirky, and another really useful one, available now on macOS.
The small, quirky fix might soon become a thing of the past for the vast majority of Tailscale users on Macs. I wanted to document it here: to help other developers, to mark this moment in time, and to quietly crow about our windowed macOS interface now being generally available.
So here’s the issue we had with Tailscale’s icon slipping into darkness, its little work-around, and then our greater solution.
At its debut on macOS, Tailscale was a command-line tool and a menu bar utility. Some MacBooks, starting with 2021 MacBook Pro models, have a notch in the top-middle of their display. And depending on how many other apps with menu bar icons are running, the Tailscale app’s icon can be hidden inside that notch.
Apple, a company that traditionally favors simple functionality over dense settings, does not offer users, or developers, a path out of the darkness. If there are more menu bar icons than there is space to the right of the notch, the extra menu bar items simply disappear into the notch-y ether. If you don’t see it, you can’t click it. There is no notification to the user, no overflow section, no option to rearrange the menu bar items.
As of this writing, Apple has some indirect work-arounds, like pushing more of its own system icons into a revamped Control Center, and offering a somewhat inelegant “Scale to fit below camera” option. Third-party menu-bar-managing apps like ICE and Bartender can help, but they add complications and overhead.
“We don’t have any control over where things get rendered in the menu bar,” said one Tailscale engineer, who asked to go nameless so as to share their honest opinion. “You just say, ‘I want to be a menu bar app.’ They shove it up there, and that’s it, you end up where you end up.”
Given this there-or-not-there behavior, Tailscale developers received a number of bug reports from users when, after the notched MacBooks’ debut, their Tailscale icons fell into the middle-screen distance. “They were like, ‘Actually, I can’t find my Tailscale. It’s gone. It didn’t start,’” the engineer said. “We’re like, ‘No, it’s there, it’s just hiding behind the notch.’ But we kind of got sick of that.”
Mac menu bar icons may not know they are trapped inside the no-pixel phantom zone, but they can report that something is blocking them. Using data from occlusionState, the Tailscale app can see that its icon is in mid-bar limbo.
And while it cannot move, it can speak. Specifically, a pop-up message can say:
This affable warning is not perfect, by any means. The notch warning can be inadvertently triggered by other display quirks, like opening and closing the MacBook lid, moving between monitors, or some combination of the two. But it helped triage the “Where are my Tailscale settings?” issue for a while.
Apple could certainly make changes to prevent this from being an issue at all. The system could refuse to render menu bar icons in the notch area. An overflow mechanism could stack the icons that would otherwise drop into the negative notch zone. Or developers could be given more information and tools about their icons’ notch-itive states.
In the meantime, here’s a look at the Swift code that let our app know it should chirp a bit when hidden. It should be unnecessary with the new windowed app—unless you enable the “Hide Dock icon” option in the windowed client options, in which case it might still call out its hidden nature.
As we noted at its September beta release, a windowed version of Tailscale’s macOS app doesn’t replace the menu bar app, but runs alongside it. It can be pulled up from the Dock or a Spotlight search, and makes a lot of Tailscale data and features more accessible.
The windowed interface, enabled by default starting with version 1.96.2 of our macOS client, offers:
* A searchable list of tailnet devices and their connection status
* Quick actions to ping devices, copy IP addresses, and send files through Taildrop
* Easy access to exit nodes, searchable and with one recommended based on latency, performance, and location
* A red dot on the Dock icon to note critical errors
* A “mini player” that shrinks Tailscale down to the bare minimum
* A product tour of all these things upon installing/updating
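The exit node recommendation above is described only loosely (latency, performance, location). As a purely illustrative sketch, with invented node fields and weights rather than Tailscale’s actual heuristic, a latency-and-load-weighted pick might look like:

```python
from dataclasses import dataclass

@dataclass
class ExitNode:
    name: str
    latency_ms: float  # measured round-trip time to the node
    load: float        # 0.0 (idle) .. 1.0 (saturated); hypothetical field
    nearby: bool       # roughly the same region as the client

def recommend(nodes: list[ExitNode]) -> ExitNode:
    # Lower score wins: penalize latency, inflate it under load,
    # and give a small bonus to nearby nodes. Weights are invented.
    def score(n: ExitNode) -> float:
        return n.latency_ms * (1.0 + n.load) * (0.8 if n.nearby else 1.0)
    return min(nodes, key=score)
```

A real implementation would also need to probe nodes periodically and re-rank as conditions change.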
Let us know what you think of the new interface so we can make it better. We’re working on a comparable UI for Windows devices. And we’re always looking for ways to bring a little bit of functional whimsy to our software.
...
Read the original on tailscale.com »
Software development is changing, and so is Cursor.
In the last year, we moved from manually editing files to working with agents that write most of our code. How we create software will continue to evolve as we enter the third era of software development, where fleets of agents work autonomously to ship improvements.
We’re building toward this future, but there is a lot of work left to make it happen. Engineers are still micromanaging individual agents, trying to keep track of different conversations, and jumping between multiple terminals, tools, and windows.
We’re introducing Cursor 3, a unified workspace for building software with agents. The new Cursor interface brings clarity to the work agents produce, pulling you up to a higher level of abstraction, with the ability to dig deeper when you want. It’s faster, cleaner, and more powerful, with a multi-repo layout, seamless handoff between local and cloud agents, and the option to switch back to the Cursor IDE at any time.
When we started building Cursor, we forked VS Code instead of building an extension so we could shape our own surface. With Cursor 3, we took that a step further by building this new interface from scratch, centered around agents.
The new interface is inherently multi-workspace, allowing humans and agents to work across different repos.
Working with agents is now much easier. All local and cloud agents appear in the sidebar, including the ones you kick off from mobile, web, desktop, Slack, GitHub, and Linear.
Cloud agents produce demos and screenshots of their work for you to verify. This is the same experience you get at cursor.com/agents, now integrated into the desktop app.
We made moving agents between environments really fast.
Move an agent session from cloud to local when you want to make edits and test it on your own desktop. Composer 2, our own frontier coding model with high usage limits, is great for iterating quickly.
In the reverse direction, you can move an agent session from local to cloud to keep it running while you’re offline, or so that you can move on to the next task. This is especially useful for longer-running tasks that would otherwise get interrupted when you close your laptop.
The new diffs view allows you to edit and review changes faster with a simpler UI. When you’re ready, you can stage, commit, and manage PRs.
Alpha users told us that a lot of what they like about Cursor 3 is the way it combines the best parts of the IDE with more recent capabilities we’ve shipped in an agent-first interface.
Dive deeper anytime by viewing files and jumping to definitions in the editor, with full LSP support.
Cursor can use the built-in browser to open, navigate, and prompt against local websites.
Browse hundreds of plugins that extend agents with MCPs, skills, subagents, and more. Install with one click, or set up your own team marketplace of private plugins.
With Cursor 3, we have the foundational pieces in place—model, product, and runtime—to build more autonomous agents and better collaboration across teams. We will also continue to invest in the IDE until codebases are self-driving.
This won’t be the last time the interface for building software changes. More powerful coding models will unlock new interaction patterns. We are excited to continue to build, simplify, and transform Cursor to be the best way to code with AI.
Upgrade Cursor, and type Cmd+Shift+P -> Agents Window to try the new interface. Or learn more in our docs.
...
Read the original on cursor.com »
The free AI already on your Mac. Every Mac with Apple Silicon has a built-in LLM. Apple locked it behind Siri. apfel sets it free - as a CLI tool, an OpenAI-compatible server, and a chat.
The AI is already installed on your Mac. Apple ships it with macOS. apfel just gives you a way to talk to it - from your terminal, from your code, from anywhere.
No API keys. No subscriptions. No per-token billing. It’s your hardware - use it.
Every token generated locally on your Apple Silicon. Nothing leaves your machine. Ever.
Context window for input and output combined. Enough for most single-turn tasks and short chats.
The model under the hood
Apple ML Research
Three ways to use it.
CLI tool, HTTP server, or interactive chat. Pick the one that fits.
Pipe-friendly and composable. Works with jq, xargs, and your shell scripts. stdin, stdout, JSON output, file attachments, proper exit codes.
apfel “What is the capital of Austria?”
The capital of Austria is Vienna.
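Because the CLI uses stdin/stdout and proper exit codes, it composes with other tools in the usual UNIX way. A minimal Python wrapper might look like this (the `binary` parameter is only there so the sketch can be exercised without apfel installed; the real tool takes the prompt as its argument, as shown above):

```python
import subprocess

def ask(prompt: str, binary: str = "apfel", timeout: float = 60.0) -> str:
    """Send a prompt to the local model via the CLI and return its reply.

    Relies on the tool writing its answer to stdout and signaling
    failure with a non-zero exit code, which raises CalledProcessError.
    """
    result = subprocess.run(
        [binary, prompt],
        capture_output=True, text=True, timeout=timeout,
    )
    result.check_returncode()
    return result.stdout.strip()
```

With apfel installed, `ask("What is the capital of Austria?")` would return the answer shown above.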
Drop-in replacement at localhost:11434. Point any OpenAI SDK at it and go. Streaming, tool calling, CORS, response formats.
Multi-turn conversations with automatic context management. Five trimming strategies. System prompt support. All on your Mac.
> How do I reverse a list in Python?
Apple built an LLM into your Mac. apfel gives it a front door.
Starting with macOS 26 (Tahoe), every Apple Silicon Mac includes a language model as part of Apple Intelligence. Apple exposes it through the FoundationModels framework - a Swift API that gives apps access to SystemLanguageModel. All inference runs on the Neural Engine and GPU. No network calls, no cloud, no API keys. The model is just there.
But Apple only uses it for Siri
Out of the box, the on-device model powers Siri, Writing Tools, and system features. There is no terminal command, no HTTP endpoint, no way to pipe text through it. The FoundationModels framework exists, but you need to write a Swift app to use it. That is what apfel does.
apfel is a Swift 6.3 binary that wraps LanguageModelSession and exposes it three ways: as a UNIX command-line tool with stdin/stdout, as an OpenAI-compatible HTTP server (built on Hummingbird), and as an interactive chat with context management.
It handles the things Apple’s raw API does not: proper exit codes, JSON output, file attachments, five context trimming strategies for the small 4096-token window, real token counting via the SDK, and conversion of OpenAI tool schemas to Apple’s native Transcript.ToolDefinition format.
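The page doesn’t say what the five trimming strategies are. As one hedged example of the kind of work any such strategy has to do against the 4096-token window, here is a sliding-window trim that keeps the system prompt and drops the oldest turns until the rest fits (the token counter below is a crude stand-in; apfel does real token counting via the SDK):

```python
def rough_tokens(message: dict) -> int:
    # Crude stand-in: roughly 4 characters per token.
    return max(1, len(message["content"]) // 4)

def trim_history(messages: list[dict], budget: int = 4096) -> list[dict]:
    """Sliding-window trim: always keep a leading system prompt,
    then keep the newest turns that still fit the token budget."""
    system = [m for m in messages[:1] if m.get("role") == "system"]
    rest = messages[len(system):]
    used = sum(rough_tokens(m) for m in system)
    kept: list[dict] = []
    for m in reversed(rest):  # walk newest-first
        cost = rough_tokens(m)
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    kept.reverse()  # restore chronological order
    return system + kept
```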
Shell scripts in the demo/ folder. Install apfel first, then grab the ones you want.
Natural language to shell command. Say what you want, get the command.
Pipe chains from plain English. awk, sed, sort, uniq - generated for you.
Explain any command, error message, or code snippet in plain English.
What’s this directory? Instant project orientation for any codebase.
Change one URL. Keep your code.
apfel speaks the OpenAI API. Any client library, any framework, any tool that talks to OpenAI can talk to your Mac’s AI instead. Just change the base URL.
from openai import OpenAI

# Just change the base_url. That's it.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="unused",  # no auth needed
)

resp = client.chat.completions.create(
    model="apple-foundationmodel",
    messages=[{
        "role": "user",
        "content": "What is 1+1?",
    }],
)

print(resp.choices[0].message.content)
...
Read the original on apfel.franzai.com »
The race is on to test new vehicles in the underground Large Hadron Collider tunnel, ahead of major works starting this summer
Update: did you enjoy our April Fool’s day story? While we won’t be racing karts through the tunnel, we are gearing up for major works to prepare for HiLumi LHC and its new technologies. The image is based on a real 1991 CERN image of the monorail used to transport people and equipment in the tunnel during the lifetime of the Large Electron-Positron Collider (LEP), which preceded the LHC.
Following on from the robotic mice, CERN engineers have now developed a super-charged kart to enable workers to race through the Large Hadron Collider (LHC) underground tunnel during the upcoming major works, starting this summer.
The karts promise a power boost to activities during this period, known as Long Shutdown 3 (LS3), which will see the LHC transformed into the High-Luminosity LHC. These vehicles will replace the bicycles that were used until now to travel through the 27-km underground tunnel, enabling engineers and technicians to speed to areas where improvements to the accelerator are required.
“Each kart is turbo-boosted by 64 superconducting engines,” explains project leader Mario Idraulico. “When the engines are cooled to below their critical temperatures, the Meissner effect levitates the karts, allowing them to zip through the tunnels at high speeds and, mamma mia, they’re super!”
Early tests have been promising, and the next steps involve testing different kart designs in an underground race. Safety coordinator Luigi Fratello has ensured that each driver will be issued with Safety and Health Equipment for Long and Limited Stays (SHELLS), although his response to drivers wanting bananas in the tunnel was “Oh no!”
These karts, although developed to support CERN’s fundamental research programme, show clear applications for society. CERN’s Knowledge Transfer Group has begun discussions with European startup company Quantum Mushroom to explore aerospace applications and powering for next-generation anti-gravity vehicles.
Surprisingly, the kart project began from a collaboration between CERN engineers and onsite nursery school children — one example of CERN’s commitment to inspiring future generations. “We’re thrilled that the children’s kart designs were the inspiration for the engineered karts,” exclaimed schoolteacher Yoshi Kyouryuu, mid-way through painting spots on eggs for an Easter egg hunt.
“As educators, we promote curiosity from a young age, which is why we paint question marks all over our yellow school walls,” explained school director, Rosalina Pfirsich, looking up from her storybook. “With all the contributions the children have made to the upcoming High-Luminosity LHC project, we’ve taken to calling them Luma!”
Find out more about the High-Luminosity LHC project.
...
Read the original on home.web.cern.ch »
Your directory for European software, products and services. For enhanced privacy, quality, and a strong Europe.
Select your currently used services and instantly receive tailored European solutions — secure, privacy-compliant, and powerful.
What Europe does better
EU companies are subject to the world’s strictest environmental regulations. European products are designed for longevity — less throwaway culture, more responsibility.
Made in Europe has stood for top quality and durability for decades. Strict standards guarantee fair working conditions, while shorter supply chains measurably reduce CO₂.
EU providers are subject to the GDPR — the strictest data protection law worldwide. Your data belongs to you, not advertising networks. Note: US software can be compelled by the CLOUD Act to surrender data to US authorities — even if servers are located in Europe.
...
Read the original on only-eu.eu »
Economics and similar, for the sleep-deprived
A subtle change has been made to the comments links, so they no longer pop up. Does this in any way help with the problem about comments not appearing on permalinked posts, readers?
Update: seemingly not
Update: Oh yeah!
Update, September 2008. Hullo there Paul Krugman readers. Yes, I did say “Good ideas do not need lots of lies told about them in order to gain public acceptance”, and as a general maxim I wholeheartedly recommend it. I don’t necessarily, however, either endorse or whatever-the-opposite-of-endorse the specific use of that maxim in the context of Prof. Krugman’s post about the Paulson bailout plan; I don’t actually have a fully formed view about that plan. I do, however, wholeheartedly endorse “Development, Geography and Economic Theory”, which I think is a terribly underrated economics book, and am at this moment rather starstruck at having one of my essays admired by the nearest modern equivalent to my hero JK Galbraith. Anyway, as you were; by way of context, the post below was written just as a lot of high-profile commentators like Thomas Friedman were abandoning their support for the Iraq War.
The D-Squared Digest One Minute MBA - Avoiding Projects Pursued By Morons 101
Literally people have been asking me: “How is it that you were so amazingly prescient about Iraq? Why is it that you were right about everything at precisely the same moment when we were wrong?” No honestly, they have. I’d love to show you the emails I’ve received, there were dozens of them, honest. Honest. Anyway, I note that “errors of prewar planning” is now pretty much a mainstream stylised fact, so I suspect that it might make some small contribution to the commonweal if I were to explain how it was that I was able to spot so early that this dog wasn’t going to hunt. I will struggle manfully with the savage burden of boasting, self-aggrandisement and ego-stroking that this will necessarily involve. It’s been done before, although admittedly by a madman in the process of dying of syphilis of the brain. Sorry, where was I?
Anyway, the secret to every analysis I’ve ever done of contemporary politics has been, more or less, my expensive business school education (I would write a book entitled “Everything I Know I Learned At A Very Expensive University”, but I doubt it would sell). About half of what they say about business schools and their graduates is probably true, and they do often feel like the most colossal waste of time and money, but they occasionally teach you the odd thing which is very useful indeed. Here are a few of the ones I learned which I considered relevant to judging the advisability of the Second Iraq War.
Good ideas do not need lots of lies told about them in order to gain public acceptance. I was first made aware of this during an accounting class. We were discussing the subject of accounting for stock options at technology companies. There was a live debate on this subject at the time. One side (mainly technology companies and their lobbyists) held that stock option grants should not be treated as an expense on public policy grounds; treating them as an expense would discourage companies from granting them, and stock options were a vital compensation tool that incentivised performance, rewarded dynamism and innovation and created vast amounts of value for America and the world. The other side (mainly people like Warren Buffett) held that stock options looked awfully like a massive blag carried out by management at the expense of shareholders, and that the proper place to record such blags was the P&L account.
Our lecturer, in summing up the debate, made the not unreasonable point that if stock options really were a fantastic tool which unleashed the creative power in every employee, everyone would want to expense as many of them as possible, the better to boast about how innovative, empowered and fantastic they were. Since the tech companies’ point of view appeared to be that if they were ever forced to account honestly for their option grants, they would quickly stop making them, this offered decent prima facie evidence that they weren’t, really, all that fantastic.
Application to Iraq. The general principle that good ideas are not usually associated with lying like a rug1 about their true nature seems to have been pretty well confirmed. In particular, however, this principle sheds light on the now quite popular claim that “WMDs were only part of the story; the real priority was to liberate the Iraqis, which is something that every decent person would support”.
Fibbers’ forecasts are worthless. Case after miserable case after bloody case we went through, I tell you, all of which had this moral. Not only will people who want a project tend to make inaccurate projections about its possible outcomes; attempts to “shade” downward a fundamentally dishonest set of predictions are also futile. If you have doubts about the integrity of a forecaster, you can’t use their forecasts at all. Not even as a “starting point”. By the way, I would just love to get hold of a few of the quantitative numbers from documents prepared to support the war and give them a quick run through Benford’s Law.
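For the curious: Benford’s Law predicts that in many naturally occurring datasets the leading digit d appears with frequency log10(1 + 1/d), so roughly 30% of figures start with a 1; fabricated numbers often deviate from this. A minimal first-digit check might look like:

```python
import math
from collections import Counter

def benford_expected(d: int) -> float:
    # P(leading digit = d) under Benford's Law
    return math.log10(1 + 1 / d)

def first_digit(x: float) -> int:
    # Strip leading zeros and the decimal point, then read the first digit.
    s = f"{abs(x):.15g}".lstrip("0.")
    return int(s[0])

def benford_deviation(values) -> float:
    """Mean absolute gap between observed and expected first-digit
    frequencies. A large gap is a weak red flag, not proof of fraud;
    small samples deviate naturally."""
    digits = [first_digit(v) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)
    return sum(
        abs(counts.get(d, 0) / n - benford_expected(d)) for d in range(1, 10)
    ) / 9
```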
Application to Iraq. This was how I decided that it was worth staking a bit of credibility on the strong claim that absolutely no material WMD capacity would be found, rather than “some” or “some but not enough to justify a war” or even “some derisory but not immaterial capacity, like a few mobile biological weapons labs”. My reasoning was that Powell, Bush, Straw, etc, were clearly making false claims and therefore ought to be discounted completely, and that there were actually very few people who knew a bit about Iraq but were not fatally compromised in this manner who were making the WMD claim. Meanwhile, there were people like Scott Ritter and Andrew Wilkie who, whatever other faults they might or might not have had, did not appear to have told any provable lies on this subject and were therefore not compromised.
The Vital Importance of Audit. Emphasised over and over again. Brealey and Myers has a section on this, in which they remind callow students that, like backing up one’s computer files, this is a lesson that everyone seems to have to learn the hard way. Basically, it’s been shown time and time again: companies which do not audit completed projects in order to see how accurate the original projections were tend to get exactly the forecasts and projects that they deserve. Companies which have a culture where there are no consequences for making dishonest forecasts get the projects they deserve. Companies which allocate blank cheques to management teams with a proven record of failure and mendacity get what they deserve.
I hope I don’t have to spell out the implications of this one for Iraq. Krugman has gone on and on about this, seemingly with some small effect these days. The raspberry road that led to Abu Ghraib was paved with bland assumptions that people who had repeatedly proved their untrustworthiness, could be trusted. There is much made by people who long for the days of their fourth form debating society about the fallacy of “argumentum ad hominem”. There is, as I have mentioned in the past, no fancy Latin term for the fallacy of “giving known liars the benefit of the doubt”, but it is in my view a much greater source of avoidable error in the world. Audit is meant to protect us from this, which is why audit is so important.
And so the lesson ends. Next week, perhaps, a few reflections on why it is that people don’t support the neoconservative project to bring democracy to the Middle East (a trailer for those who can’t wait; the title is going to be something like “If You Tell Lies A Lot, You Tend To Get A Reputation As A Liar”). Mind how you go.
1 We also learned in accounting class that the difference between “making a definite single false claim with provable intent to deceive” and “creating a very false impression and allowing it to remain without correcting it” is not one that you should rely upon to keep you out of jail. Even if your motives are noble.
this item posted by the management 5/27/2004 11:57:00 PM
...
Read the original on blog.danieldavies.com »
The lunar-bound astronauts of NASA’s Artemis II mission will go boldly where none have gone before, thanks to the space agency’s first-ever flight of a functional toilet around the moon.
On their voyages to the moon, NASA’s astronauts are finally getting some creature comforts of terrestrial toilets—such as having a door and being able to pee and poop simultaneously
When astronauts first made their way to the moon, they did so without a toilet. The Apollo program’s system of plastic bags and funnels was so unwieldy and messy that crew members found it “objectionable” and “distasteful,” according to a subsequent NASA report. But now, more than a half century since the last crewed lunar voyages and their toilet troubles, the four astronauts of NASA’s Artemis II mission will take flight with a more commodious bathroom in tow.
The space agency’s Universal Waste Management System (UWMS)—more colloquially called just “the toilet”—was created to solve longstanding potty problems faced by astronauts and to offer a more familiar bathroom experience on the final frontier. Lunar astronauts will now be spoiled by amenities that include handles to help them stay steady in microgravity, a system that can handle both urine and feces simultaneously, urine-collection devices that work for both male and female astronauts, and even a door for the helpful illusion of privacy in a cramped crew capsule.
The new design is more than a decade in the making. Space infrastructure company Collins Aerospace first entered into a contract with NASA to develop the project in 2015. In that time, project scientists have overcome fundamental issues with past space toilets while imagining and meeting future needs so that the same system used by Artemis II astronauts could be adapted for moon and Mars missions in decades to come.
“I think of waste management as an evolution of design,” says Melissa McKinley, project manager and principal investigator for NASA’s UWMS team. “The toilet has built on designs from Apollo, the space shuttle and even the International Space Station…. There is so much learning that goes into it.”
In the tight quarters of Apollo crew capsules, astronauts strapped adhesive-rimmed plastic bags and tubes to themselves whenever they had to defecate or urinate. Attaching the awkward bags was difficult enough in weightless conditions, but the astronauts also had to manually mix in a packet of germicide to prevent the buildup of bacteria and gases within the sealed bag.
The system was infamously prone to leaks, such as during the Apollo 10 mission, when astronauts noticed “a turd floating through the air,” and during the Apollo 8 mission, when the crew had to chase down blobs of vomit and feces that escaped into the cabin. A NASA report released after the end of the Apollo missions noted that waste disposal “must be given poor marks” when it comes to crew satisfaction.
“I used to want to be the first man to Mars,” said astronaut Ken Mattingly during the Apollo 16 mission, after describing the system. “This has convinced me that, if we got to go on Apollo, I ain’t interested.”
Based on these scathing reviews, NASA scientists knew they had to create a more streamlined system. After all, “the toilet is a ‘mission-critical’ system, so if it breaks down, the whole mission is in jeopardy,” says David Munns, a science and technology historian at the City University of New York.
So before the space shuttle program, they engineered a toilet that could work in a low-gravity environment. It looked much like a typical terrestrial toilet but required the astronauts to strap in and use a vacuum hose to prevent waste from floating back up into the spacecraft.
Early toilets on both the space shuttle and the International Space Station (ISS) used this vacuum system—with the key difference being that the ISS model recycled some wastewater, whereas the space shuttle’s version vented it into space. Both systems were significantly improved over the “toilets” of the Apollo years but still had big limitations. They weren’t built with female anatomy in mind and couldn’t process urine and feces at the same time, and while they provided some semblance of privacy with a curtain, there wasn’t yet a solid door.
The UWMS is the aerospace-engineered culmination of all these pent-up problems with the user experience. 3D-printed from titanium, its lightweight, standardized design means it can easily fit in many different types of spacecraft, including the ISS, the Artemis missions’ Orion capsule and potential future vehicles that have yet to be built.
The first version of the UWMS was tested on the ISS in 2020, and final installation was completed in 2021. It featured urine and feces systems that could be used simultaneously, modifications to make these systems more unisex and the much-coveted bathroom door. With further modifications to help the same system function on a lunar mission, a version of the UWMS has also been installed in the Orion capsule for Artemis II, the program’s first crewed launch—and UWMS project scientists are on the edge of their seats, eager to learn whether the mission’s four astronauts are happy with the design.
“I am very excited for the crew to use this,” McKinley says. “We’ll know so much more when this mission comes back…. It’s really going to drive [waste management] on future Artemis missions and the lunar campaign—as well as the Mars campaign to come.”
...
Read the original on www.scientificamerican.com »
Good Or Bad, I Was One Of The Ones
Probably even worse than you realized
...
Read the original on text.blogosphere.app »