10 interesting stories served every morning and every evening.




1 1,107 shares, 51 trendiness

How Microsoft Vaporized a Trillion Dollars

This is the first of a se­ries of ar­ti­cles in which you will learn about what may be one of the sil­li­est, most pre­ventable, and most costly mishaps of the 21st cen­tury, where Microsoft all but lost OpenAI, its largest cus­tomer, and the trust of the US gov­ern­ment.

I joined Azure Core on the dull Monday morn­ing of May 1st, 2023, as a se­nior mem­ber of the Overlake R&D team, the folks be­hind the Azure Boost of­fload card and net­work ac­cel­er­a­tor.

I was­n’t new to Azure, hav­ing run what is likely the longest-run­ning pro­duc­tion sub­scrip­tion of this cloud ser­vice, which launched in February 2010 as Windows Azure.

I wasn’t new to Microsoft either. I had been part of the Windows team since January 1st, 2013, and later helped migrate SharePoint Online to Azure before joining the Core OS team as a kernel engineer. There, I notably helped improve the kernel and helped invent and deliver the container platform that supports Docker, Azure Kubernetes, Azure Container Instances, Azure App Services, and Windows Sandbox, all shipping technologies that resulted in multiple granted patents.

Furthermore, I con­tributed to brain­storm­ing the early Overlake cards in 2020-2021, draft­ing a pro­posal for a Host OS Accelerator Card com­mu­ni­ca­tion pro­to­col and net­work stack, when all we had was a de­bug­ger’s se­r­ial con­nec­tion. I also served as a Core OS spe­cial­ist, help­ing Azure Core en­gi­neers di­ag­nose deep OS is­sues.

I re­joined in 2023 as an Azure ex­pert on day one, hav­ing con­tributed to the de­vel­op­ment of some of the tech­nolo­gies on which Azure re­lies and hav­ing used the plat­form for more than a decade, both out­side and in­side Microsoft at a global scale.

As a returning employee, I skipped the New Employee Orientation and had a Global Security invitation for 12 noon to pick up my badge, but my future manager asked if I could come in earlier, as the team had their monthly planning meeting that morning.

I, of course, agreed and ar­rived a few min­utes be­fore 10 am at the en­trance of the Studio X build­ing, not far from The Commons on the West Campus in Redmond. A man showed up in the lobby and opened the door for me. I fol­lowed him to a meet­ing room through a labyrinth of cor­ri­dors.

The room was chock-full, with more peo­ple on a live con­fer­ence call. The dev man­ager, the leads, the ar­chi­tects, the prin­ci­pal and se­nior en­gi­neers shared the space with what ap­peared to be new hires and ju­nior per­son­nel.

The screen pro­jected a slide where I rec­og­nized a num­ber of fa­mil­iar acronyms, like COM, WMI, perf coun­ters, VHDX, NTFS, ETW, and a dozen oth­ers, mixed with new Azure-related ones, in an im­broglio of boxes linked by ar­rows.

I sat qui­etly at the back while a man was walk­ing the room through a big port­ing plan of their cur­rent stack to the Overlake ac­cel­er­a­tor. As I lis­tened, it was not im­me­di­ately clear what that se­ries of boxes with Windows user-mode and ker­nel com­po­nents had to do with that plan.

After a few minutes, I risked a question: “Are you planning to port those Windows features to Overlake?” The answer was yes, or at least they were looking into it. The dev manager showed some doubt, and the man replied that they could at least ask a couple of junior devs to look into it.

The room re­mained silent for an in­stant. I had seen the hard­ware specs for the SoC on the Overlake card in my pre­vi­ous tenure: the RAM ca­pac­ity and the power bud­get, which was just a tiny frac­tion of the TDP you can ex­pect from a reg­u­lar server CPU.

The hard­ware folks I had spo­ken with told me they could only spare 4KB of dual-ported mem­ory on the FPGA for my door­bell shared-mem­ory com­mu­ni­ca­tion pro­to­col.
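For readers unfamiliar with the pattern, a doorbell protocol boils down to a producer writing into a small shared region and publishing an index that tells the consumer new data is ready. Here is a purely illustrative Python sketch of that shape; the layout, names, and sizes below are invented for the example, not the actual Overlake design, which isn’t public:

```python
import struct
from multiprocessing import shared_memory

# Toy doorbell + ring buffer in a 4 KB shared region. Illustrative only:
# the header layout and sizes are made up for this sketch.
REGION_SIZE = 4096
HDR = struct.Struct("<II")          # (head, tail) byte indices
DATA_OFF = HDR.size
DATA_SIZE = REGION_SIZE - DATA_OFF  # payload area

def create_region(name):
    shm = shared_memory.SharedMemory(name=name, create=True, size=REGION_SIZE)
    HDR.pack_into(shm.buf, 0, 0, 0)  # empty ring: head == tail
    return shm

def ring_doorbell(shm, payload: bytes):
    """Producer side: copy bytes in, then publish the new tail index."""
    head, tail = HDR.unpack_from(shm.buf, 0)
    if len(payload) > DATA_SIZE - (tail - head) - 1:
        raise BufferError("ring full")
    for b in payload:
        shm.buf[DATA_OFF + (tail % DATA_SIZE)] = b
        tail += 1
    # Publishing the tail is the "doorbell": the consumer sees it move.
    HDR.pack_into(shm.buf, 0, head, tail)

def poll(shm) -> bytes:
    """Consumer side: drain everything between head and tail."""
    head, tail = HDR.unpack_from(shm.buf, 0)
    data = bytes(shm.buf[DATA_OFF + (i % DATA_SIZE)] for i in range(head, tail))
    HDR.pack_into(shm.buf, 0, tail, tail)  # mark consumed
    return data
```

On real hardware the doorbell would be a write to a device register that raises an interrupt, and the index updates would need memory barriers; a Python sketch can only gesture at the shape of the protocol.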

Everything was nim­ble, ef­fi­cient, and power-savvy, and the team I had joined 10 min­utes ear­lier was se­ri­ously con­sid­er­ing port­ing half of Windows to that tiny, fan­less, Linux-running chip the size of a fin­ger­nail.

That felt like Elon talking about colonizing Mars: just nuke the poles, then grow an atmosphere! Easier said than done, huh?

That en­tire 122-strong org was knee-deep in im­pos­si­ble ru­mi­na­tions in­volv­ing port­ing Windows to Linux to sup­port their ex­ist­ing VM man­age­ment agents.

The man was a Principal Group Engineering Manager over­see­ing a chunk of the soft­ware run­ning on each Azure node; his boss, a Partner Engineering Manager, was in the room with us, and they re­ally con­tem­plated port­ing Windows to Linux to sup­port their cur­rent soft­ware.

At first, I ques­tioned my un­der­stand­ing. Was that se­ri­ous? The rest of the talk left no doubt: the plan was out­lined, and the dev leads were tasked with con­tribut­ing peo­ple to the ef­fort. It was im­me­di­ately clear to me that this plan would never suc­ceed and that the org needed a lot of help.

That first hour in the new role left me with a mix of strange feel­ings, stu­pe­fac­tion, and in­credulity.

The stack, I later learned, was hitting its scaling limits on a 400-watt Xeon at just a few dozen VMs per node, a far cry from the 1,024-VM limit I knew the hypervisor was capable of. It was also a noisy neighbor, consuming so many resources that it caused jitter observable from the customer VMs.

There is no di­men­sion in the uni­verse where this stack would fit on a tiny ARM SoC and scale up by many fac­tors. It was not go­ing to hap­pen.

I have seen a lot in my decades of in­dus­try (and Microsoft) ex­pe­ri­ence, but I had never seen an or­ga­ni­za­tion so far from re­al­ity. My day-one prob­lem was there­fore not to ramp up on new tech­nol­ogy, but rather to con­vince an en­tire org, up to my skip-skip-level, that they were on a death march.

Deep down, I knew it was going to be a fierce uphill battle. As you will later learn, it didn’t go well.

I spent the next few days read­ing more about the plans, study­ing the cur­rent sys­tems, and vis­it­ing old friends in Core OS, my alma mater. I was lost away from home in a bizarre ter­ri­tory where peo­ple made plans that did­n’t make sense with the aplomb of a drunk LLM.

I no­tably spent more than 90 min­utes chat­ting in per­son with the head of the Linux System Group, a solid scholar with a PhD from INRIA, who was among the folks who hired me on the ker­nel team years ear­lier.

His org is re­spon­si­ble for de­liv­er­ing Mariner Linux (now Azure Linux) and the trimmed-down dis­tro run­ning on the Overlake / Azure Boost card. He kindly an­swered all my ques­tions, and I learned that they had iden­ti­fied 173 agents (one hun­dred sev­enty-three) as can­di­dates for port­ing to Overlake.

I later re­searched this fur­ther and found that no one at Microsoft, not a sin­gle soul, could ar­tic­u­late why up to 173 agents were needed to man­age an Azure node, what they all did, how they in­ter­acted with one an­other, what their fea­ture set was, or even why they ex­isted in the first place.

Azure sells VMs, networking, and storage at the core. Add observability and servicing, and you should be good. Everything else (SQL, K8s, AI workloads, and whatnot) builds on VMs with xPU, networking, and storage, and the heavy lifting to make the magic happen is done by the good Core OS folks and the hypervisor.

How the Azure folks came up with 173 agents will prob­a­bly re­main a mys­tery, but it takes a se­ri­ous amount of mis­un­der­stand­ing to get there, and this is also how dis­as­ters are built.

Now, fathom for a second that this pile of “uncontrolled stuff” is orchestrating the VMs running Anthropic’s Claude, what’s left of OpenAI’s APIs on Azure, SharePoint Online, the government clouds and other mission-critical infrastructure, and you’ll be close to understanding how a grain of sand in that fragile pileup can cause a global collapse, with serious National Security implications as well as potential business-ending consequences for Microsoft.

We are still far from the va­por­ized tril­lion in mar­ket cap, my let­ters to the CEO, to the Microsoft Board of Directors, and to the Cloud + AI EVP and their to­tal si­lence, the quasi-loss of OpenAI, the breach of trust with the US gov­ern­ment as pub­licly stated by the Secretary of Defense, the wasted en­gi­neer­ing ef­forts, the Rust man­date, my stint on the OpenAI bare-metal team in Azure Core, the es­cort ses­sions from China and else­where, and the de­layed fea­tures pub­licly im­plied as ship­ping since 2023, be­fore the work even be­gan.

If you’re run­ning pro­duc­tion work­loads on Azure or re­ly­ing on it for mis­sion-crit­i­cal sys­tems, this story mat­ters more than you think.

...

Read the original on isolveproblems.substack.com »

2 548 shares, 69 trendiness

Free AI on Your Mac

The free AI al­ready on your Mac. Every Mac with Apple Silicon has a built-in LLM. Apple locked it be­hind Siri. apfel sets it free - as a CLI tool, an OpenAI-compatible server, and a chat.

The AI is al­ready in­stalled on your Mac. Apple ships it with ma­cOS. apfel just gives you a way to talk to it - from your ter­mi­nal, from your code, from any­where.

No API keys. No sub­scrip­tions. No per-to­ken billing. It’s your hard­ware - use it.

Every to­ken gen­er­ated lo­cally on your Apple Silicon. Nothing leaves your ma­chine. Ever.

Context win­dow for in­put and out­put com­bined. Enough for most sin­gle-turn tasks and short chats.

The model un­der the hood

Apple ML Research

Three ways to use it.

CLI tool, HTTP server, or in­ter­ac­tive chat. Pick the one that fits.

Pipe-friendly and com­pos­able. Works with jq, xargs, and your shell scripts. stdin, std­out, JSON out­put, file at­tach­ments, proper exit codes.

apfel "What is the capital of Austria?"

The cap­i­tal of Austria is Vienna.

Drop-in re­place­ment at lo­cal­host:11434. Point any OpenAI SDK at it and go. Streaming, tool call­ing, CORS, re­sponse for­mats.

Multi-turn con­ver­sa­tions with au­to­matic con­text man­age­ment. Five trim­ming strate­gies. System prompt sup­port. All on your Mac.

> How do I re­verse a list in Python?

Apple built an LLM into your Mac. apfel gives it a front door.

Starting with ma­cOS 26 (Tahoe), every Apple Silicon Mac in­cludes a lan­guage model as part of Apple Intelligence. Apple ex­poses it through the FoundationModels frame­work - a Swift API that gives apps ac­cess to SystemLanguageModel. All in­fer­ence runs on the Neural Engine and GPU. No net­work calls, no cloud, no API keys. The model is just there.

But Apple only uses it for Siri

Out of the box, the on-de­vice model pow­ers Siri, Writing Tools, and sys­tem fea­tures. There is no ter­mi­nal com­mand, no HTTP end­point, no way to pipe text through it. The FoundationModels frame­work ex­ists, but you need to write a Swift app to use it. That is what apfel does.

apfel is a Swift 6.3 bi­nary that wraps LanguageModelSession and ex­poses it three ways: as a UNIX com­mand-line tool with stdin/​std­out, as an OpenAI-compatible HTTP server (built on Hummingbird), and as an in­ter­ac­tive chat with con­text man­age­ment.

It handles the things Apple’s raw API does not: proper exit codes, JSON output, file attachments, five context-trimming strategies for the small 4096-token window, real token counting via the SDK, and conversion of OpenAI tool schemas to Apple’s native Transcript.ToolDefinition format.
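The trimming problem is easy to picture. As a purely illustrative sketch of one plausible strategy (not apfel’s actual implementation), dropping the oldest non-system messages until the conversation fits the window might look like:

```python
def trim_oldest(messages, token_counts, budget):
    """Drop the oldest non-system messages until the total token cost
    fits within `budget`. Illustrative only -- one plausible strategy,
    not apfel's actual implementation.

    `messages` is a list of {"role": ..., "content": ...} dicts and
    `token_counts[i]` is the token cost of messages[i].
    """
    kept = list(zip(messages, token_counts))
    total = sum(count for _, count in kept)
    i = 0
    while total > budget and i < len(kept):
        if kept[i][0]["role"] == "system":
            i += 1  # never drop the system prompt
            continue
        total -= kept[i][1]
        kept.pop(i)
    return [msg for msg, _ in kept]
```

A real implementation would also want to recount tokens via the SDK rather than track estimates, and other strategies (summarizing dropped turns, keeping only the last N exchanges, and so on) could slot into the same shape.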

Shell scripts in the demo/ folder. Install apfel first, then grab the ones you want.

Natural lan­guage to shell com­mand. Say what you want, get the com­mand.

Pipe chains from plain English. awk, sed, sort, uniq - gen­er­ated for you.

Explain any com­mand, er­ror mes­sage, or code snip­pet in plain English.

What’s this di­rec­tory? Instant pro­ject ori­en­ta­tion for any code­base.

Change one URL. Keep your code.

apfel speaks the OpenAI API. Any client li­brary, any frame­work, any tool that talks to OpenAI can talk to your Mac’s AI in­stead. Just change the base URL.

from openai import OpenAI

# Just change the base_url. That's it.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="unused",  # no auth needed
)

resp = client.chat.completions.create(
    model="apple-foundationmodel",
    messages=[{
        "role": "user",
        "content": "What is 1+1?",
    }],
)

print(resp.choices[0].message.content)

...

Read the original on apfel.franzai.com »

3 532 shares, 22 trendiness

How notch traversal works on MacBooks

Tailscale should feel nearly in­vis­i­ble when it’s con­nect­ing you and all your de­vices to­gether. But on some MacBooks, for a time, it could be a lit­tle too in­vis­i­ble. We have two fixes for it: one small and slightly quirky, and an­other re­ally use­ful one, avail­able now on ma­cOS.

The small, quirky fix might soon be­come a thing of the past for the vast ma­jor­ity of Tailscale users on Macs. I wanted to doc­u­ment it here: to help other de­vel­op­ers, to mark this mo­ment in time, and qui­etly crow about our win­dowed ma­cOS in­ter­face now be­ing gen­er­ally avail­able.

So here’s the is­sue we had with Tailscale’s icon slip­ping into dark­ness, its lit­tle work-around, and then our greater so­lu­tion.

At its de­but on ma­cOS, Tailscale was a com­mand-line tool and a menu bar util­ity. Some MacBooks, start­ing with 2021 MacBook Pro mod­els, have a notch in the top-mid­dle of their dis­play. And de­pend­ing on how many other apps with menu bar icons are run­ning, the Tailscale ap­p’s icon can be hid­den in­side that notch.

Apple, a company that traditionally favors simple functionality over dense settings, does not offer users, or developers, a path out of the darkness. If there are more menu bar icons than there is space to the right side of the notch, the menu bar items simply disappear into the notch-y ether. If you don’t see it, you can’t click it. There is no notification to the user, no overflow section, no option to rearrange the menu bar items.

As of this writing, Apple has some indirect work-arounds, like pushing more of its own system icons into a revamped Control Center, and offering a somewhat inelegant “Scale to fit below camera” option. Third-party menu-bar-managing apps like ICE and Bartender can help, but they add complications and overhead.

“We don’t have any control over where things get rendered in the menu bar,” said one Tailscale engineer, who asked to go nameless so as to share their honest opinion. “You just say, ‘I want to be a menu bar app.’ They shove it up there, and that’s it, you end up where you end up.”

Given this there-or-not-there behavior, Tailscale developers received a number of bug reports from users when, after the notched MacBooks’ debut, their Tailscale icons fell into the middle-screen distance. “They were like, ‘Actually, I can’t find my Tailscale. It’s gone. It didn’t start,’” the engineer said. “We’re like, ‘No, it’s there, it’s just hiding behind the notch.’ But we kind of got sick of that.”

Mac menu bar icons may not know they are trapped in­side the no-pixel phan­tom zone, but they can re­port that some­thing is block­ing them. Using data from oc­clu­sion­State, the Tailscale app can see that its icon is in mid-bar limbo.

And while it cannot move, it can speak. Specifically, a pop-up message can warn the user that Tailscale’s icon is hidden behind the notch.

This affable warning is not perfect, by any means. The notch warning can be inadvertently triggered by other display quirks, like opening and closing the MacBook lid, moving between monitors, or some combination of the two. But it helped triage the “Where are my Tailscale settings?” issue for a while.

Apple could cer­tainly make some changes to pre­vent this be­ing an is­sue at all. The sys­tem could pre­vent menu bar icons from ren­der­ing in the notch area at all. An over­flow mech­a­nism could stack the icons that would oth­er­wise drop into a neg­a­tive notch zone. Or de­vel­op­ers could be given more in­for­ma­tion and tools about icons’ notch-itive states.

In the meantime, here’s a look at the Swift code that let our app know it should chirp a bit when hidden. It should be unnecessary with the new windowed app—unless you enable the “Hide Dock icon” option in the windowed client options, in which case it might still call out its hidden nature.

As we noted at its September beta re­lease, a win­dowed ver­sion of Tailscale’s ma­cOS app does­n’t re­place the menu bar app, but runs along­side it. It can be pulled up from the Dock or a Spotlight search, and makes a lot of Tailscale data and fea­tures more ac­ces­si­ble.

The win­dowed in­ter­face, en­abled by de­fault start­ing with ver­sion 1.96.2 of our ma­cOS client, of­fers:

* A search­able list of tail­net de­vices and their con­nec­tion sta­tus

* Easily ping, copy IP ad­dresses, and send files through Taildrop to de­vices

* Easy ac­cess to exit nodes, search­able and with one rec­om­mended based on la­tency, per­for­mance, and lo­ca­tion

* A red dot on the Dock icon to note crit­i­cal er­rors

* A “mini player” that shrinks Tailscale down to the bare minimum

* A prod­uct tour of all these things upon in­stalling/​up­dat­ing

Let us know what you think of the new in­ter­face so we can make it bet­ter. We’re work­ing on a com­pa­ra­ble UI for Windows de­vices. And we’re al­ways look­ing for ways to bring a lit­tle bit of func­tional whimsy to our soft­ware.

...

Read the original on tailscale.com »

4 517 shares, 20 trendiness

Meet the new Cursor · Cursor

Software de­vel­op­ment is chang­ing, and so is Cursor.

In the last year, we moved from man­u­ally edit­ing files to work­ing with agents that write most of our code. How we cre­ate soft­ware will con­tinue to evolve as we en­ter the third era of soft­ware de­vel­op­ment, where fleets of agents work au­tonomously to ship im­prove­ments.

We’re build­ing to­ward this fu­ture, but there is a lot of work left to make it hap­pen. Engineers are still mi­cro­manag­ing in­di­vid­ual agents, try­ing to keep track of dif­fer­ent con­ver­sa­tions, and jump­ing be­tween mul­ti­ple ter­mi­nals, tools, and win­dows.

We’re in­tro­duc­ing Cursor 3, a uni­fied work­space for build­ing soft­ware with agents. The new Cursor in­ter­face brings clar­ity to the work agents pro­duce, pulling you up to a higher level of ab­strac­tion, with the abil­ity to dig deeper when you want. It’s faster, cleaner, and more pow­er­ful, with a multi-repo lay­out, seam­less hand­off be­tween lo­cal and cloud agents, and the op­tion to switch back to the Cursor IDE at any time.

When we started build­ing Cursor, we forked VS Code in­stead of build­ing an ex­ten­sion so we could shape our own sur­face. With Cursor 3, we took that a step fur­ther by build­ing this new in­ter­face from scratch, cen­tered around agents.

The new in­ter­face is in­her­ently multi-work­space, al­low­ing hu­mans and agents to work across dif­fer­ent re­pos.

Working with agents is now much eas­ier. All lo­cal and cloud agents ap­pear in the side­bar, in­clud­ing the ones you kick off from mo­bile, web, desk­top, Slack, GitHub, and Linear.

Cloud agents pro­duce demos and screen­shots of their work for you to ver­ify. This is the same ex­pe­ri­ence you get at cur­sor.com/​agents, now in­te­grated into the desk­top app.

We made mov­ing agents be­tween en­vi­ron­ments re­ally fast.

Move an agent ses­sion from cloud to lo­cal when you want to make ed­its and test it on your own desk­top. Composer 2, our own fron­tier cod­ing model with high us­age lim­its, is great for it­er­at­ing quickly.

In the re­verse di­rec­tion, you can move an agent ses­sion from lo­cal to cloud to keep it run­ning while you’re of­fline, or so that you can move on to the next task. This is es­pe­cially use­ful for longer-run­ning tasks that would oth­er­wise get in­ter­rupted when you close your lap­top.

The new diffs view al­lows you to edit and re­view changes faster with a sim­pler UI. When you’re ready, you can stage, com­mit, and man­age PRs.

Alpha users told us that a lot of what they like about Cursor 3 is the way it com­bines the best parts of the IDE with more re­cent ca­pa­bil­i­ties we’ve shipped in an agent-first in­ter­face.

Dive deeper any­time by view­ing files, and go to de­f­i­n­i­tion in the ed­i­tor with full LSPs.

Cursor can use the built-in browser to open, nav­i­gate, and prompt against lo­cal web­sites.

Browse hun­dreds of plu­g­ins that ex­tend agents with MCPs, skills, sub­agents, and more. Install with one click, or set up your own team mar­ket­place of pri­vate plu­g­ins.

With Cursor 3, we have the foun­da­tional pieces in place—model, prod­uct, and run­time—to build more au­tonomous agents and bet­ter col­lab­o­ra­tion across teams. We will also con­tinue to in­vest in the IDE un­til code­bases are self-dri­ving.

This won’t be the last time the in­ter­face for build­ing soft­ware changes. More pow­er­ful cod­ing mod­els will un­lock new in­ter­ac­tion pat­terns. We are ex­cited to con­tinue to build, sim­plify, and trans­form Cursor to be the best way to code with AI.

Upgrade Cursor, and type Cmd+Shift+P -> Agents Window to try the new in­ter­face. Or learn more in our docs.

...

Read the original on cursor.com »

5 452 shares, 81 trendiness

Frontpage of the indieweb

It’s Like Learning to Ride a Bike

...

Read the original on text.blogosphere.app »

6 394 shares, 0 trendiness

CERN levels up with new superconducting karts

The race is on to test new ve­hi­cles in the un­der­ground Large Hadron Collider tun­nel, ahead of ma­jor works start­ing this sum­mer

The race is on to test new ve­hi­cles in the un­der­ground Large Hadron Collider tun­nel, ahead of ma­jor works start­ing this sum­mer

Update: did you en­joy our April Fool’s day story? While we won’t be rac­ing karts through the tun­nel, we are gear­ing up for ma­jor works to pre­pare for HiLumi LHC and its new tech­nolo­gies. The im­age is based on a real 1991 CERN im­age of the mono­rail used to trans­port peo­ple and equip­ment in the tun­nel dur­ing the life­time of the Large Electron-Positron Collider (LEP), which pre­ceded the LHC.

Following on from the ro­botic mice, CERN en­gi­neers have now de­vel­oped a su­per-charged kart to en­able work­ers to race through the Large Hadron Collider (LHC) un­der­ground tun­nel dur­ing the up­com­ing ma­jor works, start­ing this sum­mer.

The karts promise a power boost to ac­tiv­i­ties dur­ing this pe­riod, known as Long Shutdown 3 (LS3), which will see the LHC trans­formed into the High-Luminosity LHC. These ve­hi­cles will re­place the bi­cy­cles that were used un­til now to travel through the 27-km un­der­ground tun­nel, en­abling en­gi­neers and tech­ni­cians to speed to ar­eas where im­prove­ments to the ac­cel­er­a­tor are re­quired.

“Each kart is turbo-boosted by 64 superconducting engines,” explains project leader Mario Idraulico. “When the engines are cooled to below their critical temperatures, the Meissner effect levitates the karts, allowing them to zip through the tunnels at high speeds and, mamma mia, they’re super!”

Early tests have been promising, and the next steps involve testing different kart designs in an underground race. Safety coordinator Luigi Fratello has ensured that each driver will be issued with Safety and Health Equipment for Long and Limited Stays (SHELLS), although his response to drivers wanting bananas in the tunnel was “Oh no!”

These karts, although developed to support CERN’s fundamental research programme, show clear applications for society. CERN’s Knowledge Transfer Group has begun discussions with European startup company Quantum Mushroom to explore aerospace applications and powering for next-generation anti-gravity vehicles.

Surprisingly, the kart project began from a collaboration between CERN engineers and onsite nursery school children — one example of CERN’s commitment to inspiring future generations. “We’re thrilled that the children’s kart designs were the inspiration for the engineered karts,” exclaimed schoolteacher Yoshi Kyouryuu, mid-way through painting spots on eggs for an Easter egg hunt.

“As educators, we promote curiosity from a young age, which is why we paint question marks all over our yellow school walls,” explained school director Rosalina Pfirsich, looking up from her storybook. “With all the contributions the children have made to the upcoming High-Luminosity LHC project, we’ve taken to calling them Luma!”

Find out more about the High-Luminosity LHC pro­ject.

...

Read the original on home.web.cern.ch »

7 348 shares, 74 trendiness

Marc Andreessen is wrong about introspection

Appearing on the Founders podcast this week, venture capitalist Marc Andreessen made the rather extraordinary claim that - going back four hundred years - “it would never have occurred to anyone to be introspective.”

Andreessen apparently blames Sigmund Freud and the Vienna Circle for having somehow “manufactured” the whole practice of introspection somewhere between 1910 and 1920. He summarised his own approach to life thus: “Move forward. Go.”

Host David Senra, apparently delighted, congratulated Andreessen on developing what he called a “zero-introspection mindset.”

Marc Andreessen was right about web browsers.

But he has since been wrong about a great many things.

And he is en­tirely wrong about in­tro­spec­tion.

If we ac­cept that in­tro­spec­tion is a Viennese in­ven­tion of the early twen­ti­eth cen­tury, we have to ex­plain away…well, rather a lot.

Socrates made the ex­am­ined life a con­di­tion of the life worth liv­ing, and he ar­guably died for it. The Stoics built an en­tire philo­soph­i­cal prac­tice around self-ex­am­i­na­tion: Marcus Aurelius wrote the Meditations as a pri­vate ex­er­cise in catch­ing him­self fail­ing to live by his own prin­ci­ples, and he did this while run­ning the Roman Empire, which sug­gests he did­n’t find the two ac­tiv­i­ties in­com­pat­i­ble. Augustine’s Confessions, writ­ten around 400 AD, of­fer a sus­tained and search­ing ac­count of his own in­te­rior life that pre­dates Freud by about fif­teen cen­turies, give or take.

In Chinese philosophy, Mencius describes the concept of introspection as “seeking the lost heart,” the recovery of something innate that gets buried under the noise of ordinary life. Shakespeare’s Hamlet is a play about what happens when you’re constitutionally unable to stop examining yourself and start acting, and the fact that Elizabethan audiences immediately recognized this as a problem implies they were already somewhat familiar with the practice being satirized; you can’t parody a concept your audience has never encountered.

Andreessen’s novel idea that Freud invented introspection is an inversion of the record. What Freud actually did was systematize certain ideas about the unconscious that were already circulating in European intellectual culture and put them into a clinical framework. Half of those ideas were themselves wrong; but “Freud was often wrong” is a very different argument from “people had no inner lives worth examining before 1910.”

Andreessen is no stranger to the writ­ten word. His Techno-Optimist Manifesto quotes Nietzsche, he ref­er­ences the Italian Futurists with ad­mi­ra­tion and he’s not un­fa­mil­iar with the Western philo­soph­i­cal tra­di­tion. So the his­tor­i­cal re­vi­sion­ism can’t be called ig­no­rance; this is, on some level, a cal­cu­lated move. The claim that in­tro­spec­tion is a mod­ern pathol­ogy serves a spe­cific rhetor­i­cal func­tion by dele­git­imiz­ing an en­tire mode of en­gage­ment with hu­man ex­pe­ri­ence, clear­ing it off the table, and leav­ing only ex­ter­nal ac­tion as the proper re­sponse to ~being alive.

Andreessen and his cronies are making large claims about what human beings want and need. His stated personal philosophy is explicitly a vision of human flourishing: abundance, growth, the elimination of material constraints, etc. These are claims about what will make people’s lives go well. But you can’t evaluate those claims without some account of human inner life, because human inner life is where the question of whether a life is going well actually gets answered. You can measure GDP. You can measure life expectancy. You can measure the number of transactions per second your payment processor handles. But not a single one of these measurements will tell you whether the people whose lives they describe feel that their lives are worth living, whether they find their work meaningful, whether they wake up with something that resembles purpose.

The only access anyone has to those questions is through something like introspection: either their own, or someone else’s honest reports of their experience, or the accumulated testimony of literature and philosophy about what it’s like to be a living, breathing, doubting, hurting, internally-screaming human being floating on a God-forsaken rock in a God-forsaken void. Strip that out and you’re left with a very thin theory of human flourishing. It basically runs to “more is better, faster is better, bigger is better,” with nothing else added or subtracted or attempted.

Perhaps you find this to be a defensible position; but you still have to actually argue for it. You can’t just claim that the question of what people find meaningful is a Viennese invention and move on.

The response to Andreessen’s interview that keeps circulating is that he “hath no soul.”

This is, of course, wrong.

Andreessen al­most cer­tainly has a rich in­ner life. He has en­thu­si­asms and anx­i­eties and aes­thetic pref­er­ences and tribal loy­al­ties and all the rest of it. The prob­lem is­n’t that there’s noth­ing in­side; the prob­lem is that he’s cho­sen not to ex­am­ine what’s there, and has de­vel­oped an elab­o­rate post-hoc jus­ti­fi­ca­tion for that choice by claim­ing that ex­am­i­na­tion is it­self the pathol­ogy.

This is a rec­og­niz­able pat­tern. The Victorian vi­tal­ists who viewed mas­tur­ba­tion as phys­i­cally de­bil­i­tat­ing were wrong about the phys­i­ol­ogy, but they were also en­gaged in mo­ti­vated rea­son­ing: they al­ready knew they wanted to pro­hibit some­thing, and the sci­en­tific-sound­ing jus­ti­fi­ca­tion came later. Andreessen al­ready knows he wants to move fast with­out ex­am­in­ing him­self, and the his­tor­i­cal ar­gu­ment that in­tro­spec­tion is a Freudian man­u­fac­ture serves ex­actly that same func­tion.

The prac­ti­cal con­se­quences of an un­ex­am­ined in­ner life at scale are not the­o­ret­i­cal. The so­cial me­dia plat­forms built by peo­ple who be­lieved be­hav­ioral data was a re­li­able sub­sti­tute for un­der­stand­ing hu­man psy­chol­ogy pro­duced a decade of en­gage­ment met­rics while user well­be­ing de­clined and our en­tire so­cial or­der de­cayed. The en­gi­neers who built these sys­tems weren’t ma­li­cious; they were op­ti­miz­ing for things they could mea­sure, be­cause they’d im­plic­itly ac­cepted the view that mea­sur­able out­puts were a suf­fi­cient model of hu­man flour­ish­ing. Goodhart’s Law ex­acted its toll: the mea­sure be­came the tar­get, and the tar­get was not what any­one would have cho­sen if they’d been forced to ac­tu­ally spec­ify what they were aim­ing for.

Andreessen’s ad­vice to him­self, and ap­par­ently to oth­ers, is di­rec­tional with­out be­ing spe­cific. Forward, he says. Forward to­ward what? His man­i­festo ob­sesses over abun­dance, over the elim­i­na­tion of ma­te­r­ial suf­fer­ing, and a fu­ture in which tech­nol­ogy has lifted con­straints that cur­rently limit hu­man pos­si­bil­ity. These are goals I can get be­hind. But forward” pre­sup­poses that you know where you’re go­ing, and know­ing where you’re go­ing pre­sup­poses that you know what you want, and know­ing what you want does­n’t hap­pen with­out ex­actly the ex­am­i­na­tion the man has ruled out.

Andreessen’s model of hu­man be­ings is thin. He can ob­serve be­hav­ior. He can track pref­er­ences as ex­pressed through mar­ket choices. He can mea­sure what peo­ple click on and buy and use. What he can’t do, with­out some­thing like in­tro­spec­tion, is un­der­stand why, and the why is where most of the im­por­tant in­for­ma­tion lives.

Four hun­dred years ago, the peo­ple Andreessen imag­ines were bliss­fully un­self­con­scious were read­ing Augustine and Montaigne and ar­gu­ing about Stoic phi­los­o­phy. They were writ­ing di­aries and let­ters that ex­am­ined their own mo­tives with con­sid­er­able care. They were not, in fact, just mov­ing for­ward with­out ask­ing where they were go­ing. That habit is not a pathol­ogy Freud in­tro­duced into an oth­er­wise healthy civ­i­liza­tion. It’s one of the things that makes civ­i­liza­tion pos­si­ble, and pre­tend­ing oth­er­wise does­n’t make you a builder. It just makes you some­one who’s never looked at the blue­prints.

...

Read the original on www.joanwestenberg.com »

8 345 shares, 27 trendiness

European Alternatives to US Products & Software

Your di­rec­tory for European soft­ware, prod­ucts and ser­vices. For en­hanced pri­vacy, qual­ity, and a strong Europe.

Select your cur­rently used ser­vices and in­stantly re­ceive tai­lored European so­lu­tions — se­cure, pri­vacy-com­pli­ant, and pow­er­ful.

What Europe does bet­ter

EU com­pa­nies are sub­ject to the world’s strictest en­vi­ron­men­tal reg­u­la­tions. European prod­ucts are de­signed for longevity — less throw­away cul­ture, more re­spon­si­bil­ity.

“Made in Europe” has stood for top quality and durability for decades. Strict standards guarantee fair working conditions, while shorter supply chains measurably reduce CO₂.

EU providers are sub­ject to the GDPR — the strictest data pro­tec­tion law world­wide. Your data be­longs to you, not ad­ver­tis­ing net­works. Note: US soft­ware can be com­pelled by the CLOUD Act to sur­ren­der data to US au­thor­i­ties — even if servers are lo­cated in Europe.

...

Read the original on only-eu.eu »

9 335 shares, 14 trendiness

D-squared Digest -- FOR bigger pies and shorter hours and AGAINST more or less everything else

Economics and sim­i­lar, for the sleep-de­prived

A sub­tle change has been made to the com­ments links, so they no longer pop up. Does this in any way help with the prob­lem about com­ments not ap­pear­ing on perma­linked posts, read­ers?

Update: seem­ingly not

Update: Oh yeah!

Update, September 2008. Hullo there Paul Krugman readers. Yes, I did say “Good ideas do not need lots of lies told about them in order to gain public acceptance”, and as a general maxim I wholeheartedly recommend it. I don’t necessarily, however, either endorse or whatever-the-opposite-of-endorse the specific use of that maxim in the context of Prof. Krugman’s post about the Paulson bailout plan; I don’t actually have a fully formed view about that plan. I do, however, wholeheartedly endorse “Development, Geography and Economic Theory”, which I think is a terribly underrated economics book, and am at this moment rather starstruck at having one of my essays admired by the nearest modern equivalent to my hero JK Galbraith. Anyway, as you were; by way of context, the post below was written just as a lot of high-profile commentators like Thomas Friedman were abandoning their support for the Iraq War.

The D-Squared Digest One Minute MBA - Avoiding Projects Pursued By Morons 101

Literally people have been asking me: “How is it that you were so amazingly prescient about Iraq? Why is it that you were right about everything at precisely the same moment when we were wrong?” No honestly, they have. I’d love to show you the emails I’ve received, there were dozens of them, honest. Honest. Anyway, I note that “errors of prewar planning” is now pretty much a mainstream stylised fact, so I suspect that it might make some small contribution to the commonweal if I were to explain how it was that I was able to spot so early that this dog wasn’t going to hunt. I will struggle manfully with the savage burden of boasting, self-aggrandisement and ego-stroking that this will necessarily involve. It’s been done before, although admittedly by a madman in the process of dying of syphilis of the brain. Sorry, where was I?

Anyway, the secret to every analysis I’ve ever done of contemporary politics has been, more or less, my expensive business school education (I would write a book entitled “Everything I Know I Learned At A Very Expensive University”, but I doubt it would sell). About half of what they say about business schools and their graduates is probably true, and they do often feel like the most colossal waste of time and money, but they occasionally teach you the odd thing which is very useful indeed. Here’s a few of the ones I learned which I considered relevant to judging the advisability of the Second Iraq War.

Good ideas do not need lots of lies told about them in order to gain public acceptance. I was first made aware of this during an accounting class. We were discussing the subject of accounting for stock options at technology companies. There was a live debate on this subject at the time. One side (mainly technology companies and their lobbyists) held that stock option grants should not be treated as an expense on public policy grounds; treating them as an expense would discourage companies from granting them, and stock options were a vital compensation tool that incentivised performance, rewarded dynamism and innovation and created vast amounts of value for America and the world. The other side (mainly people like Warren Buffett) held that stock options looked awfully like a massive blag carried out by management at the expense of shareholders, and that the proper place to record such blags was the P&L account.

Our lec­turer, in sum­ming up the de­bate, made the not un­rea­son­able point that if stock op­tions re­ally were a fan­tas­tic tool which un­leashed the cre­ative power in every em­ployee, every­one would want to ex­pense as many of them as pos­si­ble, the bet­ter to boast about how in­no­v­a­tive, em­pow­ered and fan­tas­tic they were. Since the tech com­pa­nies’ point of view ap­peared to be that if they were ever forced to ac­count hon­estly for their op­tion grants, they would quickly stop mak­ing them, this of­fered de­cent prima fa­cie ev­i­dence that they weren’t, re­ally, all that fan­tas­tic.

Application to Iraq. The general principle that good ideas are not usually associated with lying like a rug[1] about their true nature seems to have been pretty well confirmed. In particular, however, this principle sheds light on the now quite popular claim that “WMDs were only part of the story; the real priority was to liberate the Iraqis, which is something that every decent person would support”.

Fibbers’ forecasts are worthless. Case after miserable case after bloody case we went through, I tell you, all of which had this moral. Not only that people who want a project will tend to make inaccurate projections about the possible outcomes of that project, but about the futility of attempts to “shade” downward a fundamentally dishonest set of predictions. If you have doubts about the integrity of a forecaster, you can’t use their forecasts at all. Not even as “a starting point”. By the way, I would just love to get hold of a few of the quantitative numbers from documents prepared to support the war and give them a quick run through Benford’s Law.
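
The Benford’s Law screen mentioned here is easy to sketch. Naturally accumulated figures (budgets, casualty counts, populations) tend to have leading digits distributed as P(d) = log10(1 + 1/d), while fabricated numbers usually don’t. The function names and the geometric-series example below are my own illustration, not anything from the post:

```python
import math
from collections import Counter

def leading_digit(x: float) -> int:
    """Return the first significant digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_chi2(numbers) -> float:
    """Chi-squared distance between the observed leading-digit counts
    and Benford's expected proportions P(d) = log10(1 + 1/d).
    Larger values suggest the figures were not naturally generated."""
    counts = Counter(leading_digit(n) for n in numbers if n != 0)
    total = sum(counts.values())
    chi2 = 0.0
    for d in range(1, 10):
        expected = total * math.log10(1 + 1 / d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2

# A geometric series tracks Benford closely; uniformly spread
# figures (every leading digit equally common) do not.
organic = [2 ** k for k in range(1, 200)]
uniform = list(range(100, 1000))
print(benford_chi2(organic) < benford_chi2(uniform))
```

This is a screening heuristic, not proof of fraud: a high chi-squared value flags a set of numbers as worth auditing, which is exactly the spirit of the point being made.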

Application to Iraq. This was how I decided that it was worth staking a bit of credibility on the strong claim that absolutely no material WMD capacity would be found, rather than “some” or “some but not enough to justify a war” or even “some derisory but not immaterial capacity, like a few mobile biological weapons labs”. My reasoning was that Powell, Bush, Straw, etc, were clearly making false claims and therefore ought to be discounted completely, and that there were actually very few people who knew a bit about Iraq but were not fatally compromised in this manner who were making the WMD claim. Meanwhile, there were people like Scott Ritter and Andrew Wilkie who, whatever other faults they might or might not have had, did not appear to have told any provable lies on this subject and were therefore not compromised.

The Vital Importance of Audit. Emphasised over and over again. Brealey and Myers has a sec­tion on this, in which they re­mind cal­low stu­dents that like back­ing-up one’s com­puter files, this is a les­son that every­one seems to have to learn the hard way. Basically, it’s been shown time and again and again; com­pa­nies which do not au­dit com­pleted pro­jects in or­der to see how ac­cu­rate the orig­i­nal pro­jec­tions were, tend to get ex­actly the fore­casts and pro­jects that they de­serve. Companies which have a cul­ture where there are no con­se­quences for mak­ing dis­hon­est fore­casts, get the pro­jects they de­serve. Companies which al­lo­cate blank cheques to man­age­ment teams with a proven record of fail­ure and men­dac­ity, get what they de­serve.

I hope I don’t have to spell out the implications of this one for Iraq. Krugman has gone on and on about this, seemingly with some small effect these days. The raspberry road that led to Abu Ghraib was paved with bland assumptions that people who had repeatedly proved their untrustworthiness, could be trusted. There is much made by people who long for the days of their fourth form debating society about the fallacy of “argumentum ad hominem”. There is, as I have mentioned in the past, no fancy Latin term for the fallacy of “giving known liars the benefit of the doubt”, but it is in my view a much greater source of avoidable error in the world. Audit is meant to protect us from this, which is why audit is so important.

And so the lesson ends. Next week, perhaps, a few reflections on why it is that people don’t support the neoconservative project to bring democracy to the Middle East (a trailer for those who can’t wait; the title is going to be something like “If You Tell Lies A Lot, You Tend To Get A Reputation As A Liar”). Mind how you go.

[1] We also learned in accounting class that the difference between “making a definite single false claim with provable intent to deceive” and “creating a very false impression and allowing it to remain without correcting it” is not one that you should rely upon to keep you out of jail. Even if your motives are noble.

this item posted by the man­age­ment 5/27/2004 11:57:00 PM

...

Read the original on blog.danieldavies.com »

10 313 shares, 12 trendiness

Artemis II’s toilet is a moon mission milestone

The lunar-bound astronauts of NASA’s Artemis II mission will go boldly where none have gone before, thanks to the space agency’s first-ever flight of a functional toilet around the moon.

On their voyages to the moon, NASA’s astronauts are finally getting some creature comforts of terrestrial toilets—such as having a door and being able to pee and poop simultaneously.

When astronauts first made their way to the moon, they did so without a toilet. The Apollo program’s system of plastic bags and funnels was so unwieldy and messy that crew members found it “objectionable” and “distasteful,” according to a subsequent NASA report. But now, more than a half century since the last crewed lunar voyages and their toilet troubles, the four astronauts of NASA’s Artemis II mission will take flight with a more commodious bathroom in tow.

The space agency’s Universal Waste Management System (UWMS)—more colloquially called just “the toilet”—was created to solve longstanding potty problems faced by astronauts and to offer a more familiar bathroom experience on the final frontier. Lunar astronauts will now be spoiled by amenities that include handles to help them stay steady in microgravity, a system that can handle both urine and feces simultaneously, urine-collection devices that work for both male and female astronauts, and even a door for the helpful illusion of privacy in a cramped crew capsule.

The new de­sign is more than a decade in the mak­ing. Space in­fra­struc­ture com­pany Collins Aerospace first en­tered into a con­tract with NASA to de­velop the pro­ject in 2015. In that time, pro­ject sci­en­tists have over­come fun­da­men­tal is­sues with past space toi­lets while imag­in­ing and meet­ing fu­ture needs so that the same sys­tem used by Artemis II as­tro­nauts could be adapted for moon and Mars mis­sions in decades to come.

“I think of waste management as an evolution of design,” says Melissa McKinley, project manager and principal investigator for NASA’s UWMS team. “The toilet has built on designs from Apollo, the space shuttle and even the International Space Station…. There is so much learning that goes into it.”

In the tight quar­ters of Apollo crew cap­sules, as­tro­nauts strapped ad­he­sive-rimmed plas­tic bags and tubes to them­selves when­ever they had to defe­cate or uri­nate. Attaching the awk­ward bags was dif­fi­cult enough in weight­less con­di­tions, but the as­tro­nauts also had to man­u­ally mix in a packet of ger­mi­cide to pre­vent the buildup of bac­te­ria and gases within the sealed bag.

The system was infamously prone to leaks, such as during the Apollo 10 mission, when astronauts noticed “a turd floating through the air,” and during the Apollo 8 mission, when the crew had to chase down blobs of vomit and feces that escaped into the cabin. A NASA report released after the end of the Apollo missions noted that waste disposal “must be given poor marks” when it comes to crew satisfaction.

“I used to want to be the first man to Mars,” said astronaut Ken Mattingly during the Apollo 16 mission, after describing the system. “This has convinced me that, if we got to go on Apollo, I ain’t interested.”

Based on these scathing reviews, NASA scientists knew they had to create a more streamlined system. After all, “the toilet is a ‘mission-critical’ system, so if it breaks down, the whole mission is in jeopardy,” says David Munns, a science and technology historian at the City University of New York.

So be­fore the space shut­tle pro­gram, they en­gi­neered a toi­let that could work in a low-grav­ity en­vi­ron­ment. It looked much like a typ­i­cal ter­res­trial toi­let but re­quired the as­tro­nauts to strap in and use a vac­uum hose to pre­vent waste from float­ing back up into the space­craft.

Early toilets on both the space shuttle and the International Space Station (ISS) used this vacuum system—with the key difference being that the ISS model recycled some wastewater, whereas the space shuttle’s version vented it into space. Both systems were significantly improved over the “toilets” of the Apollo years but still had big limitations. They weren’t built with female anatomy in mind and couldn’t process urine and feces at the same time, and while they provided some semblance of privacy with a curtain, there wasn’t yet a solid door.

The UWMS is the aero­space-en­gi­neered cul­mi­na­tion of all these pent-up prob­lems with the user ex­pe­ri­ence. 3D-printed from ti­ta­nium, its light­weight, stan­dard­ized de­sign means it can eas­ily fit in many dif­fer­ent types of space­craft, in­clud­ing the ISS, the Artemis mis­sions’ Orion cap­sule and po­ten­tial fu­ture ve­hi­cles that have yet to be built.

The first ver­sion of the UWMS was tested on the ISS in 2020, and fi­nal in­stal­la­tion was com­pleted in 2021. It fea­tured urine and fe­ces sys­tems that could be used si­mul­ta­ne­ously, mod­i­fi­ca­tions to make these sys­tems more uni­sex and the much-cov­eted bath­room door. With fur­ther mod­i­fi­ca­tions to help the same sys­tem func­tion on a lu­nar mis­sion, a ver­sion of the UWMS has also been in­stalled in the Orion cap­sule for Artemis II, the pro­gram’s first crewed launch—and UWMS pro­ject sci­en­tists are on the edge of their seats, ea­ger to learn whether the mis­sion’s four as­tro­nauts are happy with the de­sign.

“I am very excited for the crew to use this,” McKinley says. “We’ll know so much more when this mission comes back…. It’s really going to drive [waste management] on future Artemis missions and the lunar campaign—as well as the Mars campaign to come.”

...

Read the original on www.scientificamerican.com »
