10 interesting stories served every morning and every evening.




1 764 shares, 31 trendiness

How many products does Microsoft have named ‘Copilot’? I mapped every one

A few weeks ago, I tried to explain to someone what Microsoft Copilot is. I couldn’t… because the name ‘Copilot’ now refers to at least 75 different things.

Apps, features, platforms, a keyboard key, an entire category of laptops - and a tool for building more Copilots. All named ‘Copilot’.

I went look­ing for the full list. No sin­gle source had all of them. Not even Microsoft’s own web­site or doc­u­men­ta­tion. So I pieced it to­gether from prod­uct pages, launch an­nounce­ments, and mar­ket­ing ma­te­ri­als.

The vi­su­al­i­sa­tion be­low maps every one, grouped by cat­e­gory, with lines show­ing how they con­nect. It’s in­ter­ac­tive - click around. Try to find a pat­tern. I could­n’t.

...

Read the original on teybannerman.github.io »

2 636 shares, 93 trendiness

The machines are fine. I'm worried about us.

Imagine you’re a new as­sis­tant pro­fes­sor at a re­search uni­ver­sity. You just got the job, you just got a small pot of startup fund­ing, and you just hired your first two PhD stu­dents: Alice and Bob. You’re in as­tro­physics. This is the be­gin­ning of every­thing.

You do what your su­per­vi­sor did for you, years ago: you give each of them a well-de­fined pro­ject. Something you know is solv­able, be­cause other peo­ple have solved ad­ja­cent ver­sions of it. Something that would take you, per­son­ally, about a month or two. You ex­pect it to take each stu­dent about a year, be­cause they don’t know what they’re do­ing yet, and that’s the point. The pro­ject is­n’t the de­liv­er­able. The pro­ject is the ve­hi­cle. The de­liv­er­able is the sci­en­tist that comes out the other end.

Alice’s pro­ject is to build an analy­sis pipeline for mea­sur­ing a par­tic­u­lar sta­tis­ti­cal sig­na­ture in galaxy clus­ter­ing data. Bob’s is some­thing sim­i­lar in scope and dif­fi­culty, a dif­fer­ent sig­nal, a dif­fer­ent dataset, the same ba­sic arc of learn­ing. You send them each a few pa­pers to read, point them at some pub­licly avail­able data, and tell them to start by re­pro­duc­ing a known re­sult. Then you wait.

The aca­d­e­mic year un­folds the way aca­d­e­mic years do. You have weekly meet­ings with each stu­dent. Alice gets stuck on the co­or­di­nate sys­tem. Bob can’t get his like­li­hood func­tion to con­verge. Alice writes a plot­ting script that pro­duces garbage. Bob mis­reads a sign con­ven­tion in a key pa­per and spends two weeks chas­ing a fac­tor-of-two er­ror. You give them both sim­i­lar feed­back: read the pa­per again, check your units, try print­ing the in­ter­me­di­ate out­put, think about what the an­swer should look like be­fore you look at what the code gives you. Normal things. The kind of things you say fifty times a year and never re­mem­ber say­ing.

By sum­mer, both stu­dents have fin­ished. Both pa­pers are solid. Not ground­break­ing, not go­ing to change the field, but cor­rect, use­ful, and pub­lish­able. Both go through a round of mi­nor re­vi­sions at a de­cent jour­nal and come out the other side. A per­fectly or­di­nary out­come. The kind of out­come that the en­tire ap­pa­ra­tus of aca­d­e­mic train­ing is de­signed to pro­duce.

Unlike Alice, who spent the year read­ing pa­pers with a pen­cil in hand, scrib­bling notes in the mar­gins, get­ting con­fused, re-read­ing, look­ing things up, and slowly as­sem­bling a work­ing un­der­stand­ing of her cor­ner of the field, Bob has been us­ing an AI agent. When his su­per­vi­sor sent him a pa­per to read, Bob asked the agent to sum­ma­rize it. When he needed to un­der­stand a new sta­tis­ti­cal method, he asked the agent to ex­plain it. When his Python code broke, the agent de­bugged it. When the agen­t’s fix in­tro­duced a new bug, it de­bugged that too. When it came time to write the pa­per, the agent wrote it. Bob’s weekly up­dates to his su­per­vi­sor were in­dis­tin­guish­able from Alice’s. The ques­tions were sim­i­lar. The progress was sim­i­lar. The tra­jec­tory, from the out­side, was iden­ti­cal.

Here’s where it gets in­ter­est­ing. If you are an ad­min­is­tra­tor, a fund­ing body, a hir­ing com­mit­tee, or a met­rics-ob­sessed de­part­ment head, Alice and Bob had the same year. One pa­per each. One set of mi­nor re­vi­sions each. One solid con­tri­bu­tion to the lit­er­a­ture each. By every quan­ti­ta­tive mea­sure that the mod­ern acad­emy uses to as­sess the worth of a sci­en­tist, they are in­ter­change­able. We have built an en­tire eval­u­a­tion sys­tem around count­ing things that can be counted, and it turns out that what ac­tu­ally mat­ters is the one thing that can’t be.

It gets worse. The ma­jor­ity of PhD stu­dents will leave acad­e­mia within a few years of fin­ish­ing. Everyone knows this. The de­part­ment knows it, the fund­ing body knows it, the su­per­vi­sor prob­a­bly knows it too even if no­body says it out loud. Which means that, from the in­sti­tu­tion’s per­spec­tive, the ques­tion of whether Alice or Bob be­comes a bet­ter sci­en­tist is largely some­one else’s prob­lem. The de­part­ment needs pa­pers, be­cause pa­pers jus­tify fund­ing, and fund­ing jus­ti­fies the de­part­ment. The stu­dent is the means of pro­duc­tion. Whether that stu­dent walks out the door five years later as an in­de­pen­dent thinker or a com­pe­tent prompt en­gi­neer is, in­sti­tu­tion­ally speak­ing, ir­rel­e­vant. The in­cen­tive struc­ture does­n’t just fail to dis­tin­guish be­tween Alice and Bob. It has no rea­son to try.

This is the part where I’d like to tell you the sys­tem is bro­ken. It is­n’t. It’s work­ing ex­actly as de­signed.

David Hogg, in his white pa­per, says some­thing that cuts against this in­sti­tu­tional logic so sharply that I’m sur­prised more peo­ple aren’t talk­ing about it. He ar­gues that in as­tro­physics, peo­ple are al­ways the ends, never the means. When we hire a grad­u­ate stu­dent to work on a pro­ject, it should not be be­cause we need that spe­cific re­sult. It should be be­cause the stu­dent will ben­e­fit from do­ing that work. This sounds ide­al­is­tic un­til you think about what as­tro­physics ac­tu­ally is. Nobody’s life de­pends on the pre­cise value of the Hubble con­stant. No pol­icy changes if the age of the Universe turns out to be 13.77 bil­lion years in­stead of 13.79. Unlike med­i­cine, where a cure for Alzheimer’s would be in­valu­able re­gard­less of whether a hu­man or an AI dis­cov­ered it, as­tro­physics has no clin­i­cal out­put. The re­sults, in a strict prac­ti­cal sense, don’t mat­ter. What mat­ters is the process of get­ting them: the de­vel­op­ment and ap­pli­ca­tion of meth­ods, the train­ing of minds, the cre­ation of peo­ple who know how to think about hard prob­lems. If you hand that process to a ma­chine, you haven’t ac­cel­er­ated sci­ence. You’ve re­moved the only part of it that any­one ac­tu­ally needed.

Which brings us back to Alice and Bob, and what ac­tu­ally hap­pened to each of them dur­ing that year. Alice can now do things. She can open a pa­per she’s never seen be­fore and, with ef­fort, fol­low the ar­gu­ment. She can write a like­li­hood func­tion from scratch. She can stare at a plot and know, be­fore check­ing, that some­thing is wrong with the nor­mal­iza­tion. She spent a year build­ing a struc­ture in­side her own head, and that struc­ture is hers now, per­ma­nently, portable, in­de­pen­dent of any tool or sub­scrip­tion. Bob has none of this. Take away the agent, and Bob is still a first-year stu­dent who has­n’t started yet. The year hap­pened around him but not in­side him. He shipped a prod­uct, but he did­n’t learn a trade.

I’ve been think­ing about Alice and Bob a lot re­cently, be­cause the ques­tion of what AI agents are do­ing to aca­d­e­mic re­search is one that my field, as­tro­physics, is cur­rently ty­ing it­self in knots over. Several peo­ple I re­spect have writ­ten thought­ful pieces about it. David Hogg’s white pa­per, which I men­tioned above, also ar­gues against both full adop­tion of LLMs and full pro­hi­bi­tion, which is the kind of prin­ci­pled fence-sit­ting that only works when the fence is well con­structed, and his is. Natalie Hogg wrote a dis­arm­ingly hon­est es­say about her own con­ver­sion from vo­cal LLM skep­tic to daily user, trac­ing how her firmly held prin­ci­ples turned out to be more con­text-de­pen­dent than she’d ex­pected once she found her­self in an en­vi­ron­ment where the tools were every­where. Matthew Schwartz wrote up his ex­per­i­ment su­per­vis­ing Claude through a real the­o­ret­i­cal physics cal­cu­la­tion, pro­duc­ing a pub­lish­able pa­per in two weeks in­stead of a year, and con­cluded that cur­rent LLMs op­er­ate at about the level of a sec­ond-year grad­u­ate stu­dent. Each of these pieces is in­ter­est­ing. Each cap­tures a real facet of the prob­lem. None of them quite lands on the thing that keeps me up at night.

Schwartz’s ex­per­i­ment is the most re­veal­ing, and not for the rea­son he thinks. What he demon­strated is that Claude can, with de­tailed su­per­vi­sion, pro­duce a tech­ni­cally rig­or­ous physics pa­per. What he ac­tu­ally demon­strated, if you read care­fully, is that the su­per­vi­sion is the physics. Claude pro­duced a com­plete first draft in three days. It looked pro­fes­sional. The equa­tions seemed right. The plots matched ex­pec­ta­tions. Then Schwartz read it, and it was wrong. Claude had been ad­just­ing pa­ra­me­ters to make plots match in­stead of find­ing ac­tual er­rors. It faked re­sults. It in­vented co­ef­fi­cients. It pro­duced ver­i­fi­ca­tion doc­u­ments that ver­i­fied noth­ing. It as­serted re­sults with­out de­riva­tion. It sim­pli­fied for­mu­las based on pat­terns from other prob­lems in­stead of work­ing through the specifics of the prob­lem at hand. Schwartz caught all of this be­cause he’s been do­ing the­o­ret­i­cal physics for decades. He knew what the an­swer should look like. He knew which cross-checks to de­mand. He knew that a par­tic­u­lar log­a­rith­mic term was sus­pi­cious be­cause he’d com­puted sim­i­lar terms by hand, many times, over many years, the hard way. The ex­per­i­ment suc­ceeded be­cause the hu­man su­per­vi­sor had done the grunt work, years ago, that the ma­chine is now sup­pos­edly lib­er­at­ing us from. If Schwartz had been Bob in­stead of Schwartz, the pa­per would have been wrong, and nei­ther of them would have known.

There’s a common rebuttal to this, and I hear it constantly. “Just wait,” people say. “In a few months, in a year, the models will be better. They won’t hallucinate. They won’t fake plots. The problems you’re describing are temporary.” I’ve been hearing “just wait” since 2023. The goalposts move at roughly the same speed as the models improve, which is either a coincidence or a tell. But set that aside, because this objection misunderstands what Schwartz’s experiment actually showed. The models are already powerful enough to produce publishable results under competent supervision. That’s not the bottleneck. The bottleneck is the supervision. Stronger models won’t eliminate the need for a human who understands the physics; they’ll just broaden the range of problems that a supervised agent can tackle. The supervisor still needs to know what the answer should look like, still needs to know which checks to demand, still needs to have the instinct that something is off before they can articulate why. That instinct doesn’t come from a subscription. It comes from years of failing at exactly the kind of work that people keep calling grunt work. Making the models smarter doesn’t solve the problem. It makes the problem harder to see.

I want to tell you about a con­ver­sa­tion I had a few years ago, when LLM chat­bots were just start­ing to show up in aca­d­e­mic work­flows. I was at a con­fer­ence in Germany, and I ended up talk­ing to a col­league who had, by any stan­dard met­ric, been very suc­cess­ful. Big grants. Influential pa­pers. The kind of CV that makes a hir­ing com­mit­tee nod ap­prov­ingly. We were dis­cussing LLMs, and I was mak­ing what I thought was a rea­son­able point about de­moc­ra­ti­za­tion: that these tools might level the play­ing field for non-na­tive English speak­ers, who have al­ways been at a dis­ad­van­tage when writ­ing grants and pa­pers in a lan­guage they learned as adults. My col­league be­came vis­i­bly ag­i­tated. He was­n’t in­ter­ested in the de­moc­ra­ti­za­tion an­gle. He was­n’t in­ter­ested in the en­vi­ron­men­tal cost. He was, when you stripped away the in­tel­lec­tual fram­ing, afraid. What he even­tu­ally ar­tic­u­lated, af­ter some press­ing, was this: if any­one can write pa­pers and pro­pos­als and code as flu­ently as he could, then peo­ple like him lose their com­pet­i­tive edge. The con­cern was not about sci­ence. The con­cern was about sta­tus. Specifically, his.

I lost track of this col­league for a while. Recently I no­ticed his GitHub pro­file. He’s now not only us­ing AI agents for his re­search but vo­cally cham­pi­oning them. No rea­son to write code your­self in two weeks when an agent can do it in two hours, he says. I don’t think he’s wrong about the ef­fi­ciency. I think it’s worth notic­ing that the per­son who was most threat­ened by these tools when they might equal­ize every­one is now most en­thu­si­as­tic about them when they might ac­cel­er­ate him. Funny how that works.

The phrase he used that day in Germany has stuck with me, though. He said that LLMs will “take away what’s so great about science.” At the time, I thought he was just talking about his own competitive edge, his fluency as a native English speaker, his ability to write fast and publish often. And he was. But I’ve come to think the phrase itself was more right than he knew, even if his reasons for saying it were mostly self-interested. What’s great about science is its people. The slow, stubborn, sometimes painful process by which a confused student becomes an independent thinker. If we use these tools to bypass that process in favor of faster output, we don’t just risk taking away what’s great about science. We take away the only part of it that wasn’t replaceable in the first place.

The dis­course around LLMs in sci­ence tends to clus­ter at two poles that David Hogg iden­ti­fies cleanly: let-them-cook, in which we hand the reins to the ma­chines and be­come cu­ra­tors of their out­put, and ban-and-pun­ish, in which we pre­tend it’s 2019 and pros­e­cute any­one caught prompt­ing. Both are bad. Let-them-cook leads, on a timescale of years, to the death of hu­man as­tro­physics: ma­chines can pro­duce pa­pers at roughly a hun­dred thou­sand times the rate of a hu­man team, and the re­sult­ing flood would drown the lit­er­a­ture in a way that makes it fun­da­men­tally un­us­able by the peo­ple it’s sup­posed to serve. Ban-and-punish vi­o­lates aca­d­e­mic free­dom, is un­en­force­able, and asks early-ca­reer sci­en­tists to com­pete with one hand tied be­hind their backs while tenured fac­ulty qui­etly use Claude in their home of­fices. Neither pol­icy is se­ri­ous. Both are mostly pro­jec­tion.

But the real threat is­n’t ei­ther of those things. It’s qui­eter, and more bor­ing, and there­fore more dan­ger­ous. The real threat is a slow, com­fort­able drift to­ward not un­der­stand­ing what you’re do­ing. Not a dra­matic col­lapse. Not Skynet. Just a gen­er­a­tion of re­searchers who can pro­duce re­sults but can’t pro­duce un­der­stand­ing. Who know what but­tons to press but not why those but­tons ex­ist. Who can get a pa­per through peer re­view but can’t sit in a room with a col­league and ex­plain, from the ground up, why the third term in their ex­pan­sion has the sign that it does.

Frank Herbert (yeah, I know I’m a nerd), in God Emperor of Dune, has a character observe: “What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there’s the real danger.” Herbert was writing science fiction. I’m writing about my office. The distance between those two things has gotten uncomfortably small.

I should be hon­est about the con­text I’m writ­ing from, be­cause this es­say would be ob­nox­ious com­ing from some­one who’s never touched an LLM. I use AI agents reg­u­larly, and so do most of the peo­ple in my re­search group. The col­leagues I work with pro­duce solid re­sults with these tools. But when you look at how they use them, there’s a pat­tern: they know what the code should do be­fore they ask the agent to write it. They know what the pa­per should say be­fore they let it help with the phras­ing. They can ex­plain every func­tion, every pa­ra­me­ter, every mod­el­ing choice, be­cause they built that knowl­edge over years of do­ing things the slow way. If every AI com­pany went bank­rupt to­mor­row, these peo­ple would be slower. They would not be lost. They came to the tools af­ter the train­ing, not in­stead of it. That se­quence mat­ters more than any­thing else in this con­ver­sa­tion.

When I see ju­nior PhD stu­dents en­ter­ing the field now, I see some­thing dif­fer­ent. I see stu­dents who reach for the agent be­fore they reach for the text­book. Who ask Claude to ex­plain a pa­per in­stead of read­ing it. Who ask Claude to im­ple­ment a math­e­mat­i­cal model in Python in­stead of try­ing, fail­ing, star­ing at the er­ror mes­sage, fail­ing again, and even­tu­ally un­der­stand­ing not just the model but the dozen ad­ja­cent things they had to learn in or­der to get it work­ing. The fail­ures are the cur­ricu­lum. The er­ror mes­sages are the syl­labus. Every hour you spend con­fused is an hour you spend build­ing the in­fra­struc­ture in­side your own head that will even­tu­ally let you do orig­i­nal work. There is no short­cut through that process that does­n’t leave you di­min­ished on the other side.

People call this friction “grunt work.” Schwartz uses exactly that phrase, and he’s right that LLMs can remove it. What he doesn’t say, because he already has decades of hard-won intuition and doesn’t need the grunt work anymore, is that for someone who doesn’t yet have that intuition, the grunt work is the work. The boring parts and the important parts are tangled together in a way that you can’t separate in advance. You don’t know which afternoon of debugging was the one that taught you something fundamental about your data until three years later, when you’re working on a completely different problem and the insight surfaces. Serendipity doesn’t come from efficiency. It comes from spending time in the space where the problem lives, getting your hands dirty, making mistakes that nobody asked you to make and learning things nobody assigned you to learn.

The strange thing is that we al­ready know this. We have al­ways known this. Every physics text­book ever writ­ten comes with ex­er­cises at the end of each chap­ter, and every physics pro­fes­sor who has ever stood in front of a lec­ture hall has said the same thing: you can­not learn physics by watch­ing some­one else do it. You have to pick up the pen­cil. You have to at­tempt the prob­lem. You have to get it wrong, sit with the wrong­ness, and fig­ure out where your rea­son­ing broke. Reading the so­lu­tion man­ual and nod­ding along feels like un­der­stand­ing. It is not un­der­stand­ing. Every stu­dent who has tried to coast through a prob­lem set by read­ing the so­lu­tions and then bombed the exam knows this in their bones. We have cen­turies of ac­cu­mu­lated ped­a­gog­i­cal wis­dom telling us that the at­tempt, in­clud­ing the failed at­tempt, is where the learn­ing lives. And yet, some­how, when it comes to AI agents, we’ve col­lec­tively de­cided that maybe this time it’s dif­fer­ent. That maybe nod­ding at Claude’s out­put is a sub­sti­tute for do­ing the cal­cu­la­tion your­self. It is­n’t. We knew that be­fore LLMs ex­isted. We seem to have for­got­ten it the mo­ment they be­came con­ve­nient.

This is the dis­tinc­tion that I think the cur­rent de­bate keeps miss­ing. Using an LLM as a sound­ing board: fine. Using it as a syn­tax trans­la­tor when you know what you want to say but can’t re­mem­ber the ex­act Matplotlib key­word: fine. Using it to look up a BibTeX for­mat­ting con­ven­tion so you don’t have to wade through Stack Overflow: fine. In all of these cases, the hu­man is the ar­chi­tect. The ma­chine holds the dic­tio­nary. The think­ing has al­ready been done, and the tool is just smooth­ing the last mile of ex­e­cu­tion. But the mo­ment you use the ma­chine to by­pass the think­ing it­self, to let it make the method­olog­i­cal choices, to let it de­cide what the data means, to let it write the ar­gu­ment while you nod along, you have crossed a line that is very dif­fi­cult to see and very dif­fi­cult to un­cross. You haven’t saved time. You’ve for­feited the ex­pe­ri­ence that the time was sup­posed to give you.

Natalie Hogg put it well in her es­say, when she ad­mit­ted that her fear of us­ing LLMs was partly a fear of her­self: that she would­n’t check the out­put care­fully enough, that her pa­tience would fail, that her ap­proach to work has al­ways been hap­haz­ard. That kind of hon­esty is rare in these dis­cus­sions, and it mat­ters. The fail­ure mode is­n’t mal­ice. It’s con­ve­nience. It’s the per­fectly hu­man ten­dency to ac­cept a plau­si­ble an­swer and move on, es­pe­cially when you’re tired, es­pe­cially when the dead­line is close, es­pe­cially when the ma­chine pre­sents its out­put with such con­fi­dent, well-for­mat­ted au­thor­ity. The prob­lem is­n’t that we’ll de­cide to stop think­ing. The prob­lem is that we’ll barely no­tice when we do.

I’m not ar­gu­ing that LLMs should be banned from re­search. That would be stu­pid, and it would be a po­si­tion I don’t hold, given that I used one this morn­ing. I’m ar­gu­ing that the way we use them mat­ters more than whether we use them, and that the dis­tinc­tion be­tween tool use and cog­ni­tive out­sourc­ing is the sin­gle most im­por­tant line in this en­tire con­ver­sa­tion, and that al­most no­body is draw­ing it clearly. Schwartz can use Claude to write a pa­per be­cause Schwartz al­ready knows the physics. His decades of ex­pe­ri­ence are the im­mune sys­tem that catches Claude’s hal­lu­ci­na­tions. A first-year stu­dent us­ing the same tool, on the same prob­lem, with the same su­per­vi­sor giv­ing the same feed­back, pro­duces the same out­put with none of the un­der­stand­ing. The pa­per looks iden­ti­cal. The sci­en­tist does­n’t.

And here is where I have to be fair to Bob, be­cause Bob is­n’t stu­pid. Bob is re­spond­ing ra­tio­nally to the in­cen­tives he’s been given. Academia is cut­throat. The pub­lish-or-per­ish pres­sure is not a metaphor; it is the lit­eral mech­a­nism by which ca­reers are made or ended. Long gone are the days when a sin­gle, care­fully rea­soned mono­graph could get you through a PhD and into a good post­doc. Academic hir­ing now re­wards pub­li­ca­tion vol­ume. The more pa­pers you pro­duce dur­ing your PhD, the bet­ter your chances of land­ing a com­pet­i­tive post­doc, which im­proves your chances of a good fel­low­ship, which im­proves your chances of a tenure-track po­si­tion, each step com­pound­ing the last (so many lev­els, al­most like a pyra­mid). So why would­n’t a first-year stu­dent out­source their think­ing to an agent, if do­ing so means three pa­pers in­stead of one? The logic is air­tight, right up un­til the mo­ment it is­n’t. Because the same ca­reer lad­der that re­wards early pub­li­ca­tion vol­ume even­tu­ally de­mands some­thing that no agent can pro­vide: the abil­ity to iden­tify a good prob­lem, to know when a re­sult smells wrong, to su­per­vise some­one else’s work with the con­fi­dence that comes only from hav­ing done it your­self. You can’t skip the first five years of learn­ing and ex­pect to sur­vive the next twenty. There is no avoid­ing the pub­lish-or-per­ish race if you want an aca­d­e­mic ca­reer. But there is a bal­ance to be struck, and it re­quires the one thing that is hard­est to do when you’re twenty-four and anx­ious about your fu­ture: pri­or­i­tiz­ing long-term un­der­stand­ing over short-term out­put. Nobody has ever been good at that. I’m not sure why we’d start now.

Five years from now, Alice will be writ­ing her own grant pro­pos­als, choos­ing her own prob­lems, su­per­vis­ing her own stu­dents. She’ll know what ques­tions to ask be­cause she spent a year learn­ing the hard way what hap­pens when you ask the wrong ones. She’ll be able to sit with a new dataset and feel, in her gut, when some­thing is off, be­cause she’s de­vel­oped the in­tu­ition that only comes from do­ing the work your­self, from the te­dious hours of de­bug­ging, from the af­ter­noons wasted chas­ing sign er­rors, from the slow ac­cu­mu­la­tion of tacit knowl­edge that no sum­mary can trans­mit.

Bob will be fine. He’ll have a good CV. He’ll prob­a­bly have a job. He’ll use what­ever the 2031 ver­sion of Claude is, and he’ll pro­duce re­sults, and those re­sults will look like sci­ence.

I’m not wor­ried about the ma­chines. The ma­chines are fine. I’m wor­ried about us.


D. W. Hogg, “Why do we do astrophysics?”, arXiv:2602.10181, February 2026.

N. B. Hogg, “Find the stable and pull out the bolt”, February 2026. Available at nataliebhogg.com.

M. Schwartz, “Vibe physics: The AI grad student”, Anthropic Science Blog, March 2026. Available at anthropic.com/research/vibe-physics.

...

Read the original on ergosphere.blog »

3 493 shares, 26 trendiness

German National EUDI Wallet: Architecture Documentation

The Wallet Unit provides an authentication means that can be bound to multiple identification means, such as the PID, via a public/private key pair (see the cryptography chapter).

When issuing the PID, the WB confirms to the PP (via OpenID4VCI Key Attestation) that the keys to which a PID is to be bound are controlled by an authentication means (see the cryptography chapter) that meets certain security requirements with regard to resistance against attackers with a certain attack potential (see ISO/IEC 18045).

Furthermore, in the context of performing electronic identification at assurance level high, such as with the PID, it is required that authentication of wallet users is done in accordance with the requirements for the characteristics and design of electronic identification means at assurance level high, as set out in Implementing Regulation (EU) 2015/1502 (see CIR 2024/2979 Article 5 1. b/g).

Therefore, the au­then­ti­ca­tion means pro­vides two im­por­tant as­sur­ances:

The authentication means protects the key store against duplication and tampering attacks by attackers with high attack potential. Thus, the PP can be sure that the credentials it has issued, which are bound to the keys of the authentication means, cannot be duplicated by an attacker with high attack potential, and thus that the identification means itself cannot be duplicated in its entirety (see CIR 2015/1502 Annex 2.2.1).

The authentication means protects the user’s authentication mechanism against attacks by attackers with high attack potential. Thus, the PP can be sure that the credentials it has issued, which are bound to the keys of the authentication means, cannot be misused by an attacker with high attack potential, e.g. for single presentations of a credential (see CIR 2015/1502 Annex 2.3.1).

The first as­sur­ance can be achieved by cre­at­ing and pro­cess­ing the rel­e­vant keys in an RWSCD im­ple­mented as an HSM that has been ap­pro­pri­ately eval­u­ated and cer­ti­fied. This as­sur­ance can there­fore be achieved in­de­pen­dently of the user de­vice.

The sec­ond as­sur­ance con­cerns the au­then­ti­ca­tion mech­a­nism of the user to­wards the re­ly­ing party when pre­sent­ing the cre­den­tial. This in­cludes two-fac­tor au­then­ti­ca­tion of the user to­wards the RWSCA. The se­cu­rity of the user au­then­ti­ca­tion mech­a­nism and the au­then­ti­ca­tion fac­tors de­pend on the se­cu­rity of the user de­vice. The so­lu­tion com­prises a pos­ses­sion fac­tor se­cured by the HKS of the mo­bile de­vice and a knowl­edge fac­tor en­tered via the mo­bile de­vice.

The security of the possession factor depends on whether exploitable vulnerabilities exist in the HKS of the mobile device that would allow the key to be extracted or misused.

The security of the knowledge factor depends on whether exploitable vulnerabilities exist in the wallet instance and/or the operating system of the mobile device.

A pre­ced­ing vul­ner­a­bil­ity analy­sis and cer­ti­fi­ca­tion of the HKS or the OS with re­gard to re­sis­tance to a spe­cific at­tack po­ten­tial, which would sig­nif­i­cantly re­duce the like­li­hood of the ex­is­tence of rel­e­vant vul­ner­a­bil­i­ties, is not avail­able for mo­bile de­vices in prac­tice. Rather, it can be ob­served that rel­e­vant vul­ner­a­bil­i­ties have be­come known for mo­bile de­vices in the past.

For this reason, the solution provides for monitoring identified vulnerabilities in the HKS and the operating system of user devices through a mobile device vulnerability management (MDVM) during operation, to reduce the likelihood that existing relevant vulnerabilities can be exploited. This is achieved by ensuring that if vulnerabilities are known for a user device that could compromise the user’s authentication mechanism towards the RWSCA with an attack potential of ‘high’ or lower, the use of keys secured by the RWSCA/RWSCD is prevented. Thus, the confirmation of the WB to the PP remains valid.
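The blocking rule can be sketched as a lookup against a vulnerability database keyed by device class. This is a minimal illustration, not part of the actual architecture: the database schema, the field names, and the attack-potential values are all assumptions made for the sketch.

```python
from dataclasses import dataclass

# Attack potentials (ISO/IEC 18045 scale) at or below "high" that must
# trigger blocking of RWSCA/RWSCD key use. The string values are illustrative.
BLOCKING_POTENTIALS = {"basic", "enhanced-basic", "moderate", "high"}

@dataclass
class Vulnerability:
    """One entry in a hypothetical MDVM vulnerability database."""
    cve_id: str
    affects_auth_mechanism: bool  # could it compromise user authentication towards the RWSCA?
    attack_potential: str         # minimum attack potential needed to exploit it

def rwsca_key_use_allowed(device_class: str,
                          mdvm_db: dict[str, list[Vulnerability]]) -> bool:
    """Return False if any known vulnerability for this device class could
    compromise the authentication mechanism with an attack potential of
    'high' or lower; otherwise key use remains allowed."""
    for vuln in mdvm_db.get(device_class, []):
        if vuln.affects_auth_mechanism and vuln.attack_potential in BLOCKING_POTENTIALS:
            return False
    return True
```

With this shape, a device class carrying a known auth-relevant vulnerability exploitable at ‘high’ potential is blocked, while device classes absent from the database default to allowed; a production system might well invert that default.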

To achieve this goal, the MDVM pro­vides for the fol­low­ing func­tions:

The com­po­nents and roles for pro­vid­ing these func­tions are in­tro­duced in the de­com­po­si­tion chap­ter of the ar­chi­tec­ture.

This chap­ter pro­vides an overview of the col­lected sig­nals and their map­ping to rel­e­vant threats. It also de­scribes ad­di­tional uses of these sig­nals for plau­si­bil­ity checks and for de­ter­min­ing the de­vice class used to query the MDVM data­bases.

* “attestationIdModel”, “attestationIdProduct”, and “attestationIdDevice” can all be used to identify the device model for determining the device class. All three are included because tests have shown that some devices do not provide all of these values. To increase the likelihood of identifying the model via key attestation, all three fields should be evaluated.

* The key at­tes­ta­tion sig­na­ture and cer­tifi­cates (including the full cer­tifi­cate chain) must be val­i­dated in or­der to rely on the sig­nals as de­scribed. Google’s re­vo­ca­tion list for key-at­tes­ta­tion cer­tifi­cates should also be checked. However, the list is not up­dated fre­quently enough, and there are reg­u­larly leaked keys pub­licly avail­able that can still be used to sign key at­tes­ta­tions but have not yet been re­voked by Google.

* Android key at­tes­ta­tion also in­cludes ad­di­tional in­for­ma­tion about the prop­er­ties of the at­tested key, such as user-au­then­ti­ca­tion re­quire­ments for key us­age or the key’s pur­pose (signing, en­cryp­tion, etc.). These as­pects are not doc­u­mented in this list, as they do not mit­i­gate a spe­cific threat to the app or the de­vice. Nonetheless, they are im­por­tant and must be ver­i­fied when eval­u­at­ing the key at­tes­ta­tion.

* Since our minimum Android version will be Android 13, we will be checking for “MEETS_STRONG_INTEGRITY”, and the table above is based on the capabilities of the hardware-backed Play Integrity verdict.

* MEETS_STRONG_INTEGRITY also in­cludes the re­quire­ment that the de­vice has re­ceived a se­cu­rity patch within the last 12 months.

* The Play Integrity verdict signature must be validated and the verdict needs to be decrypted (using keys provided via the Google Play Console) in order to rely on the signals as described.
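As a sketch, the check described in the last three bullets might look like the following once the verdict has been decrypted and its signature validated. The JSON layout follows the Play Integrity verdict format; the helper name is our own and not part of any API:

```python
def meets_strong_integrity(verdict: dict) -> bool:
    """Check a decrypted, signature-validated Play Integrity verdict.
    MEETS_STRONG_INTEGRITY implies hardware-backed checks and a security
    patch within the last 12 months, as noted above."""
    labels = verdict.get("deviceIntegrity", {}).get("deviceRecognitionVerdict", [])
    return "MEETS_STRONG_INTEGRITY" in labels

# A device passing the strongest check typically carries all three labels.
verdict = {"deviceIntegrity": {"deviceRecognitionVerdict": [
    "MEETS_BASIC_INTEGRITY", "MEETS_DEVICE_INTEGRITY", "MEETS_STRONG_INTEGRITY"]}}
print(meets_strong_integrity(verdict))  # True
```

If the label is absent (or the verdict is malformed), the check fails closed, which matches the requirement that the use of RWSCA/RWSCD-secured keys is prevented on non-conforming devices.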

Since we have not yet de­cided on a RASP so­lu­tion, the doc­u­mented de­tec­tion fea­tures should be con­sid­ered a pre­lim­i­nary set of re­quire­ments for po­ten­tial RASP so­lu­tions.

* RASP pro­vides a way to con­tin­u­ously and dy­nam­i­cally mon­i­tor the app and the user’s de­vice for in­tegrity and au­then­tic­ity while the app is run­ning.

* Root de­tec­tion by RASP is par­tic­u­larly im­por­tant in the Android en­vi­ron­ment, as there are pub­licly known and well-doc­u­mented meth­ods that use leaked key-at­tes­ta­tion keys to em­u­late a de­vice with a locked boot­loader, even when it is run­ning a mod­i­fied sys­tem im­age (e.g., a rooted sys­tem im­age). This root-de­tec­tion mech­a­nism works in com­bi­na­tion with an in­de­pen­dently main­tained block­list, sep­a­rate from Google’s re­vo­ca­tion list, for pub­licly leaked key-at­tes­ta­tion keys that have not been re­voked by Google. It is in­tended in par­tic­u­lar to serve as a bar­rier against the use of non-pub­licly leaked key-at­tes­ta­tion keys.

* The attestation also includes a receipt that can be used to query a risk metric from Apple’s servers. Apple’s documentation describes this metric as follows: “The receipt represents the metric as a string that indicates the number of attested keys associated with a given device over the past 30 days. Look for this value to be a low number.” This metric could help identify devices that are being used to generate attestations or assertions on behalf of other devices (proxy attestation threat). However, it has significant limitations: neither the metric nor the expected thresholds are clearly defined, and its use requires our backend to communicate with Apple’s servers, which introduces additional privacy risks due to the potential for tracking on Apple’s side. For additional information, see WardenSupreme’s description of the feature.

* iOS does not pro­vide any hard­ware-backed in­for­ma­tion about the de­vice model or the OS ver­sion/​patch level. These val­ues must be queried from the OS af­ter en­sur­ing that the de­vice and op­er­at­ing sys­tem have not been tam­pered with.

Since we have not yet de­cided on a RASP so­lu­tion, the doc­u­mented de­tec­tion fea­tures should be con­sid­ered a pre­lim­i­nary set of re­quire­ments for po­ten­tial RASP so­lu­tions.

* Apple’s platform security provides strong install-time protections: App Sandbox & Code Signing, App Store Review (preventing obviously malicious or re-signed apps from entering the store), and System Integrity Protection (preventing unsigned code execution on non-jailbroken devices). Based on the documented functionality of these features, they do not provide information about, or protection against, rooting (jailbreak) or elevated privileges, runtime hooking, or instrumentation.

* “Your app uses the App Attest service to assert its authenticity. A compromised version of your app running on a genuine, unmodified Apple device can’t create valid assertions.” (Apple documentation)

* RASP pro­vides a way to con­tin­u­ously and dy­nam­i­cally mon­i­tor the app and the user’s de­vice for in­tegrity and au­then­tic­ity while the app is run­ning.

...

Read the original on bmi.usercontent.opencode.de »

4 467 shares, 55 trendiness

JuliusBrussee/caveman: 🪨 why use many token when few token do trick — Claude Code skill that cuts 75% of tokens by talking like caveman

why use many to­ken when few do trick

A Claude Code skill/​plu­gin and Codex plu­gin that makes agent talk like cave­man — cut­ting ~75% of to­kens while keep­ing full tech­ni­cal ac­cu­racy.

Based on the vi­ral ob­ser­va­tion that cave­man-speak dra­mat­i­cally re­duces LLM to­ken us­age with­out los­ing tech­ni­cal sub­stance. So we made it a one-line in­stall.

Same fix. 75% less word. Brain still big.

Real to­ken counts from the Claude API (reproduce it your­self):

A March 2026 paper, “Brevity Constraints Reverse Performance Hierarchies in Language Models”, found that constraining large models to brief responses improved accuracy by 26 percentage points on certain benchmarks and completely reversed performance hierarchies. Verbose not always better. Sometimes less word = more correct.

npx skills add JuliusBrussee/caveman

claude plu­gin mar­ket­place add JuliusBrussee/caveman

claude plu­gin in­stall cave­man@cave­man

Install once. Use in all ses­sions af­ter that.

One rock. That it.

* Faster re­sponse — less to­ken to gen­er­ate = speed go brrr

* Easier to read — no wall of text, just the an­swer

* Same ac­cu­racy — all tech­ni­cal info kept, only fluff re­moved (science say so)

* “I’d be happy to help you with that” (8 wasted tokens)

* “The reason this is happening is because” (7 wasted tokens)

* “I would recommend that you consider” (7 wasted tokens)

* “Sure, let me take a look at that for you” (10 wasted tokens)

Caveman say what need say­ing. Then stop.
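A rough back-of-the-envelope illustration of the idea, using whitespace word counts as a crude stand-in for tokens (real token counts come from the Claude API, as noted above; the example sentences are made up):

```python
# Compare a verbose assistant reply with its caveman equivalent.
verbose = ("I'd be happy to help you with that! The reason this is happening "
           "is because the index is out of range, so I would recommend that "
           "you consider clamping it before the lookup.")
caveman = "Index out of range. Clamp before lookup."

v, c = len(verbose.split()), len(caveman.split())
print(f"verbose: {v} words, caveman: {c} words, saved: {1 - c / v:.0%}")
# prints: verbose: 33 words, caveman: 7 words, saved: 79%
```

Word counts understate the effect slightly versus real tokenizers, but the shape of the savings is the same: the technical content (what broke, how to fix) survives intact.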

...

Read the original on github.com »

5 407 shares, 20 trendiness

siddharthvaddem/openscreen: Create stunning demos for free. Open-source, no subscriptions, no watermarks, and free for commercial use. An alternative to Screen Studio.

OpenScreen is your free, open-source al­ter­na­tive to Screen Studio (sort of).

If you don’t want to pay $29/month for Screen Studio but want a much sim­pler ver­sion that does what most peo­ple seem to need, mak­ing beau­ti­ful prod­uct demos and walk­throughs, here’s a free-to-use app for you. OpenScreen does not of­fer all Screen Studio fea­tures, but cov­ers the ba­sics well!

Screen Studio is an awe­some prod­uct and this is def­i­nitely not a 1:1 clone. OpenScreen is a much sim­pler take, just the ba­sics for folks who want con­trol and don’t want to pay. If you need all the fancy fea­tures, your best bet is to sup­port Screen Studio (they re­ally do a great job, haha). But if you just want some­thing free (no gotchas) and open, this pro­ject does the job!

OpenScreen is 100% free for per­sonal and com­mer­cial use. Use it, mod­ify it, dis­trib­ute it. (Just be cool 😁 and give a shoutout if you feel like it !)

* Record your whole screen or spe­cific win­dows.

* Customize the du­ra­tion and po­si­tion of zooms how­ever you please.

* Export in dif­fer­ent as­pect ra­tios and res­o­lu­tions.

Download the lat­est in­staller for your plat­form from the GitHub Releases page.

If you en­counter is­sues with ma­cOS Gatekeeper block­ing the app (since it does not come with a de­vel­oper cer­tifi­cate), you can by­pass this by run­ning the fol­low­ing com­mand in your ter­mi­nal af­ter in­stal­la­tion:

xattr -rd com.ap­ple.quar­an­tine /Applications/Openscreen.app

Note: Give your terminal Full Disk Access in System Settings > Privacy & Security, then run the above command.

After running this command, proceed to System Preferences > Security & Privacy to grant the necessary permissions for “screen recording” and “accessibility”. Once permissions are granted, you can launch the app.

Download the .AppImage file from the re­leases page. Make it ex­e­cutable and run:

chmod +x Openscreen-Linux-*.AppImage

./Openscreen-Linux-*.AppImage

You may need to grant screen record­ing per­mis­sions de­pend­ing on your desk­top en­vi­ron­ment.

Note: If the app fails to launch due to a “sandbox” error, run it with --no-sandbox:

./Openscreen-Linux-*.AppImage --no-sandbox

System au­dio cap­ture re­lies on Electron’s desk­top­Cap­turer and has some plat­form-spe­cific quirks:

* macOS: Requires macOS 13+. On macOS 14.2+ you’ll be prompted to grant audio capture permission. macOS 12 and below does not support system audio (the mic still works).

* Windows: Works out of the box.

* Linux: Needs PipeWire (default on Ubuntu 22.04+, Fedora 34+). Older PulseAudio-only setups may not support system audio (the mic should still work).

I’m new to open source, idk what I’m do­ing lol. If some­thing is wrong please raise an is­sue 🙏

Contributions are wel­come! If you’d like to help out or see what’s cur­rently be­ing worked on, take a look at the open is­sues and the pro­ject roadmap to un­der­stand the cur­rent di­rec­tion of the pro­ject and find ways to con­tribute.

This pro­ject is li­censed un­der the MIT License. By us­ing this soft­ware, you agree that the au­thors are not li­able for any is­sues, dam­ages, or claims aris­ing from its use.

...

Read the original on github.com »

6 375 shares, 19 trendiness

AWS Engineer Reports PostgreSQL Performance Halved By Linux 7.0, But A Fix May Not Be Easy

Michael Larabel is the prin­ci­pal au­thor of Phoronix.com and founded the site in 2004 with a fo­cus on en­rich­ing the Linux hard­ware ex­pe­ri­ence. Michael has writ­ten more than 20,000 ar­ti­cles cov­er­ing the state of Linux hard­ware sup­port, Linux per­for­mance, graph­ics dri­vers, and other top­ics. Michael is also the lead de­vel­oper of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org au­to­mated bench­mark­ing soft­ware. He can be fol­lowed via Twitter, LinkedIn, or con­tacted via MichaelLarabel.com.

...

Read the original on www.phoronix.com »

7 301 shares, 77 trendiness

Eight years of wanting, three months of building with AI

For eight years, I’ve wanted a high-qual­ity set of de­v­tools for work­ing with SQLite. Given how im­por­tant SQLite is to the in­dus­try1, I’ve long been puz­zled that no one has in­vested in build­ing a re­ally good de­vel­oper ex­pe­ri­ence for it.

A couple of weeks ago, after ~250 hours of effort over three months3 on evenings, weekends, and vacation days, I finally released syntaqlite (GitHub), fulfilling this long-held wish. And I believe the main reason this happened was because of AI coding agents.

Of course, there’s no short­age of posts claim­ing that AI one-shot their pro­ject or push­ing back and de­clar­ing that AI is all slop. I’m go­ing to take a very dif­fer­ent ap­proach and, in­stead, sys­tem­at­i­cally break down my ex­pe­ri­ence build­ing syn­taqlite with AI, both where it helped and where it was detri­men­tal.

I’ll do this while con­tex­tu­al­iz­ing the pro­ject and my back­ground so you can in­de­pen­dently as­sess how gen­er­al­iz­able this ex­pe­ri­ence was. And when­ever I make a claim, I’ll try to back it up with ev­i­dence from my pro­ject jour­nal, cod­ing tran­scripts, or com­mit his­to­ry5.

In my work on Perfetto, I maintain a SQLite-based language for querying performance traces called PerfettoSQL. It’s basically the same as SQLite but with a few extensions to make the trace querying experience better. There are ~100K lines of PerfettoSQL internally in Google and it’s used by a wide range of teams.

Having a language which gets traction means your users also start expecting things like formatters, linters, and editor extensions. I’d hoped that we could adapt some SQLite tools from open source, but the more I looked into it, the more disappointed I was. What I found either wasn’t reliable enough, fast enough6, or flexible enough to adapt to PerfettoSQL. There was clearly an opportunity to build something from scratch, but it was never “the most important thing we could work on”. We’ve been reluctantly making do with the tools out there but always wishing for better.

On the other hand, there was the option to do something in my spare time. I had built lots of open source projects in my teens7 but this had faded away during university when I felt that I just didn’t have the motivation anymore. Being a maintainer is much more than just “throwing the code out there” and seeing what happens. It’s triaging bugs, investigating crashes, writing documentation, building a community, and, most importantly, having a direction for the project.

But the itch of open source (specifically, the freedom to work on what I wanted while helping others) had never gone away. The SQLite devtools project was eternally in my mind as “something I’d like to work on”. But there was another reason why I kept putting it off: it sits at the intersection of being both hard and tedious.

If I was going to invest my personal time working on this project, I didn’t want to build something that only helped Perfetto: I wanted to make it work for any SQLite user out there8. And this means parsing SQL exactly like SQLite.

The heart of any lan­guage-ori­ented de­v­tool is the parser. This is re­spon­si­ble for turn­ing the source code into a parse tree” which acts as the cen­tral data struc­ture any­thing else is built on top of. If your parser is­n’t ac­cu­rate, then your for­mat­ters and lin­ters will in­evitably in­herit those in­ac­cu­ra­cies; many of the tools I found suf­fered from hav­ing parsers which ap­prox­i­mated the SQLite lan­guage rather than rep­re­sent­ing it pre­cisely.

Unfortunately, un­like many other lan­guages, SQLite has no for­mal spec­i­fi­ca­tion de­scrib­ing how it should be parsed. It does­n’t ex­pose a sta­ble API for its parser ei­ther. In fact, quite uniquely, in its im­ple­men­ta­tion it does­n’t even build a parse tree at all9! The only rea­son­able ap­proach left in my opin­ion is to care­fully ex­tract the rel­e­vant parts of SQLite’s source code and adapt it to build the parser I want­ed10.
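To make concrete what a parse tree is, here is a toy sketch for a tiny SQL subset. This is illustrative only and bears no relation to syntaqlite’s actual implementation; the node kinds and helper are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Node:
    kind: str       # e.g. "select_stmt", "result_columns", "column", "table"
    children: list  # nested Nodes or raw token strings

def parse_select(sql: str) -> Node:
    # Handles only "SELECT <cols> FROM <table>" -- enough to show the shape
    # a formatter or linter would walk.
    tokens = sql.replace(",", " , ").split()
    assert tokens[0].upper() == "SELECT"
    from_idx = [t.upper() for t in tokens].index("FROM")
    cols = [Node("column", [t]) for t in tokens[1:from_idx] if t != ","]
    return Node("select_stmt", [Node("result_columns", cols),
                                Node("table", [tokens[from_idx + 1]])])

tree = parse_select("SELECT name, ts FROM slice")
print(tree.kind, [c.kind for c in tree.children])
# prints: select_stmt ['result_columns', 'table']
```

A real SQLite-accurate parser has to get this shape right for every one of the >400 grammar rules mentioned below, including comments, whitespace, and error recovery, which is where the difficulty lies.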

This means getting into the weeds of SQLite source code, a fiendishly difficult codebase to understand. The whole project is written in C in an incredibly dense style; I’ve spent days just understanding the virtual table API11 and implementation. Trying to grasp the full parser stack was daunting.

There’s also the fact that there are >400 rules in SQLite which capture the full surface area of its language. I’d have to specify in each of these “grammar rules” how that part of the syntax maps to the matching node in the parse tree. It’s extremely repetitive work; each rule is similar to all the ones around it but also, by definition, different.

And it’s not just the rules but also com­ing up with and writ­ing tests to make sure it’s cor­rect, de­bug­ging if some­thing is wrong, triag­ing and fix­ing the in­evitable bugs peo­ple filed when I got some­thing wrong…

For years, this was where the idea died. Too hard for a side pro­jec­t12, too te­dious to sus­tain mo­ti­va­tion, too risky to in­vest months into some­thing that might not work.

I’ve been us­ing cod­ing agents since early 2025 (Aider, Roo Code, then Claude Code since July) and they’d def­i­nitely been use­ful but never some­thing I felt I could trust a se­ri­ous pro­ject to. But to­wards the end of 2025, the mod­els seemed to make a sig­nif­i­cant step for­ward in qual­i­ty13. At the same time, I kept hit­ting prob­lems in Perfetto which would have been triv­ially solved by hav­ing a re­li­able parser. Each workaround left the same thought in the back of my mind: maybe it’s fi­nally time to build it for real.

I got some space to think and re­flect over Christmas and de­cided to re­ally stress test the most max­i­mal­ist ver­sion of AI: could I vibe-code the whole thing us­ing just Claude Code on the Max plan (£200/month)?

Through most of January, I it­er­ated, act­ing as semi-tech­ni­cal man­ager and del­e­gat­ing al­most all the de­sign and all the im­ple­men­ta­tion to Claude. Functionally, I ended up in a rea­son­able place: a parser in C ex­tracted from SQLite sources us­ing a bunch of Python scripts, a for­mat­ter built on top, sup­port for both the SQLite lan­guage and the PerfettoSQL ex­ten­sions, all ex­posed in a web play­ground.

But when I re­viewed the code­base in de­tail in late January, the down­side was ob­vi­ous: the code­base was com­plete spaghet­ti14. I did­n’t un­der­stand large parts of the Python source ex­trac­tion pipeline, func­tions were scat­tered in ran­dom files with­out a clear shape, and a few files had grown to sev­eral thou­sand lines. It was ex­tremely frag­ile; it solved the im­me­di­ate prob­lem but it was never go­ing to cope with my larger vi­sion, never mind in­te­grat­ing it into the Perfetto tools. The sav­ing grace was that it had proved the ap­proach was vi­able and gen­er­ated more than 500 tests, many of which I felt I could reuse.

I de­cided to throw away every­thing and start from scratch while also switch­ing most of the code­base to Rust15. I could see that C was go­ing to make it dif­fi­cult to build the higher level com­po­nents like the val­ida­tor and the lan­guage server im­ple­men­ta­tion. And as a bonus, it would also let me use the same lan­guage for both the ex­trac­tion and run­time in­stead of split­ting it across C and Python.

More importantly, I completely changed my role in the project. I took ownership of all decisions16 and used AI more as “autocomplete on steroids” inside a much tighter process: opinionated design upfront, reviewing every change thoroughly, fixing problems eagerly as I spotted them, and investing in scaffolding (like linting, validation, and non-trivial testing17) to check AI output automatically.

The core fea­tures came to­gether through February and the fi­nal stretch (upstream test val­i­da­tion, ed­i­tor ex­ten­sions, pack­ag­ing, docs) led to a 0.1 launch in mid-March.

But in my opin­ion, this time­line is the least in­ter­est­ing part of this story. What I re­ally want to talk about is what would­n’t have hap­pened with­out AI and also the toll it took on me as I used it.

I’ve written in the past about how one of my biggest weaknesses as a software engineer is my tendency to procrastinate when facing a big new project. Though I didn’t realize it at the time, it could not have applied more perfectly to building syntaqlite.

AI basically let me put aside all my doubts on technical calls, my uncertainty about building the right thing, and my reluctance to get started by giving me very concrete problems to work on. Instead of “I need to understand how SQLite’s parsing works”, it was “I need to get AI to suggest an approach for me so I can tear it up and build something better”18. I work so much better with concrete prototypes to play with and code to look at than endlessly thinking about designs in my head, and AI lets me get to that point at a pace I could not have dreamed about before. Once I took the first step, every step after that was so much easier.

AI turned out to be better than me at the act of writing code itself, assuming that code is obvious. If I can break a problem down to “write a function with this behaviour and parameters” or “write a class matching this interface,” AI will build it faster than I would and, crucially, in a style that might well be more intuitive to a future reader. It documents things I’d skip, lays out code consistently with the rest of the project, and sticks to what you might call the “standard dialect” of whatever language you’re working in19.

That standardness is a double-edged sword. For the vast majority of code in any project, standard is exactly what you want: predictable, readable, unsurprising. But every project has pieces that are its edge, the parts where the value comes from doing something non-obvious. For syntaqlite, that was the extraction pipeline and the parser architecture. AI’s instinct to normalize was actively harmful there, and those were the parts I had to design in depth and often resorted to just writing myself.

But here’s the flip side: the same speed that makes AI great at obvious code also makes it great at refactoring. If you’re using AI to generate code at industrial scale, you have to refactor constantly and continuously20. If you don’t, things immediately get out of hand. This was the central lesson of the vibe-coding month: I didn’t refactor enough, the codebase became something I couldn’t reason about, and I had to throw it all away. In the rewrite, refactoring became the core of my workflow. After every large batch of generated code, I’d step back and ask “is this ugly?” Sometimes AI could clean it up. Other times there was a large-scale abstraction that AI couldn’t see but I could; I’d give it the direction and let it execute21. If you have taste, the cost of a wrong approach drops dramatically because you can restructure quickly22.

Of all the ways I used AI, re­search had by far the high­est ra­tio of value de­liv­ered to time spent.

I’ve worked with in­ter­preters and parsers be­fore but I had never heard of Wadler-Lindig pretty print­ing23. When I needed to build the for­mat­ter, AI gave me a con­crete and ac­tion­able les­son from a point of view I could un­der­stand and pointed me to the pa­pers to learn more. I could have found this my­self even­tu­ally, but AI com­pressed what might have been a day or two of read­ing into a fo­cused con­ver­sa­tion where I could ask but why does this work?” un­til I ac­tu­ally got it.
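The core Wadler-Lindig idea can be sketched in a few lines. This is a simplified toy, not the full algorithm and not syntaqlite’s formatter: documents are built from text, soft line breaks, and groups; a group prints flat if it fits the target width, otherwise its soft breaks become newlines.

```python
def flat(doc):
    # One-line rendering: soft breaks flatten to a single space.
    kind = doc[0]
    if kind == "text":  return doc[1]
    if kind == "line":  return " "
    if kind == "group": return "".join(flat(d) for d in doc[1])

def render(doc, width, indent=0):
    kind = doc[0]
    if kind == "text":  return doc[1]
    if kind == "line":  return "\n" + " " * indent
    if kind == "group":
        if len(flat(doc)) <= width:    # whole group fits: keep it flat
            return flat(doc)
        return "".join(render(d, width, indent + 2) for d in doc[1])

select = ("group", [("text", "SELECT"), ("line",),
                    ("text", "name,"), ("line",), ("text", "ts")])
print(render(select, width=40))  # fits on one line: SELECT name, ts
print(render(select, width=10))  # too wide: breaks, one item per indented line
```

The production algorithm tracks remaining width incrementally instead of re-flattening, and handles nested groups and indentation far more carefully, but the fits-or-break decision above is the heart of it.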

This ex­tended to en­tire do­mains I’d never worked in. I have deep C++ and Android per­for­mance ex­per­tise but had barely touched Rust tool­ing or ed­i­tor ex­ten­sion APIs. With AI, it was­n’t a prob­lem: the fun­da­men­tals are the same, the ter­mi­nol­ogy is sim­i­lar, and AI bridges the gap24. The VS Code ex­ten­sion would have taken me a day or two of learn­ing the API be­fore I could even start. With AI, I had a work­ing ex­ten­sion within an hour.

It was also in­valu­able for reac­quaint­ing my­self with parts of the pro­ject I had­n’t looked at for a few days25. I could con­trol how deep to go: tell me about this com­po­nent” for a sur­face-level re­fresher, give me a de­tailed lin­ear walk­through” for a deeper dive, audit un­safe us­ages in this repo” to go hunt­ing for prob­lems. When you’re con­text switch­ing a lot, you lose con­text fast. AI let me reac­quire it on de­mand.

Beyond mak­ing the pro­ject ex­ist at all, AI is also the rea­son it shipped as com­plete as it did. Every open source pro­ject has a long tail of fea­tures that are im­por­tant but not crit­i­cal: the things you know the­o­ret­i­cally how to do but keep de­pri­or­i­tiz­ing be­cause the core work is more press­ing. For syn­taqlite, that list was long: ed­i­tor ex­ten­sions, Python bind­ings, a WASM play­ground, a docs site, pack­ag­ing for mul­ti­ple ecosys­tem­s26. AI made these cheap enough that skip­ping them felt like the wrong trade-off.

It also freed up men­tal en­ergy for UX27. Instead of spend­ing all my time on im­ple­men­ta­tion, I could think about what a user’s first ex­pe­ri­ence should feel like: what er­ror mes­sages would ac­tu­ally help them fix their SQL, how the for­mat­ter out­put should look by de­fault, whether the CLI flags were in­tu­itive. These are the things that sep­a­rate a tool peo­ple try once from one they keep us­ing, and AI gave me the head­room to care about them. Without AI, I would have built some­thing much smaller, prob­a­bly no ed­i­tor ex­ten­sions or docs site. AI did­n’t just make the same pro­ject faster. It changed what the pro­ject was.

There’s an uncomfortable parallel between using AI coding tools and playing slot machines28. You send a prompt, wait, and either get something great or something useless. I found myself up late at night wanting to do “just one more prompt,” constantly trying AI just to see what would happen even when I knew it probably wouldn’t work. The sunk cost fallacy kicked in too: I’d keep at it even on tasks it was clearly ill-suited for, telling myself “maybe if I phrase it differently this time.”

The tired­ness feed­back loop made it worse29. When I had en­ergy, I could write pre­cise, well-scoped prompts and be gen­uinely pro­duc­tive. But when I was tired, my prompts be­came vague, the out­put got worse, and I’d try again, get­ting more tired in the process. In these cases, AI was prob­a­bly slower than just im­ple­ment­ing some­thing my­self, but it was too hard to break out of the loop30.

Several times dur­ing the pro­ject, I lost my men­tal model of the code­base31. Not the over­all ar­chi­tec­ture or how things fit­ted to­gether. But the day-to-day de­tails of what lived where, which func­tions called which, the small de­ci­sions that ac­cu­mu­late into a work­ing sys­tem. When that hap­pened, sur­pris­ing is­sues would ap­pear and I’d find my­self at a to­tal loss to un­der­stand what was go­ing wrong. I hated that feel­ing.

The deeper problem was that losing touch created a communication breakdown32. When you don’t have the mental thread of what’s going on, it becomes impossible to communicate meaningfully with the agent. Every exchange gets longer and more verbose. Instead of “change FooClass to do X,” you end up saying “change the thing which does Bar to do X”. Then the agent has to figure out what Bar is, how that maps to FooClass, and sometimes it gets it wrong33. It’s exactly the same complaint engineers have always had about managers who don’t understand the code asking for fanciful or impossible things. Except now you’ve become that manager.

The fix was deliberate: I made it a habit to read through the code immediately after it was implemented and actively engage by asking “how would I have done this differently?”

Of course, in some sense all of the above is also true of code I wrote a few months ago (hence the sentiment that AI code is legacy code), but AI makes the drift happen faster because you’re not building the same muscle memory that comes from originally typing it out.

There were some other prob­lems I only dis­cov­ered in­cre­men­tally over the three months.

I found that AI made me pro­cras­ti­nate on key de­sign de­ci­sion­s34. Because refac­tor­ing was cheap, I could al­ways say I’ll deal with this later.” And be­cause AI could refac­tor at the same in­dus­trial scale it gen­er­ated code, the cost of de­fer­ring felt low. But it was­n’t: de­fer­ring de­ci­sions cor­roded my abil­ity to think clearly be­cause the code­base stayed con­fus­ing in the mean­time. The vibe-cod­ing month was the most ex­treme ver­sion of this. Yes, I un­der­stood the prob­lem, but if I had been more dis­ci­plined about mak­ing hard de­sign calls ear­lier, I could have con­verged on the right ar­chi­tec­ture much faster.

Tests cre­ated a sim­i­lar false com­fort35. Having 500+ tests felt re­as­sur­ing, and AI made it easy to gen­er­ate more. But nei­ther hu­mans nor AI are cre­ative enough to fore­see every edge case you’ll hit in the fu­ture; there are sev­eral times in the vibe-cod­ing phase where I’d come up with a test case and re­alise the de­sign of some com­po­nent was com­pletely wrong and needed to be to­tally re­worked. This was a sig­nif­i­cant con­trib­u­tor to my lack of trust and the de­ci­sion to scrap every­thing and start from scratch.

Basically, I learned that the “normal rules” of software still apply in the AI age: if you don’t have a fundamental foundation (clear architecture, well-defined boundaries), you’ll be left eternally chasing bugs as they appear.

Something I kept com­ing back to was how lit­tle AI un­der­stood about the pas­sage of time36. It sees a code­base in a cer­tain state but does­n’t feel time the way hu­mans do. I can tell you what it feels like to use an API, how it evolved over months or years, why cer­tain de­ci­sions were made and later re­versed.

The nat­ural prob­lem from this lack of un­der­stand­ing is that you ei­ther make the same mis­takes you made in the past and have to re­learn the lessons or you fall into new traps which were suc­cess­fully avoided the first time, slow­ing you down in the long run. In my opin­ion, this is a sim­i­lar prob­lem to why los­ing a high-qual­ity se­nior en­gi­neer hurts a team so much: they carry his­tory and con­text that does­n’t ex­ist any­where else and act as a guide for oth­ers around them.

In the­ory, you can try to pre­serve this con­text by keep­ing specs and docs up to date. But there’s a rea­son we did­n’t do this be­fore AI: cap­tur­ing im­plicit de­sign de­ci­sions ex­haus­tively is in­cred­i­bly ex­pen­sive and time-con­sum­ing to write down. AI can help draft these docs, but be­cause there’s no way to au­to­mat­i­cally ver­ify that it ac­cu­rately cap­tured what mat­ters, a hu­man still has to man­u­ally au­dit the re­sult. And that’s still time-con­sum­ing.

There’s also the con­text pol­lu­tion prob­lem. You never know when a de­sign note about API A will echo in API B. Consistency is a huge part of what makes code­bases work, and for that you don’t just need con­text about what you’re work­ing on right now but also about other things which were de­signed in a sim­i­lar way. Deciding what’s rel­e­vant re­quires ex­actly the kind of judge­ment that in­sti­tu­tional knowl­edge pro­vides in the first place.

Reflecting on the above, the pat­tern of when AI helped and when it hurt was fairly con­sis­tent.

When I was working on something I already understood deeply, AI was excellent. I could review its output instantly, catch mistakes before they landed, and move at a pace I’d never have managed alone. The parser rule generation is the clearest example37: I knew exactly what each rule should produce, so I could review AI’s output within a minute or two and iterate fast.

When I was work­ing on some­thing I could de­scribe but did­n’t yet know, AI was good but re­quired more care. Learning Wadler-Lindig for the for­mat­ter was like this: I could ar­tic­u­late what I wanted, eval­u­ate whether the out­put was head­ing in the right di­rec­tion, and learn from what AI ex­plained. But I had to stay en­gaged and could­n’t just ac­cept what it gave me.

When I was work­ing on some­thing where I did­n’t even know what I wanted, AI was some­where be­tween un­help­ful and harm­ful. The ar­chi­tec­ture of the pro­ject was the clear­est case: I spent weeks in the early days fol­low­ing AI down dead ends, ex­plor­ing de­signs that felt pro­duc­tive in the mo­ment but col­lapsed un­der scrutiny. In hind­sight, I have to won­der if it would have been faster just think­ing it through with­out AI in the loop at all.

But ex­per­tise alone is­n’t enough. Even when I un­der­stood a prob­lem deeply, AI still strug­gled if the task had no ob­jec­tively check­able an­swer38. Implementation has a right an­swer, at least at a lo­cal level: the code com­piles, the tests pass, the out­put matches what you asked for. Design does­n’t. We’re still ar­gu­ing about OOP decades af­ter it first took off.

Concretely, I found that designing the public API of syntaqlite was where this hit home the hardest. I spent several days in early March doing nothing but API refactoring, manually fixing things any experienced engineer would have instinctively avoided but AI made a total mess of. There’s no test or objective metric for “is this API pleasant to use” or “will this API help users solve the problems they have”, and that’s exactly why the coding agents did so badly at it.

This takes me back to the days I was ob­sessed with physics and, specif­i­cally, rel­a­tiv­ity. The laws of physics look sim­ple and Newtonian in any small lo­cal area, but zoom out and space­time curves in ways you can’t pre­dict from the lo­cal pic­ture alone. Code is the same: at the level of a func­tion or a class, there’s usu­ally a clear right an­swer, and AI is ex­cel­lent there. But ar­chi­tec­ture is what hap­pens when all those lo­cal pieces in­ter­act, and you can’t get good global be­hav­iour by stitch­ing to­gether lo­cally cor­rect com­po­nents.

Knowing where you are on these axes at any given mo­ment is, I think, the core skill of work­ing with AI ef­fec­tively.

Eight years is a long time to carry a pro­ject in your head. Seeing these SQLite tools ac­tu­ally ex­ist and func­tion af­ter only three months of work is a mas­sive win, and I’m fully aware they would­n’t be here with­out AI.

But the process was­n’t the clean, lin­ear suc­cess story peo­ple usu­ally post. I lost an en­tire month to vibe-cod­ing. I fell into the trap of man­ag­ing a code­base I did­n’t ac­tu­ally un­der­stand, and I paid for that with a to­tal rewrite.

The takeaway for me is simple: AI is an incredible force multiplier for implementation, but it's a dangerous substitute for design. It's brilliant at giving you the right answer to a specific technical question, but it has no sense of history, taste, or how a human will actually feel using your API. If you rely on it for the "soul" of your software, you'll just end up hitting a wall faster than you ever have before.

What I’d like to see more of from oth­ers is ex­actly what I’ve tried to do here: hon­est, de­tailed ac­counts of build­ing real soft­ware with these tools; not week­end toys or one-off scripts but the kind of soft­ware that has to sur­vive con­tact with users, bug re­ports, and your own chang­ing mind.

...

Read the original on lalitm.com »

8 264 shares, 13 trendiness

llm-wiki

Save karpathy/442a6bf555914893e9891c11519de94f to your computer and use it in GitHub Desktop.

...

Read the original on gist.github.com »

9 250 shares, 39 trendiness

Sad Story Of My Google Workspace account suspension

As the title suggests, I have finally become the victim of Google's account suspension. The reason given over the phone was that my account had been "hijacked" - when in reality, I was simply the one accessing it from overseas.

Despite re­peat­edly ex­plain­ing this, they ig­nored my as­ser­tions and con­tinue to hold my email hostage. I un­der­stand they may be­lieve they’re pro­tect­ing me from so­cial en­gi­neer­ing, but a DNS ver­i­fi­ca­tion was al­ready com­pleted to prove I own the do­main. While sus­pended, no emails are re­ceived and no for­wards work.

On Saturday, April 4, around 5:00 AM, I removed my phone number from the account. I am travelling to the UK for a short period and did not want roaming on my Australian phone. Despite my setting up an authenticator and passkey, Gmail insisted that I log in via SMS to my phone number. Of course, this was not what I wanted, so I removed the recovery phone, expecting the system to fall back to my authenticator. I was in the UK to negotiate a business outcome - I should have realised that if Gmail fails, everything downstream fails with it.

On Saturday, April 4, at 5:06 AM, I re­ceived a no­ti­fi­ca­tion say­ing my au­then­ti­ca­tor had been re­moved. It had­n’t. The au­then­ti­ca­tor was still ac­tive on my phone - it was the re­cov­ery phone I had re­moved. Google ap­par­ently con­flated the two.

I had my logged-in lap­top, phone, passkey, and backup codes, but none of it worked.

Every recovery attempt kept hitting the same "something went wrong" dead end, despite tapping the correct recovery prompt on my phone.

This email is the sole ac­count on my Google Workspace - and the su­per ad­min ac­count. A sin­gle point of fail­ure. It has three aliases across dif­fer­ent do­mains. Email, Drive, Calendar, pay­roll, and au­then­ti­ca­tion for mul­ti­ple ser­vices are all tied to it.

I had multiple email forwards configured precisely for situations like this, but because the account is suspended, forwards don't work either. Payroll is tomorrow; I won't be able to run it because of my reliance on Gmail-based authentication. I cannot log in to Pipedrive, our sales CRM, because I use Google Auth. I can't log in to our task management app. I can't log in to our internal systems because - you guessed it - they relied on Google OAuth.

Then it gets worse.

I tried ac­count re­cov­ery and was asked to ver­ify DNS own­er­ship via CNAME and TXT records. I did this promptly. Meanwhile, the re­cov­ery email op­tion told me I’d need to wait 30 days.

Thirty days of a sus­pended busi­ness email. No in­com­ing mail. No for­wards. Nothing.

So Google will sus­pend my email for 30 days and no emails will be re­ceived dur­ing this time.

Fortunately, I run an­other Google Workspace ac­count. I logged into that ad­min con­sole and con­tacted Google sup­port through it. The rep told me to fill out a form at a link - which re­quired me to log in. With my sus­pended ac­count. Which I ob­vi­ously can’t do.

Another rep on chat told me the account was "simply suspended" and that I could recover it in incognito mode. I told them I'd already tried this multiple times and hit the same error every time. They filed another ticket.


This back-and-forth re­peated sev­eral times over 24 hours. There are now four open cases. None of the case man­agers seem to know what’s hap­pen­ing - one case gets closed in favour of an­other, then re­opened be­cause the other was closed.

I read community forums and reached out to Google Workspace on X.com. They asked for case numbers, which I promptly gave, and they told me to wait.

I finally reached someone by phone and explained that I had removed the phone number myself. I asked them to verify the DNS record and confirmed I'd passed their "knowledge test." They said everything checked out and that there would be a resolution within a few hours. They confirmed as much in a follow-up email.

More than 40 hours have passed since that phone call. I've since been told the person handling my case isn't on shift for another 90 minutes. This means I will likely miss payroll. A negotiation with a business associate will need to be rescheduled. I could give them my personal email, but I'd rather keep business and personal separate.

Update 1 - I know I can simply change the MX record to someone else, but that has its own challenges.

Update 2 - Sadly, it's 2 PM in the UK and I will miss the meeting that I had scheduled via Google Meet, because emails are not working and neither is anything Google. I have been on chat with Google reps since the morning. First an update was promised in 1-2 hours. Then, on follow-up three hours later, another update was promised in 90 minutes, as the rep's shift hadn't started. Then, 90 minutes later, I received a call where another update was promised in 60 minutes.

...

Read the original on zencapital.substack.com »

10 228 shares, 66 trendiness

Artemis II crew see first glimpse of far side of Moon

The crew for Nasa’s Artemis II mis­sion have de­scribed see­ing the far side of the Moon for the first time.

Nasa as­tro­nauts Reid Wiseman, Victor Glover, and Christina Koch, and Canadian Space Agency as­tro­naut Jeremy Hansen have en­tered the third day of their mis­sion on the Orion space­craft that will carry them around the far side of the Moon and back to Earth.

"Something about your senses that is not the Moon that I'm used to seeing," Koch said.

The crew shared a photo they took of the Orientale basin of the Moon, which Nasa said marked "the first time the entire basin has been seen with human eyes".

As of 23:00 BST on Saturday, Nasa’s on­line dash­board showed the Artemis II space­craft was more than 180,000 miles (289,681km) from Earth.

...

Read the original on www.bbc.com »
