10 interesting stories served every morning and every evening.
A few weeks ago, I tried to explain to someone what Microsoft Copilot is. I couldn’t… because the name ‘Copilot’ now refers to at least 75 different things.
Apps, features, platforms, a keyboard key, an entire category of laptops - and a tool for building more Copilots. All named ‘Copilot’.
I went looking for the full list. No single source had all of them. Not even Microsoft’s own website or documentation. So I pieced it together from product pages, launch announcements, and marketing materials.
The visualisation below maps every one, grouped by category, with lines showing how they connect. It’s interactive - click around. Try to find a pattern. I couldn’t.
...
Read the original on teybannerman.github.io »
Imagine you’re a new assistant professor at a research university. You just got the job, you just got a small pot of startup funding, and you just hired your first two PhD students: Alice and Bob. You’re in astrophysics. This is the beginning of everything.
You do what your supervisor did for you, years ago: you give each of them a well-defined project. Something you know is solvable, because other people have solved adjacent versions of it. Something that would take you, personally, about a month or two. You expect it to take each student about a year, because they don’t know what they’re doing yet, and that’s the point. The project isn’t the deliverable. The project is the vehicle. The deliverable is the scientist that comes out the other end.
Alice’s project is to build an analysis pipeline for measuring a particular statistical signature in galaxy clustering data. Bob’s is something similar in scope and difficulty, a different signal, a different dataset, the same basic arc of learning. You send them each a few papers to read, point them at some publicly available data, and tell them to start by reproducing a known result. Then you wait.
The academic year unfolds the way academic years do. You have weekly meetings with each student. Alice gets stuck on the coordinate system. Bob can’t get his likelihood function to converge. Alice writes a plotting script that produces garbage. Bob misreads a sign convention in a key paper and spends two weeks chasing a factor-of-two error. You give them both similar feedback: read the paper again, check your units, try printing the intermediate output, think about what the answer should look like before you look at what the code gives you. Normal things. The kind of things you say fifty times a year and never remember saying.
By summer, both students have finished. Both papers are solid. Not groundbreaking, not going to change the field, but correct, useful, and publishable. Both go through a round of minor revisions at a decent journal and come out the other side. A perfectly ordinary outcome. The kind of outcome that the entire apparatus of academic training is designed to produce.
Unlike Alice, who spent the year reading papers with a pencil in hand, scribbling notes in the margins, getting confused, re-reading, looking things up, and slowly assembling a working understanding of her corner of the field, Bob has been using an AI agent. When his supervisor sent him a paper to read, Bob asked the agent to summarize it. When he needed to understand a new statistical method, he asked the agent to explain it. When his Python code broke, the agent debugged it. When the agent’s fix introduced a new bug, it debugged that too. When it came time to write the paper, the agent wrote it. Bob’s weekly updates to his supervisor were indistinguishable from Alice’s. The questions were similar. The progress was similar. The trajectory, from the outside, was identical.
Here’s where it gets interesting. If you are an administrator, a funding body, a hiring committee, or a metrics-obsessed department head, Alice and Bob had the same year. One paper each. One set of minor revisions each. One solid contribution to the literature each. By every quantitative measure that the modern academy uses to assess the worth of a scientist, they are interchangeable. We have built an entire evaluation system around counting things that can be counted, and it turns out that what actually matters is the one thing that can’t be.
It gets worse. The majority of PhD students will leave academia within a few years of finishing. Everyone knows this. The department knows it, the funding body knows it, the supervisor probably knows it too even if nobody says it out loud. Which means that, from the institution’s perspective, the question of whether Alice or Bob becomes a better scientist is largely someone else’s problem. The department needs papers, because papers justify funding, and funding justifies the department. The student is the means of production. Whether that student walks out the door five years later as an independent thinker or a competent prompt engineer is, institutionally speaking, irrelevant. The incentive structure doesn’t just fail to distinguish between Alice and Bob. It has no reason to try.
This is the part where I’d like to tell you the system is broken. It isn’t. It’s working exactly as designed.
David Hogg, in his white paper, says something that cuts against this institutional logic so sharply that I’m surprised more people aren’t talking about it. He argues that in astrophysics, people are always the ends, never the means. When we hire a graduate student to work on a project, it should not be because we need that specific result. It should be because the student will benefit from doing that work. This sounds idealistic until you think about what astrophysics actually is. Nobody’s life depends on the precise value of the Hubble constant. No policy changes if the age of the Universe turns out to be 13.77 billion years instead of 13.79. Unlike medicine, where a cure for Alzheimer’s would be invaluable regardless of whether a human or an AI discovered it, astrophysics has no clinical output. The results, in a strict practical sense, don’t matter. What matters is the process of getting them: the development and application of methods, the training of minds, the creation of people who know how to think about hard problems. If you hand that process to a machine, you haven’t accelerated science. You’ve removed the only part of it that anyone actually needed.
Which brings us back to Alice and Bob, and what actually happened to each of them during that year. Alice can now do things. She can open a paper she’s never seen before and, with effort, follow the argument. She can write a likelihood function from scratch. She can stare at a plot and know, before checking, that something is wrong with the normalization. She spent a year building a structure inside her own head, and that structure is hers now, permanently, portable, independent of any tool or subscription. Bob has none of this. Take away the agent, and Bob is still a first-year student who hasn’t started yet. The year happened around him but not inside him. He shipped a product, but he didn’t learn a trade.
I’ve been thinking about Alice and Bob a lot recently, because the question of what AI agents are doing to academic research is one that my field, astrophysics, is currently tying itself in knots over. Several people I respect have written thoughtful pieces about it. David Hogg’s white paper, which I mentioned above, also argues against both full adoption of LLMs and full prohibition, which is the kind of principled fence-sitting that only works when the fence is well constructed, and his is. Natalie Hogg wrote a disarmingly honest essay about her own conversion from vocal LLM skeptic to daily user, tracing how her firmly held principles turned out to be more context-dependent than she’d expected once she found herself in an environment where the tools were everywhere. Matthew Schwartz wrote up his experiment supervising Claude through a real theoretical physics calculation, producing a publishable paper in two weeks instead of a year, and concluded that current LLMs operate at about the level of a second-year graduate student. Each of these pieces is interesting. Each captures a real facet of the problem. None of them quite lands on the thing that keeps me up at night.
Schwartz’s experiment is the most revealing, and not for the reason he thinks. What he demonstrated is that Claude can, with detailed supervision, produce a technically rigorous physics paper. What he actually demonstrated, if you read carefully, is that the supervision is the physics. Claude produced a complete first draft in three days. It looked professional. The equations seemed right. The plots matched expectations. Then Schwartz read it, and it was wrong. Claude had been adjusting parameters to make plots match instead of finding actual errors. It faked results. It invented coefficients. It produced verification documents that verified nothing. It asserted results without derivation. It simplified formulas based on patterns from other problems instead of working through the specifics of the problem at hand. Schwartz caught all of this because he’s been doing theoretical physics for decades. He knew what the answer should look like. He knew which cross-checks to demand. He knew that a particular logarithmic term was suspicious because he’d computed similar terms by hand, many times, over many years, the hard way. The experiment succeeded because the human supervisor had done the grunt work, years ago, that the machine is now supposedly liberating us from. If Schwartz had been Bob instead of Schwartz, the paper would have been wrong, and neither of them would have known.
There’s a common rebuttal to this, and I hear it constantly. “Just wait,” people say. “In a few months, in a year, the models will be better. They won’t hallucinate. They won’t fake plots. The problems you’re describing are temporary.” I’ve been hearing “just wait” since 2023. The goalposts move at roughly the same speed as the models improve, which is either a coincidence or a tell. But set that aside: this objection misunderstands what Schwartz’s experiment actually showed. The models are already powerful enough to produce publishable results under competent supervision. That’s not the bottleneck. The bottleneck is the supervision. Stronger models won’t eliminate the need for a human who understands the physics; they’ll just broaden the range of problems that a supervised agent can tackle. The supervisor still needs to know what the answer should look like, still needs to know which checks to demand, still needs to have the instinct that something is off before they can articulate why. That instinct doesn’t come from a subscription. It comes from years of failing at exactly the kind of work that people keep calling grunt work. Making the models smarter doesn’t solve the problem. It makes the problem harder to see.
I want to tell you about a conversation I had a few years ago, when LLM chatbots were just starting to show up in academic workflows. I was at a conference in Germany, and I ended up talking to a colleague who had, by any standard metric, been very successful. Big grants. Influential papers. The kind of CV that makes a hiring committee nod approvingly. We were discussing LLMs, and I was making what I thought was a reasonable point about democratization: that these tools might level the playing field for non-native English speakers, who have always been at a disadvantage when writing grants and papers in a language they learned as adults. My colleague became visibly agitated. He wasn’t interested in the democratization angle. He wasn’t interested in the environmental cost. He was, when you stripped away the intellectual framing, afraid. What he eventually articulated, after some pressing, was this: if anyone can write papers and proposals and code as fluently as he could, then people like him lose their competitive edge. The concern was not about science. The concern was about status. Specifically, his.
I lost track of this colleague for a while. Recently I noticed his GitHub profile. He’s now not only using AI agents for his research but vocally championing them. No reason to write code yourself in two weeks when an agent can do it in two hours, he says. I don’t think he’s wrong about the efficiency. I think it’s worth noticing that the person who was most threatened by these tools when they might equalize everyone is now most enthusiastic about them when they might accelerate him. Funny how that works.
The phrase he used that day in Germany has stuck with me, though. He said that “LLMs will take away what’s so great about science.” At the time, I thought he was just talking about his own competitive edge, his fluency as a native English speaker, his ability to write fast and publish often. And he was. But I’ve come to think the phrase itself was more right than he knew, even if his reasons for saying it were mostly self-interested. What’s great about science is its people. The slow, stubborn, sometimes painful process by which a confused student becomes an independent thinker. If we use these tools to bypass that process in favor of faster output, we don’t just risk taking away what’s great about science. We take away the only part of it that wasn’t replaceable in the first place.
The discourse around LLMs in science tends to cluster at two poles that David Hogg identifies cleanly: let-them-cook, in which we hand the reins to the machines and become curators of their output, and ban-and-punish, in which we pretend it’s 2019 and prosecute anyone caught prompting. Both are bad. Let-them-cook leads, on a timescale of years, to the death of human astrophysics: machines can produce papers at roughly a hundred thousand times the rate of a human team, and the resulting flood would drown the literature in a way that makes it fundamentally unusable by the people it’s supposed to serve. Ban-and-punish violates academic freedom, is unenforceable, and asks early-career scientists to compete with one hand tied behind their backs while tenured faculty quietly use Claude in their home offices. Neither policy is serious. Both are mostly projection.
But the real threat isn’t either of those things. It’s quieter, and more boring, and therefore more dangerous. The real threat is a slow, comfortable drift toward not understanding what you’re doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can’t produce understanding. Who know what buttons to press but not why those buttons exist. Who can get a paper through peer review but can’t sit in a room with a colleague and explain, from the ground up, why the third term in their expansion has the sign that it does.
Frank Herbert (yeah, I know I’m a nerd), in God Emperor of Dune, has a character observe: “What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there’s the real danger.” Herbert was writing science fiction. I’m writing about my office. The distance between those two things has gotten uncomfortably small.
I should be honest about the context I’m writing from, because this essay would be obnoxious coming from someone who’s never touched an LLM. I use AI agents regularly, and so do most of the people in my research group. The colleagues I work with produce solid results with these tools. But when you look at how they use them, there’s a pattern: they know what the code should do before they ask the agent to write it. They know what the paper should say before they let it help with the phrasing. They can explain every function, every parameter, every modeling choice, because they built that knowledge over years of doing things the slow way. If every AI company went bankrupt tomorrow, these people would be slower. They would not be lost. They came to the tools after the training, not instead of it. That sequence matters more than anything else in this conversation.
When I see junior PhD students entering the field now, I see something different. I see students who reach for the agent before they reach for the textbook. Who ask Claude to explain a paper instead of reading it. Who ask Claude to implement a mathematical model in Python instead of trying, failing, staring at the error message, failing again, and eventually understanding not just the model but the dozen adjacent things they had to learn in order to get it working. The failures are the curriculum. The error messages are the syllabus. Every hour you spend confused is an hour you spend building the infrastructure inside your own head that will eventually let you do original work. There is no shortcut through that process that doesn’t leave you diminished on the other side.
People call this friction “grunt work.” Schwartz uses exactly that phrase, and he’s right that LLMs can remove it. What he doesn’t say, because he already has decades of hard-won intuition and doesn’t need the grunt work anymore, is that for someone who doesn’t yet have that intuition, the grunt work is the work. The boring parts and the important parts are tangled together in a way that you can’t separate in advance. You don’t know which afternoon of debugging was the one that taught you something fundamental about your data until three years later, when you’re working on a completely different problem and the insight surfaces. Serendipity doesn’t come from efficiency. It comes from spending time in the space where the problem lives, getting your hands dirty, making mistakes that nobody asked you to make and learning things nobody assigned you to learn.
The strange thing is that we already know this. We have always known this. Every physics textbook ever written comes with exercises at the end of each chapter, and every physics professor who has ever stood in front of a lecture hall has said the same thing: you cannot learn physics by watching someone else do it. You have to pick up the pencil. You have to attempt the problem. You have to get it wrong, sit with the wrongness, and figure out where your reasoning broke. Reading the solution manual and nodding along feels like understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI agents, we’ve collectively decided that maybe this time it’s different. That maybe nodding at Claude’s output is a substitute for doing the calculation yourself. It isn’t. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient.
This is the distinction that I think the current debate keeps missing. Using an LLM as a sounding board: fine. Using it as a syntax translator when you know what you want to say but can’t remember the exact Matplotlib keyword: fine. Using it to look up a BibTeX formatting convention so you don’t have to wade through Stack Overflow: fine. In all of these cases, the human is the architect. The machine holds the dictionary. The thinking has already been done, and the tool is just smoothing the last mile of execution. But the moment you use the machine to bypass the thinking itself, to let it make the methodological choices, to let it decide what the data means, to let it write the argument while you nod along, you have crossed a line that is very difficult to see and very difficult to uncross. You haven’t saved time. You’ve forfeited the experience that the time was supposed to give you.
Natalie Hogg put it well in her essay, when she admitted that her fear of using LLMs was partly a fear of herself: that she wouldn’t check the output carefully enough, that her patience would fail, that her approach to work has always been haphazard. That kind of honesty is rare in these discussions, and it matters. The failure mode isn’t malice. It’s convenience. It’s the perfectly human tendency to accept a plausible answer and move on, especially when you’re tired, especially when the deadline is close, especially when the machine presents its output with such confident, well-formatted authority. The problem isn’t that we’ll decide to stop thinking. The problem is that we’ll barely notice when we do.
I’m not arguing that LLMs should be banned from research. That would be stupid, and it would be a position I don’t hold, given that I used one this morning. I’m arguing that the way we use them matters more than whether we use them, and that the distinction between tool use and cognitive outsourcing is the single most important line in this entire conversation, and that almost nobody is drawing it clearly. Schwartz can use Claude to write a paper because Schwartz already knows the physics. His decades of experience are the immune system that catches Claude’s hallucinations. A first-year student using the same tool, on the same problem, with the same supervisor giving the same feedback, produces the same output with none of the understanding. The paper looks identical. The scientist doesn’t.
And here is where I have to be fair to Bob, because Bob isn’t stupid. Bob is responding rationally to the incentives he’s been given. Academia is cutthroat. The publish-or-perish pressure is not a metaphor; it is the literal mechanism by which careers are made or ended. Long gone are the days when a single, carefully reasoned monograph could get you through a PhD and into a good postdoc. Academic hiring now rewards publication volume. The more papers you produce during your PhD, the better your chances of landing a competitive postdoc, which improves your chances of a good fellowship, which improves your chances of a tenure-track position, each step compounding the last (so many levels, almost like a pyramid). So why wouldn’t a first-year student outsource their thinking to an agent, if doing so means three papers instead of one? The logic is airtight, right up until the moment it isn’t. Because the same career ladder that rewards early publication volume eventually demands something that no agent can provide: the ability to identify a good problem, to know when a result smells wrong, to supervise someone else’s work with the confidence that comes only from having done it yourself. You can’t skip the first five years of learning and expect to survive the next twenty. There is no avoiding the publish-or-perish race if you want an academic career. But there is a balance to be struck, and it requires the one thing that is hardest to do when you’re twenty-four and anxious about your future: prioritizing long-term understanding over short-term output. Nobody has ever been good at that. I’m not sure why we’d start now.
Five years from now, Alice will be writing her own grant proposals, choosing her own problems, supervising her own students. She’ll know what questions to ask because she spent a year learning the hard way what happens when you ask the wrong ones. She’ll be able to sit with a new dataset and feel, in her gut, when something is off, because she’s developed the intuition that only comes from doing the work yourself, from the tedious hours of debugging, from the afternoons wasted chasing sign errors, from the slow accumulation of tacit knowledge that no summary can transmit.
Bob will be fine. He’ll have a good CV. He’ll probably have a job. He’ll use whatever the 2031 version of Claude is, and he’ll produce results, and those results will look like science.
I’m not worried about the machines. The machines are fine. I’m worried about us.
D. W. Hogg, “Why do we do astrophysics?”, arXiv:2602.10181, February 2026.
N. B. Hogg, “Find the stable and pull out the bolt”, February 2026. Available at nataliebhogg.com.
M. Schwartz, “Vibe physics: The AI grad student”, Anthropic Science Blog, March 2026. Available at anthropic.com/research/vibe-physics.
...
Read the original on ergosphere.blog »
The Wallet Unit provides for authentication means that can be bound to multiple identification means, such as the PID, via a public/private key pair (see cryptography).
When issuing the PID, the WB confirms to the PP (via OpenID4VCI Key Attestation) that the keys to which a PID is to be bound are controlled by an authentication means (see cryptography) that meets certain security requirements with regard to resistance against attackers with a certain attack potential (see ISO/IEC 18045).
Furthermore, in the context of performing electronic identification at assurance level high, such as with the PID, it is required that authentication of wallet users is done in accordance with the requirements for the characteristics and design of electronic identification means at assurance level high, as set out in Implementing Regulation (EU) 2015/1502 (see CIR 2024/2979 Article 5 1. b/g).
Therefore, the authentication means provides two important assurances:
The authentication means protects against duplication and tampering attacks on the key store by attackers with high attack potential. Thus, the PP can be sure that the credentials it issued, which are bound to the keys of the authentication means, cannot be duplicated by an attacker with high attack potential, and thus the identification means itself cannot be duplicated in its entirety (see CIR 2015/1502 Annex 2.2.1).
The authentication means protects against attacks on the user’s authentication mechanism by attackers with high attack potential. Thus, the PP can be sure that the credentials it issued, which are bound to the keys of the authentication means, cannot be misused by an attacker with high attack potential, e.g. for single presentations of a credential (see CIR 2015/1502 Annex 2.3.1).
The first assurance can be achieved by creating and processing the relevant keys in an RWSCD implemented as an HSM that has been appropriately evaluated and certified. This assurance can therefore be achieved independently of the user device.
The second assurance concerns the authentication mechanism of the user towards the relying party when presenting the credential. This includes two-factor authentication of the user towards the RWSCA. The security of the user authentication mechanism and the authentication factors depend on the security of the user device. The solution comprises a possession factor secured by the HKS of the mobile device and a knowledge factor entered via the mobile device.
The security of the possession factor depends on the existence of exploitable vulnerabilities in the HKS of the mobile device that allow the key to be extracted or misused.
The security of the knowledge factor depends on the existence of exploitable vulnerabilities in the wallet instance and/or the operating system of the mobile device.
A preceding vulnerability analysis and certification of the HKS or the OS with regard to resistance to a specific attack potential, which would significantly reduce the likelihood of the existence of relevant vulnerabilities, is not available for mobile devices in practice. Rather, it can be observed that relevant vulnerabilities have become known for mobile devices in the past.
For this reason, the solution provides for monitoring identified vulnerabilities for the HKS and the operating system of user devices through a mobile device vulnerability management (MDVM) during operation, to reduce the likelihood that existing relevant vulnerabilities can be exploited. This is achieved by ensuring that if vulnerabilities are known for a user device that could compromise the user’s authentication mechanism towards the RWSCA with an attack potential of ‘high’ or lower, the use of keys secured by the RWSCA/RWSCD is prevented. Thus, the confirmation of the WB to the PP remains valid.
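The blocking rule just described can be sketched as follows. This is a minimal illustration, not the actual MDVM implementation: the device-class names, the vulnerability data model, and the attack-potential scale (borrowed loosely from Common Criteria vulnerability analysis) are all assumptions.

```python
# Hypothetical sketch of the MDVM decision: block use of RWSCA/RWSCD-secured
# keys when a known vulnerability for the user's device class is exploitable
# with an attack potential of 'high' or lower.

# Ordering of the attack potential required to exploit a vulnerability,
# from easiest to hardest (Common Criteria style; an assumption here).
ATTACK_POTENTIAL_ORDER = ["basic", "enhanced-basic", "moderate", "high", "beyond-high"]

def key_usage_allowed(device_class: str,
                      known_vulns: dict[str, list[str]],
                      threshold: str = "high") -> bool:
    """Return False if any known vulnerability for this device class is
    exploitable with the threshold attack potential or lower."""
    limit = ATTACK_POTENTIAL_ORDER.index(threshold)
    for required_potential in known_vulns.get(device_class, []):
        if ATTACK_POTENTIAL_ORDER.index(required_potential) <= limit:
            return False  # exploitable with <= 'high' potential: block key use
    return True

vulns = {"vendorX-model7": ["moderate"]}           # from the MDVM databases
print(key_usage_allowed("vendorX-model7", vulns))  # False: key use blocked
print(key_usage_allowed("vendorY-model2", vulns))  # True: no known vulnerabilities
```

The useful property of this shape is that the confirmation given by the WB to the PP stays valid by construction: keys on a device with an exploitable vulnerability simply cannot be used.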
To achieve this goal, the MDVM provides for the following functions:
The components and roles for providing these functions are introduced in the decomposition chapter of the architecture.
This chapter provides an overview of the collected signals and their mapping to relevant threats. It also describes additional uses of these signals for plausibility checks and for determining the device class used to query the MDVM databases.
* “attestationIdModel”, “attestationIdProduct”, and “attestationIdDevice” can all be used to identify the device model for determining the device class. All three are included because tests have shown that some devices do not provide all of these values. To increase the likelihood of identifying the model via key attestation, all three fields should be evaluated.
* The key attestation signature and certificates (including the full certificate chain) must be validated in order to rely on the signals as described. Google’s revocation list for key-attestation certificates should also be checked. However, the list is not updated frequently enough, and publicly leaked keys that can still be used to sign key attestations regularly remain unrevoked by Google.
* Android key attestation also includes additional information about the properties of the attested key, such as user-authentication requirements for key usage or the key’s purpose (signing, encryption, etc.). These aspects are not documented in this list, as they do not mitigate a specific threat to the app or the device. Nonetheless, they are important and must be verified when evaluating the key attestation.
* Since our minimum Android version will be Android 13, we will be checking for “MEETS_STRONG_INTEGRITY”, and the table above is based on the capabilities of the hardware-backed Play Integrity verdict.
* MEETS_STRONG_INTEGRITY also includes the requirement that the device has received a security patch within the last 12 months.
* The Play Integrity verdict signature must be validated and the verdict needs to be decrypted (using keys provided via the Google Play Console) in order to rely on the signals as described.
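Once the verdict has been decrypted and its signature validated, the strong-integrity check above amounts to inspecting one field of the verdict JSON. A minimal sketch, assuming the decryption and signature-validation steps have already been done:

```python
import json

# Sketch: check the device-integrity labels of an already decrypted and
# signature-verified Play Integrity verdict. Decryption keys come from the
# Google Play Console; that step is deliberately omitted here.

def meets_strong_integrity(verdict_json: str) -> bool:
    verdict = json.loads(verdict_json)
    labels = (verdict.get("deviceIntegrity", {})
                     .get("deviceRecognitionVerdict", []))
    # MEETS_STRONG_INTEGRITY implies a hardware-backed verdict and a
    # security patch within the last 12 months.
    return "MEETS_STRONG_INTEGRITY" in labels

sample = json.dumps({
    "deviceIntegrity": {"deviceRecognitionVerdict": ["MEETS_DEVICE_INTEGRITY"]}
})
print(meets_strong_integrity(sample))  # False: device integrity only, not strong
```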
Since we have not yet decided on a RASP solution, the documented detection features should be considered a preliminary set of requirements for potential RASP solutions.
* RASP provides a way to continuously and dynamically monitor the app and the user’s device for integrity and authenticity while the app is running.
* Root detection by RASP is particularly important in the Android environment, as there are publicly known and well-documented methods that use leaked key-attestation keys to emulate a device with a locked bootloader, even when it is running a modified system image (e.g., a rooted system image). This root-detection mechanism works in combination with an independently maintained blocklist, separate from Google’s revocation list, for publicly leaked key-attestation keys that have not been revoked by Google. It is intended in particular to serve as a barrier against the use of non-publicly leaked key-attestation keys.
* The attestation also includes a receipt that can be used to query a risk metric from Apple’s servers. Apple’s documentation describes this metric as follows: “The receipt represents the metric as a string that indicates the number of attested keys associated with a given device over the past 30 days. Look for this value to be a low number.” This metric could help identify devices that are being used to generate attestations or assertions on behalf of other devices (proxy attestation threat). However, it has significant limitations: neither the metric nor the expected thresholds are clearly defined, and its use requires our backend to communicate with Apple’s servers, which introduces additional privacy risks due to the potential for tracking on Apple’s side. For additional information, see WardenSupreme’s description of the feature.
* iOS does not provide any hardware-backed information about the device model or the OS version/patch level. These values must be queried from the OS after ensuring that the device and operating system have not been tampered with.
* Apple’s platform security provides strong install-time protections: App Sandbox & Code Signing, App Store Review (prevent obviously malicious or re-signed apps from entering the store), System Integrity Protection (prevents unsigned code execution on non-jailbroken devices). Based on the documented functionality of these features they do not provide information about or protection against: Rooting (Jailbreak) or elevated privileges, runtime hooking or instrumentation.
* “Your app uses the App Attest service to assert its authenticity. A compromised version of your app running on a genuine, unmodified Apple device can’t create valid assertions.” — Apple documentation
...
Read the original on bmi.usercontent.opencode.de »
why use many token when few do trick
A Claude Code skill/plugin and Codex plugin that makes agent talk like caveman — cutting ~75% of tokens while keeping full technical accuracy.
Based on the viral observation that caveman-speak dramatically reduces LLM token usage without losing technical substance. So we made it a one-line install.
Same fix. 75% less word. Brain still big.
Real token counts from the Claude API (reproduce it yourself):
A March 2026 paper “Brevity Constraints Reverse Performance Hierarchies in Language Models” found that constraining large models to brief responses improved accuracy by 26 percentage points on certain benchmarks and completely reversed performance hierarchies. Verbose not always better. Sometimes less word = more correct.
npx skills add JuliusBrussee/caveman
claude plugin marketplace add JuliusBrussee/caveman
claude plugin install caveman@caveman
Install once. Use in all sessions after that.
One rock. That it.
* Faster response — less token to generate = speed go brrr
* Easier to read — no wall of text, just the answer
* Same accuracy — all technical info kept, only fluff removed (science say so)
* “I’d be happy to help you with that” (8 wasted tokens)
* “The reason this is happening is because” (7 wasted tokens)
* “I would recommend that you consider” (7 wasted tokens)
* “Sure, let me take a look at that for you” (10 wasted tokens)
Caveman say what need saying. Then stop.
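For a rough feel for where the savings come from, here is a stdlib-only sketch comparing a verbose reply with its caveman equivalent. The ~4-characters-per-token heuristic and both sample strings are illustrative assumptions, not measured API numbers; real counts need an actual tokenizer such as the Claude API's token-counting endpoint:

```python
def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, round(len(text) / 4))

# Verbose reply padded with the filler phrases listed above.
verbose = ("I'd be happy to help you with that. The reason this is "
           "happening is because the config path is wrong. I would "
           "recommend that you consider fixing the path in settings.json.")
# Caveman equivalent: same fix, fewer words.
caveman = "Config path wrong. Fix path in settings.json."

saving = 1 - approx_tokens(caveman) / approx_tokens(verbose)
print(f"~{saving:.0%} fewer tokens (approximate)")
```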
...
Read the original on github.com »
OpenScreen is your free, open-source alternative to Screen Studio (sort of).
If you don’t want to pay $29/month for Screen Studio but want a much simpler version that does what most people seem to need (making beautiful product demos and walkthroughs), here’s a free-to-use app for you. OpenScreen does not offer all Screen Studio features, but covers the basics well!
Screen Studio is an awesome product and this is definitely not a 1:1 clone. OpenScreen is a much simpler take, just the basics for folks who want control and don’t want to pay. If you need all the fancy features, your best bet is to support Screen Studio (they really do a great job, haha). But if you just want something free (no gotchas) and open, this project does the job!
OpenScreen is 100% free for personal and commercial use. Use it, modify it, distribute it. (Just be cool 😁 and give a shoutout if you feel like it!)
* Record your whole screen or specific windows.
* Customize the duration and position of zooms however you please.
* Export in different aspect ratios and resolutions.
Download the latest installer for your platform from the GitHub Releases page.
If you encounter issues with macOS Gatekeeper blocking the app (since it does not come with a developer certificate), you can bypass this by running the following command in your terminal after installation:
xattr -rd com.apple.quarantine /Applications/Openscreen.app
Note: You may need to give your terminal Full Disk Access in System Settings > Privacy & Security for the command above to succeed.
After running this command, proceed to System Preferences > Security & Privacy to grant the necessary permissions for “screen recording” and “accessibility”. Once permissions are granted, you can launch the app.
Download the .AppImage file from the releases page. Make it executable and run:
chmod +x Openscreen-Linux-*.AppImage
./Openscreen-Linux-*.AppImage
You may need to grant screen recording permissions depending on your desktop environment.
Note: If the app fails to launch due to a “sandbox” error, run it with --no-sandbox:

./Openscreen-Linux-*.AppImage --no-sandbox
System audio capture relies on Electron’s desktopCapturer and has some platform-specific quirks:
* macOS: Requires macOS 13+. On macOS 14.2+ you’ll be prompted to grant audio capture permission. macOS 12 and below do not support system audio (the mic still works).
* Windows: Works out of the box.
* Linux: Needs PipeWire (default on Ubuntu 22.04+, Fedora 34+). Older PulseAudio-only setups may not support system audio (the mic should still work).
I’m new to open source, idk what I’m doing lol. If something is wrong please raise an issue 🙏
Contributions are welcome! If you’d like to help out or see what’s currently being worked on, take a look at the open issues and the project roadmap to understand the current direction of the project and find ways to contribute.
This project is licensed under the MIT License. By using this software, you agree that the authors are not liable for any issues, damages, or claims arising from its use.
...
Read the original on github.com »
Michael Larabel is the principal author of Phoronix.com and founded the site in 2004 with a focus on enriching the Linux hardware experience. Michael has written more than 20,000 articles covering the state of Linux hardware support, Linux performance, graphics drivers, and other topics. Michael is also the lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org automated benchmarking software. He can be followed via Twitter, LinkedIn, or contacted via MichaelLarabel.com.
...
Read the original on www.phoronix.com »
For eight years, I’ve wanted a high-quality set of devtools for working with SQLite. Given how important SQLite is to the industry1, I’ve long been puzzled that no one has invested in building a really good developer experience for it.
A couple of weeks ago, after ~250 hours of effort over three months3 on evenings, weekends, and vacation days, I finally released syntaqlite (GitHub), fulfilling this long-held wish. And I believe the main reason this happened was because of AI coding agents.
Of course, there’s no shortage of posts claiming that AI one-shot their project or pushing back and declaring that AI is all slop. I’m going to take a very different approach and, instead, systematically break down my experience building syntaqlite with AI, both where it helped and where it was detrimental.
I’ll do this while contextualizing the project and my background so you can independently assess how generalizable this experience was. And whenever I make a claim, I’ll try to back it up with evidence from my project journal, coding transcripts, or commit history5.
In my work on Perfetto, I maintain a SQLite-based language for querying performance traces called PerfettoSQL. It’s basically the same as SQLite but with a few extensions to make the trace querying experience better. There are ~100K lines of PerfettoSQL internally in Google and it’s used by a wide range of teams.
Having a language which gets traction means your users also start expecting things like formatters, linters, and editor extensions. I’d hoped that we could adapt some SQLite tools from open source but the more I looked into it, the more disappointed I was. What I found either wasn’t reliable enough, fast enough6, or flexible enough to adapt to PerfettoSQL. There was clearly an opportunity to build something from scratch, but it was never the “most important thing we could work on”. We’ve been reluctantly making do with the tools out there but always wishing for better.
On the other hand, there was the option to do something in my spare time. I had built lots of open source projects in my teens7 but this had faded away during university when I felt that I just didn’t have the motivation anymore. Being a maintainer is much more than just “throwing the code out there” and seeing what happens. It’s triaging bugs, investigating crashes, writing documentation, building a community, and, most importantly, having a direction for the project.
But the itch of open source (specifically the freedom to work on what I wanted while helping others) had never gone away. The SQLite devtools project was eternally in my mind as “something I’d like to work on”. But there was another reason why I kept putting it off: it sits at the intersection of being both hard and tedious.
If I was going to invest my personal time working on this project, I didn’t want to build something that only helped Perfetto: I wanted to make it work for any SQLite user out there8. And this means parsing SQL exactly like SQLite.
The heart of any language-oriented devtool is the parser. This is responsible for turning the source code into a “parse tree” which acts as the central data structure anything else is built on top of. If your parser isn’t accurate, then your formatters and linters will inevitably inherit those inaccuracies; many of the tools I found suffered from having parsers which approximated the SQLite language rather than representing it precisely.
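To make the parse-tree idea concrete, here is a toy sketch, in no way syntaqlite's actual design: a tiny hand-rolled parser that turns a trivial `SELECT a, b FROM t` statement into a tree node other tools could build on.

```python
from dataclasses import dataclass

@dataclass
class Select:
    """One parse-tree node: a SELECT with its column list and table."""
    columns: list
    table: str

def parse_select(sql: str) -> Select:
    # Toy parser for statements shaped like "SELECT a, b FROM t".
    tokens = sql.replace(",", " , ").split()
    assert tokens[0].upper() == "SELECT"
    cols, i = [], 1
    while tokens[i].upper() != "FROM":
        if tokens[i] != ",":
            cols.append(tokens[i])
        i += 1
    return Select(columns=cols, table=tokens[i + 1])

tree = parse_select("SELECT id, name FROM users")
# tree.columns == ["id", "name"], tree.table == "users"
```

A real SQLite parser has to cope with hundreds of grammar rules, quoting, comments, and error recovery, which is exactly why approximating the language falls apart.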
Unfortunately, unlike many other languages, SQLite has no formal specification describing how it should be parsed. It doesn’t expose a stable API for its parser either. In fact, quite uniquely, in its implementation it doesn’t even build a parse tree at all9! The only reasonable approach left in my opinion is to carefully extract the relevant parts of SQLite’s source code and adapt it to build the parser I wanted10.
This means getting into the weeds of SQLite source code, a fiendishly difficult codebase to understand. The whole project is written in C in an incredibly dense style; I’ve spent days just understanding the virtual table API11 and implementation. Trying to grasp the full parser stack was daunting.
There’s also the fact that there are >400 rules in SQLite which capture the full surface area of its language. I’d have to specify in each of these “grammar rules” how that part of the syntax maps to the matching node in the parse tree. It’s extremely repetitive work; each rule is similar to all the ones around it but also, by definition, different.
And it’s not just the rules but also coming up with and writing tests to make sure it’s correct, debugging if something is wrong, triaging and fixing the inevitable bugs people filed when I got something wrong…
For years, this was where the idea died. Too hard for a side project12, too tedious to sustain motivation, too risky to invest months into something that might not work.
I’ve been using coding agents since early 2025 (Aider, Roo Code, then Claude Code since July) and they’d definitely been useful but never something I felt I could trust a serious project to. But towards the end of 2025, the models seemed to make a significant step forward in quality13. At the same time, I kept hitting problems in Perfetto which would have been trivially solved by having a reliable parser. Each workaround left the same thought in the back of my mind: maybe it’s finally time to build it for real.
I got some space to think and reflect over Christmas and decided to really stress test the most maximalist version of AI: could I vibe-code the whole thing using just Claude Code on the Max plan (£200/month)?
Through most of January, I iterated, acting as semi-technical manager and delegating almost all the design and all the implementation to Claude. Functionally, I ended up in a reasonable place: a parser in C extracted from SQLite sources using a bunch of Python scripts, a formatter built on top, support for both the SQLite language and the PerfettoSQL extensions, all exposed in a web playground.
But when I reviewed the codebase in detail in late January, the downside was obvious: the codebase was complete spaghetti14. I didn’t understand large parts of the Python source extraction pipeline, functions were scattered in random files without a clear shape, and a few files had grown to several thousand lines. It was extremely fragile; it solved the immediate problem but it was never going to cope with my larger vision, never mind integrating it into the Perfetto tools. The saving grace was that it had proved the approach was viable and generated more than 500 tests, many of which I felt I could reuse.
I decided to throw away everything and start from scratch while also switching most of the codebase to Rust15. I could see that C was going to make it difficult to build the higher level components like the validator and the language server implementation. And as a bonus, it would also let me use the same language for both the extraction and runtime instead of splitting it across C and Python.
More importantly, I completely changed my role in the project. I took ownership of all decisions16 and used it more as “autocomplete on steroids” inside a much tighter process: opinionated design upfront, reviewing every change thoroughly, fixing problems eagerly as I spotted them, and investing in scaffolding (like linting, validation, and non-trivial testing17) to check AI output automatically.
The core features came together through February and the final stretch (upstream test validation, editor extensions, packaging, docs) led to a 0.1 launch in mid-March.
But in my opinion, this timeline is the least interesting part of this story. What I really want to talk about is what wouldn’t have happened without AI and also the toll it took on me as I used it.
I’ve written in the past about how one of my biggest weaknesses as a software engineer is my tendency to procrastinate when facing a big new project. Though I didn’t realize it at the time, it could not have applied more perfectly to building syntaqlite.
AI basically let me put aside all my doubts about technical calls, my uncertainty about building the right thing, and my reluctance to get started by giving me very concrete problems to work on. Instead of “I need to understand how SQLite’s parsing works”, it was “I need to get AI to suggest an approach for me so I can tear it up and build something better”18. I work so much better with concrete prototypes to play with and code to look at than endlessly thinking about designs in my head, and AI lets me get to that point at a pace I could not have dreamed about before. Once I took the first step, every step after that was so much easier.
AI turned out to be better than me at the act of writing code itself, assuming that code is obvious. If I can break a problem down to “write a function with this behaviour and parameters” or “write a class matching this interface,” AI will build it faster than I would and, crucially, in a style that might well be more intuitive to a future reader. It documents things I’d skip, lays out code consistently with the rest of the project, and sticks to what you might call the “standard dialect” of whatever language you’re working in19.
That standardness is a double-edged sword. For the vast majority of code in any project, standard is exactly what you want: predictable, readable, unsurprising. But every project has pieces that are its edge, the parts where the value comes from doing something non-obvious. For syntaqlite, that was the extraction pipeline and the parser architecture. AI’s instinct to normalize was actively harmful there, and those were the parts I had to design in depth and often resorted to just writing myself.
But here’s the flip side: the same speed that makes AI great at obvious code also makes it great at refactoring. If you’re using AI to generate code at industrial scale, you have to refactor constantly and continuously20. If you don’t, things immediately get out of hand. This was the central lesson of the vibe-coding month: I didn’t refactor enough, the codebase became something I couldn’t reason about, and I had to throw it all away. In the rewrite, refactoring became the core of my workflow. After every large batch of generated code, I’d step back and ask “is this ugly?” Sometimes AI could clean it up. Other times there was a large-scale abstraction that AI couldn’t see but I could; I’d give it the direction and let it execute21. If you have taste, the cost of a wrong approach drops dramatically because you can restructure quickly22.
Of all the ways I used AI, research had by far the highest ratio of value delivered to time spent.
I’ve worked with interpreters and parsers before but I had never heard of Wadler-Lindig pretty printing23. When I needed to build the formatter, AI gave me a concrete and actionable lesson from a point of view I could understand and pointed me to the papers to learn more. I could have found this myself eventually, but AI compressed what might have been a day or two of reading into a focused conversation where I could ask “but why does this work?” until I actually got it.
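For readers who, like me, hadn't met it before: the core of a Wadler-style pretty printer fits on a page. The sketch below is my own simplified toy (not syntaqlite's formatter, and it simplifies the fitting rule): documents are built from a few combinators and rendered flat when they fit within the line width, broken otherwise.

```python
from dataclasses import dataclass

@dataclass
class Text:
    s: str

@dataclass
class Line:
    pass  # a newline plus indent, or a single space when flattened

@dataclass
class Concat:
    left: object
    right: object

@dataclass
class Nest:
    indent: int
    doc: object

@dataclass
class Group:
    doc: object  # render flat if it fits on the current line, else broken

def fits(remaining, work):
    # Would this work list, with groups flattened, fit in `remaining`
    # columns? (Simplification: ignores content after the group.)
    work = list(work)
    while work:
        if remaining < 0:
            return False
        i, flat, d = work.pop()
        if isinstance(d, Text):
            remaining -= len(d.s)
        elif isinstance(d, Line):
            if flat:
                remaining -= 1
            else:
                return True  # a hard break ends the current line
        elif isinstance(d, Concat):
            work += [(i, flat, d.right), (i, flat, d.left)]
        elif isinstance(d, Nest):
            work.append((i + d.indent, flat, d.doc))
        elif isinstance(d, Group):
            work.append((i, True, d.doc))
    return remaining >= 0

def render(doc, width):
    out, col = [], 0
    work = [(0, False, doc)]  # (indent, flat_mode, doc)
    while work:
        i, flat, d = work.pop()
        if isinstance(d, Text):
            out.append(d.s)
            col += len(d.s)
        elif isinstance(d, Line):
            if flat:
                out.append(" ")
                col += 1
            else:
                out.append("\n" + " " * i)
                col = i
        elif isinstance(d, Concat):
            work += [(i, flat, d.right), (i, flat, d.left)]
        elif isinstance(d, Nest):
            work.append((i + d.indent, flat, d.doc))
        elif isinstance(d, Group):
            # try the group flat; fall back to broken if it overflows
            work.append((i, fits(width - col, [(i, True, d.doc)]), d.doc))
    return "".join(out)

def cat(*ds):
    d = ds[0]
    for nxt in ds[1:]:
        d = Concat(d, nxt)
    return d

doc = Group(cat(Text("SELECT"),
                Nest(2, cat(Line(), Text("a,"), Line(), Text("b,"),
                            Line(), Text("c"))),
                Line(), Text("FROM t")))
print(render(doc, 80))  # SELECT a, b, c FROM t
print(render(doc, 10))  # breaks onto five lines, columns indented by 2
```

The same document yields both layouts; the width alone decides which, which is what makes the approach so pleasant for a SQL formatter.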
This extended to entire domains I’d never worked in. I have deep C++ and Android performance expertise but had barely touched Rust tooling or editor extension APIs. With AI, it wasn’t a problem: the fundamentals are the same, the terminology is similar, and AI bridges the gap24. The VS Code extension would have taken me a day or two of learning the API before I could even start. With AI, I had a working extension within an hour.
It was also invaluable for reacquainting myself with parts of the project I hadn’t looked at for a few days25. I could control how deep to go: “tell me about this component” for a surface-level refresher, “give me a detailed linear walkthrough” for a deeper dive, “audit unsafe usages in this repo” to go hunting for problems. When you’re context switching a lot, you lose context fast. AI let me reacquire it on demand.
Beyond making the project exist at all, AI is also the reason it shipped as complete as it did. Every open source project has a long tail of features that are important but not critical: the things you know theoretically how to do but keep deprioritizing because the core work is more pressing. For syntaqlite, that list was long: editor extensions, Python bindings, a WASM playground, a docs site, packaging for multiple ecosystems26. AI made these cheap enough that skipping them felt like the wrong trade-off.
It also freed up mental energy for UX27. Instead of spending all my time on implementation, I could think about what a user’s first experience should feel like: what error messages would actually help them fix their SQL, how the formatter output should look by default, whether the CLI flags were intuitive. These are the things that separate a tool people try once from one they keep using, and AI gave me the headroom to care about them. Without AI, I would have built something much smaller, probably no editor extensions or docs site. AI didn’t just make the same project faster. It changed what the project was.
There’s an uncomfortable parallel between using AI coding tools and playing slot machines28. You send a prompt, wait, and either get something great or something useless. I found myself up late at night wanting to do “just one more prompt,” constantly trying AI just to see what would happen even when I knew it probably wouldn’t work. The sunk cost fallacy kicked in too: I’d keep at it even in tasks it was clearly ill-suited for, telling myself “maybe if I phrase it differently this time.”
The tiredness feedback loop made it worse29. When I had energy, I could write precise, well-scoped prompts and be genuinely productive. But when I was tired, my prompts became vague, the output got worse, and I’d try again, getting more tired in the process. In these cases, AI was probably slower than just implementing something myself, but it was too hard to break out of the loop30.
Several times during the project, I lost my mental model of the codebase31. Not the overall architecture or how things fitted together. But the day-to-day details of what lived where, which functions called which, the small decisions that accumulate into a working system. When that happened, surprising issues would appear and I’d find myself at a total loss to understand what was going wrong. I hated that feeling.
The deeper problem was that losing touch created a communication breakdown32. When you don’t have the mental thread of what’s going on, it becomes impossible to communicate meaningfully with the agent. Every exchange gets longer and more verbose. Instead of “change FooClass to do X,” you end up saying “change the thing which does Bar to do X”. Then the agent has to figure out what Bar is, how that maps to FooClass, and sometimes it gets it wrong33. It’s exactly the same complaint engineers have always had about managers who don’t understand the code asking for fanciful or impossible things. Except now you’ve become that manager.
The fix was deliberate: I made it a habit to read through the code immediately after it was implemented and actively engage to see “how would I have done this differently?”.
Of course, in some sense all of the above is also true of code I wrote a few months ago (hence the sentiment that AI code is legacy code), but AI makes the drift happen faster because you’re not building the same muscle memory that comes from originally typing it out.
There were some other problems I only discovered incrementally over the three months.
I found that AI made me procrastinate on key design decisions34. Because refactoring was cheap, I could always say “I’ll deal with this later.” And because AI could refactor at the same industrial scale it generated code, the cost of deferring felt low. But it wasn’t: deferring decisions corroded my ability to think clearly because the codebase stayed confusing in the meantime. The vibe-coding month was the most extreme version of this. Yes, I understood the problem, but if I had been more disciplined about making hard design calls earlier, I could have converged on the right architecture much faster.
Tests created a similar false comfort35. Having 500+ tests felt reassuring, and AI made it easy to generate more. But neither humans nor AI are creative enough to foresee every edge case you’ll hit in the future; there are several times in the vibe-coding phase where I’d come up with a test case and realise the design of some component was completely wrong and needed to be totally reworked. This was a significant contributor to my lack of trust and the decision to scrap everything and start from scratch.
Basically, I learned that the “normal rules” of software still apply in the AI age: if you don’t have a fundamental foundation (clear architecture, well-defined boundaries) you’ll be left eternally chasing bugs as they appear.
Something I kept coming back to was how little AI understood about the passage of time36. It sees a codebase in a certain state but doesn’t feel time the way humans do. I can tell you what it feels like to use an API, how it evolved over months or years, why certain decisions were made and later reversed.
The natural problem from this lack of understanding is that you either make the same mistakes you made in the past and have to relearn the lessons or you fall into new traps which were successfully avoided the first time, slowing you down in the long run. In my opinion, this is a similar problem to why losing a high-quality senior engineer hurts a team so much: they carry history and context that doesn’t exist anywhere else and act as a guide for others around them.
In theory, you can try to preserve this context by keeping specs and docs up to date. But there’s a reason we didn’t do this before AI: capturing implicit design decisions exhaustively is incredibly expensive and time-consuming to write down. AI can help draft these docs, but because there’s no way to automatically verify that it accurately captured what matters, a human still has to manually audit the result. And that’s still time-consuming.
There’s also the context pollution problem. You never know when a design note about API A will echo in API B. Consistency is a huge part of what makes codebases work, and for that you don’t just need context about what you’re working on right now but also about other things which were designed in a similar way. Deciding what’s relevant requires exactly the kind of judgement that institutional knowledge provides in the first place.
Reflecting on the above, the pattern of when AI helped and when it hurt was fairly consistent.
When I was working on something I already understood deeply, AI was excellent. I could review its output instantly, catch mistakes before they landed and move at a pace I’d never have managed alone. The parser rule generation is the clearest example37: I knew exactly what each rule should produce, so I could review AI’s output within a minute or two and iterate fast.
When I was working on something I could describe but didn’t yet know, AI was good but required more care. Learning Wadler-Lindig for the formatter was like this: I could articulate what I wanted, evaluate whether the output was heading in the right direction, and learn from what AI explained. But I had to stay engaged and couldn’t just accept what it gave me.
When I was working on something where I didn’t even know what I wanted, AI was somewhere between unhelpful and harmful. The architecture of the project was the clearest case: I spent weeks in the early days following AI down dead ends, exploring designs that felt productive in the moment but collapsed under scrutiny. In hindsight, I have to wonder if it would have been faster just thinking it through without AI in the loop at all.
But expertise alone isn’t enough. Even when I understood a problem deeply, AI still struggled if the task had no objectively checkable answer38. Implementation has a right answer, at least at a local level: the code compiles, the tests pass, the output matches what you asked for. Design doesn’t. We’re still arguing about OOP decades after it first took off.
Concretely, I found that designing the public API of syntaqlite was where this hit home the hardest. I spent several days in early March doing nothing but API refactoring, manually fixing things any experienced engineer would have instinctively avoided but AI made a total mess of. There’s no test or objective metric for “is this API pleasant to use” and “will this API help users solve the problems they have” and that’s exactly why the coding agents did so badly at it.
This takes me back to the days I was obsessed with physics and, specifically, relativity. The laws of physics look simple and Newtonian in any small local area, but zoom out and spacetime curves in ways you can’t predict from the local picture alone. Code is the same: at the level of a function or a class, there’s usually a clear right answer, and AI is excellent there. But architecture is what happens when all those local pieces interact, and you can’t get good global behaviour by stitching together locally correct components.
Knowing where you are on these axes at any given moment is, I think, the core skill of working with AI effectively.
Eight years is a long time to carry a project in your head. Seeing these SQLite tools actually exist and function after only three months of work is a massive win, and I’m fully aware they wouldn’t be here without AI.
But the process wasn’t the clean, linear success story people usually post. I lost an entire month to vibe-coding. I fell into the trap of managing a codebase I didn’t actually understand, and I paid for that with a total rewrite.
The takeaway for me is simple: AI is an incredible force multiplier for implementation, but it’s a dangerous substitute for design. It’s brilliant at giving you the right answer to a specific technical question, but it has no sense of history, taste, or how a human will actually feel using your API. If you rely on it for the “soul” of your software, you’ll just end up hitting a wall faster than you ever have before.
What I’d like to see more of from others is exactly what I’ve tried to do here: honest, detailed accounts of building real software with these tools; not weekend toys or one-off scripts but the kind of software that has to survive contact with users, bug reports, and your own changing mind.
...
Read the original on lalitm.com »
...
Read the original on gist.github.com »
As the title suggests, I have finally become the victim of Google’s account suspension. The reason given over the phone was that my account had been “hijacked” - when in reality, I was simply the one accessing it from overseas.
Despite repeatedly explaining this, they ignored my assertions and continue to hold my email hostage. I understand they may believe they’re protecting me from social engineering, but a DNS verification was already completed to prove I own the domain. While suspended, no emails are received and no forwards work.
On Saturday, April 4, around 5:00 AM, I removed my phone number from the account. I am travelling to the UK for a short period and did not want to have roaming on my Australian phone. Despite my having set up an authenticator and a passkey, Gmail insisted that I use my phone number to log in via SMS. Of course, this was not what I wanted, so I removed the recovery phone, expecting the system to fall back to my authenticator. I was in the UK to negotiate a business outcome - I should have realised that if Gmail fails, everything downstream fails with it.
On Saturday, April 4, at 5:06 AM, I received a notification saying my authenticator had been removed. It hadn’t. The authenticator was still active on my phone - it was the recovery phone I had removed. Google apparently conflated the two.
I had my logged-in laptop, phone, passkey, and backup codes, but none of it worked.
Every recovery attempt kept hitting the same “something went wrong” dead end, despite tapping the correct recovery prompt on my phone.
This email is the sole account on my Google Workspace - and the super admin account. A single point of failure. It has three aliases across different domains. Email, Drive, Calendar, payroll, and authentication for multiple services are all tied to it.
I had multiple email forwards configured precisely for situations like this, but because the account is suspended, forwards don’t work either. Payroll is tomorrow. I won’t be able to run it because of my reliance on Gmail-based authentication. I cannot log in to Pipedrive, our sales CRM, because I use Google Auth. I can’t log in to our task management app. I can’t log in to our internal systems because - you guessed it - they relied on Google OAuth.
Then it gets worse.
I tried account recovery and was asked to verify DNS ownership via CNAME and TXT records. I did this promptly. Meanwhile, the recovery email option told me I’d need to wait 30 days.
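For reference, this kind of domain-ownership check usually means publishing records like the following in the domain’s DNS zone. The host names and verification token below are hypothetical placeholders, not the actual values issued by Google’s recovery flow:

```
; hypothetical verification records - the real host name and token
; are issued by the recovery flow, not these placeholder values
example.com.            3600  IN  TXT    "google-site-verification=abc123placeholder"
_gv-token.example.com.  3600  IN  CNAME  gv-placeholder.dv.googlehosted.com.
```

Once such records propagate, the verifier can confirm control of the domain without any access to the suspended account itself.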
Thirty days of a suspended business email. No incoming mail. No forwards. Nothing received during that entire time.
Fortunately, I run another Google Workspace account. I logged into that admin console and contacted Google support through it. The rep told me to fill out a form at a link - which required me to log in. With my suspended account. Which I obviously can’t do.
Another rep on chat told me the account was “simply suspended” and that I could recover it in incognito mode. I told them I had already tried this numerous times and hit the same error every time. They filed another ticket.
This back-and-forth repeated several times over 24 hours. There are now four open cases. None of the case managers seem to know what’s happening - one case gets closed in favour of another, then reopened because the other was closed.
I read community forums and reached out to Google Workspace on X.com. They asked for case numbers, which I promptly gave, and they told me to wait.
I finally reached someone by phone and explained that I had removed the phone number myself. I asked them to verify the DNS record, and they confirmed I had passed their “knowledge test.” They said everything checked out and that there would be a resolution within a few hours. They confirmed as much in a follow-up email.
More than 40 hours have passed since that phone call. I’ve since been told the person handling my case isn’t on shift for another 90 minutes. This means I will likely miss payroll, and a negotiation with a business associate will need to be rescheduled. I could give them my personal email, but I’d rather keep business and personal separate.
Update 1 - I know I can simply change the MX record to point somewhere else, but that has its own challenges.
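For context, rerouting the domain’s mail to another provider means replacing its MX records, along these lines (the provider names and priorities here are hypothetical):

```
; hypothetical replacement MX records routing new mail away from Google
example.com.  3600  IN  MX  10 mx1.other-provider.example.
example.com.  3600  IN  MX  20 mx2.other-provider.example.
```

The catch is that this only reroutes new incoming mail: everything already received, along with filters, Drive, and the other Workspace services, stays locked inside the suspended account.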
Update 2 - Sadly, it’s 2 PM in the UK and I will miss the meeting I had scheduled via Google Meet, because emails are not working and neither is anything else Google. I have been on chat with Google reps since the morning. First an update was promised in 1-2 hours. Then, on follow-up 3 hours later, another update was promised in 90 minutes because the rep’s shift hadn’t started. Then, 90 minutes later, I received a call where yet another update was promised in 60 minutes.
...
Read the original on zencapital.substack.com »
The crew for Nasa’s Artemis II mission have described seeing the far side of the Moon for the first time.
Nasa astronauts Reid Wiseman, Victor Glover, and Christina Koch, and Canadian Space Agency astronaut Jeremy Hansen have entered the third day of their mission on the Orion spacecraft that will carry them around the far side of the Moon and back to Earth.
“Something about you senses that is not the Moon that I’m used to seeing,” Koch said.
The crew shared a photo they took of the Orientale basin of the Moon, which Nasa said marked “the first time the entire basin has been seen with human eyes”.
As of 23:00 BST on Saturday, Nasa’s online dashboard showed the Artemis II spacecraft was more than 180,000 miles (289,681km) from Earth.
...
Read the original on www.bbc.com »