10 interesting stories served every morning and every evening.
Our latest model, Claude Opus 4.7, is now generally available. Opus 4.7 is a notable improvement on Opus 4.6 in advanced software engineering, with particular gains on the most difficult tasks. Users report being able to hand off their hardest coding work—the kind that previously needed close supervision—to Opus 4.7 with confidence. Opus 4.7 handles complex, long-running tasks with rigor and consistency, pays precise attention to instructions, and devises ways to verify its own outputs before reporting back.

The model also has substantially better vision: it can see images in greater resolution. It’s more tasteful and creative when completing professional tasks, producing higher-quality interfaces, slides, and docs. And—although it is less broadly capable than our most powerful model, Claude Mythos Preview—it shows better results than Opus 4.6 across a range of benchmarks:

Last week we announced Project Glasswing, highlighting the risks—and benefits—of AI models for cybersecurity. We stated that we would keep Claude Mythos Preview’s release limited and test new cyber safeguards on less capable models first. Opus 4.7 is the first such model: its cyber capabilities are not as advanced as those of Mythos Preview (indeed, during its training we experimented with efforts to differentially reduce these capabilities). We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses. What we learn from the real-world deployment of these safeguards will help us work towards our eventual goal of a broad release of Mythos-class models.

Security professionals who wish to use Opus 4.7 for legitimate cybersecurity purposes (such as vulnerability research, penetration testing, and red-teaming) are invited to join our new Cyber Verification Program.

Opus 4.7 is available today across all Claude products and our API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.
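The higher-resolution vision support has a practical client-side implication: the announcement's multimodal notes give a limit of 2,576 pixels on the long edge, so clients sending larger screenshots may want to downscale before upload. A minimal sketch of the resize math; whether oversized images are rejected or downscaled server-side is not stated here, so treat client-side fitting as a conservative assumption:

```python
def fit_long_edge(width: int, height: int, max_edge: int = 2576) -> tuple[int, int]:
    """Scale (width, height) down so the longer side is at most max_edge.

    Returns the original dimensions unchanged if the image already fits.
    """
    long_edge = max(width, height)
    if long_edge <= max_edge:
        return width, height
    scale = max_edge / long_edge
    # Round to whole pixels, clamping to at least 1 per side.
    return max(1, round(width * scale)), max(1, round(height * scale))

# A 4000x3000 screenshot would be downscaled before upload:
print(fit_long_edge(4000, 3000))  # (2576, 1932)
print(fit_long_edge(1920, 1080))  # unchanged: (1920, 1080)
```

The same two-line scale computation works with any imaging library's resize call; only the 2,576-pixel constant comes from the announcement.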
Pricing remains the same as Opus 4.6: $5 per million input tokens and $25 per million output tokens. Developers can use claude-opus-4-7 via the Claude API.

Claude Opus 4.7 has garnered strong feedback from our early-access testers:

In early testing, we’re seeing the potential for a significant leap for our developers with Claude Opus 4.7. It catches its own logical faults during the planning phase and accelerates execution, far beyond previous Claude models. As a financial technology platform serving millions of consumers and businesses at significant scale, this combination of speed and precision could be game-changing: accelerating development velocity for faster delivery of the trusted financial solutions our customers rely on every day.

Anthropic has already set the standard for coding models, and Claude Opus 4.7 pushes that further in a meaningful way as the state-of-the-art model on the market. In our internal evals, it stands out not just for raw capability, but for how well it handles real-world async workflows—automations, CI/CD, and long-running tasks. It also thinks more deeply about problems and brings a more opinionated perspective, rather than simply agreeing with the user.

Claude Opus 4.7 is the strongest model Hex has evaluated. It correctly reports when data is missing instead of providing plausible-but-incorrect fallbacks, and it resists dissonant-data traps that even Opus 4.6 falls for. It’s a more intelligent, more efficient Opus 4.6: low-effort Opus 4.7 is roughly equivalent to medium-effort Opus 4.6.

On our 93-task coding benchmark, Claude Opus 4.7 lifted resolution by 13% over Opus 4.6, including four tasks neither Opus 4.6 nor Sonnet 4.6 could solve. Combined with faster median latency and strict instruction following, it’s particularly meaningful for complex, long-running coding workflows.
It cuts the friction from those multi-step tasks so developers can stay in the flow and focus on building.

Based on our internal research-agent benchmark, Claude Opus 4.7 has the strongest efficiency baseline we’ve seen for multi-step work. It tied for the top overall score across our six modules at 0.715 and delivered the most consistent long-context performance of any model we tested. On General Finance—our largest module—it improved meaningfully on Opus 4.6, scoring 0.813 versus 0.767, while also showing the best disclosure and data discipline in the group. And on deductive logic, an area where Opus 4.6 struggled, Opus 4.7 is solid.

Claude Opus 4.7 extends the limit of what models can do to investigate and get tasks done. Anthropic has clearly optimized for sustained reasoning over long runs, and it shows with market-leading performance. As engineers shift from working 1:1 with agents to managing them in parallel, this is exactly the kind of frontier capability that unlocks new workflows.

We’re seeing major improvements in Claude Opus 4.7’s multimodal understanding, from reading chemical structures to interpreting complex technical diagrams. The higher resolution support is helping Solve Intelligence build best-in-class tools for life sciences patent workflows, from drafting and prosecution to infringement detection and invalidity charting.

Claude Opus 4.7 takes long-horizon autonomy to a new level in Devin. It works coherently for hours, pushes through hard problems rather than giving up, and unlocks a class of deep investigation work we couldn’t reliably run before.

For Replit, Claude Opus 4.7 was an easy upgrade decision. For the work our users do every day, we observed it achieving the same quality at lower cost—more efficient and precise at tasks like analyzing logs and traces, finding bugs, and proposing fixes. Personally, I love how it pushes back during technical discussions to help me make better decisions.
It really feels like a better coworker.

Claude Opus 4.7 demonstrates strong substantive accuracy on BigLaw Bench for Harvey, scoring 90.9% at high effort with better reasoning calibration on review tables and noticeably smarter handling of ambiguous document editing tasks. It correctly distinguishes assignment provisions from change-of-control provisions, a task that has historically challenged frontier models. Substance was consistently rated as a strength across our evaluations: correct, thorough, and well-cited.

Claude Opus 4.7 is a very impressive coding model, particularly for its autonomy and more creative reasoning. On CursorBench, Opus 4.7 is a meaningful jump in capabilities, clearing 70% versus Opus 4.6 at 58%.

For complex multi-step workflows, Claude Opus 4.7 is a clear step up: plus 14% over Opus 4.6 at fewer tokens and a third of the tool errors. It’s the first model to pass our implicit-need tests, and it keeps executing through tool failures that used to stop Opus cold. This is the reliability jump that makes Notion Agent feel like a true teammate.

In our evals, we saw a double-digit jump in accuracy of tool calls and planning in our core orchestrator agents. As users leverage Hebbia to plan and execute on use cases like retrieval, slide creation, or document generation, Claude Opus 4.7 shows the potential to improve agent decision-making in these workflows.

On Rakuten-SWE-Bench, Claude Opus 4.7 resolves 3x more production tasks than Opus 4.6, with double-digit gains in Code Quality and Test Quality. This is a meaningful lift and a clear upgrade for the engineering work our teams are shipping every day.

For CodeRabbit’s code review workloads, Claude Opus 4.7 is the sharpest model we’ve tested. Recall improved by over 10%, surfacing some of the most difficult-to-detect bugs in our most complex PRs, while precision remained stable despite the increased coverage.
It’s a bit faster than GPT-5.4 xhigh on our harness, and we’re lining it up for our heaviest review work at launch.

For Genspark’s Super Agent, Claude Opus 4.7 nails the three production differentiators that matter most: loop resistance, consistency, and graceful error recovery. Loop resistance is the most critical. A model that loops indefinitely on 1 in 18 queries wastes compute and blocks users. Lower variance means fewer surprises in prod. And Opus 4.7 achieves the highest quality-per-tool-call ratio we’ve measured.

Claude Opus 4.7 is a meaningful step up for Warp. Opus 4.6 is one of the best models out there for developers, and this model is measurably more thorough on top of that. It passed Terminal Bench tasks that prior Claude models had failed, and worked through a tricky concurrency bug Opus 4.6 couldn’t crack. For us, that’s the signal.

Claude Opus 4.7 is the best model in the world for building dashboards and data-rich interfaces. The design taste is genuinely surprising—it makes choices I’d actually ship. It’s my default daily driver now.

Claude Opus 4.7 is the most capable model we’ve tested at Quantium. Evaluated against leading AI models through our proprietary benchmarking solution, the biggest gains showed up where they matter most: reasoning depth, structured problem-framing, and complex technical work. Fewer corrections, faster iterations, and stronger outputs to solve the hardest problems our clients bring us.

Claude Opus 4.7 feels like a real step up in intelligence. Code quality is noticeably improved, it’s cutting out the meaningless wrapper functions and fallback scaffolding that used to pile up, and it fixes its own code as it goes. It’s the cleanest jump we’ve seen since the move from Sonnet 3.7 to the Claude 4 series.

For the computer-use work that sits at the heart of XBOW’s autonomous penetration testing, the new Claude Opus 4.7 is a step change: 98.5% on our visual-acuity benchmark versus 54.5% for Opus 4.6.
Our single biggest Opus pain point effectively disappeared, and that unlocks its use for a whole class of work where we couldn’t use it before.

Claude Opus 4.7 is a solid upgrade with no regressions for Vercel. It’s phenomenal on one-shot coding tasks, more correct and complete than Opus 4.6, and noticeably more honest about its own limits. It even does proofs on systems code before starting work, which is new behavior we haven’t seen from earlier Claude models.

Claude Opus 4.7 is very strong and outperforms Opus 4.6 with a 10% to 15% lift in task success for Factory Droids, with fewer tool errors and more reliable follow-through on validation steps. It carries work all the way through instead of stopping halfway, which is exactly what enterprise engineering teams need.

Claude Opus 4.7 autonomously built a complete Rust text-to-speech engine from scratch—neural model, SIMD kernels, browser demo—then fed its own output through a speech recognizer to verify it matched the Python reference. Months of senior engineering, delivered autonomously. The step up from Opus 4.6 is clear, and the codebase is public.

Claude Opus 4.7 passed three TBench tasks that prior Claude models couldn’t, and it’s landing fixes our previous best model missed, including a race condition. It demonstrates strong precision in identifying real issues, and surfaces important findings that other models either gave up on or didn’t resolve. In Qodo’s real-world code review benchmark, we observed top-tier precision.

On Databricks’ OfficeQA Pro, Claude Opus 4.7 shows meaningfully stronger document reasoning, with 21% fewer errors than Opus 4.6 when working with source information. Across our agentic reasoning over data benchmarks, it is the best-performing Claude model for enterprise document analysis.

For Ramp, Claude Opus 4.7 stands out in agent-team workflows.
We’re seeing stronger role fidelity, instruction-following, coordination, and complex reasoning, especially on engineering tasks that span tools, codebases, and debugging context. Compared with Opus 4.6, it needs much less step-by-step guidance, helping us scale the internal agent workflows our engineering teams run.

Claude Opus 4.7 is measurably better than Opus 4.6 for Bolt’s longer-running app-building work, up to 10% better in the best cases, without the regressions we’ve come to expect from very agentic models. It pushes the ceiling on what our users can ship in a single session.

Below are some highlights and notes from our early testing of Opus 4.7:

Instruction following. Opus 4.7 is substantially better at following instructions. Interestingly, this means that prompts written for earlier models can sometimes now produce unexpected results: where previous models interpreted instructions loosely or skipped parts entirely, Opus 4.7 takes the instructions literally. Users should re-tune their prompts and harnesses accordingly.

Improved multimodal support. Opus 4.7 has better vision for high-resolution images: it can accept images up to 2,576 pixels on the long edge (~3.75 megapixels), more than three times as many pixels as prior Claude models. This opens up a wealth of multimodal uses that depend on fine visual detail: computer-use agents reading dense screenshots, data extraction from complex diagrams, and work that needs pixel-perfect references.1

Real-world work. As well as its state-of-the-art score on the Finance Agent evaluation (see table above), our internal testing showed Opus 4.7 to be a more effective finance analyst than Opus 4.6, producing rigorous analyses and models, more professional presentations, and tighter integration across tasks. Opus 4.7 is also state-of-the-art on GDPval-AA, a third-party evaluation of economically valuable knowledge work across finance, legal, and other domains.

Memory. Opus 4.7 is better at using file system-based memory.
It remembers important notes across long, multi-session work, and uses them to move on to new tasks that, as a result, need less up-front context.

The charts below display more evaluation results from our pre-release testing, across a range of different domains:

Overall, Opus 4.7 shows a similar safety profile to Opus 4.6: our evaluations show low rates of concerning behavior such as deception, sycophancy, and cooperation with misuse. On some measures, such as honesty and resistance to malicious “prompt injection” attacks, Opus 4.7 is an improvement on Opus 4.6; on others (such as its tendency to give overly detailed harm-reduction advice on controlled substances), Opus 4.7 is modestly weaker. Our alignment assessment concluded that the model is “largely well-aligned and trustworthy, though not fully ideal in its behavior”. Note that Mythos Preview remains the best-aligned model we’ve trained according to our evaluations. Our safety evaluations are discussed in full in the Claude Opus 4.7 System Card.

Overall misaligned behavior score from our automated behavioral audit. On this evaluation, Opus 4.7 is a modest improvement on Opus 4.6 and Sonnet 4.6, but Mythos Preview still shows the lowest rates of misaligned behavior.

In addition to Claude Opus 4.7 itself, we’re launching the following updates:

More effort control: Opus 4.7 introduces a new xhigh (“extra high”) effort level between high and max, giving users finer control over the tradeoff between reasoning and latency on hard problems. In Claude Code, we’ve raised the default effort level to xhigh for all plans.
When testing Opus 4.7 for coding and agentic use cases, we recommend starting with high or xhigh effort.

On the Claude Platform (API): as well as support for higher-resolution images, we’re also launching task budgets in public beta, giving developers a way to guide Claude’s token spend so it can prioritize work across longer runs.

In Claude Code: The new /ultrareview slash command produces a dedicated review session that reads through changes and flags bugs and design issues that a careful reviewer would catch. We’re giving Pro and Max Claude Code users three free ultrareviews to try it out. In addition, we’ve extended auto mode to Max users. Auto mode is a new permissions option where Claude makes decisions on your behalf, meaning that you can run longer tasks with fewer interruptions—and with less risk than if you had chosen to skip all permissions.

Opus 4.7 is a direct upgrade to Opus 4.6, but two changes are worth planning for because they affect token usage. First, Opus 4.7 uses an updated tokenizer that improves how the model processes text. The tradeoff is that the same input can map to more tokens—roughly 1.0–1.35× depending on the content type. Second, Opus 4.7 thinks more at higher effort levels, particularly on later turns in agentic settings. This improves its reliability on hard problems, but it does mean it produces more output tokens. Users can control token usage in various ways: by using the effort parameter, adjusting their task budgets, or prompting the model to be more concise. In our own testing, the net effect is favorable—token usage across all effort levels is improved on an internal coding evaluation, as shown below—but we recommend measuring the difference on real traffic. We’ve written a migration guide that provides further advice on upgrading from Opus 4.6 to Opus 4.7.

Score on an internal agentic coding evaluation as a function of token usage at each effort level.
In this evaluation, the model works autonomously from a single user prompt, and results may not be representative of token usage in interactive coding. See the migration guide for more on tuning effort levels.
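The published pricing ($5 per million input tokens, $25 per million output tokens) and the quoted 1.0–1.35× tokenizer multiplier can be combined into a rough migration cost projection. A back-of-the-envelope sketch only: the multiplier you see depends on your content mix, and the extra thinking tokens at higher effort levels are deliberately not modeled here, so measure on real traffic as the announcement recommends:

```python
def projected_cost_usd(input_tokens: int, output_tokens: int,
                       tokenizer_multiplier: float = 1.35) -> float:
    """Estimate per-request cost at Opus 4.7 pricing.

    Token counts are assumed to be measured under the previous (Opus 4.6)
    tokenizer; tokenizer_multiplier models the quoted 1.0-1.35x input
    inflation from the updated tokenizer. Defaults to the worst case.
    """
    input_rate = 5.00 / 1_000_000    # $5 per million input tokens
    output_rate = 25.00 / 1_000_000  # $25 per million output tokens
    return (input_tokens * tokenizer_multiplier * input_rate
            + output_tokens * output_rate)

# Worst-case estimate for a 100k-input / 10k-output agentic turn:
print(round(projected_cost_usd(100_000, 10_000), 4))  # 0.925
```

Running the same counts with tokenizer_multiplier=1.0 gives the floor of the range, which brackets the expected bill for a given workload.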
...
Read the original on www.anthropic.com »
For anyone who has been (inadvisably) taking my pelican riding a bicycle benchmark seriously as a robust way to test models, here are pelicans from this morning’s two big model releases—Qwen3.6-35B-A3B from Alibaba and Claude Opus 4.7 from Anthropic.
Here’s the Qwen 3.6 pelican, generated using this 20.9GB Qwen3.6-35B-A3B-UD-Q4_K_S.gguf quantized model by Unsloth, running on my MacBook Pro M5 via LM Studio (and the llm-lmstudio plugin)—transcript here:
And here’s one I got from Anthropic’s brand new Claude Opus 4.7 (transcript):
I’m giving this one to Qwen 3.6. Opus managed to mess up the bicycle frame!
I tried Opus a second time passing thinking_level: max. It didn’t do much better (transcript):
A lot of people are convinced that the labs train for my stupid benchmark. I don’t think they do, but honestly this result did give me a little glint of suspicion. So I’m burning one of my secret backup tests—here’s what I got from Qwen3.6-35B-A3B and Opus 4.7 for “Generate an SVG of a flamingo riding a unicycle”:
I’m giving this one to Qwen too, partly for the excellent SVG comment.
The pelican benchmark has always been meant as a joke—it’s mainly a statement on how obtuse and absurd the task of comparing these models is.
The weird thing about that joke is that, for the most part, there has been a direct correlation between the quality of the pelicans produced and the general usefulness of the models. Those first pelicans from October 2024 were junk. The more recent entries have generally been much, much better—to the point that Gemini 3.1 Pro produces illustrations you could actually use somewhere, provided you had a pressing need to illustrate a pelican riding a bicycle.
Today, even that loose connection to utility has been broken. I have enormous respect for Qwen, but I very much doubt that a 21GB quantized version of their latest model is more powerful or useful than Anthropic’s latest proprietary release.
If the thing you need is an SVG illustration of a pelican riding a bicycle though, right now Qwen3.6-35B-A3B running on a laptop is a better bet than Opus 4.7!
...
Read the original on simonwillison.net »
I had coffee last year with a guy - I won’t use his real name - who told me he was “building a business.” I asked what it did. Dropshipping jade face rollers.
I made him say it twice.
He’d found them on Alibaba for $1.20 each, and started selling them through Shopify for $29.99. Never used one himself. Didn’t really know what they were for - something about lymphatic drainage? Reducing puffiness? He said “lymphatic” the way you say a word you’ve only ever read and never heard out loud.
Some guy on YouTube said jade rollers were “trending,” the margins looked insane on paper, so he’d “built” a website with stock photos of a dewy-skinned woman rolling a green rock across her cheekbone and started running Facebook ads at $50 a day. Customers would email asking where their stuff was - shipping from Guangzhou, three to six weeks, sometimes way longer - and he’d copy-paste a response he found on a dropshipping subreddit. He had a Google Doc full of pre-written customer service replies.
Five months in, he was $800 in the hole.
He told me all this like he’d invented the wheel.
I bought him another coffee. I genuinely had no idea what else to do.
Jade Roller Guy has become my go-to example of something that went drastically, terribly wrong with how a whole generation of would-be entrepreneurs thought about work and money. A specific ideology - I’ve been calling it Passive Income Brain - grabbed a huge chunk of the people who were, by temperament and ability, most likely to start real businesses, and it gave them a completely fucked set of priorities.
Somewhere between 2015 and 2022, “passive income” stopped being a boring financial planning term and became, I don’t know how else to put this, a salvation narrative. I mean that literally. There was an eschatology if you want to get nerdy about it. The Rapture was the day your “passive income” exceeded your monthly expenses and you could quit your job forever. People talked about it with that exact energy.
But, of course, the folks making any actual income, of any kind, were the ones selling courses about making passive income. It was an ouroboros. It was an ouroboros that had incorporated in Delaware and was running Facebook ads.
The pitch went something like: you, a sucker, currently trade your time for money. This is what employees do, and employees are suckers. (I’m paraphrasing, but not by much.) Smart people build SYSTEMS. A system is anything that generates revenue without your ongoing involvement. Write an ebook. Build a dropshipping store. Create an online course. Set up affiliate websites.
The specific vehicle doesn’t matter because the important thing isn’t what you build, it’s the structure. You want a machine that generates cash while you sleep, and once you have that machine, you are free.
Free to do what? Sit on a beach, apparently. Every single one of these people wanted to sit on a beach. I’ve never understood this. Have they been to a beach? There’s sand. It gets everywhere. You can sit there for maybe three hours before you want to do literally anything else.
The allure is real. Who doesn’t want money that shows up while you sleep?
I’d fucking love that. I’d love it very much indeed. But “passive income” as an organizing philosophy for your entire business life, for how you think about work, is almost perfectly designed to produce garbage.
When you make “passivity” the thing you’re optimizing for, you stop caring about anything a customer might actually want. Caring is active. Caring takes time. Caring is work.
Giving a shit is, by definition, not passive.
Between 2019 and 2021, roughly 700,000 new Shopify stores opened. The platform went from about a million merchants to 1.7 million in two years. About 90% of those stores failed within their first year. Which is really more a meat grinder than a business model…
We started drowning in a million businesses nobody was actually running. Dropshipping stores with six-week shipping times and customer service that was just copy-pasted templates. Guys who’d put their “brand name” - usually something like ZENITHPRO or AXELVIBE, always in all caps, always vaguely aggressive - on a garlic press identical to four hundred other garlic presses on the same Amazon page. AXELVIBE! For a garlic press!
And the affiliate blogs! Hundreds of thousands of them, pumped full of SEO-optimized reviews of products the authors had never touched, never even seen in person. A fractal of bullshit that technically qualifies as commerce but puts zero dollars of actual value into the world.
Leverage is real; I’m not disputing that. There is a difference between trading hours for dollars and building something that scales. Software does this. Publishing does this. You write a book once, sell it many times, nobody calls that a scam. Fine! That part they got right!
Where it went wrong is that the whole movement confused “build a good product that scales” with “build any mechanism that extracts money without you being involved.” I don’t think that confusion was accidental. I think the confusion was the point. Because if you’re teaching people to build real businesses, you have to sit with hard, boring questions about whether anyone actually wants what you’re selling. But if you’re teaching people to build “passive income streams” you can skip all of that and go straight to the fun tactical shit. How to run Facebook ads, how to set up a Shopify store in a weekend, how to write email sequences that manipulate people into buying things they don’t need.
Nobody talks enough about what the passive income movement did to the content quality of the entire internet. If you’ve tried to google “best [anything]” in the last five years and gotten a wall of nearly identical listicles, all with the same structure (“We tested 47 blenders so you don’t have to!”), all making the same recommendations, all linking to the same Amazon products, you’ve experienced the results.
Those articles weren’t written by people who cared whether you bought a good blender. They were written by people who cared whether you clicked their affiliate link, because that’s what generated passive income, and the incentives made honesty actively counterproductive.
The honest review of blenders is: “most blenders are fine, just get whatever’s on sale, the differences below $100 are basically meaningless.” That review generates zero affiliate revenue. So nobody wrote it.
Instead you got “The Vitamix A3500 is our #1 pick!” with a nice affiliate link, written by someone who has never blended anything in their life. Multiply this across every product category and you start to understand the informational desert we’ve been living in. We broke Google results, at least partly, because an army of passive income seekers had an incentive to flood the internet with plausible-sounding garbage.
I’ve met dozens of smart, capable people who had actual energy, and who spent their entire twenties bouncing between passive income schemes instead of building real skills // real businesses // real careers. The pattern was always the same: six months on a dropshipping store, it fails, pivot to Amazon FBA, that fails, pivot to creating a course about dropshipping (because of course), and then the course doesn’t sell either because by 2021 there were approximately forty thousand courses about dropshipping and the market had been saturated since before they started.
And the whole time they were getting further and further from the thing that actually creates economic value, which is: find a real problem, solve it for real people, care enough to stick around and keep improving. The boring thing. The thing that takes years. The thing that is, to be absolutely clear about this, not passive.
I once saw a guy ask whether he should start a dog walking business and the top response was something like “dog walking isn’t scalable, you should build a dog walking platform instead.” This person liked dogs! He liked walking! He lived in a neighborhood full of busy professionals with dogs!
But the Passive Income Brain thing had gotten so deep into how people talked about business online that “do the simple obvious thing that works for you” was considered naive, and “build a technology platform for an activity you’ve never actually done as a business” was considered smart.
The dog walking guy could have been profitable in a week.
The app guy would have burned through his savings in six months and ended up with a landing page and no users.
By 2020 the passive income world was absolutely crawling with grift: guys posing with rented Lamborghinis in YouTube thumbnails, “digital nomads” whose actual income came entirely from selling the dream of being a digital nomad to other aspiring digital nomads, podcast hosts interviewing each other in an endless circle of mutual promotion where everyone claimed to make $30K/month and nobody could explain what they actually produced. By 2021 or so it started to look like a distributed, socially acceptable MLM. The product was the dream of not working. The customers were people desperate enough to pay for it.
Not everyone in this world was cynical. I genuinely believe that. A lot of the people selling passive income content believed their own pitch. They’d had some real success with a niche site - pulled $3,000/month for a while, it does happen - read the same books everyone else read, figured okay, I’ll teach other people my system. Why not. I would have done the same thing at 24. I’m almost sure of it.
But zoom out and what you had was just an enormous machine converting human ambition into noise. Affiliate spam // dropshipped junk // ebooks about passive income // courses about courses. An entire layer of the internet that was nothing but confident-sounding bullshit produced by people who had optimized for everything except making something worth buying.
The people near the top made money. Everyone else spent months or years chasing a mirage and came out with nothing but a Shopify subscription they forgot to cancel. They thought they’d failed. They hadn’t failed. The system, every system, failed them.
What actually makes money hasn’t changed. You find something people need. You get good at providing it. You charge a fair price and you keep showing up even when it’s tedious and even when you don’t want to. You build relationships over years. You build reputation over years. None of it is passive, and none of it has ever been passive! All of it revolves around giving a shit, day after day, about something specific. I don’t think anyone has ever found a way around that and I don’t think anyone will.
The passive income thing was a fantasy about not having to give a shit.
This is a terrible foundation for pretty much anything.
The affiliate SEO blogs are being slaughtered right now by AI-generated content. The people who spent years producing algorithmically optimized content of no value to humans are getting outcompeted by software that does the exact same thing, faster and cheaper. Facebook ad costs went through the roof and took the dropshipping gold rush with them. The biggest passive income gurus have already pivoted to selling AI courses. The machine keeps running. It just swaps out the brochure.
But I’ve noticed more people talking about what I’d call “give a shit” businesses - people who make furniture, run plumbing companies, write software they actually use themselves. Stuff where the answer to “why does your business exist?” isn’t “to generate passive income for me.” This works a lot better than the laptop-on-the-beach grind.
Jade Roller Guy, if you’re out there: I hope you found something real.
I hope it keeps you busy.
...
Read the original on www.joanwestenberg.com »
A bill introduced by Representative Josh Gottheimer in the House on April 13 would require Apple, Google, and every other operating system vendor to verify the age of anyone setting up a new device in the United States.
The legislation, H.R. 8250, travels under the friendlier name of the Parents Decide Act, and it is among the most aggressive surveillance mandates ever proposed for American consumer technology.
We obtained a copy of the bill for you here.
The press releases describing it lead with children. The text describes something much larger. To confirm a child is under 18, the system has to identify everyone else, too, and the bill builds the infrastructure to do exactly that.
This is child safety as a delivery mechanism for mass identification. The pattern is familiar by now. A genuine harm gets named, a sympathetic victim gets centered, and the solution proposed reshapes the digital lives of three hundred million people who were not the problem.
The Parents Decide Act follows that template with unusual precision. It takes the real suffering of real children and uses it to justify building a national identity layer underneath every device sold in the country, administered by two private companies, with the details to be filled in later.
The mandate sits in Section 2(a)(1), which obligates providers to “Require any user of the operating system to provide the date of birth of the user” both to set up an account and to use the device at all. Adults included.
There is no carve-out for adults, no opt-out for people who simply want to turn on a phone without handing a date of birth to Apple or Google first.
The age check is the entry fee for owning a computer. What happens to that data afterward gets handed off to the Federal Trade Commission to sort out later. A federal bill that mandates identification as a condition of using a general-purpose computing device is something the United States has never had before: a national ID requirement for turning on a device.
Gottheimer framed the proposal at a Ridgewood news conference on April 2, standing outside the local YMCA with a coalition of allies. “With each passing day, the internet is becoming more and more treacherous for our kids. We’re not just talking about social media anymore — we’re talking about artificial intelligence and platforms that are shaping how our kids think, feel, and act, often without any real guardrails,” he said.
His diagnosis of the current system is accurate enough. “Kids can bypass age requirements by simply typing in a different birthday. That’s it. That’s the system,” he said.
The remedy he proposes just happens to require building new surveillance plumbing underneath every device sold in the country, and routing that plumbing through two of the largest companies on earth. The solution chosen is disproportionate to the problem, and disproportionate in a specific direction, which is the direction of less privacy and less anonymity for everyone.
Section 2(a)(3) directs operating system providers to “Develop a system to allow an app developer to access any information as is necessary” to verify a user’s age.
Translated out of legislative prose, Apple and Google become age brokers for the entire American app ecosystem. Every app that wants to check whether you are over 18, or over 13, or over 21, will be able to ping the operating system for an answer derived from the birth date you handed over at setup. The bill presents this as a convenience. It is a new data pipeline between the OS layer and every developer who plugs into it, and the bill spends remarkably little time explaining how that pipeline will be constrained.
Free speech implications travel through that same pipeline. Once the operating system knows your age with verified certainty, it can tell any app to deliver, restrict, or withhold content accordingly. The bill’s supporters describe this as parental control. The infrastructure it builds is a content control system, running at the OS level, with Apple and Google as the gatekeepers of who sees what.
The First Amendment has historically protected the right to read, watch, and speak without first presenting identification. This bill erodes that principle at its foundation. Once verified age becomes a standard signal flowing from the operating system to every app, the default assumption shifts. Users are no longer presumptively anonymous adults with full access to lawful content. They are identified subjects whose permissions are determined by the data Apple or Google holds about them.
An age-verification layer built to block AI chatbots from minors is also capable of blocking journalism a state deems too violent, political commentary an administration deems too inflammatory, reporting on drugs or protest tactics, or any other subject a future regulator decides requires age gating.
The infrastructure is neutral about content. It cares only that the user has been identified. Every future fight over what Americans are allowed to see online will start from a position where the identification layer already exists, and the only remaining question is who qualifies for access. That is a profound change in how speech works, and the bill enacts it while pointing at children.
What the bill says about data protection is effectively a to-do list for the FTC. Section 2(d)(1)(B) tells the Commission it must eventually issue rules ensuring that birth dates are “collected in a secure manner to maintain the privacy of the user” and are “not stolen or breached.”
Those are outcomes, not mechanisms. The legislation sets no retention limits, no minimization requirements, no restrictions on secondary uses, and no prohibition on linking age data to other identifiers Apple and Google already hold. It offers no guidance on how providers should verify the age of a parent or guardian beyond instructing the FTC to figure that out within 180 days of enactment. The entire architecture of the system is to be drawn up after the fact by regulators working under a safe-harbor provision that shields operating system providers from liability as long as they follow whatever rules eventually emerge.
Congress is being asked to authorize a surveillance system it has not designed, whose operation it does not understand, and whose safeguards do not yet exist.
The Parents Decide Act solves the self-reported-birthday problem by demanding something verifiable, which in practice means a government ID, a credit card, a biometric scan, or some combination.
However, Gottheimer has not specified which, and neither has the bill; formally, the choice is left to the FTC. In practice, operating system providers will decide, and the incentives point toward whatever is cheapest to deploy at scale. Facial analysis is cheap. ID uploads are cheap. What is expensive is building a verification system that does not also create a persistent, cross-referenced database of everyone who has ever activated a phone. The incentives run directly against user privacy, and the bill provides no meaningful counterweight.
The bill also deputizes a duopoly. Requiring “operating system providers” to perform nationwide age verification is a requirement only two companies can easily satisfy in the mobile space, and a handful more across desktop and console platforms.
Smaller OS developers, open-source projects, Linux distributions, custom Android forks, privacy-focused alternatives: all face a compliance burden designed around the assumption that the provider is a trillion-dollar firm with legal staff and biometric-scanning partnerships already in place.
The safe harbor in Section 2(b) protects providers who follow the rules, but following the rules requires infrastructure only the incumbents can build. A law nominally aimed at tech companies entrenches the two tech companies most responsible for the status quo.
Apple and Google become the mandatory identity checkpoints for every app developer in the country, which is a commercial position worth a great deal of money and a great deal of leverage. Any future competitor that wants to build a privacy-respecting operating system will discover the law has made that effectively illegal.
There is also another change buried in the text. The definition of “operating system” in Section 2(g)(4) covers “software that supports the basic functions of a computer, mobile device, or any other general purpose computing device.” That language reaches well beyond phones and tablets.
Laptops run operating systems. Desktop computers run operating systems. Gaming consoles, smart TVs, cars with infotainment software, and a growing catalog of ambient devices all qualify under a plain reading of the definition. The bill does not distinguish between the family iPad and the laptop a college student uses for coursework. Every device with an OS becomes a device that verifies age at setup, and by extension, a device that identifies its user at setup. The scope creep is built into the definitions.
Gottheimer cited cases of teenagers allegedly harmed by AI chatbots and by algorithmically promoted content about self-harm.
What the bill does with those harms is use them as justification for an identity system that applies to every user. The template is consistent: a child is hurt, legislation is drafted, the legislation reshapes the digital environment of everyone, child and adult, subject and bystander alike.
Less invasive alternatives exist and have existed for years.
Device-level parental controls already ship with iOS and Android. Family Sharing and Google Family Link already let parents configure age-appropriate restrictions. App stores already allow per-app age ratings.
None of these require every user in the country to prove their age to Apple or Google when turning on a phone. The bill skips past those options in favor of a mandate that treats universal age verification as the baseline condition of device ownership.
Protecting children does not require building any of this. The bill’s authors chose to build it anyway, and the choice tells you what the bill is actually for.
...
Read the original on reclaimthenet.org »
Last year, I was telegraphed a subliminal mandate from the indie rock powers that be: I was supposed to like Geese. The young Brooklynites make good music, but are they the saviors of rock and roll, the defining rock band of Gen Z, the second coming of The Strokes?
The buzz around the band would suggest so. After their album “Getting Killed” came out in September, the band was unavoidable if you’re the kind of person who refers to concerts as “shows.” When frontman Cameron Winter played an “extremely sold-out” solo set at Carnegie Hall, people in the audience seemed convinced that they’d be able to look back on that night in 50 years and tell their grandchildren that they witnessed a seminal moment in American musical history — the birth of the next Bob Dylan. How could anyone live up to that hype?
That’s why, when Wired reported that Geese’s popularity was a psyop, I felt vindicated — I was right! I knew it! I was smarter than everyone for only casually enjoying Geese!
But it’s never that simple. The real story is that Geese worked with a marketing firm called Chaotic Good, which creates thousands of social media accounts designed to manufacture trends on behalf of its clients, among them TikTok favorites Alex Warren and Zara Larsson. This revelation has inspired a range of reactions, from feelings of betrayal to confusion at why anyone is mad about a band doing marketing, a normal thing that bands do.
“On TikTok, it’s really easy to get views. You just post trending audios. But artists can’t do that, because they want to promote their own music,” explained Chaotic Good co-founder Andrew Spelman in an interview with Billboard. “So a big part of what we are doing is posting enough volume across enough accounts with enough impressions to try to simulate the idea that the song is trending or moving.”
When you learn how prevalent these marketing strategies are, it kind of feels like you’re a kid who just learned that the Tooth Fairy isn’t real — you probably had a hunch that something was up, but you want to believe in the fantasy that a fluttering fae is sneaking into your room, and every viral success story is a fairy tale.
It’s not just the music industry taking advantage of this marketing strategy — young startup founders are following the same playbook.
While preparing for an interview with the Gen Z founders of the fashion app Phia, I searched TikTok to see what real people were saying about the app. I found videos repeating the same talking points about how Bill Gates’ daughter created an app that helps you save money on luxury products, or how using Phia is like having a personal shopping assistant that wants you to get the best deals. When I clicked on these accounts, I found that many of them only ever posted videos about Phia.
It’s not like I caught Phia in some “gotcha” moment. Founders Phoebe Gates and Sophia Kianni aren’t trying to hide their social media strategy — this is just how marketing works now.
“One thing we’ve been trying lately is basically running a creator farm, so we have a ton of different college students that we pay to make videos about Phia on their own accounts,” Kianni said on her podcast. “This is an approach that’s really focused on volume. We have like ten creators, they post twice a day, and we ultimately reach like 600 videos total.”
On TikTok-like feeds, people watch videos in a vacuum, separate from the rest of a creator’s account. Few viewers will stop to look at what else that person is posting, so they won’t suspect that the post about this cool new app could be an inorganic promotion.
Creators will similarly pay armies of teenagers on Discord to make clips of their streams and post them en masse.
“That’s been going on for a bit,” Karat Financial co-founder Eric Wei told TechCrunch last year. “Drake does it. A lot of the biggest creators and streamers in the world have been doing it — Kai Cenat [a top Twitch streamer] has done it — hitting millions of impressions … If it’s algorithmically determined, clipping suddenly makes sense, because it can come from any random account that just has really good clips.”
Marketing firms like Chaotic Good scale that same approach — instead of paying college students or teenage fans to make videos, they buy hundreds of iPhones and make a bunch of social media accounts that they can use to fabricate a viral trend. Spelman told Billboard that Chaotic Good’s office is “overrun with iPhones,” and that they have so many phones that they’re treated like VIPs at Verizon.
“Unfortunately, a lot of the internet is manipulation … Everything on the internet is fake. One thing that we always say is all opinions are formed in the TikTok comments,” Chaotic Good co-founder Jesse Coren noted.
This is the same line of thinking that fuels the Dead Internet Theory, which argues that bot-generated content dominates the web.
If Chaotic Good’s content armies aren’t posting trending audio, they’re commenting on posts about the company’s clients to control the narrative. Instead of waiting to see how fans will respond to a new song, they can use their accounts to flood the comments of videos and talk about how much they love the song.
For Geese, it’s an insult to be called an industry plant. After songwriter Eliza McLamb wrote the blog post that first connected Geese and Chaotic Good, the firm removed mention of Geese and “narrative campaigns” from its website. (The company told Wired that it did this to protect artists from being “wrapped up in false accusations or misconceptions about how their music was discovered.”)
But like the unapologetic marketing behind some Gen Z startups, the global girl group Katseye has been incredibly clear that they are the definition of industry plants — there’s literally a Netflix docuseries, “Pop Star Academy,” that illustrates how a room full of global record executives turned these six young women into superstars, even pitting potential members against each other in a surprise K-pop-style survival show.
When “Pop Star Academy” came out, I watched it in a state of horror — HYBE and Geffen treated these aspiring teenage pop stars like cattle to mold into human billboards that they could use to sell Erewhon smoothies and hair serums. But over the course of the eight-episode series, I became deeply invested in these girls’ lives. I wanted to watch them thrive in the face of unrelenting industry pressure.
I’m sure that this is exactly what Katseye’s management wanted from the documentary — to cultivate a fervent sense of support and defensiveness over the girls, even if it means painting the executives as the bad guys. Fast-forward a few years, and Katseye is performing a song called “Gnarly” at the Grammys — a track that fans hated at first until, suddenly, they didn’t.
It’s hard not to think about Chaotic Good’s “narrative campaigns,” flooding comment sections to control discourse. Though I hated “Gnarly” when it came out, I decided over time that it’s actually an avant-garde masterpiece. Did I change my mind on my own, or was it changed for me? For as much pride as I took in resisting the hype around Geese, I am so wrapped up in Katseye that I’ve spent hours speculating on Reddit forums about the real story behind Manon’s hiatus.
Maybe Geese is a psyop, and maybe Katseye is an industry plant, but do we actually care?
This is not a rhetorical question. The Geese discourse (which could also be manufactured, now that I think about it!) has inspired such varied responses because we have not established clear social norms around what is necessary marketing and what is inauthentic growth hacking.
We, the fans, get to decide now where we draw the line.
...
Read the original on techcrunch.com »
Simple Made Easy (2011) — Rich’s famous talk from Strange Loop 2011. Defines the distinction between “simple” and “easy.”
Clojure at LispNYC (2007) — The first public talk about Clojure.
Sierra’s Blog on LispNYC Presentation — 2007. An early public introduction to Clojure.
Are We There Yet? (2009) — The Clojure state model and a dissection of time in programming.
Hammock Driven Development (2010) — On thinking deeply about problems before writing code.
The Value of Values (2012) — The case for immutable values over mutable objects.
Writing Datomic in Clojure (2012) — An overview of Datomic and how Clojure was the perfect language to write it in.
Expert to Expert: Rich Hickey and Brian Beckman — Inside Clojure (2013) — A long-form interview with Rich about Clojure.
Effective Programs — 10 Years of Clojure (2017) — Rich reflects on the first 10 years of Clojure and the prioritization of Clojure’s features for solving real-world problems.
Talk Transcripts — Community-maintained transcripts of Clojure talks by Rich Hickey and others.
Rich Hickey Talks — A video playlist of many of Rich’s talks.
...
Read the original on clojure.org »
As Android developers, you have many choices when it comes to the agents, tools, and LLMs you use for app development. Whether you are using Gemini in Android Studio, Gemini CLI, Antigravity, or third-party agents like Claude Code or Codex, our mission is to ensure that high-quality Android development is possible everywhere.
Today, we are introducing a new suite of Android tools and resources for agentic workflows — Android CLI with Android skills and the Android Knowledge Base. This collection of tools is designed to eliminate the guesswork of core Android development workflows when you direct an agent’s work outside of Android Studio, making your agents more efficient, effective, and capable of following the latest recommended patterns and best practices.
Whether you are just starting your development journey on Android, are a seasoned Android developer, or are managing apps across mobile and web platforms, building your apps with the latest guidance, tools, and AI assistance is easier than ever. No matter which environment you begin in, these resources let you transition your development experience to Android Studio—where the state-of-the-art tools and agents for Android development are available to help your app experience truly shine.
Your agents perform best when they have a lightweight, programmatic interface to interact with the Android SDK and development environment. So, at the heart of this new workflow is a revitalized Android CLI. The new Android CLI serves as the primary interface for Android development from the terminal, featuring commands for environment setup, project creation, and device management—with more modern capabilities and easy updatability in mind.
In our internal experiments, Android CLI improved project and environment setup by reducing LLM token usage by more than 70%, and tasks were completed 3X faster than when agents attempted to navigate these tasks using only the standard toolsets.
Key capabilities available to you include:
* SDK management: Use `android sdk install` to download only the specific components needed, ensuring a lean development environment.
* Snappy project creation: The `android create` command generates new projects from official templates, ensuring the recommended architecture and best practices are applied from the very first line of code.
* Rapid device creation and deployment: Create and manage virtual devices with `android emulator` and deploy apps using `android run`, eliminating the guesswork involved in manual build and deploy cycles.
* Updatability: Run `android update` to ensure that you have the latest capabilities available.
While Android CLI will empower your agentic development flows, it’s also been designed to streamline CI, maintenance, and any other scripted automation for the increasingly distributed nature of Android development. Download and try out the Android CLI today!
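Put together, an agent-driven setup might chain those commands end to end. The command names below come from the announcement, but the specific arguments (component identifiers, template and device names) are illustrative assumptions, not documented flags:

```shell
# Sketch of an end-to-end flow with the new Android CLI.
# Component, template, and device names are hypothetical.

# Install only the SDK components the project needs
android sdk install platform-tools "platforms;android-35"

# Generate a new project from an official template
android create --template compose-app MyApp

# Create a virtual device and deploy the app to it
android emulator create agent-avd
android run --project MyApp --device agent-avd

# Keep the CLI itself current
android update
```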
Traditional documentation can be descriptive, conceptual, and high-level. While perfect for learning, LLMs often require precise, actionable instructions to execute complex workflows without using outdated patterns and libraries.
To bridge this gap, we are launching the Android skills GitHub repository. Skills are modular, markdown-based (SKILL.md) instruction sets that provide a technical specification for a task and are designed to trigger automatically when your prompt matches the skill’s metadata, saving you the hassle of manually attaching documentation to every prompt.
Android skills cover some of the most common workflows that Android developers and LLMs may struggle with—they help models better understand and execute specific patterns that follow our best practices and guidance on Android development.
In our initial release, the repository includes skills like:
If you’re using Android CLI, you can browse and set up your agent workflow with our growing collection of skills using the `android skills` command. These skills can also live alongside any other skills you create, or third-party skills created by the Android developer community. Learn more about getting started with Android skills.
The third component we are launching today is the Android Knowledge Base. Accessible through the `android docs` command and already available in the latest version of Android Studio, this specialized data source enables agents to search and fetch the latest authoritative developer guidelines to use as relevant context.
By accessing the frequently updated knowledge base, agents can ground their responses in the most recent information from Android developer docs, Firebase, Google Developers, and Kotlin docs. This ensures that even if an LLM’s training cutoff is a year old, it can still provide guidance on the latest frameworks and patterns we recommend today.
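The announcement names `android skills` and `android docs` as the entry points for skills and the knowledge base; a sketch of how they might slot into an agent session follows. The subcommands and the skill name here are assumptions for illustration:

```shell
# Browse the skill catalog and attach one to the agent workflow
# (subcommand names and the skill name are hypothetical)
android skills list
android skills install compose-ui

# Ground the agent in current guidance from the knowledge base
android docs search "Jetpack Compose navigation"
```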
In addition to empowering developers and agents to handle project setup and boilerplate code, we’ve also designed these new tools and resources to make it easier to transition to Android Studio. That means you can start a prototype quickly with an agent using Android CLI and then open the project in Android Studio to fine-tune your UI with visual tools for code editing, UI design, deep debugging, and advanced profiling that scale with the growing capabilities of your app.
And when it is time to build a high-quality app for large-scale publication across various device types, our agent in Android Studio is here to help, while leveraging the latest development best practices and libraries. Beyond the powerful Agent and Planning Modes for active development, we have introduced an AI-powered New Project flow, which provides an entry point to rapidly prototyping your next great idea for Android.
These built-in agents make it simple to extend your app ideas across phones, foldables, tablets, Wear OS, Android Auto, and Android TV. Equipped with full context of your project’s source code and a comprehensive suite of debugging, profiling, and emulation tools, you have an end-to-end, AI-accelerated toolkit at your disposal.
Android CLI is available in preview today, along with a growing set of Android skills and knowledge for agents. To get started, head over to d.android.com/tools/agents to download Android CLI.
...
Read the original on android-developers.googleblog.com »
Hundreds of previously redacted records reveal how Amazon has pressured independent sellers using its platform into raising their prices on the sites of competitors such as Walmart and Target, so that Amazon can appear to have lower prices, California authorities allege.
The global conglomerate grew concerned even when a competitor was selling an item for as little as a penny less, according to one segment of the newly unredacted evidence.
The documents — which have never previously been reported on — include internal emails, deposition testimony and confidential corporate presentations that the California attorney general, Rob Bonta, obtained as part of a civil case his office launched in 2022 accusing Amazon of large-scale price-fixing.
The Guardian obtained and reviewed the cache of evidence, which has been filed in San Francisco county superior court but has not yet become publicly available. Within the documents, lawyers for the state of California have unmasked key details, paragraphs and sometimes whole pages that had previously been blacked out. A judge permitted some redactions to remain at Amazon’s request.
In a statement, Bonta said the newly unveiled evidence reinforced his office’s claims that Amazon’s actions “unlawfully punishes sellers whose products are sold at lower prices by other online retailers”.
“Especially while consumers face an affordability crisis, there is no room for illegal practices that impede competition and raise prices,” Bonta said. “California looks forward to our trial in January 2027.”
Amazon has called the claims in the lawsuit “entirely false and misguided”.
“Amazon is consistently identified as America’s lowest-priced online retailer, and it is ironic that the attorney general seeks to have us feature higher prices in ways that would harm consumers and competition,” the company said in a statement.
For years, the state alleges, Amazon has used automated tools to track how independent vendors on its platform price their goods on competitor sites, then leveraged its dominant position in e-commerce to ensure that those prices do not fall below those on Amazon, even though Amazon often charges vendors far more in fees.
The state’s lawsuit claims that Amazon punishes vendors that dare to offer discounts on their own sites or competitor sites like Walmart, suppressing their sales on Amazon by taking away the vendors’ access to critical features, such as its site’s “Buy Box” — the panel on the right side of the site where customers see buttons like “Add to cart” and “Buy Now”.
In one previously redacted deposition, marked “highly confidential”, Mayer Handler, owner of a clothing company called Leveret, testified that he received an email in October 2022 from Amazon notifying him that one of his products was “no longer eligible to be a featured offer” through Amazon’s Buy Box.
The tech giant, he testified, had suppressed the item, a tiger-themed, toddler’s pajama set, because his company was selling it for $19.99 on Amazon, a single cent higher than what his company was offering it for on Walmart.
A. That Amazon — the price on Amazon was higher than the price was on Walmart.
Q. And how much higher?
A. One penny.
Afterwards, Handler testified, his company “changed pricing on Walmart to match or exceed Amazon’s price” or changed the item’s product code to try to throw off Amazon’s price tracking system.
We changed pricing on Walmart to match or exceed Amazon’s price. Or we changed the code.
In response to a question from the Guardian, Handler criticized Amazon for tracking prices across the internet and “shadow” blocking his company’s products — tactics which he said were depriving consumers of “lower prices”.
“Maybe that’s capitalism,” he wrote. “Or that’s a monopoly causing price hikes on the consumer.”
In another unsealed deposition, Terry Esbenshade, a Pennsylvania garden store supplier, testified in October 2024 that whenever his products lost Amazon’s Buy Box because of lower prices elsewhere on the internet, his sales on Amazon would plummet by about 80%. This financial reality forced him to try to raise his products’ prices with other retailers elsewhere, he said.
In one instance, Esbenshade testified, he discovered that one of his company’s better-selling patio tables had “become suppressed” on Amazon.
Esbenshade wasn’t sure why, he recalled, until someone at Amazon suggested he look at Wayfair, another online retailer that happened to be selling his patio table below Amazon’s price.
The businessman went online and set up a new minimum advertised price for the table on Wayfair to ensure it was higher than Amazon’s.
“So that raised the price up, and, voila, my product came back” on Amazon, he said, thanks to the reinstatement of the Buy Box.
Amazon has argued that its practices actually promote, incentivize and reward competition. The company said it works “to ensure its customers see offers with low, competitive prices” and provide “the best possible” customer experience for online shoppers.
“Just like any store owner who wouldn’t want to promote a bad deal to their customers, we don’t highlight or promote offers that are not competitively priced,” the company said in its statement. “It’s part of our commitment to featuring low prices to earn and maintain customer trust.”
The company has also denied that it had ever tried to shield itself from competition through its agreements with independent sellers.
“Amazon denies that the intent or effect of any agreement it has entered into with third-party sellers or vendors is to insulate itself from price competition” or “entrench any position of ‘dominance’”, the company asserted in an answer to the state’s lawsuit.
But Bonta’s office said that the newly unredacted exhibits show Amazon employees have proactively sought to undermine market competition and were aware of the effects of their actions on prices.
In one example, the state alleged, an Amazon engineer described the company’s use of Buy Box suppression and an internal program, known as SC-FOD, to undermine vendors’ willingness to sell products on Temu, a competing e-commerce site.
map them, FOD them, and they move out of Temu
But ye hua hai toh [“if this happens, then”] its a huge success for us
In another example, a senior Amazon employee sent an internal email in August 2023 describing how the company’s Buy Box suppressions were causing an Indiana-based home goods and furniture seller to regularly raise his prices on other sites.
“When this happens, they claim they search for the lower price, and when they find it, they raise it to match the price on Amazon,” the employee wrote, which was read aloud in a confidential deposition last year.
Amazon, which recently overtook Walmart to become the world’s largest company by revenue, is America’s No 1 online retailer by a huge margin.
By the end of 2022, Amazon accounted for nearly half of US e-commerce retail spending, compared with less than 8% for Walmart, its nearest competitor, according to numbers compiled by PYMNTS.com, an analytics firm. In the third quarter of 2025, Amazon took in 56% of online retail spending compared with Walmart’s 9.6%, PYMNTS found.
Amazon did not immediately provide answers to questions from the Guardian ahead of publication. The Guardian will update this article when it receives a response.
The trial in the California attorney general’s lawsuit against Amazon is currently scheduled to begin on 19 January 2027.
...
Read the original on www.theguardian.com »
The last question was asked for the first time, half in jest, on May 21, 2061, at a time when humanity first stepped into the light. The question came about as a result of a five dollar bet over highballs, and it happened this way:
Alexander Adell and Bertram Lupov were two of the faithful attendants of Multivac. As well as any human beings could, they knew what lay behind the cold, clicking, flashing face — miles and miles of face — of that giant computer. They had at least a vague notion of the general plan of relays and circuits that had long since grown past the point where any single human could possibly have a firm grasp of the whole.
Multivac was self-adjusting and self-correcting. It had to be, for nothing human could adjust and correct it quickly enough or even adequately enough — so Adell and Lupov attended the monstrous giant only lightly and superficially, yet as well as any men could. They fed it data, adjusted questions to its needs and translated the answers that were issued. Certainly they, and all others like them, were fully entitled to share in the glory that was Multivac’s.
For decades, Multivac had helped design the ships and plot the trajectories that enabled man to reach the Moon, Mars, and Venus, but past that, Earth’s poor resources could not support the ships. Too much energy was needed for the long trips. Earth exploited its coal and uranium with increasing efficiency, but there was only so much of both.
But slowly Multivac learned enough to answer deeper questions more fundamentally, and on May 14, 2061, what had been theory, became fact.
The energy of the sun was stored, converted, and utilized directly on a planet-wide scale. All Earth turned off its burning coal, its fissioning uranium, and flipped the switch that connected all of it to a small station, one mile in diameter, circling the Earth at half the distance of the Moon. All Earth ran by invisible beams of sunpower.
Seven days had not sufficed to dim the glory of it and Adell and Lupov finally managed to escape from the public function, and to meet in quiet where no one would think of looking for them, in the deserted underground chambers, where portions of the mighty buried body of Multivac showed. Unattended, idling, sorting data with contented lazy clickings, Multivac, too, had earned its vacation and the boys appreciated that. They had no intention, originally, of disturbing it.
They had brought a bottle with them, and their only concern at the moment was to relax in the company of each other and the bottle.
“It’s amazing when you think of it,” said Adell. His broad face had lines of weariness in it, and he stirred his drink slowly with a glass rod, watching the cubes of ice slur clumsily about. “All the energy we can possibly ever use for free. Enough energy, if we wanted to draw on it, to melt all Earth into a big drop of impure liquid iron, and still never miss the energy so used. All the energy we could ever use, forever and forever and forever.”
Lupov cocked his head sideways. He had a trick of doing that when he wanted to be contrary, and he wanted to be contrary now, partly because he had had to carry the ice and glassware. “Not forever,” he said.
“Oh, hell, just about forever. Till the sun runs down, Bert.”
“All right, then. Billions and billions of years. Twenty billion, maybe. Are you satisfied?”
Lupov put his fingers through his thinning hair as though to reassure himself that some was still left and sipped gently at his own drink. “Twenty billion years isn’t forever.”
“Well, it will last our time, won’t it?”
“So would the coal and uranium.”
“All right, but now we can hook up each individual spaceship to the Solar Station, and it can go to Pluto and back a million times without ever worrying about fuel. You can’t do THAT on coal and uranium. Ask Multivac, if you don’t believe me.”
“I don’t have to ask Multivac. I know that.”
“Then stop running down what Multivac’s done for us,” said Adell, blazing up. “It did all right.”
“Who says it didn’t? What I say is that a sun won’t last forever. That’s all I’m saying. We’re safe for twenty billion years, but then what?” Lupov pointed a slightly shaky finger at the other. “And don’t say we’ll switch to another sun.”
There was silence for a while. Adell put his glass to his lips only occasionally, and Lupov’s eyes slowly closed. They rested.
Then Lupov’s eyes snapped open. “You’re thinking we’ll switch to another sun when ours is done, aren’t you?”
“Sure you are. You’re weak on logic, that’s the trouble with you. You’re like the guy in the story who was caught in a sudden shower and who ran to a grove of trees and got under one. He wasn’t worried, you see, because he figured when one tree got wet through, he would just get under another one.”
“I get it,” said Adell. “Don’t shout. When the sun is done, the other stars will be gone, too.”
“Darn right they will,” muttered Lupov. “It all had a beginning in the original cosmic explosion, whatever that was, and it’ll all have an end when all the stars run down. Some run down faster than others. Hell, the giants won’t last a hundred million years. The sun will last twenty billion years and maybe the dwarfs will last a hundred billion for all the good they are. But just give us a trillion years and everything will be dark. Entropy has to increase to maximum, that’s all.”
“I know all about entropy,” said Adell, standing on his dignity.
“I know as much as you do.”
“Then you know everything’s got to run down someday.”
“All right. Who says they won’t?”
“You did, you poor sap. You said we had all the energy we needed, forever. You said ‘forever.’”
It was Adell’s turn to be contrary. “Maybe we can build things up again someday,” he said.
“You ask Multivac. I dare you. Five dollars says it can’t be done.”
Adell was just drunk enough to try, just sober enough to be able to phrase the necessary symbols and operations into a question which, in words, might have corresponded to this: Will mankind one day without the net expenditure of energy be able to restore the sun to its full youthfulness even after it had died of old age?
Or maybe it could be put more simply like this: How can the net amount of entropy of the universe be massively decreased?
Multivac fell dead and silent. The slow flashing of lights ceased, the distant sounds of clicking relays ended.
Then, just as the frightened technicians felt they could hold their breath no longer, there was a sudden springing to life of the teletype attached to that portion of Multivac. Five words were printed: INSUFFICIENT DATA FOR MEANINGFUL ANSWER.
By next morning, the two, plagued with throbbing head and cottony mouth, had forgotten about the incident.
Jerrodd, Jerrodine, and Jerrodette I and II watched the starry picture in the visiplate change as the passage through hyperspace was completed in its non-time lapse. At once, the even powdering of stars gave way to the predominance of a single bright marble-disk, centered.
“That’s X-23,” said Jerrodd confidently. His thin hands clamped tightly behind his back and the knuckles whitened.
The little Jerrodettes, both girls, had experienced the hyperspace passage for the first time in their lives and were self-conscious over the momentary sensation of inside-outness. They buried their giggles and chased one another wildly about their mother, screaming, “We’ve reached X-23 — we’ve reached X-23 — we’ve —”
“Quiet, children,” said Jerrodine sharply. “Are you sure, Jerrodd?”
“What is there to be but sure?” asked Jerrodd, glancing up at the bulge of featureless metal just under the ceiling. It ran the length of the room, disappearing through the wall at either end. It was as long as the ship.
Jerrodd scarcely knew a thing about the thick rod of metal except that it was called a Microvac, that one asked it questions if one wished; that if one did not it still had its task of guiding the ship to a preordered destination; of feeding on energies from the various Sub-galactic Power Stations; of computing the equations for the hyperspacial jumps.
Jerrodd and his family had only to wait and live in the comfortable residence quarters of the ship.
Someone had once told Jerrodd that the “ac” at the end of “Microvac” stood for “analog computer” in ancient English, but he was on the edge of forgetting even that.
Jerrodine’s eyes were moist as she watched the visiplate. “I can’t help it. I feel funny about leaving Earth.”
“Why for Pete’s sake?” demanded Jerrodd. “We had nothing there. We’ll have everything on X-23. You won’t be alone. You won’t be a pioneer. There are over a million people on the planet already. Good Lord, our great grandchildren will be looking for new worlds because X-23 will be overcrowded.”
Then, after a reflective pause, “I tell you, it’s a lucky thing the computers worked out interstellar travel the way the race is growing.”
Jerrodette I said promptly, “Our Microvac is the best Microvac in the world.”
“I think so, too,” said Jerrodd, tousling her hair.
It was a nice feeling to have a Microvac of your own and Jerrodd was glad he was part of his generation and no other. In his father’s youth, the only computers had been tremendous machines taking up a hundred square miles of land. There was only one to a planet. Planetary ACs they were called. They had been growing in size steadily for a thousand years and then, all at once, came refinement. In place of transistors had come molecular valves so that even the largest Planetary AC could be put into a space only half the volume of a spaceship.
Jerrodd felt uplifted, as he always did when he thought that his own personal Microvac was many times more complicated than the ancient and primitive Multivac that had first tamed the Sun, and almost as complicated as Earth’s Planetary AC (the largest) that had first solved the problem of hyperspatial travel and had made trips to the stars possible.
“So many stars, so many planets,” sighed Jerrodine, busy with her own thoughts. “I suppose families will be going out to new planets forever, the way we are now.”
“Not forever,” said Jerrodd, with a smile. “It will all stop someday, but not for billions of years. Many billions. Even the stars run down, you know. Entropy must increase.”
“What’s entropy, daddy?” shrilled Jerrodette II.
“Entropy, little sweet, is just a word which means the amount of running-down of the universe. Everything runs down, you know, like your little walkie-talkie robot, remember?”
“Can’t you just put in a new power-unit, like with my robot?”
“The stars are the power-units, dear. Once they’re gone, there are no more power-units.”
Jerrodette I at once set up a howl. “Don’t let them, daddy. Don’t let the stars run down.”
“Now look what you’ve done, “ whispered Jerrodine, exasperated.
“How was I to know it would frighten them?” Jerrodd whispered back. (Jerrodette II was beginning to cry, also.)
“Ask the Microvac,” wailed Jerrodette I. “Ask him how to turn the stars on again.”
“Go ahead,” said Jerrodine. “It will quiet them down.”
Jerrodd shrugged. “Now, now, honeys. I’ll ask Microvac. Don’t worry, he’ll tell us.”
Jerrodd cupped the strip of thin cellufilm and said cheerfully, “See now, the Microvac says it will take care of everything when the time comes so don’t worry.”
Jerrodine said, “And now, children, it’s time for bed. We’ll be in our new home soon.”
Jerrodd read the words on the cellufilm again before destroying it: INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.
He shrugged and looked at the visiplate. X-23 was just ahead.
VJ-23X of Lameth stared into the black depths of the three-dimensional, small-scale map of the Galaxy and said, “Are we ridiculous, I wonder, in being so concerned about the matter?”
MQ-17J of Nicron shook his head. “I think not. You know the Galaxy will be filled in five years at the present rate of expansion.”
Both seemed in their early twenties, both were tall and perfectly formed.
“Still,” said VJ-23X, “I hesitate to submit a pessimistic report to the Galactic Council.”
“I wouldn’t consider any other kind of report. Stir them up a bit. We’ve got to stir them up.”
VJ-23X sighed. “Space is infinite. A hundred billion Galaxies are there for the taking. More.”
“A hundred billion is not infinite and it’s getting less infinite all the time. Consider! Twenty thousand years ago, mankind first solved the problem of utilizing stellar energy, and a few centuries later, interstellar travel became possible. It took mankind a million years to fill one small world and then only fifteen thousand years to fill the rest of the Galaxy. Now the population doubles every ten years —”
VJ-23X interrupted. “We can thank immortality for that.”
“Very well. Immortality exists and we have to take it into account. I admit it has its seamy side, this immortality. The Galactic AC has solved many problems for us, but in solving the problems of preventing old age and death, it has undone all its other solutions.”
“Yet you wouldn’t want to abandon life, I suppose.”
“Not at all,” snapped MQ-17J, softening it at once to, “Not yet. I’m by no means old enough. How old are you?”
“I’m still under two hundred. —But to get back to my point. Population doubles every ten years. Once this Galaxy is filled, we’ll have another filled in ten years. Another ten years and we’ll have filled two more. Another decade, four more. In a hundred years, we’ll have filled a thousand Galaxies. In a thousand years, a million Galaxies. In ten thousand years, the entire known Universe. Then what?”
VJ-23X said, “As a side issue, there’s a problem of transportation. I wonder how many sunpower units it will take to move Galaxies of individuals from one Galaxy to the next.”
“A very good point. Already, mankind consumes two sunpower units per year.”
“Most of it’s wasted. After all, our own Galaxy alone pours out a thousand sunpower units a year and we only use two of those.”
“Granted, but even with a hundred per cent efficiency, we can only stave off the end. Our energy requirements are going up in geometric progression even faster than our population. We’ll run out of energy even sooner than we run out of Galaxies. A good point. A very good point.”
“We’ll just have to build new stars out of interstellar gas.”
“There may be some way to reverse entropy. We ought to ask the Galactic AC.”
VJ-23X was not really serious, but MQ-17J pulled out his AC-contact from his pocket and placed it on the table before him.
“I’ve half a mind to,” he said. “It’s something the human race will have to face someday.”
He stared somberly at his small AC-contact. It was only two inches cubed and nothing in itself, but it was connected through hyperspace with the great Galactic AC that served all mankind. Hyperspace considered, it was an integral part of the Galactic AC.
MQ-17J paused to wonder if someday in his immortal life he would get to see the Galactic AC. It was on a little world of its own, a spider webbing of force-beams holding the matter within which surges of sub-mesons took the place of the old clumsy molecular valves. Yet despite its sub-etheric workings, the Galactic AC was known to be a full thousand feet across.
MQ-17J asked suddenly of his AC-contact, “Can entropy ever be reversed?”
VJ-23X looked startled and said at once, “Oh, say, I didn’t really mean to have you ask that.”
“We both know entropy can’t be reversed. You can’t turn smoke and ash back into a tree.”
“Do you have trees on your world?” asked MQ-17J.
The sound of the Galactic AC startled them into silence. Its voice came thin and beautiful out of the small AC-contact on the desk. It said: THERE IS INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.
The two men thereupon returned to the question of the report they were to make to the Galactic Council.
Zee Prime’s mind spanned the new Galaxy with a faint interest in the countless twists of stars that powdered it. He had never seen this one before. Would he ever see them all? So many of them, each with its load of humanity — but a load that was almost a dead weight. More and more, the real essence of men was to be found out here, in space.
Minds, not bodies! The immortal bodies remained back on the planets, in suspension over the eons. Sometimes they roused for material activity but that was growing rarer. Few new individuals were coming into existence to join the incredibly mighty throng, but what matter? There was little room in the Universe for new individuals.
Zee Prime was roused out of his reverie upon coming across the wispy tendrils of another mind.
“I am Dee Sub Wun. Your Galaxy?”
“We call it only the Galaxy. And you?”
“We call ours the same. All men call their Galaxy their Galaxy and nothing more. Why not?”
“True. Since all Galaxies are the same.”
“Not all Galaxies. On one particular Galaxy the race of man must have originated. That makes it different.”
...
Read the original on hex.ooo »
AutoProber is the hardware hacker’s flying-probe automation stack: it gives your agent everything it needs to go from “there’s a new target on the plate” to safely probing individual pins.
* Tell the agent to ingest the project.
* Tell the agent to confirm that all parts are functioning.
* Have it run homing and then calibration.
* Tell the agent that there is a new target on the plate.
* It will find where the target is on the plate, then take individual frames, keeping a record of the XYZ while noting pads, pins, chips, and other interesting features.
* It will stitch the frames together and annotate the map, including pins and interesting components it identified.
* It will add probe targets to the web dashboard for you to approve or deny.
* It will probe the approved targets and report back.
All hardware can be controlled through the web dashboard, Python scripts, or by the agent itself.
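The approve-before-probe workflow above can be sketched in a few lines of Python. This is an illustrative model only — `ProbeSession`, `ProbeTarget`, and the method names are stand-ins, not the actual AutoProber API:

```python
from dataclasses import dataclass, field


@dataclass
class ProbeTarget:
    """A candidate probe point found during mapping (names/coords illustrative)."""
    name: str
    x_mm: float
    y_mm: float
    approved: bool = False


@dataclass
class ProbeSession:
    """Collects discovered targets; only operator-approved ones get probed."""
    targets: list = field(default_factory=list)

    def add_target(self, name, x_mm, y_mm):
        self.targets.append(ProbeTarget(name, x_mm, y_mm))

    def approve(self, name):
        for t in self.targets:
            if t.name == name:
                t.approved = True

    def approved_targets(self):
        return [t for t in self.targets if t.approved]


session = ProbeSession()
session.add_target("U1.pin1", 12.5, 40.2)
session.add_target("TP3", 55.0, 18.7)
session.approve("TP3")          # operator approves via dashboard
print([t.name for t in session.approved_targets()])  # ['TP3']
```

The key design point is that discovery and motion are decoupled: nothing touches the board until a human has approved the target list.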
This repo is a self-contained source-available release candidate. It contains the Python control code, dashboard, CAD files, and documentation needed to create your own AutoProber.
This project can move physical hardware. Treat it as a machine-control system, not a normal web app.
* GRBL Pn:P is ignored. The CNC probe pin is not a trusted endstop.
* The independent safety endstop is read from oscilloscope Channel 4.
* Channel 4 must be continuously monitored during any motion.
* If Channel 4 trips, the agent/operator must stop and report. Recovery motion is not automatic.
* The optical endstop is wired to an external 5V supply and to oscilloscope Channel 4.
Default runtime assumptions are documented in the device docs. Replace them with your own lab settings before use.
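A minimal sketch of the Channel 4 safety rule: check the independent endstop before every motion step, and on a trip, halt with no automatic recovery. The threshold value and the `read_channel4`/`step_once` callables are assumptions for illustration, not the project's real driver calls:

```python
ENDSTOP_TRIPPED_V = 2.5  # hypothetical trip threshold for the 5V optical endstop


class EndstopTripped(Exception):
    """Raised when oscilloscope Channel 4 indicates the safety endstop tripped."""


def run_motion(steps, read_channel4, step_once):
    """Execute motion steps, polling Channel 4 before each one.

    On a trip we stop and raise; per the docs above, recovery motion is not
    automatic -- the agent/operator must inspect and report.
    """
    for i in range(steps):
        if read_channel4() >= ENDSTOP_TRIPPED_V:
            raise EndstopTripped(f"endstop tripped before step {i}; halting")
        step_once()


# Simulated run: two safe readings, then a trip on the third check.
moved = []
readings = iter([0.1, 0.1, 4.9])
try:
    run_motion(5, lambda: next(readings), lambda: moved.append(1))
except EndstopTripped as e:
    print(e)          # endstop tripped before step 2; halting
print(len(moved))     # 2 -- motion stopped mid-sequence, no recovery attempted
```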
These are the specific parts or part classes used for the prototype release. Verify current listings, dimensions, voltage, and connector compatibility before buying.
```shell
uv sync                                 # install dependencies
PYTHONPATH=. python3 apps/dashboard.py  # start the web dashboard
```
Start from config/autoprober.example.env. Do not publish lab-specific IPs, hostnames, credentials, calibration files, or captured target images unless you intend to release them.
Do not commit local environment files that contain lab-specific hosts, paths, or target data.
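A local environment file might look like the fragment below. The key names and values here are purely illustrative — check config/autoprober.example.env for the real keys — and the file should stay out of version control:

```shell
# Hypothetical local .env layout (illustrative key names only)
GRBL_SERIAL_PORT=/dev/ttyUSB0
SCOPE_HOST=192.0.2.10          # documentation-range IP; replace with your scope
DASHBOARD_BIND=127.0.0.1:8000  # bind locally; do not expose to untrusted networks
```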
Home and calibrate only when the physical setup is ready.
Execute any probe motion only after the microscope-to-probe offset is measured and stored.
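The offset matters because targets are located in camera (microscope) coordinates while the pogo pin sits at a fixed XY offset from the optical axis. A sketch of applying it, with illustrative names and numbers (the real value comes from the calibration file generated on the machine that will move):

```python
def camera_to_probe(cam_xy, offset_xy):
    """Translate a camera-frame target position into probe-frame machine XY."""
    return (cam_xy[0] + offset_xy[0], cam_xy[1] + offset_xy[1])


# Hypothetical microscope-to-pogo offset, measured once on this machine.
MEASURED_OFFSET_MM = (-31.5, 2.5)

# Every target found in the stitched camera map goes through the same transform.
target_probe = camera_to_probe((120.0, 80.5), MEASURED_OFFSET_MM)
print(target_probe)  # (88.5, 83.0)
```

Probing with a missing or fabricated offset would land the pin tens of millimetres away from the intended pad, which is why the docs insist the offset be measured before any real probe motion.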
This project is source-available under the PolyForm Noncommercial License 1.0.0.
You may use, modify, and share this project for noncommercial purposes.
* The microscope-to-pogo XY offset must be measured before real probing.
* Calibration must not be fabricated; the runtime calibration file should be generated on the machine that will move.
* The dashboard is a lab-control tool and should not be exposed to untrusted networks.
This project is intended for controlled lab work on equipment and targets you are authorized to test. Do not use it to probe, damage, or analyze systems without permission.
...
Read the original on github.com »