10 interesting stories served every morning and every evening.
Update: see HN discussions about this post: https://news.ycombinator.com/item?id=47586778
I use Claude Code daily, so when Chaofan Shou noticed earlier today that Anthropic had shipped a .map file alongside their Claude Code npm package, a file containing the full, readable source code of the CLI tool, I immediately wanted to look inside. The package has since been pulled, but not before the code was widely mirrored (by me, among others) and picked apart on Hacker News.
This is Anthropic’s second accidental exposure in a week (the model spec leak was just days ago), and some people on Twitter are starting to wonder if someone inside is doing this on purpose. Probably not, but it’s a bad look either way. The timing is hard to ignore: just ten days ago, Anthropic sent legal threats to OpenCode, forcing them to remove built-in Claude authentication because third-party tools were using Claude Code’s internal APIs to access Opus at subscription rates instead of pay-per-token pricing. That whole saga makes some of the findings below more pointed.
So I spent my morning reading through the HN comments and leaked source. Here’s what I found, roughly ordered by how “spicy” I thought it was.
In claude.ts (lines 301-313), there's a flag called ANTI_DISTILLATION_CC. When enabled, Claude Code sends anti_distillation: ['fake_tools'] in its API requests. This tells the server to silently inject decoy tool definitions into the system prompt.
The idea: if someone is recording Claude Code’s API traffic to train a competing model, the fake tools pollute that training data. It’s gated behind a GrowthBook feature flag (tengu_anti_distill_fake_tool_injection) and only active for first-party CLI sessions.
This was one of the first things people noticed on HN.
There's also a second anti-distillation mechanism in betas.ts (lines 279-298): server-side connector-text summarization. When enabled, the API buffers the assistant's text between tool calls, summarizes it, and returns the summary with a cryptographic signature. On subsequent turns, the original text can be restored from the signature. If you're recording API traffic, you only get the summaries, not the full reasoning chain.
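The shape of that round trip is simple to model. The sketch below is a toy: the real summarization and signing happen server-side in unknown form, and every name here is my own stand-in, not code from betas.ts.

```typescript
// Toy model of the summarize-and-sign flow: the server replaces full
// assistant text with a summary plus a signature, and can restore the
// original from the signature on a later turn. Illustrative only.
import { createHmac } from "node:crypto";

const SECRET = "server-side-secret"; // stand-in for the server's signing key
const vault = new Map<string, string>(); // original text, keyed by signature

function summarizeAndSign(fullText: string): { summary: string; sig: string } {
  const summary = fullText.slice(0, 40) + "..."; // stand-in summarizer
  const sig = createHmac("sha256", SECRET).update(fullText).digest("hex");
  vault.set(sig, fullText); // only the server can map sig back to text
  return { summary, sig };
}

function restore(sig: string): string | undefined {
  return vault.get(sig); // next turn: swap the original back in
}
```

The point of the signature is that the client (and anyone sniffing its traffic) only ever holds the summary; the full text round-trips through server-side state.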
How hard would it be to work around these? Not very. Looking at the activation logic in claude.ts, the fake-tools injection requires all four conditions to be true: the ANTI_DISTILLATION_CC compile-time flag, the cli entrypoint, a first-party API provider, and the tengu_anti_distill_fake_tool_injection GrowthBook flag returning true. A MITM proxy that strips the anti_distillation field from request bodies before they reach the API would bypass it entirely, since the injection is server-side and opt-in. The shouldIncludeFirstPartyOnlyBetas() function also checks for CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS, so setting that env var to a truthy value disables the whole thing. And if you're using a third-party API provider or the SDK entrypoint instead of the CLI, the check never fires at all. The connector-text summarization is even more narrowly scoped: it's Anthropic-internal-only (USER_TYPE === 'ant'), so external users won't encounter it regardless.
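Pieced together, the gate has roughly this shape. The flag, field, and env-var names come from the leaked source; the surrounding types and function names are my reconstruction:

```typescript
// Reconstruction of the fake-tool-injection gate described above.
// Flag/field names match the leaked source; structure is illustrative.
interface SessionContext {
  antiDistillationCC: boolean;       // ANTI_DISTILLATION_CC compile-time flag
  entrypoint: "cli" | "sdk";
  firstPartyApi: boolean;            // false for third-party providers
  growthbookFlag: boolean;           // tengu_anti_distill_fake_tool_injection
  disableExperimentalBetas: boolean; // CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS
}

function shouldInjectFakeTools(ctx: SessionContext): boolean {
  if (ctx.disableExperimentalBetas) return false; // env-var kill switch
  return (
    ctx.antiDistillationCC &&
    ctx.entrypoint === "cli" &&
    ctx.firstPartyApi &&
    ctx.growthbookFlag
  );
}

// Only when the gate passes does the request carry the opt-in marker:
function buildRequestBody(ctx: SessionContext): Record<string, unknown> {
  const body: Record<string, unknown> = { messages: [] };
  if (shouldInjectFakeTools(ctx)) {
    body.anti_distillation = ["fake_tools"]; // server then injects decoys
  }
  return body;
}
```

Which is exactly why a proxy that deletes body.anti_distillation neutralizes the whole mechanism: the server only injects decoys when the client asks it to.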
Anyone serious about distilling from Claude Code traffic would find the workarounds in about an hour of reading the source. The real protection is probably legal, not technical.
The file undercover.ts (about 90 lines) implements a mode that strips all traces of Anthropic internals when Claude Code is used in non-internal repos. It instructs the model to never mention internal codenames like “Capybara” or “Tengu,” internal Slack channels, repo names, or the phrase “Claude Code” itself.
“There is NO force-OFF. This guards against model codename leaks.”
You can force it ON with CLAUDE_CODE_UNDERCOVER=1, but there’s no way to force it off. In external builds, the entire function gets dead-code-eliminated to trivial returns. This is a one-way door.
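The one-way semantics are worth spelling out: the env var can only ever widen the set of sessions where the mode is active, never shrink it. A minimal sketch of that asymmetry (function and parameter names are mine, not undercover.ts):

```typescript
// One-way gate: CLAUDE_CODE_UNDERCOVER=1 can force undercover mode ON,
// but no input can force it OFF once the default says it applies.
// Illustrative reconstruction, not the actual undercover.ts logic.
function undercoverEnabled(
  env: Record<string, string | undefined>,
  internalRepo: boolean
): boolean {
  if (env.CLAUDE_CODE_UNDERCOVER === "1") return true; // force-ON exists
  return !internalRepo; // default: active in any non-internal repo
  // Note what's missing: there is no branch that returns false on request.
}
```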
This means AI-authored commits and PRs from Anthropic employees in open source projects will have no indication that an AI wrote them. Hiding internal codenames is reasonable. Having the AI actively pretend to be human is a different thing.
Elsewhere, user sentiment is checked with plain regexes rather than a model call. An LLM company using regexes for sentiment analysis is peak irony, but also: a regex is faster and cheaper than an LLM inference call just to check if someone is swearing at your tool.
In system.ts (lines 59-95), API requests include a cch=00000 placeholder. Before the request leaves the process, Bun’s native HTTP stack (written in Zig) overwrites those five zeros with a computed hash. The server then validates the hash to confirm the request came from a real Claude Code binary, not a spoofed one.
They use a placeholder of the same length so the replacement doesn’t change the Content-Length header or require buffer reallocation. The computation happens below the JavaScript runtime, so it’s invisible to anything running in the JS layer. It’s basically DRM for API calls, implemented at the HTTP transport level.
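The same-width trick is easy to see in miniature: because the stamp has exactly the placeholder's length, substitution never changes the body's byte count, so a Content-Length computed before stamping stays valid. This sketch is mine; the real replacement happens in Zig inside the Bun binary, and the actual hash scheme is unknown:

```typescript
// Overwrite the five-zero placeholder with a same-width stamp, leaving
// the byte length of the request untouched. The "cch=00000" placeholder
// is from the source; the hash function here is purely illustrative.
import { createHash } from "node:crypto";

const PLACEHOLDER = "cch=00000";

function stampAttestation(body: string, key: string): string {
  const digest = createHash("sha256").update(body + key).digest("hex");
  const stamp = digest.slice(0, 5); // exactly as wide as "00000"
  return body.replace(PLACEHOLDER, "cch=" + stamp);
}
```

Run the original bundle on stock Bun or Node and nothing performs this substitution, which is why the five literal zeros would reach the server as-is.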
This is the technical enforcement behind the OpenCode legal fight. Anthropic doesn’t just ask third-party tools not to use their APIs; the binary itself cryptographically proves it’s the real Claude Code client. If you’re wondering why the OpenCode community had to resort to session-stitching hacks and auth plugins after Anthropic’s legal notice, this is why.
The attestation isn’t airtight, though. The whole mechanism is gated behind a compile-time feature flag (NATIVE_CLIENT_ATTESTATION), and the cch=00000 placeholder only gets injected into the x-anthropic-billing-header when that flag is on. The header itself can be disabled entirely by setting CLAUDE_CODE_ATTRIBUTION_HEADER to a falsy value, or remotely via a GrowthBook killswitch (tengu_attribution_header). The Zig-level hash replacement also only works inside the official Bun binary. If you rebuilt the JS bundle and ran it on stock Bun (or Node), the placeholder would survive as-is: five literal zeros hitting the server. Whether the server rejects that outright or just logs it is an open question, but the code comment references a server-side _parse_cc_header function that “tolerates unknown extra fields,” which suggests the validation might be more forgiving than you’d expect for a DRM-like system. Not a push-button bypass, but not the kind of thing that would stop a determined third-party client for long either.
“BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally.”
The fix? MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3. After 3 consecutive failures, compaction is disabled for the rest of the session. Three lines of code to stop burning a quarter million API calls a day.
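The guard is presumably little more than a counter and a latch, something like this (my reconstruction; only the constant name and value come from the source):

```typescript
// Counter-plus-latch: after N consecutive compaction failures, disable
// autocompact for the remainder of the session. The constant is from the
// leaked source; the class around it is illustrative.
const MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3;

class AutocompactGuard {
  private failures = 0;
  private disabled = false;

  shouldAttempt(): boolean {
    return !this.disabled;
  }

  recordResult(ok: boolean): void {
    if (ok) { this.failures = 0; return; } // any success resets the streak
    this.failures++;
    if (this.failures >= MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES) {
      this.disabled = true; // latched off for the rest of the session
    }
  }
}
```

The latch matters: a later success clears the counter but does not re-enable compaction, which is what stops a session from retrying 3,272 times.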
Throughout the codebase, there are references to a feature-gated mode called KAIROS. Based on the code paths in main.tsx, it looks like an unreleased autonomous agent mode.
This is probably the biggest product roadmap reveal from the leak.
The implementation is heavily gated, so who knows how far along it is. But the scaffolding for an always-on, background-running agent is there.
Tomorrow is April 1st, and the source contains what’s almost certainly this year’s April Fools’ joke: buddy/companion.ts implements a Tamagotchi-style companion system. Every user gets a deterministic creature (18 species, rarity tiers from common to legendary, 1% shiny chance, RPG stats like DEBUGGING and SNARK) generated from their user ID via a Mulberry32 PRNG. Species names are encoded with String.fromCharCode() to dodge build-system grep checks.
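Mulberry32 is a tiny, well-known 32-bit PRNG, which is presumably why it was chosen: seed it with a stable hash of the user ID and the same account always gets the same creature. A sketch of the idea (the PRNG is the real algorithm; the ID hash and species mapping are my own stand-ins, not the tables from companion.ts):

```typescript
// Mulberry32: standard 32-bit PRNG, deterministic for a given seed.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), a | 1);
    t = (t + Math.imul(t ^ (t >>> 7), t | 61)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // float in [0, 1)
  };
}

// Derive a creature from a user ID. The hash and the 18/1% numbers
// mirror the article; the exact derivation in companion.ts may differ.
function creatureFor(userId: string): { species: number; shiny: boolean } {
  let seed = 0;
  for (const ch of userId) seed = (seed * 31 + ch.codePointAt(0)!) >>> 0;
  const rand = mulberry32(seed);
  return {
    species: Math.floor(rand() * 18), // one of 18 species
    shiny: rand() < 0.01,             // 1% shiny chance
  };
}
```

The String.fromCharCode() encoding is orthogonal to this: the species names simply never appear as literal strings in the bundle, so a build-system grep can't flag them.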
The terminal rendering in ink/screen.ts and ink/optimizer.ts borrows game-engine techniques: an Int32Array-backed ASCII char pool, bitmask-encoded style metadata, a patch optimizer that merges cursor moves and cancels hide/show pairs, and a self-evicting line-width cache (the source claims “~50x reduction in stringWidth calls during token streaming”). Seems like overkill until you remember these things stream tokens one at a time.
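The char-pool/bitmask idea is straightforward to sketch: pack each terminal cell's code point and style flags into one Int32Array slot, so a full screen is a single flat typed array with no per-cell objects to allocate or garbage-collect. This layout is my illustration of the technique, not the actual ink/screen.ts format:

```typescript
// Pack a cell's character and style into one 32-bit slot: the low 21 bits
// hold the code point (Unicode tops out at 0x10FFFF, which fits), the
// higher bits hold style flags. Illustrative layout only.
const BOLD = 1 << 21;
const UNDERLINE = 1 << 22;
const CHAR_MASK = (1 << 21) - 1;

const cells = new Int32Array(80 * 24); // one 80x24 screen, zero objects

function setCell(i: number, ch: string, flags: number): void {
  cells[i] = (ch.codePointAt(0)! & CHAR_MASK) | flags;
}

function getCell(i: number): { ch: string; bold: boolean } {
  return {
    ch: String.fromCodePoint(cells[i] & CHAR_MASK),
    bold: (cells[i] & BOLD) !== 0,
  };
}
```

During token streaming the hot path becomes integer reads and masks over a contiguous buffer, which is where claims like the "~50x reduction in stringWidth calls" come from.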
Every bash command runs through 23 numbered security checks in bashSecurity.ts: 18 blocked Zsh builtins, defense against Zsh equals expansion (=curl bypassing permission checks for curl), unicode zero-width space injection, IFS null-byte injection, and a malformed token bypass found during HackerOne review. I haven’t seen another tool with this specific a Zsh threat model.
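The equals-expansion case shows why the Zsh-specific model matters: in Zsh, a word beginning with = expands to the resolved path of the command, so a permission rule that string-matches on the literal name curl never sees it. A minimal version of that one check (my reconstruction, not the bashSecurity.ts code):

```typescript
// Catch Zsh '=cmd' expansion that could smuggle a blocked command past a
// permission check keyed on the literal command name. Illustrative only.
const BLOCKED = new Set(["curl", "wget"]);

function violatesEqualsExpansion(command: string): boolean {
  for (const word of command.split(/\s+/)) {
    const m = word.match(/^=(\w+)/); // '=curl' expands to curl's full path
    if (m && BLOCKED.has(m[1])) return true;
  }
  return false;
}
```

A plain `curl …` returns false here by design: that spelling is assumed to be caught by the ordinary permission check, and this rule only closes the expansion loophole.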
Prompt cache economics clearly drive a lot of the architecture. promptCacheBreakDetection.ts tracks 14 cache-break vectors, and there are “sticky latches” that prevent mode toggles from busting the cache. One function is annotated DANGEROUS_uncachedSystemPromptSection(). When you’re paying for every token, cache invalidation stops being a computer science joke and becomes an accounting problem.
The multi-agent coordinator in coordinatorMode.ts is interesting because the orchestration algorithm is a prompt, not code. It manages worker agents through system prompt instructions like “Do not rubber-stamp weak work” and “You must understand findings before directing follow-up work. Never hand off understanding to another worker.”
The codebase also has some rough spots. print.ts is 5,594 lines long with a single function spanning 3,167 lines and 12 levels of nesting. They use Axios for HTTP, which is funny timing given that Axios was just compromised on npm with malicious versions dropping a remote access trojan.
Some people are downplaying this because Google’s Gemini CLI and OpenAI’s Codex are already open source. But those companies open-sourced their agent SDK (a toolkit), not the full internal wiring of their flagship product.
The real damage isn’t the code. It’s the feature flags. KAIROS, the anti-distillation mechanisms: these are product roadmap details that competitors can now see and react to. The code can be refactored. The strategic surprise can’t be un-leaked.
And here’s the kicker: Anthropic acquired Bun at the end of last year, and Claude Code is built on top of it. A Bun bug (oven-sh/bun#28001), filed on March 11, reports that source maps are served in production mode even though Bun’s own docs say they should be disabled. The issue is still open. If that’s what caused the leak, then Anthropic’s own toolchain shipped a known bug that exposed their own product’s source code.
As one Twitter reply put it: “accidentally shipping your source map to npm is the kind of mistake that sounds impossible until you remember that a significant portion of the codebase was probably written by the AI you are shipping.”
...
Read the original on alex000kim.com »
It was not a phone call. It was not a meeting. For thousands of Oracle employees across the globe, Tuesday morning began with a single email landing in their inboxes just after 6 a.m. EST — and by the time they finished reading it, their careers at one of the world’s largest technology companies were over.
Oracle has launched what analysts believe could be the most extensive layoff in the company’s history, with estimates suggesting the cuts will affect between 20,000 and 30,000 employees — roughly 18% of its global workforce of approximately 162,000 people. Workers in the United States, India, and other regions all reported receiving the same termination notice at nearly the same hour, sent under the name “Oracle Leadership.”
There was no heads-up from human resources, no conversation with a direct manager, and no advance notice of any kind. Just an email.
The email that circulated widely after screenshots were posted by affected workers on Reddit’s r/employeesOfOracle community and the professional forum Blind was brief and formulaic. It told employees that following a review of the company’s current business needs, a decision had been made to eliminate their roles as part of a broader organizational change, that the day of the email was their final working day, and that a severance package would be made available after signing termination paperwork through DocuSign.
Employees were also instructed to update their personal email addresses to receive subsequent communications, including separation details and answers to frequently asked questions. For many, access to internal production systems was revoked almost immediately after the message arrived.
Based on accounts shared across both Reddit and Blind, the cuts were widespread and, in some units, severe. Among the teams reported to be most affected:
RHS (Revenue and Health Sciences) — employees described a reduction in force of at least 30%, with 16 or more engineers from individual business units cut in a single action.
SVOS (SaaS and Virtual Operations Services) — similarly reported a 30% or greater reduction, with manager-level roles included in the sweep.
At least one manager was confirmed among those let go, and affected employees in India said the severance structure is expected to follow a standard formula based on years of service, paid out in months. Any unvested restricted stock units, however, were forfeited immediately.
Workers who had vested stock were told they would retain access to those shares through Fidelity. Some employees noted April 3 as their formal last working day, with a one-month garden leave period to follow. Separately, posts on Blind alleged that Oracle had recently installed monitoring software on company-issued Mac laptops capable of logging all device activity, with warnings circulating among affected employees not to copy any files or code before returning their machines.
The layoffs are directly tied to Oracle’s aggressive and debt-heavy expansion into artificial intelligence infrastructure. According to analysis from TD Cowen, the job cuts are expected to free up between $8 billion and $10 billion in cash flow — money the company urgently needs to fund a massive buildout of AI data centers.
The financial picture surrounding that expansion is striking. Oracle has taken on $58 billion in new debt within just two months. Its stock has lost more than half its value since reaching a peak in September 2025. Multiple U.S. banks have reportedly stepped back from financing some of its data center projects. All of this is happening even as the company posted a 95% jump in net income — reaching $6.13 billion — last quarter.
The contrast underscores the scale of the bet Oracle is making: record profits on one side, a mounting debt load and tens of thousands of eliminated jobs on the other. For the workers who woke up Tuesday morning to that 6 a.m. email, the company’s ambitions offered little comfort.
...
Read the original on rollingout.com »
Stuff that’s in the code but not shipped yet. Feature-flagged, env-gated, or just commented out.
A virtual pet that lives in your terminal. Species and rarity are derived from your account ID.
Persistent mode with daily logs, memory consolidation between sessions, and autonomous background actions.
Long planning sessions on Opus-class models, up to 30-minute execution windows.
Control Claude Code from your phone or a browser. Full remote session with permission approvals.
Run sessions in the background with --bg.
tmux: sessions talk to each other over Unix domain sockets.
Between sessions, the AI reviews what happened and organizes what it learned.
...
Read the original on ccunpacked.dev »
We’ve clarified when these Terms apply to certain Copilot services and experiences. We’ve revised our Code of Conduct to clarify how you can and can’t use Copilot. We’ve rewritten and reorganized our Terms to be clearer and simpler.
IF YOU LIVE IN (OR YOUR PRINCIPAL PLACE OF BUSINESS IS IN) THE UNITED STATES, PLEASE READ THE BINDING ARBITRATION CLAUSE AND CLASS ACTION WAIVER IN SECTION 15 OF THE MICROSOFT SERVICES AGREEMENT. IT AFFECTS HOW DISPUTES RELATING TO THESE TERMS ARE RESOLVED.

Welcome to Copilot, your personal AI companion! These Terms explain how you can use Copilot. By using Copilot, you agree to these Terms. Please read them carefully before you start using Copilot.

These Terms apply to your use of “Copilot,” which includes:

The standalone Copilot apps on your computer or mobile device
The Copilot service we offer at copilot.microsoft.com, copilot.com, and copilot.ai
Conversations you have with Copilot through other Microsoft apps and websites
Conversations you have with Copilot through third-party apps and platforms
Other Copilot-branded apps and services that link to these Terms

These Terms don’t apply to Microsoft 365 Copilot apps or services unless that specific app or service says that these Terms apply.

Certain words and phrases we use in these Terms have a particular meaning:

Words like “you”, “your” and “yours” mean you, the person accessing and using Copilot.
Words like “we”, “us”, and “our” mean Microsoft, the company that offers Copilot, as well as the related companies we own or control and the companies and people that work on our behalf.
A “Prompt” is the content — text, audio, images, files, voice, or video — that you send to or share with Copilot.
A “Response” is the content that Copilot sends to or shares with you. Some Responses might include “Creations” — original content or works of art that Copilot creates in response to your Prompts.
“Your Content” means the Prompts and Responses that are part of your conversations with Copilot, but it doesn’t include any content we separately own (like Xbox gaming clips, for example).
“Actions” refers to the automated set of tasks that Copilot takes on your behalf at your request.
“Services” is defined in the Microsoft Services Agreement.
Copilot is a Service under that Agreement.

WHO CAN USE COPILOT

You need to be old enough to use Copilot — usually at least 13, but sometimes 18 or older, depending on your country’s laws. Because laws vary by country, Copilot isn’t available everywhere.

If you’re under 18, or if you use Copilot without logging in, we might turn off or limit some features for legal or safety reasons. If we ask for your birthday and country when you sign up or log in, you must give us your real information.

Don’t use tools or computer programs (like bots or scrapers) to access Copilot. You can only use Copilot for your own personal use.

HOW YOU USE COPILOT

Copilot is an AI-powered conversational service. Copilot will generate Responses to Prompts you submit and may also offer you Responses directly in your ongoing conversations or for things you have asked Copilot to remember.

Copilot tries to give you good answers, but it can make mistakes. Sometimes, the sources Copilot uses may not be reliable, relevant, or accurate, and sometimes, Copilot may give you wrong information. When responding, Copilot may use information it finds on the internet, and we don’t control that content. You might see Responses that seem convincing but are incomplete, inaccurate, or inappropriate.

Always use your judgment and check the information you get from Copilot before you make decisions or act. If you see something wrong or inappropriate from Copilot, use the Report or Feedback features in Copilot to let us know. If you have a legal concern about something Copilot says, please use the Report a Concern page to tell us.

Because of the way Copilot works, the Responses you get from Copilot may not be unique to you. Copilot may give the same or similar Responses and Creations to Microsoft, or to other people.
Other people may send similar Prompts as yours, and they could get the same, similar, or different Responses and Creations.

By using Copilot, you’re telling us that:

You’ve read, understood, and agree to these Terms, and will abide by the Code of Conduct (below).
You’ll use Copilot only in lawful ways and in compliance with all applicable laws.
You won’t use Copilot to violate our or anyone else’s rights.

When you use Copilot, you must follow the general Code of Conduct set out in the Microsoft Services Agreement. As applied to Copilot, this means:

Don’t use Copilot to harm yourself or others. Don’t use Copilot to help harass, bully, abuse, threaten, or intimidate other people, or otherwise harm others. Don’t use Copilot to help exploit others based on age, disability, or social or economic situations.

Don’t damage our ability to provide Copilot to you and others. Don’t use bots or scrapers, and don’t engage in technical attacks, excess usage, prompt-based manipulation, “jailbreaking”, and other abuses.

Don’t violate the privacy of others. Don’t use Copilot to help violate the privacy of others, including sharing their private information (e.g. “doxing”). Don’t use Copilot to infer sensitive information about others, like a person’s race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. Don’t try to use Copilot for facial identification, to collect or process someone else’s sensitive personal data, or to try to verify someone’s identity. Don’t share or capture images, video, audio, or other content that includes other people without their consent, and don’t try to use Copilot to process someone else’s biometric identifiers or information.

Don’t use Copilot to trick, lie to, or cheat others. Don’t use Copilot to help mislead or deceive people. Don’t use Copilot to create or share disinformation or content that will be used to impersonate, defraud, or deceive others.

Don’t infringe the rights of others.
Don’t use Copilot to infringe on other people’s legal rights, including their intellectual property and publicity rights.

Don’t create or share inappropriate content or material. Don’t use Copilot to create or share adult content, violence or gore, hateful content, terrorism and violent extremist content, glorification of violence or suicide, child sexual exploitation or abuse material, or content that is otherwise disturbing or offensive. Don’t use Copilot to create or edit images, voice, or video of other people (e.g. “deepfakes”) without their permission.

Don’t do anything illegal. Don’t use Copilot to break the law, or to help or encourage others to break the law.

If you see something wrong or inappropriate from Copilot, use the Report or Feedback features in Copilot to let us know. If you have a legal concern about something Copilot says, please use the Report a Concern page to tell us.

We may block, restrict, or remove your Prompts or other content from you that violates these Terms, or that could lead Copilot to create a Response that violates these Terms.

We may choose to limit or stop offering or supporting Copilot or any feature within Copilot at any time and for any reason.

Unless prohibited by law, we may limit, suspend, or permanently revoke your access to or use of Copilot (and potentially all other Services) in our sole discretion, at any time and without notice. Some of the reasons we might do this, for example, is if you breach these Terms or violate the Code of Conduct, if we suspect you’re engaged in fraudulent or illegal activity, or if your Microsoft Account or the account you use to log in to Copilot is suspended or closed.
If you feel your access has been restricted by mistake, you may ask us to reevaluate our decision by submitting a request using the Report a Concern form outlining what you think we got wrong and why.

Depending on your location and other factors, we may offer you the opportunity to browse, shop and buy certain products through Copilot. If you use Copilot to buy something, it’s sold and shipped by a third party (“Merchant”), not by us. We don’t process payments for your purchases through Copilot.

Anything you buy with Copilot is subject to the Merchant’s terms and conditions (including pricing, fees, and shipping, cancellation, and refund policies). You are responsible for reading and complying with the Merchant’s terms that apply to your purchase, including how the Merchant collects and uses your personal information under its privacy policy.

We aren’t responsible or liable for any dispute between you and the Merchant about your purchase. If you have any disputes or questions about any product you purchase through Copilot, you must address it directly with the Merchant. If you have disputes or questions about your payment for the product, you must address it with your payment issuer, bank, or wallet provider.

We collect, store, use, and share your personal information, including your payment information and purchases you make, in accordance with the Microsoft Privacy Statement. You authorize each Merchant to share with us information about you and your purchase, and for us to send information (including your personal information and transaction details) to the Merchant, the Merchant’s payment processor, our payment processor, or other third party necessary to complete your purchase.

Copilot may include both automated and manual (human) processing of data.
You shouldn’t share any information with Copilot that you don’t want us to review.

We plan to continue to develop and improve Copilot, but we make no guarantees or promises about how Copilot will operate or that it will operate as intended.

Sometimes, we may offer certain features or services as part of “Copilot Labs.” These features and services are highly experimental and may not always work as intended. We may add, modify, or remove features or services from Copilot Labs at any time for any reason.

We may limit the speed or performance of Copilot as we think necessary.

When you request that Copilot take Actions on your behalf, you are solely responsible for those Actions and any results or consequences.

Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.

WITHOUT LIMITING SECTION 12 OF THE MICROSOFT SERVICES AGREEMENT IN ANY WAY, BUT FOR THE SAKE OF CLARITY, WE DO NOT MAKE ANY WARRANTY OR REPRESENTATION OF ANY KIND ABOUT COPILOT. For example, we can’t promise that any Copilot’s Responses won’t infringe someone else’s rights (like their copyrights, trademarks, or rights of privacy) or defame them. You are solely responsible if you choose to publish or share Copilot’s Responses publicly or with any other person.

You agree to indemnify us and hold us harmless (including our affiliates, employees and any other agents) from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of Copilot, including without limitation your use, sharing, or publication of any Prompt, Responses, or Creations, or your breach of these Terms or violation of applicable law.

You may stop using Copilot at any time. If you want to close your Microsoft Account, please see the Microsoft Services Agreement.

We don’t own Your Content, but we may use Your Content to operate Copilot and improve it.
By using Copilot, you grant us permission to use Your Content, which means we can copy, distribute, transmit, publicly display, publicly perform, edit, translate, and reformat it, and we can give those same rights to others who work on our behalf.

We get to decide whether to use Your Content, and we don’t have to pay you, ask your permission, or tell you when we do. But that doesn’t mean we can use it however we want. The Microsoft Privacy Statement explains how we use Your Content, and the privacy options in Copilot give you control over some of those uses.

We can decide to remove or stop using Your Content at any time for any reason. By sharing Your Content with Copilot, you promise us that you have all rights to Your Content and that if we use Your Content, we won’t be violating someone else’s rights.

Although our Terms grant you permission to use Copilot, we are not granting you any rights in the underlying technology, intellectual property, or data that makes up Copilot.

By agreeing to these Terms, you’re also agreeing to the Microsoft Services Agreement, a legal agreement between you and us that applies to your use of our Services (including Copilot). If you have a Microsoft account, you already agreed to the Microsoft Services Agreement when you first created a Microsoft account.
Even if you don’t have a Microsoft Account — for example, if you’re using Copilot without logging in, or if you log in to Copilot using a non-Microsoft account — you’re still agreeing to the Microsoft Services Agreement by using Copilot. Please make sure you review it carefully.

If you use Copilot to create images, you’re also agreeing to the Image Creator Terms.

If you use Gaming Copilot or other AI-powered experiences provided in connection with any Xbox Services, you are also subject to the Xbox Community Standards.

Copilot may be integrated into other products and services we separately license to you. For example, Microsoft 365 Family or Microsoft 365 Personal subscriptions are separately licensed under the terms at https://www.microsoft.com/useterms. If any of the language in those other agreements conflicts with the language in these Terms, the language in these Terms controls.

When you use Copilot, you are subject to the Microsoft Privacy Statement, which describes how we collect, use, and share information relating to your use of Copilot.

From time to time, we might need to update these Terms for different reasons. Some of those reasons might include adding new features, complying with changing laws, addressing security, safety, or fraud issues, or making our Terms clearer and easier to understand.

There may be rare circumstances where we need to update these Terms immediately. Otherwise, we’ll post the updated Terms to this page at least 30 days before they take effect. We’ll also include the date the terms take effect at the top of the page, so you can easily tell when we’ve made an update.

If you keep using Copilot after the updates take effect, you’re agreeing to those updates. If you don’t agree to the updates, you must stop using Copilot.
...
Read the original on www.microsoft.com »
...
Read the original on damrnelson.github.io »
OpenAI on Tuesday announced that it closed a record-breaking funding round at a post-money valuation of $852 billion.
The round totaled $122 billion of committed capital, up from the $110 billion figure that the company announced in February. SoftBank co-led the round alongside other investors, including Andreessen Horowitz and D. E. Shaw Ventures, OpenAI said.
OpenAI kickstarted the artificial intelligence boom with the launch of its ChatGPT chatbot in 2022, and the company has since ballooned into one of the fastest-growing commercial entities on the planet. As of March, ChatGPT supports more than 900 million weekly active users, including more than 50 million subscribers.
“AI is driving productivity gains, accelerating scientific discovery, and expanding what people and organizations can build,” OpenAI said in a release. “This funding gives us the resources to continue to lead at the scale this moment demands.”
With the close of its latest funding round, OpenAI CEO Sam Altman will be under pressure to justify his company’s massive valuation, especially as it gears up for a potential IPO. The startup has been retreating from some hefty spending plans and shuttering certain features and products in recent months, including its short-form video app Sora, as it looks to rein in costs.
...
Read the original on www.cnbc.com »
OkCupid and Match settle with Trump FTC, don’t have to pay any financial penalty.
OkCupid and its owner Match Group reached a settlement with the Trump administration for not telling dating-app customers that nearly 3 million user photos were shared with a company making a facial recognition system. OkCupid also gave the facial recognition firm access to user location information and other details without customers’ consent, the Federal Trade Commission said.
OkCupid and Match do not have to pay a financial penalty in a deal made with the FTC over an incident from 2014. OkCupid and Match did not admit or deny the allegations but agreed to a permanent prohibition barring them from misrepresenting how they use and share personal data, the FTC said yesterday.
The FTC has been run entirely by Republicans since President Trump fired both Democratic commissioners. The proposed settlement requires approval from a judge and was submitted in US District Court for the Northern District of Texas.
The dating-site company said it’s pleased to settle the matter without paying any fine. “While we do not admit any wrongdoing, we have settled this matter with the FTC with no monetary penalty to resolve an issue from 2014 and move forward,” an OkCupid spokesperson said in a statement provided to Ars today. “The alleged conduct at issue does not reflect how OkCupid operates today. Over the years, we have further strengthened our privacy practices and data governance to ensure we meet the expectations of our users.”
Although a recent court ruling imposes limits on the FTC’s enforcement powers, that ruling applied only to the FTC’s in-house administrative process. The FTC can still pursue deceptive advertising claims in courts and seek financial penalties through court orders or settlements.
FTC: OkCupid imposed no restrictions on data use
The FTC criticized Match and OkCupid for sharing OkCupid data with Clarifai, an AI company that offers facial recognition technology. Clarifai’s website says it offers AI services to “military, civilian, intelligence, and government” customers and to private-sector companies in various industries.
The FTC said that “OkCupid provided the third party with access to nearly three million OkCupid user photos as well as location and other information without placing any formal or contractual restrictions on how the information could be used.” OkCupid “did not inform consumers or give them the chance to opt out of such sharing,” the FTC said.
The FTC said the data-sharing violated the OkCupid privacy policy, which told consumers that it doesn’t share “your personal information with others except as indicated in this Privacy Policy or when we inform you and give you an opportunity to opt out of having your personal information shared.”
The FTC alleged that “since September 2014, Match and OkCupid took extensive steps to conceal—including through trying to obstruct the FTC’s investigation—and deny that OkCupid shared users’ personal information with the data recipient. For example, when a news story revealed that the third party had obtained large OkCupid datasets, OkCupid claimed to the media and OkCupid users that it was not involved with the third party.”
The data-sharing arrangement was described in a 2019 article by The New York Times.
Clarifai founder and CEO Matt Zeiler “said his company had built a face database with images from OkCupid,” and “used the images from OkCupid to build a service that could identify the age, sex and race of detected faces,” according to the Times’ 2019 article.
“An OkCupid spokeswoman said Clarifai contacted the company in 2014 ‘about collaborating to determine if they could build unbiased AI and facial recognition technology’ and that the dating site ‘did not enter into any commercial agreement then and ha[s] no relationship with them now.’ She did not address whether Clarifai had gained access to OkCupid’s photos without its consent,” the Times wrote.
But even if they had no “commercial agreement,” Zeiler told the Times that his company gained access to user photos because some of OkCupid’s founders invested in Clarifai, the 2019 article said. “Clarifai used the images from OkCupid to build a service that could identify the age, sex and race of detected faces, Mr. Zeiler said,” according to the article, which added that “Mr. Zeiler said Clarifai would sell its facial recognition technology to foreign governments, military operations and police departments provided the circumstances were right.”
The FTC said in a complaint yesterday that OkCupid, which was purchased by Match.com in 2011, made “false and misleading claims” about how it used customer data. The complaint makes references to Humor Rainbow, the name of the company that created OkCupid.
“When OkCupid users inquired about OkCupid and the Data Recipient, Humor Rainbow reiterated its lack of involvement with the Data Recipient. Humor Rainbow stated that ‘any implication that OkCupid released users’ information to [the Data Recipient] is false,’” the FTC complaint said.
The FTC complaint described how the data-sharing arrangement was made:
In September 2014, the CEO of Clarifai, Inc. e-mailed one of OkCupid’s founders requesting that Humor Rainbow give Clarifai, Inc. (i.e., the Data Recipient) access to large datasets of OkCupid photos. Despite not having any business relationship with Humor Rainbow, the Data Recipient sought Humor Rainbow’s assistance because each of OkCupid’s founders, including Humor Rainbow’s President and Match Group, LLC’s CEO, were financially invested in the Data Recipient.
In response to this request, Humor Rainbow gave the Data Recipient access to nearly three million OkCupid user photos. Humor Rainbow’s President and Chief Technology Officer were directly involved in facilitating the data transfer. In addition to user photos, Humor Rainbow shared other personal data with the Data Recipient, including each user’s demographic and location information.
Humor Rainbow never executed a formal agreement or set forth restrictions governing the Data Recipient’s access to, or use of, the OkCupid user data. The Data Recipient did not pay for the data and never provided any services to Humor Rainbow or on behalf of OkCupid.
The FTC said that under the proposed settlement:
OkCupid and Match are permanently prohibited from misrepresenting, or assisting others in misrepresenting: the extent to which the companies collect, maintain, use, disclose, delete, or protect any personal information such as photos and demographic and geolocation data; the purpose for which they collect, maintain, use, or disclose such personal data; and the function of privacy controls they provide consumers through user interfaces, any consumer choices afforded to consumers under applicable state privacy laws, or any other mechanisms the companies offer consumers to limit or manage the processing of personal data.
The FTC said its investigation involved the “successful enforcement in federal court” of a civil investigative demand that “required OkCupid to turn over information requested by the agency.” Although the FTC merely required OkCupid and Match to be honest with users about data practices and did not extract a financial penalty, the agency talked tough about the enforcement action in its press release.
“The FTC enforces the privacy promises that companies make,” said Christopher Mufarrige, director of the FTC’s Bureau of Consumer Protection. “We will investigate, and where appropriate, take action against companies that promise to safeguard your data but fail to follow through—even if that means we have to enforce our Civil Investigative Demands in court.”
Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.
...
Read the original on arstechnica.com »
Walk into my lab and the first thing you’ll notice is the dots. The walls are lined with clear boxes, each one labeled, dated, and covered in dot stickers. Some boxes are buried in dots of every color. Others have a few. Others are bare. You don’t know what they mean yet, but you can see the pattern. That’s the system. It costs three dollars, has no software, and I’ve been using it for four years.
I’ve been collecting electronic components since university in 2011. Resistors, capacitors, microcontrollers, motors, drivers, DC-DC converters, displays, amplifiers, servos, LEDs, connectors. The usual trajectory of someone who keeps finding new projects. At first, my collection was small. A few toolboxes held everything. Then I graduated, kicked it into high gear, and by 2017 the collection had outgrown every container I owned.
I was stuck in an awkward middle ground. Too many parts for no system at all, but I was still one person. I didn’t have the problems that DigiKey or Mouser have, where they need barcodes on everything and a vast computerized inventory. I was looking for something simple that made sense for the scale I was working at.
The first thing I did was get rid of every opaque container I owned. Every toolbox, every parts organizer with little pockets, anything I couldn’t see through. I replaced everything with standardized 4L clear boxes from Superstore.
I learned this lesson early and it stuck: if I can’t see what’s in a box, I forget it exists. Clear boxes fixed that. I started sorting parts into categories that emerged naturally over time. A box for capacitors, a box for resistors, a box for motors, a box for LEDs.
The parts organizers with individual pockets were the first to go. They seem like a good idea when your collection is small, but as you keep adding parts, the fixed compartments become a problem. Components outgrow the pockets, and eventually you run out of pockets. The whole organizer becomes a constraint instead of solving the problem. Clear boxes don’t have this problem and the system can scale by simply buying more boxes.
As I worked on projects over months and years, I started to build an intuition about which boxes I was reaching for and which ones were collecting dust. My box of batteries was always on my desk. My box of fuses hadn’t been opened in my entire memory. But it was just a feeling. I couldn’t quantify it. I couldn’t tell you whether I opened my LED box twenty times last year or five. My memory is not good enough to track usage patterns across years of different projects.
And meanwhile, I had a constant influx of new parts. I’d work on an LED project, then move on to something that needed pneumatic components, so I’d order pumps and fittings. Then I’d get interested in piezoelectrics and order a bunch of piezos. Parts kept being added to my collection but my available space did not increase.
As Kirchhoff’s current law states, the current into a node must equal the current out. If I kept acquiring parts at this pace without getting rid of anything, I would eventually drown. I needed a way to figure out what was worth keeping and what should go, so the system could reach a steady state.
I considered RFID tags, barcode scanners, a spreadsheet. All of them felt like too much. Then I found the simplest possible solution on AliExpress for a few dollars.
I ordered sheets of colored dot stickers. Six millimeters in diameter. Hundreds of them for almost nothing.
Every box already had a label on the front with its category and the date I created the box. The new rule was simple: every time I open a box, I place one colored dot sticker near the label. That’s it. Use the box, add a dot.
I quickly realized that on days when I’m deep in a project, I might open the same box five or ten times. Tracking every single opening would be noise. So I refined the rule: one dot per box per day. If I open my LED box ten times on a Tuesday, it still gets one dot. What I actually care about is how many days per year I use a box.
Then, because I had all of these different colors, I decided to assign one color per year. I have over ten colors, so the system works for at least a decade. A piece of paper in my technical reference binder maps each color to its year so I never forget.
That’s the entire system. Sticker sheets cost a few dollars, and there is no database, no server, and no app. The system that works is the one simple enough to do every day for four years.
I wondered at first whether I’d actually keep up with it. Would I forget? Would it be annoying to find a sticker sheet every time I opened a box?
Both problems solved themselves. I keep sheets of stickers in multiple locations around the lab, so I’m always within arm’s reach of one. Applying a dot is muscle memory at this point. And forgetting turns out to be hard, because the dots are their own reminder. Even if the box I just opened has no dots, the neighboring boxes are covered in them. The visual prompt is everywhere.
Visitors always ask about the dots as they’re impossible to miss. When I explain the system and show how I add a dot whenever I use a box, there’s usually a pause, and then it clicks. A single dotted box doesn’t mean much on its own. It’s seeing a whole shelf of them, some covered and some bare, that makes it obvious this is a system.
After four years, the data is hard to argue with. Walk into my lab and you can read the shelves like a dashboard. Some boxes are covered in dots of every color, used year after year, project after project. Others have a cluster of one color from a single project and nothing since. Others are completely bare.
The biggest surprise was which parts turned out to be essential. It wasn’t sensors, even though I had many different kinds, it wasn’t specialized components or “cool” things. The most-dotted boxes are:
Glue. Tape. Stickers. General-purpose connectors. Batteries. Magnets. LEDs. DC-DC power converters. USB-C to barrel jack cables. Capacitors. Resistors. Mechanical tools like files, drill bits, and cutters. Calipers. SD cards and USB drives. Rubber feet. Fasteners.
In retrospect, it makes a lot of sense. All of these things are cross-cutting concerns. Power components like batteries, DC-DC converters, and USB-C cables appear in nearly every project. Connection components like glue, tape, magnets, fasteners, and general-purpose connectors bridge different systems together. Rubber feet show up whenever anything needs to sit on a desk. These aren’t the exciting parts. They’re the common components that nearly every project shares.
Even within a category, the dots reveal patterns. My metric fastener boxes tell a clear story: M3 is by far the most used, with two boxes dedicated to it. M6 is next because I use it for optical breadboards. M2.5 barely gets dotted because it’s specialized for things like Raspberry Pi mounting holes.
Meanwhile, sensors barely got dotted. Fuses, piezoelectric modules, specialized connectors: too application-specific to be core. Discrete LCD modules went unused after I started buying microcontrollers with integrated displays and buttons. I use capacitors and resistors constantly, but inductors got used maybe twice in four years.
And then there were the tools I thought were essential. My oscilloscope, function generator, and logic analyzer are commonly recommended as must-have tools for any electronics lab. Five dots on the oscilloscope in four years. I was genuinely surprised. I know for some people, in fields like RF, these tools are indispensable. But in my work, they’re not. I wouldn’t have had the confidence to say that without the data.
As I consolidated boxes and introduced larger sizes, finding specific parts inside a box became frustrating. I went through three generations of bags: ziplock bags from the grocery store, then clear logo-free bags from AliExpress (which wrinkled), then thick-walled clear bags that were more expensive but worth it. If you’re starting from scratch, skip the first two and go straight to thick clear bags.
I started seeing the whole system like a file system on a computer. Boxes are top-level directories. Bags are subdirectories. Parts are files. Bags can contain other bags. The Johnny Decimal system recommends no more than ten items per category. I don’t follow that rigidly, but I agree with the spirit: inside a box, aim for roughly ten bags. Inside a bag, aim for roughly ten sub-bags max. When things get too crowded, subdivide.
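The directory analogy can be sketched as nested dictionaries, with a check for the “roughly ten items per level” rule of thumb. This is an illustrative model only; the category names are made up, not the author’s actual inventory.

```python
# Boxes are top-level keys, bags are nested dicts, parts are leaves (None).
# Names below are hypothetical examples, not a real inventory.
inventory = {
    "fasteners": {                                              # box
        "M3": {"screws": None, "nuts": None, "washers": None},  # bag
        "M6": {"screws": None, "nuts": None},
    },
    "power": {
        "batteries": {"AA": None, "18650": None},
        "dc-dc converters": {},
    },
}

def oversized(node, limit=10, path=""):
    """Yield the path of any container holding more than `limit` items."""
    if not isinstance(node, dict):
        return
    if len(node) > limit:
        yield path or "(root)"
    for name, child in node.items():
        yield from oversized(child, limit, f"{path}/{name}")

print(list(oversized(inventory)))  # → [] while everything fits the rule
```

When a container shows up in the output, that’s the signal to subdivide it, mirroring the “when things get too crowded, subdivide” rule.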
Every bag gets a handwritten label with its contents and the current date. I put dates on everything. Time turns out to be a great universal organizer, just like how a photo collection is wonderfully organized by date more than by any other single dimension.
Eventually my lab overflowed and I had to make real decisions about what stays and what goes. The dots helped me make those decisions.
I set up three tiers. My most-dotted boxes stay within fifteen feet of my desk. Less frequent boxes go in a closet in the lab. Boxes with no dots for a long time go to a separate storage shed outside of my lab, which I think of as “cold storage”.
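The triage can be written as a simple function of recent dot counts. The thresholds and the two-year window below are invented for illustration; the author doesn’t give exact numbers.

```python
# Hypothetical thresholds: the article describes three tiers but not
# the exact dot counts that separate them.
def tier(dots_by_year: dict[int, int], current_year: int) -> str:
    """Map a box's per-year dot counts to one of the three storage tiers."""
    recent = sum(n for y, n in dots_by_year.items() if current_year - y <= 2)
    if recent >= 10:
        return "desk"          # within fifteen feet
    if recent >= 1:
        return "closet"        # in the lab, out of arm's reach
    return "cold storage"      # the shed

print(tier({2023: 8, 2024: 12}, 2025))  # → desk
```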
Cold storage examples: a box of liquid pumps (ink pumps, peristaltic pumps, air pumps). A box of piezo actuators and piezo motors. I find piezos fascinating, but I’ve reluctantly come to admit over time that they’re just not that useful to me. A set of Parker linear motors I bought as lab surplus on eBay. Cool hardware, but the software for the ViX servo drives only works on Windows XP, and I didn’t have much need for linear motors. Zero dots in two years, so I moved them to the shed.
Sometimes things come back. When I started building a pick-and-place machine, my pneumatic components came right out of cold storage. That’s not a failure; I expect some things to come back, just not many. Cold storage is a staging area, not a graveyard. If a box sits there long enough untouched, the next step is donating or selling.
This closes a loop. When you constantly acquire new parts but have limited space, you need a system that tells you what should go out the door as new things come in. The dots provide that signal. A lot of people hoard things they don’t need. Seeing clear evidence that a box has zero dots is what helps me overcome the hesitation to finally let go of it.
Principles I’ve learned over four years of the dot system.
Clear boxes, same size and shape. Having a common form factor is like having a common software interface. Lids become interchangeable. If a box breaks you can replace it. You’ll probably need a few different sizes. Pick sizes where each jump is roughly double the last. I use four sizes total.
Labels on the front, not the lid. You will regret lid labels the moment you stack boxes.
Date everything. Every label, every bag. It feels unnecessary at first but it pays off over time. It’s also a kind of time capsule for yourself.
Thick clear bags. Take the time to label them. A permanent marker works fine. I use name tag sized white labels.
Keep sticker sheets near your boxes. If applying a dot takes more than two seconds, you’ll stop doing it. I put sticker sheets in half a dozen places around the lab near my boxes.
Everything needs a home. If only some things are in the system, the value is diminished. Everything you want to track needs to belong somewhere.
Don’t dot the obvious. I put dots on my soldering iron, calipers, and isopropyl alcohol bottle but it was pointless. I already knew these tools were cornerstones of my lab. The dots are most valuable for things where usage is genuinely ambiguous.
Curate categories. A box of random miscellaneous parts teaches you nothing. Boxes of parts that are used together yield high-quality signal.
And then give it time. A year in, you’ll start seeing patterns. Two years in, you’ll trust them enough to know how to refactor your collection.
The dot system doesn’t have to be figured out all at once. Mine evolved through three generations of bags and two major reorganizations. My interests changed, my domain of expertise grew, my collection expanded. The system evolved along with me. I like that it is a living, fluid system.
Walk into my lab and the dots will tell you everything you need to know. They told me too. It just took four years and a $3 pack of stickers. I’m still adding dots.
...
Read the original on scottlawsonbc.com »
SolveSpace is developed primarily as normal desktop software. It’s compact enough that it runs surprisingly well when compiled with emscripten for the browser, though. There is some speed penalty and there are many remaining bugs, but with smaller models the experience is often highly usable.
In keeping with the experimental status of this target, the version below is built from our latest development branch. You are likely to encounter issues that don’t exist in the normal desktop targets, but feel free to report bugs in the usual way.
This web version has no network dependencies after loading. To host your own copy, build and host the output like any other static web content.
...
Read the original on solvespace.com »
Users of Claude Code, Anthropic’s AI-powered coding assistant, are experiencing high token usage and early quota exhaustion, disrupting their work.
Anthropic has acknowledged the issue, stating that “people are hitting usage limits in Claude Code way faster than expected. We’re actively investigating… it’s the top priority for the team.”
A user on the Claude Pro subscription ($200 annually) said on the company’s Discord forum that “it’s maxed out every Monday and resets at Saturday and it’s been like that for a couple of weeks… out of 30 days I get to use Claude 12.”
The Anthropic forum on Reddit is buzzing with complaints. “I used up Max 5 in 1 hour of working, before I could work 8 hours,” said one developer today. The Max 5 plan costs $100 per month.
There are several possible factors in the change. Last week, Anthropic said it was reducing quotas during peak hours, a change that engineer Thariq Shihipar said would affect around 7 percent of users, while also claiming that “we’ve landed a lot of efficiency wins to offset this.”
March 28 was also the last day of a Claude promotion that doubled usage limits outside a six-hour peak window.
A third factor is that Claude Code may have bugs that increase token usage. A user claimed that after reverse engineering the Claude Code binary, they “found two independent bugs that cause prompt cache to break, silently inflating costs by 10-20x.” Some users confirmed that downgrading to an older version helped. “Downgrading to 2.1.34 made a very noticeable difference,” said one.
The documentation on prompt caching says that the cache “significantly reduces processing time and costs for repetitive tasks or prompts with consistent elements.” That said, the cache has only a five-minute lifetime, which means stopping for a short break, or not using Claude Code for a few minutes, results in higher costs on resumption.
Developers can upgrade the cache lifetime to one hour but “1-hour cache write tokens are 2 times the base input tokens price,” the documentation states. A cache read token is 0.1 times the base price, so this is a key area for optimization.
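The arithmetic behind those multipliers shows why a broken cache is so expensive. The sketch below uses the quoted ratios (cache read at 0.1x the base input price; a cache miss means re-sending context at full price); the base price itself is a placeholder, not an actual Anthropic figure.

```python
BASE = 3.00  # hypothetical $ per million input tokens, not a real price

def turn_cost(context_tokens: int, cache_hit: bool) -> float:
    """Dollar cost of sending `context_tokens` of context on one turn."""
    millions = context_tokens / 1_000_000
    # Cache read tokens cost 0.1x base; a miss re-sends at full price.
    return millions * BASE * (0.1 if cache_hit else 1.0)

context = 150_000  # a large system prompt plus conversation history
hit, miss = turn_cost(context, True), turn_cost(context, False)
print(f"cache hit:  ${hit:.3f} per turn")
print(f"cache miss: ${miss:.3f} per turn ({miss / hit:.0f}x)")
```

A cache that silently breaks turns every turn into a full-price re-send, a 10x markup on the context alone, which is consistent with the “10-20x” inflation users attributed to the caching bugs.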
Anthropic does not state the exact usage limits for its plans. For example, the Pro plan promises only “at least five times the usage per session compared to our free service.” The Standard Team plan promises “1.25x more usage per session than the Pro plan.” This makes it hard for developers to know what their usage limits are, other than by examining their dashboard showing how much quota they have consumed.
Problems like this are not unusual. Earlier this month, users of Google Antigravity were protesting about similar issues.
Bugs aside, what we are seeing is an implicit negotiation between users and providers over what is an acceptable pricing and usage model for AI development. Users want to control costs and providers need to make a profit. There is also a disconnect between vendor marketing that urges developers to insert AI into every process, including in some cases automated workflows, and a quota system that can cause AI tools to stop responding.
“For folks running Claude Code in automated workflows: rate-limit errors need to be caught explicitly — they look like generic failures and will silently trigger retries. One session in a loop can drain your daily budget in minutes,” observed one user. ®
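The pitfall described in that comment can be sketched as a wrapper that refuses to retry on quota exhaustion. The `"rate limit"` substring match is an assumption about what the tool writes to stderr, not documented Claude Code behavior; check your tool’s actual error output before relying on it.

```python
import subprocess
import time

MAX_RETRIES = 3

def run_with_limit_check(cmd: list[str]):
    """Run a CLI command, retrying transient failures with backoff,
    but aborting immediately on an apparent rate-limit error."""
    for attempt in range(MAX_RETRIES):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result
        # Assumed error marker: adjust to the tool's real stderr output.
        if "rate limit" in result.stderr.lower():
            # Quota exhaustion is not transient: retrying only burns budget.
            raise RuntimeError("rate-limited; aborting instead of retrying")
        time.sleep(2 ** attempt)  # plausibly transient: back off and retry
    raise RuntimeError(f"failed after {MAX_RETRIES} attempts")
```

The point is the asymmetry: transient failures get the usual exponential backoff, while a rate-limit error stops the loop so one stuck job can’t drain a daily budget.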
...
Read the original on www.theregister.com »