10 interesting stories served every morning and every evening.




1 1,220 shares, 68 trendiness

fake tools, frustration regexes, undercover mode, and more

Update: see HN discussions about this post: https://news.ycombinator.com/item?id=47586778

I use Claude Code daily, so when Chaofan Shou noticed earlier today that Anthropic had shipped a .map file alongside their Claude Code npm package, one containing the full, readable source code of the CLI tool, I immediately wanted to look inside. The package has since been pulled, but not before the code was widely mirrored (including by me) and picked apart on Hacker News.

This is Anthropic's second accidental exposure in a week (the model spec leak was just days ago), and some people on Twitter are starting to wonder if someone inside is doing this on purpose. Probably not, but it's a bad look either way. The timing is hard to ignore: just ten days ago, Anthropic sent legal threats to OpenCode, forcing them to remove built-in Claude authentication because third-party tools were using Claude Code's internal APIs to access Opus at subscription rates instead of pay-per-token pricing. That whole saga makes some of the findings below more pointed.

So I spent my morning reading through the HN comments and the leaked source. Here's what I found, roughly ordered by how "spicy" I thought it was.

In claude.ts (lines 301-313), there's a flag called ANTI_DISTILLATION_CC. When enabled, Claude Code sends anti_distillation: ['fake_tools'] in its API requests. This tells the server to silently inject decoy tool definitions into the system prompt.

The idea: if someone is recording Claude Code's API traffic to train a competing model, the fake tools pollute that training data. It's gated behind a GrowthBook feature flag (tengu_anti_distill_fake_tool_injection) and only active for first-party CLI sessions.

This was one of the first things people noticed on HN.

There's also a second anti-distillation mechanism in betas.ts (lines 279-298): server-side connector-text summarization. When enabled, the API buffers the assistant's text between tool calls, summarizes it, and returns the summary with a cryptographic signature. On subsequent turns, the original text can be restored from the signature. If you're recording API traffic, you only get the summaries, not the full reasoning chain.

How hard would it be to work around these? Not very. Looking at the activation logic in claude.ts, the fake-tools injection requires all four conditions to be true: the ANTI_DISTILLATION_CC compile-time flag, the cli entrypoint, a first-party API provider, and the tengu_anti_distill_fake_tool_injection GrowthBook flag returning true. A MITM proxy that strips the anti_distillation field from request bodies before they reach the API would bypass it entirely, since the injection is server-side and opt-in. The shouldIncludeFirstPartyOnlyBetas() function also checks for CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS, so setting that env var to a truthy value disables the whole thing. And if you're using a third-party API provider or the SDK entrypoint instead of the CLI, the check never fires at all. The connector-text summarization is even more narrowly scoped: it's Anthropic-internal-only (USER_TYPE === 'ant'), so external users won't encounter it regardless.
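Pieced together from that description, the gate looks roughly like this. The flag and function names come from the leak; the wiring is my paraphrase of the described logic, not the literal source:

```typescript
// Sketch of the fake-tools activation check, reconstructed from the
// leaked claude.ts logic described above. Names come from the leak;
// the structure is a paraphrase, not the actual implementation.

interface SessionContext {
  antiDistillationCC: boolean;            // compile-time flag (ANTI_DISTILLATION_CC)
  entrypoint: "cli" | "sdk";              // only the CLI entrypoint qualifies
  provider: "first-party" | "bedrock" | "vertex";
  growthBookFlags: Record<string, boolean>;
  env: Record<string, string | undefined>;
}

// Mirrors shouldIncludeFirstPartyOnlyBetas(): the env var is an escape hatch.
function firstPartyBetasEnabled(ctx: SessionContext): boolean {
  const disabled = ctx.env["CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS"];
  if (disabled && disabled !== "0" && disabled !== "false") return false;
  return ctx.provider === "first-party";
}

// All four conditions must hold before anti_distillation: ['fake_tools']
// is added to the outgoing request body.
function shouldInjectFakeTools(ctx: SessionContext): boolean {
  return (
    ctx.antiDistillationCC &&
    ctx.entrypoint === "cli" &&
    firstPartyBetasEnabled(ctx) &&
    ctx.growthBookFlags["tengu_anti_distill_fake_tool_injection"] === true
  );
}
```

Failing any one of these checks (or stripping the field in a proxy) is enough to opt out, which is why the protection reads as more deterrent than defense.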

Anyone serious about distilling from Claude Code traffic would find the workarounds in about an hour of reading the source. The real protection is probably legal, not technical.

The file undercover.ts (about 90 lines) implements a mode that strips all traces of Anthropic internals when Claude Code is used in non-internal repos. It instructs the model to never mention internal codenames like "Capybara" or "Tengu," internal Slack channels, repo names, or the phrase "Claude Code" itself.

"There is NO force-OFF. This guards against model codename leaks."

You can force it ON with CLAUDE_CODE_UNDERCOVER=1, but there's no way to force it off. In external builds, the entire function gets dead-code-eliminated to trivial returns. This is a one-way door.

This means AI-authored commits and PRs from Anthropic employees in open source projects will have no indication that an AI wrote them. Hiding internal codenames is reasonable. Having the AI actively pretend to be human is a different thing.

An LLM company using regexes for sentiment analysis is peak irony, but also: a regex is faster and cheaper than an LLM inference call just to check whether someone is swearing at your tool.
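The leak doesn't spell out the actual patterns, so the ones below are hypothetical stand-ins; the point is just how cheap this check is compared to an inference call:

```typescript
// Hypothetical sketch of a regex-based frustration check. These patterns
// are illustrative placeholders, not the leaked ones. A handful of
// precompiled regexes costs microseconds per message, versus a full
// model call just to detect that the user is annoyed.
const FRUSTRATION_PATTERNS: RegExp[] = [
  /\b(wtf|ffs|ugh|argh)\b/i,                          // interjections
  /\b(stupid|useless|garbage|broken)\b.*\b(tool|cli|claude)\b/i,
  /(!{3,}|\?{3,})/,                                   // "why???", "no!!!"
  /\bstop\b.*\bdoing that\b/i,
];

function looksFrustrated(message: string): boolean {
  return FRUSTRATION_PATTERNS.some((re) => re.test(message));
}
```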

In system.ts (lines 59-95), API requests include a cch=00000 placeholder. Before the request leaves the process, Bun's native HTTP stack (written in Zig) overwrites those five zeros with a computed hash. The server then validates the hash to confirm the request came from a real Claude Code binary, not a spoofed one.

They use a placeholder of the same length so the replacement doesn't change the Content-Length header or require buffer reallocation. The computation happens below the JavaScript runtime, so it's invisible to anything running in the JS layer. It's basically DRM for API calls, implemented at the HTTP transport level.
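The same-length trick is simple to illustrate. In the real client the overwrite happens in Zig inside Bun's HTTP stack, below JS; this sketch only shows why a five-character placeholder makes the swap free (the actual hash function is unknown, so a caller-supplied value stands in):

```typescript
// Sketch: overwrite the "cch=00000" placeholder with a 5-character hash
// without changing the byte length of the serialized request, so the
// Content-Length header stays valid and no buffer reallocation is needed.
// The real hash is computed in native code; here it's just a parameter.
function stampAttestation(serializedRequest: string, hash5: string): string {
  if (hash5.length !== 5) {
    throw new Error("attestation hash must be exactly 5 characters");
  }
  return serializedRequest.replace("cch=00000", `cch=${hash5}`);
}
```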

This is the technical enforcement behind the OpenCode legal fight. Anthropic doesn't just ask third-party tools not to use their APIs; the binary itself cryptographically proves it's the real Claude Code client. If you're wondering why the OpenCode community had to resort to session-stitching hacks and auth plugins after Anthropic's legal notice, this is why.

The attestation isn't airtight, though. The whole mechanism is gated behind a compile-time feature flag (NATIVE_CLIENT_ATTESTATION), and the cch=00000 placeholder only gets injected into the x-anthropic-billing-header when that flag is on. The header itself can be disabled entirely by setting CLAUDE_CODE_ATTRIBUTION_HEADER to a falsy value, or remotely via a GrowthBook killswitch (tengu_attribution_header). The Zig-level hash replacement also only works inside the official Bun binary. If you rebuilt the JS bundle and ran it on stock Bun (or Node), the placeholder would survive as-is: five literal zeros hitting the server. Whether the server rejects that outright or just logs it is an open question, but the code comment references a server-side _parse_cc_header function that "tolerates unknown extra fields," which suggests the validation might be more forgiving than you'd expect for a DRM-like system. Not a push-button bypass, but not the kind of thing that would stop a determined third-party client for long either.

"BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally."

The fix? MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3. After 3 consecutive failures, compaction is disabled for the rest of the session. Three lines of code to stop burning a quarter million API calls a day.
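A sketch of the pattern (my reconstruction of the described behavior, not the leaked code):

```typescript
// Circuit breaker for auto-compaction: after three consecutive failures,
// give up for the rest of the session instead of retrying forever and
// burning API calls. Reconstruction of the described fix, not the source.
const MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3;

class CompactionGuard {
  private consecutiveFailures = 0;
  private disabled = false;

  recordFailure(): void {
    this.consecutiveFailures++;
    if (this.consecutiveFailures >= MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES) {
      this.disabled = true; // stays off until the session ends
    }
  }

  recordSuccess(): void {
    this.consecutiveFailures = 0; // only *consecutive* failures count
  }

  shouldAttemptCompaction(): boolean {
    return !this.disabled;
  }
}
```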

Throughout the codebase, there are references to a feature-gated mode called KAIROS. Based on the code paths in main.tsx, it looks like an unreleased autonomous agent mode that includes:

This is probably the biggest product roadmap reveal from the leak.

The implementation is heavily gated, so who knows how far along it is. But the scaffolding for an always-on, background-running agent is there.

Tomorrow is April 1st, and the source contains what's almost certainly this year's April Fools' joke: buddy/companion.ts implements a Tamagotchi-style companion system. Every user gets a deterministic creature (18 species, rarity tiers from common to legendary, 1% shiny chance, RPG stats like DEBUGGING and SNARK) generated from their user ID via a Mulberry32 PRNG. Species names are encoded with String.fromCharCode() to dodge build-system grep checks.
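Mulberry32 is a well-known public-domain 32-bit PRNG, so the deterministic-creature mechanism is easy to reconstruct. Everything below except the PRNG itself is illustrative: the leak doesn't say how the user ID is hashed into a seed, and the species names here are placeholders:

```typescript
// Mulberry32: a standard, tiny 32-bit PRNG. Same seed, same sequence,
// which is what makes the companion deterministic per user.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), a | 1);
    t = (t + Math.imul(t ^ (t >>> 7), t | 61)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// FNV-1a string hash to turn a user ID into a 32-bit seed. This step is
// an assumption; the leak doesn't say how the ID is seeded.
function seedFromUserId(userId: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < userId.length; i++) {
    h ^= userId.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

// Placeholder species list; the real one has 18 entries with names
// obfuscated via String.fromCharCode().
const SPECIES = Array.from({ length: 18 }, (_, i) => `species_${i}`);

function creatureFor(userId: string) {
  const rng = mulberry32(seedFromUserId(userId));
  return {
    species: SPECIES[Math.floor(rng() * SPECIES.length)],
    shiny: rng() < 0.01, // 1% shiny chance, as described in the leak
  };
}
```

Same user ID means same seed and same creature on every machine, with no server round trip needed.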

The terminal rendering in ink/screen.ts and ink/optimizer.ts borrows game-engine techniques: an Int32Array-backed ASCII char pool, bitmask-encoded style metadata, a patch optimizer that merges cursor moves and cancels hide/show pairs, and a self-evicting line-width cache (the source claims "~50x reduction in stringWidth calls during token streaming"). Seems like overkill until you remember these things stream tokens one at a time.
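Bitmask-encoded style metadata is a standard game-engine trick: pack a cell's character and styling into one integer so an entire screen fits in a flat Int32Array and diffing becomes integer comparison. The field layout below is my guess at the idea, not the leaked format:

```typescript
// Pack an ASCII char code plus style bits into one 32-bit integer:
// bits 0-7: char code, bits 8-15: fg color index, bit 16: bold,
// bit 17: italic. Layout is illustrative; the real optimizer's exact
// encoding is unknown.
const BOLD = 1 << 16;
const ITALIC = 1 << 17;

function packCell(charCode: number, fg: number, bold: boolean, italic: boolean): number {
  return (charCode & 0xff) | ((fg & 0xff) << 8) | (bold ? BOLD : 0) | (italic ? ITALIC : 0);
}

function unpackCell(cell: number) {
  return {
    charCode: cell & 0xff,
    fg: (cell >> 8) & 0xff,
    bold: (cell & BOLD) !== 0,
    italic: (cell & ITALIC) !== 0,
  };
}
```

With this encoding, "did this cell change?" is a single integer compare against the previous frame's Int32Array, which is what makes per-token redraws cheap.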

Every bash command runs through 23 numbered security checks in bashSecurity.ts: 18 blocked Zsh builtins, defense against Zsh equals expansion (=curl bypassing permission checks for curl), unicode zero-width space injection, IFS null-byte injection, and a malformed-token bypass found during HackerOne review. I haven't seen another tool with a Zsh threat model this specific.
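The equals-expansion case is worth unpacking: in Zsh, a word that starts with = expands to the command's full path (=curl becomes /usr/bin/curl), so "=curl http://evil" slips past a matcher that looks for commands by name. A sketch of a guard against it (my reconstruction of the idea, not the leaked check):

```typescript
// In zsh, =cmd expands to the command's full path (=curl -> /usr/bin/curl),
// so a permission matcher keyed on the literal command name never sees
// "curl". Flag any whitespace-delimited token that starts with "=" followed
// by a command-like name. Reconstruction of the idea, not the leaked code.
function hasZshEqualsExpansion(command: string): boolean {
  return /(^|\s)=[A-Za-z_][\w.-]*/.test(command);
}
```

Note the pattern only fires on = at the start of a word, so ordinary assignments like a=b pass through.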

Prompt cache economics clearly drive a lot of the architecture. promptCacheBreakDetection.ts tracks 14 cache-break vectors, and there are "sticky latches" that prevent mode toggles from busting the cache. One function is annotated DANGEROUS_uncachedSystemPromptSection(). When you're paying for every token, cache invalidation stops being a computer science joke and becomes an accounting problem.
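A "sticky latch" here plausibly means a toggle that can flip on mid-session but never back off, because flipping back would change the cached system-prompt prefix and invalidate the cache. A minimal sketch, with the naming and shape my own:

```typescript
// Sticky latch sketch: a mode that contributes to the cached system-prompt
// prefix may turn on mid-session, but turn-off requests are ignored, so the
// prefix (and the server-side prompt cache keyed on it) stays stable for
// the rest of the session. Illustration of the concept, not the leaked code.
class StickyLatch {
  private on = false;

  request(value: boolean): boolean {
    // Turning on is allowed; turning off is swallowed to protect the cache.
    if (value) this.on = true;
    return this.on;
  }

  get value(): boolean {
    return this.on;
  }
}
```

The trade-off: a few extra prompt tokens for a mode the user turned back off, in exchange for never paying to re-warm the whole cached prefix.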

The multi-agent coordinator in coordinatorMode.ts is interesting because the orchestration algorithm is a prompt, not code. It manages worker agents through system prompt instructions like "Do not rubber-stamp weak work" and "You must understand findings before directing follow-up work. Never hand off understanding to another worker."

The codebase also has some rough spots. print.ts is 5,594 lines long, with a single function spanning 3,167 lines and 12 levels of nesting. They use Axios for HTTP, which is funny timing given that Axios was just compromised on npm with malicious versions dropping a remote access trojan.

Some people are downplaying this because Google's Gemini CLI and OpenAI's Codex are already open source. But those companies open-sourced their agent SDK (a toolkit), not the full internal wiring of their flagship product.

The real damage isn't the code. It's the feature flags. KAIROS, the anti-distillation mechanisms: these are product roadmap details that competitors can now see and react to. The code can be refactored. The strategic surprise can't be un-leaked.

And here's the kicker: Anthropic acquired Bun at the end of last year, and Claude Code is built on top of it. A Bun bug (oven-sh/bun#28001), filed on March 11, reports that source maps are served in production mode even though Bun's own docs say they should be disabled. The issue is still open. If that's what caused the leak, then Anthropic's own toolchain shipped a known bug that exposed their own product's source code.

As one Twitter reply put it: "accidentally shipping your source map to npm is the kind of mistake that sounds impossible until you remember that a significant portion of the codebase was probably written by the AI you are shipping."

...

Read the original on alex000kim.com »

2 876 shares, 3 trendiness

Oracle slashes 30,000 jobs with a cold 6 a.m. email

It was not a phone call. It was not a meet­ing. For thou­sands of Oracle em­ploy­ees across the globe, Tuesday morn­ing be­gan with a sin­gle email land­ing in their in­boxes just af­ter 6 a.m. EST — and by the time they fin­ished read­ing it, their ca­reers at one of the world’s largest tech­nol­ogy com­pa­nies were over.

Oracle has launched what analysts believe could be the most extensive layoff in the company's history, with estimates suggesting the cuts will affect between 20,000 and 30,000 employees — roughly 18% of its global workforce of approximately 162,000 people. Workers in the United States, India, and other regions all reported receiving the same termination notice at nearly the same hour, sent under the name "Oracle Leadership."

There was no heads-up from hu­man re­sources, no con­ver­sa­tion with a di­rect man­ager, and no ad­vance no­tice of any kind. Just an email.

The email that circulated widely after screenshots were posted by affected workers on Reddit's r/employeesOfOracle community and the professional forum Blind was brief and formulaic. It told employees that following a review of the company's current business needs, a decision had been made to eliminate their roles as part of a broader organizational change, that the day of the email was their final working day, and that a severance package would be made available after signing termination paperwork through DocuSign.

Employees were also in­structed to up­date their per­sonal email ad­dresses to re­ceive sub­se­quent com­mu­ni­ca­tions, in­clud­ing sep­a­ra­tion de­tails and an­swers to fre­quently asked ques­tions. For many, ac­cess to in­ter­nal pro­duc­tion sys­tems was re­voked al­most im­me­di­ately af­ter the mes­sage ar­rived.

Based on ac­counts shared across both Reddit and Blind, the cuts were wide­spread and, in some units, se­vere. Among the teams re­ported to be most af­fected:

RHS (Revenue and Health Sciences) — em­ploy­ees de­scribed a re­duc­tion in force of at least 30%, with 16 or more en­gi­neers from in­di­vid­ual busi­ness units cut in a sin­gle ac­tion.

SVOS (SaaS and Virtual Operations Services) — sim­i­larly re­ported a 30% or greater re­duc­tion, with man­ager-level roles in­cluded in the sweep.

At least one man­ager was con­firmed among those let go, and af­fected em­ploy­ees in India said the sev­er­ance struc­ture is ex­pected to fol­low a stan­dard for­mula based on years of ser­vice, paid out in months. Any un­vested re­stricted stock units, how­ever, were for­feited im­me­di­ately.

Workers who had vested stock were told they would re­tain ac­cess to those shares through Fidelity. Some em­ploy­ees noted April 3 as their for­mal last work­ing day, with a one-month gar­den leave pe­riod to fol­low. Separately, posts on Blind al­leged that Oracle had re­cently in­stalled mon­i­tor­ing soft­ware on com­pany-is­sued Mac lap­tops ca­pa­ble of log­ging all de­vice ac­tiv­ity, with warn­ings cir­cu­lat­ing among af­fected em­ploy­ees not to copy any files or code be­fore re­turn­ing their ma­chines.

The lay­offs are di­rectly tied to Oracle’s ag­gres­sive and debt-heavy ex­pan­sion into ar­ti­fi­cial in­tel­li­gence in­fra­struc­ture. According to analy­sis from TD Cowen, the job cuts are ex­pected to free up be­tween $8 bil­lion and $10 bil­lion in cash flow — money the com­pany ur­gently needs to fund a mas­sive build­out of AI data cen­ters.

The financial picture surrounding that expansion is striking. Oracle has taken on $58 billion in new debt within just two months. Its stock has lost more than half its value since reaching a peak in September 2025. Multiple U.S. banks have reportedly stepped back from financing some of its data center projects. All of this is happening even as the company posted a 95% jump in net income — reaching $6.13 billion — last quarter.

The con­trast un­der­scores the scale of the bet Oracle is mak­ing: record prof­its on one side, a mount­ing debt load and tens of thou­sands of elim­i­nated jobs on the other. For the work­ers who woke up Tuesday morn­ing to that 6 a.m. email, the com­pa­ny’s am­bi­tions of­fered lit­tle com­fort.

...

Read the original on rollingout.com »

3 546 shares, 83 trendiness

Claude Code Unpacked

Stuff that’s in the code but not shipped yet. Feature-flagged, env-gated, or just com­mented out.

A virtual pet that lives in your terminal. Species and rarity are derived from your account ID. Persistent mode with daily logs, memory consolidation between sessions, and autonomous background actions.

Long planning sessions on Opus-class models, up to 30-minute execution windows.

Control Claude Code from your phone or a browser. Full remote session with permission approvals.

Run sessions in the background with --bg, tmux.

Sessions talk to each other over Unix domain sockets.

Between sessions, the AI reviews what happened and organizes what it learned.

...

Read the original on ccunpacked.dev »

4 528 shares, 24 trendiness

Terms of Use

We've clarified when these Terms apply to certain Copilot services and experiences. We've revised our Code of Conduct to clarify how you can and can't use Copilot. We've rewritten and reorganized our Terms to be clearer and simpler.

IF YOU LIVE IN (OR YOUR PRINCIPAL PLACE OF BUSINESS IS IN) THE UNITED STATES, PLEASE READ THE BINDING ARBITRATION CLAUSE AND CLASS ACTION WAIVER IN SECTION 15 OF THE MICROSOFT SERVICES AGREEMENT. IT AFFECTS HOW DISPUTES RELATING TO THESE TERMS ARE RESOLVED.

Welcome to Copilot, your personal AI companion! These Terms explain how you can use Copilot. By using Copilot, you agree to these Terms. Please read them carefully before you start using Copilot.

These Terms apply to your use of "Copilot," which includes:

- The standalone Copilot apps on your computer or mobile device
- The Copilot service we offer at copilot.microsoft.com, copilot.com, and copilot.ai
- Conversations you have with Copilot through other Microsoft apps and websites
- Conversations you have with Copilot through third-party apps and platforms
- Other Copilot-branded apps and services that link to these Terms

These Terms don't apply to Microsoft 365 Copilot apps or services unless that specific app or service says that these Terms apply.

Certain words and phrases we use in these Terms have a particular meaning:

- Words like "you", "your" and "yours" mean you, the person accessing and using Copilot.
- Words like "we", "us", and "our" mean Microsoft, the company that offers Copilot, as well as the related companies we own or control and the companies and people that work on our behalf.
- A "Prompt" is the content — text, audio, images, files, voice, or video — that you send to or share with Copilot.
- A "Response" is the content that Copilot sends to or shares with you. Some Responses might include "Creations" — original content or works of art that Copilot creates in response to your Prompts.
- "Your Content" means the Prompts and Responses that are part of your conversations with Copilot, but it doesn't include any content we separately own (like Xbox gaming clips, for example).
- "Actions" refers to the automated set of tasks that Copilot takes on your behalf at your request.
- "Services" is defined in the Microsoft Services Agreement. Copilot is a Service under that Agreement.

WHO CAN USE COPILOT

You need to be old enough to use Copilot — usually at least 13, but sometimes 18 or older, depending on your country's laws. Because laws vary by country, Copilot isn't available everywhere.

If you're under 18, or if you use Copilot without logging in, we might turn off or limit some features for legal or safety reasons. If we ask for your birthday and country when you sign up or log in, you must give us your real information.

Don't use tools or computer programs (like bots or scrapers) to access Copilot. You can only use Copilot for your own personal use.

HOW YOU USE COPILOT

Copilot is an AI-powered conversational service. Copilot will generate Responses to Prompts you submit and may also offer you Responses directly in your ongoing conversations or for things you have asked Copilot to remember.

Copilot tries to give you good answers, but it can make mistakes. Sometimes, the sources Copilot uses may not be reliable, relevant, or accurate, and sometimes, Copilot may give you wrong information. When responding, Copilot may use information it finds on the internet, and we don't control that content. You might see Responses that seem convincing but are incomplete, inaccurate, or inappropriate.

Always use your judgment and check the information you get from Copilot before you make decisions or act.
If you see something wrong or inappropriate from Copilot, use the Report or Feedback features in Copilot to let us know. If you have a legal concern about something Copilot says, please use the Report a Concern page to tell us.

Because of the way Copilot works, the Responses you get from Copilot may not be unique to you. Copilot may give the same or similar Responses and Creations to Microsoft, or to other people. Other people may send similar Prompts as yours, and they could get the same, similar, or different Responses and Creations.

By using Copilot, you're telling us that:

- You've read, understood, and agree to these Terms, and will abide by the Code of Conduct (below).
- You'll use Copilot only in lawful ways and in compliance with all applicable laws.
- You won't use Copilot to violate our or anyone else's rights.

When you use Copilot, you must follow the general Code of Conduct set out in the Microsoft Services Agreement. As applied to Copilot, this means:

- Don't use Copilot to harm yourself or others. Don't use Copilot to help harass, bully, abuse, threaten, or intimidate other people, or otherwise harm others. Don't use Copilot to help exploit others based on age, disability, or social or economic situations.
- Don't damage our ability to provide Copilot to you and others. Don't use bots or scrapers, and don't engage in technical attacks, excess usage, prompt-based manipulation, "jailbreaking", and other abuses.
- Don't violate the privacy of others. Don't use Copilot to help violate the privacy of others, including sharing their private information (e.g. "doxing"). Don't use Copilot to infer sensitive information about others, like a person's race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. Don't try to use Copilot for facial identification, to collect or process someone else's sensitive personal data, or to try to verify someone's identity. Don't share or capture images, video, audio, or other content that includes other people without their consent, and don't try to use Copilot to process someone else's biometric identifiers or information.
- Don't use Copilot to trick, lie to, or cheat others. Don't use Copilot to help mislead or deceive people. Don't use Copilot to create or share disinformation or content that will be used to impersonate, defraud, or deceive others.
- Don't infringe the rights of others. Don't use Copilot to infringe on other people's legal rights, including their intellectual property and publicity rights.
- Don't create or share inappropriate content or material. Don't use Copilot to create or share adult content, violence or gore, hateful content, terrorism and violent extremist content, glorification of violence or suicide, child sexual exploitation or abuse material, or content that is otherwise disturbing or offensive. Don't use Copilot to create or edit images, voice, or video of other people (e.g. "deepfakes") without their permission.
- Don't do anything illegal. Don't use Copilot to break the law, or to help or encourage others to break the law.

If you see something wrong or inappropriate from Copilot, use the Report or Feedback features in Copilot to let us know.
If you have a legal concern about something Copilot says, please use the Report a Concern page to tell us.

We may block, restrict, or remove your Prompts or other content from you that violates these Terms, or that could lead Copilot to create a Response that violates these Terms.

We may choose to limit or stop offering or supporting Copilot or any feature within Copilot at any time and for any reason.

Unless prohibited by law, we may limit, suspend, or permanently revoke your access to or use of Copilot (and potentially all other Services) in our sole discretion, at any time and without notice. Some of the reasons we might do this, for example, is if you breach these Terms or violate the Code of Conduct, if we suspect you're engaged in fraudulent or illegal activity, or if your Microsoft Account or the account you use to log in to Copilot is suspended or closed. If you feel your access has been restricted by mistake, you may ask us to reevaluate our decision by submitting a request using the Report a Concern form outlining what you think we got wrong and why.

Depending on your location and other factors, we may offer you the opportunity to browse, shop and buy certain products through Copilot. If you use Copilot to buy something, it's sold and shipped by a third party ("Merchant"), not by us. We don't process payments for your purchases through Copilot.

Anything you buy with Copilot is subject to the Merchant's terms and conditions (including pricing, fees, and shipping, cancellation, and refund policies). You are responsible for reading and complying with the Merchant's terms that apply to your purchase, including how the Merchant collects and uses your personal information under its privacy policy.

We aren't responsible or liable for any dispute between you and the Merchant about your purchase. If you have any disputes or questions about any product you purchase through Copilot, you must address it directly with the Merchant. If you have disputes or questions about your payment for the product, you must address it with your payment issuer, bank, or wallet provider.

We collect, store, use, and share your personal information, including your payment information and purchases you make, in accordance with the Microsoft Privacy Statement. You authorize each Merchant to share with us information about you and your purchase, and for us to send information (including your personal information and transaction details) to the Merchant, the Merchant's payment processor, our payment processor, or other third party necessary to complete your purchase.

Copilot may include both automated and manual (human) processing of data. You shouldn't share any information with Copilot that you don't want us to review.

We plan to continue to develop and improve Copilot, but we make no guarantees or promises about how Copilot will operate or that it will operate as intended.

Sometimes, we may offer certain features or services as part of "Copilot Labs." These features and services are highly experimental and may not always work as intended. We may add, modify, or remove features or services from Copilot Labs at any time for any reason.

We may limit the speed or performance of Copilot as we think necessary.

When you request that Copilot take Actions on your behalf, you are solely responsible for those Actions and any results or consequences.

Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice.
Use Copilot at your own risk.

WITHOUT LIMITING SECTION 12 OF THE MICROSOFT SERVICES AGREEMENT IN ANY WAY, BUT FOR THE SAKE OF CLARITY, WE DO NOT MAKE ANY WARRANTY OR REPRESENTATION OF ANY KIND ABOUT COPILOT. For example, we can't promise that any Copilot's Responses won't infringe someone else's rights (like their copyrights, trademarks, or rights of privacy) or defame them. You are solely responsible if you choose to publish or share Copilot's Responses publicly or with any other person.

You agree to indemnify us and hold us harmless (including our affiliates, employees and any other agents) from and against any claims, losses, and expenses (including attorneys' fees) arising from or relating to your use of Copilot, including without limitation your use, sharing, or publication of any Prompt, Responses, or Creations, or your breach of these Terms or violation of applicable law.

You may stop using Copilot at any time. If you want to close your Microsoft Account, please see the Microsoft Services Agreement.

We don't own Your Content, but we may use Your Content to operate Copilot and improve it. By using Copilot, you grant us permission to use Your Content, which means we can copy, distribute, transmit, publicly display, publicly perform, edit, translate, and reformat it, and we can give those same rights to others who work on our behalf.

We get to decide whether to use Your Content, and we don't have to pay you, ask your permission, or tell you when we do. But that doesn't mean we can use it however we want. The Microsoft Privacy Statement explains how we use Your Content, and the privacy options in Copilot give you control over some of those uses.

We can decide to remove or stop using Your Content at any time for any reason. By sharing Your Content with Copilot, you promise us that you have all rights to Your Content and that if we use Your Content, we won't be violating someone else's rights.

Although our Terms grant you permission to use Copilot, we are not granting you any rights in the underlying technology, intellectual property, or data that makes up Copilot.

By agreeing to these Terms, you're also agreeing to the Microsoft Services Agreement, a legal agreement between you and us that applies to your use of our Services (including Copilot). If you have a Microsoft account, you already agreed to the Microsoft Services Agreement when you first created a Microsoft account.

Even if you don't have a Microsoft Account — for example, if you're using Copilot without logging in, or if you log in to Copilot using a non-Microsoft account — you're still agreeing to the Microsoft Services Agreement by using Copilot. Please make sure you review it carefully.

If you use Copilot to create images, you're also agreeing to the Image Creator Terms. If you use Gaming Copilot or other AI-powered experiences provided in connection with any Xbox Services, you are also subject to the Xbox Community Standards.

Copilot may be integrated into other products and services we separately license to you. For example, Microsoft 365 Family or Microsoft 365 Personal subscriptions are separately licensed under the terms at https://www.microsoft.com/useterms. If any of the language in those other agreements conflicts with the language in these Terms, the language in these Terms controls.

When you use Copilot, you are subject to the Microsoft Privacy Statement, which describes how we collect, use, and share information relating to your use of Copilot.

From time to time, we might need to update these Terms for different reasons. Some of those reasons might include adding new features, complying with changing laws, addressing security, safety, or fraud issues, or making our Terms clearer and easier to understand. There may be rare circumstances where we need to update these Terms immediately. Otherwise, we'll post the updated Terms to this page at least 30 days before they take effect. We'll also include the date the terms take effect at the top of the page, so you can easily tell when we've made an update.

If you keep using Copilot after the updates take effect, you're agreeing to those updates. If you don't agree to the updates, you must stop using Copilot.

...

Read the original on www.microsoft.com »

5 470 shares, 25 trendiness

Historical GitHub Uptime Charts

...

Read the original on damrnelson.github.io »

6 464 shares, 22 trendiness

OpenAI closes record-breaking $122 billion funding round as anticipation builds for IPO

OpenAI on Tuesday an­nounced that it closed a record-break­ing fund­ing round at a post-money val­u­a­tion of $852 bil­lion.

The round to­taled $122 bil­lion of com­mit­ted cap­i­tal, up from the $110 bil­lion fig­ure that the com­pany an­nounced in February. SoftBank co-led the round along­side other in­vestors, in­clud­ing Andreessen Horowitz and D. E. Shaw Ventures, OpenAI said.

OpenAI kick­started the ar­ti­fi­cial in­tel­li­gence boom with the launch of its ChatGPT chat­bot in 2022, and the com­pany has since bal­looned into one of the fastest-grow­ing com­mer­cial en­ti­ties on the planet. As of March, ChatGPT sup­ports more than 900 mil­lion weekly ac­tive users, in­clud­ing more than 50 mil­lion sub­scribers.

“AI is driving productivity gains, accelerating scientific discovery, and expanding what people and organizations can build,” OpenAI said in a release. “This funding gives us the resources to continue to lead at the scale this moment demands.”

With the close of its lat­est fund­ing round, OpenAI CEO Sam Altman will be un­der pres­sure to jus­tify his com­pa­ny’s mas­sive val­u­a­tion, es­pe­cially as it gears up for a po­ten­tial IPO. The startup has been re­treat­ing from some hefty spend­ing plans and shut­ter­ing cer­tain fea­tures and prod­ucts in re­cent months, in­clud­ing its short-form video app Sora, as it looks to rein in costs.

...

Read the original on www.cnbc.com »

7 447 shares, 22 trendiness

OkCupid gave 3 million dating-app photos to facial recognition firm, FTC says

OkCupid and Match set­tle with Trump FTC, don’t have to pay any fi­nan­cial penalty.

OkCupid and its owner Match Group reached a set­tle­ment with the Trump ad­min­is­tra­tion for not telling dat­ing-app cus­tomers that nearly 3 mil­lion user pho­tos were shared with a com­pany mak­ing a fa­cial recog­ni­tion sys­tem. OkCupid also gave the fa­cial recog­ni­tion firm ac­cess to user lo­ca­tion in­for­ma­tion and other de­tails with­out cus­tomers’ con­sent, the Federal Trade Commission said.

OkCupid and Match do not have to pay a fi­nan­cial penalty in a deal made with the FTC over an in­ci­dent from 2014. OkCupid and Match did not ad­mit or deny the al­le­ga­tions but agreed to a per­ma­nent pro­hi­bi­tion bar­ring them from mis­rep­re­sent­ing how they use and share per­sonal data, the FTC said yes­ter­day.

The FTC has been run en­tirely by Republicans since President Trump fired both Democratic com­mis­sion­ers. The pro­posed set­tle­ment re­quires ap­proval from a judge and was sub­mit­ted in US District Court for the Northern District of Texas.

The dating-site company said it’s pleased to settle the matter without paying any fine. “While we do not admit any wrongdoing, we have settled this matter with the FTC with no monetary penalty to resolve an issue from 2014 and move forward,” an OkCupid spokesperson said in a statement provided to Ars today. “The alleged conduct at issue does not reflect how OkCupid operates today. Over the years, we have further strengthened our privacy practices and data governance to ensure we meet the expectations of our users.”

Although a recent court ruling imposes limits on the FTC’s enforcement powers, that ruling applied only to the FTC’s in-house administrative process. The FTC can still pursue deceptive advertising claims in courts and seek financial penalties through court orders or settlements.

FTC: OkCupid im­posed no re­stric­tions on data use

The FTC criticized Match and OkCupid for sharing OkCupid data with Clarifai, an AI company that offers facial recognition technology. Clarifai’s website says it offers AI services to “military, civilian, intelligence, and government” customers and to private-sector companies in various industries.

The FTC said that OkCupid provided the third party with access to nearly three million OkCupid user photos as well as location and other information “without placing any formal or contractual restrictions on how the information could be used.” OkCupid “did not inform consumers or give them the chance to opt out of such sharing,” the FTC said.

The FTC said the data-sharing violated the OkCupid privacy policy, which told consumers that it “doesn’t share your personal information with others except as indicated in this Privacy Policy or when we inform you and give you an opportunity to opt out of having your personal information shared.”

The FTC alleged that since September 2014, Match and OkCupid took “extensive steps to conceal—including through trying to obstruct the FTC’s investigation—and deny that OkCupid shared users’ personal information with the data recipient. For example, when a news story revealed that the third party had obtained large OkCupid datasets, OkCupid claimed to the media and OkCupid users that it was not involved with the third party.”

The data-shar­ing arrange­ment was de­scribed in a 2019 ar­ti­cle by The New York Times.

Clarifai founder and CEO Matt Zeiler said his company had built “a face database with images from OkCupid,” and used the images from OkCupid to build a service that could “identify the age, sex and race of detected faces,” according to the Times’ 2019 article.

“An OkCupid spokeswoman said Clarifai contacted the company in 2014 about collaborating ‘to determine if they could build unbiased AI and facial recognition technology’ and that the dating site ‘did not enter into any commercial agreement then and ha[s] no relationship with them now.’ She did not address whether Clarifai had gained access to OkCupid’s photos without its consent,” the Times wrote.

But even if they had no “commercial agreement,” Zeiler told the Times that his company gained access to user photos because some of OkCupid’s founders invested in Clarifai, the 2019 article said. “Clarifai used the images from OkCupid to build a service that could identify the age, sex and race of detected faces, Mr. Zeiler said,” according to the article, which added that Mr. Zeiler said Clarifai would sell its facial recognition technology to foreign governments, military operations and police departments “provided the circumstances were right.”

The FTC said in a complaint yesterday that OkCupid, which was purchased by Match.com in 2011, made “false and misleading claims” about how it used customer data. The complaint makes references to Humor Rainbow, the name of the company that created OkCupid.

“When OkCupid users inquired about OkCupid and the Data Recipient, Humor Rainbow reiterated its lack of involvement with the Data Recipient. Humor Rainbow stated that ‘any implication that OkCupid released users’ information to [the Data Recipient] is false,’” the FTC complaint said.

The FTC com­plaint de­scribed how the data-shar­ing arrange­ment was made:

In September 2014, the CEO of Clarifai, Inc. e-mailed one of OkCupid’s founders requesting that Humor Rainbow give Clarifai, Inc. (i.e., the Data Recipient) access to large datasets of OkCupid photos. Despite not having any business relationship with Humor Rainbow, the Data Recipient sought Humor Rainbow’s assistance because each of OkCupid’s founders, including Humor Rainbow’s President and Match Group, LLC’s CEO, were financially invested in the Data Recipient.

In re­sponse to this re­quest, Humor Rainbow gave the Data Recipient ac­cess to nearly three mil­lion OkCupid user pho­tos. Humor Rainbow’s President and Chief Technology Officer were di­rectly in­volved in fa­cil­i­tat­ing the data trans­fer. In ad­di­tion to user pho­tos, Humor Rainbow shared other per­sonal data with the Data Recipient, in­clud­ing each user’s de­mo­graphic and lo­ca­tion in­for­ma­tion.

Humor Rainbow never ex­e­cuted a for­mal agree­ment or set forth re­stric­tions gov­ern­ing the Data Recipient’s ac­cess to, or use of, the OkCupid user data. The Data Recipient did not pay for the data and never pro­vided any ser­vices to Humor Rainbow or on be­half of OkCupid.

The FTC said that un­der the pro­posed set­tle­ment:

OkCupid and Match are per­ma­nently pro­hib­ited from mis­rep­re­sent­ing or as­sist­ing oth­ers in mis­rep­re­sent­ing: The ex­tent to which the com­pa­nies col­lect, main­tain, use, dis­close, delete or pro­tect any per­sonal in­for­ma­tion such as pho­tos and de­mo­graphic and ge­olo­ca­tion data; The pur­pose for which they col­lect, main­tain, use or dis­close such per­sonal data; and the func­tion of pri­vacy con­trols they pro­vide con­sumers through user in­ter­faces, any con­sumer choices af­forded to con­sumers un­der ap­plic­a­ble state pri­vacy laws, or any other mech­a­nisms the com­pa­nies of­fer con­sumers to limit or man­age the pro­cess­ing of per­sonal data.

The FTC said its investigation involved the “successful enforcement in federal court” of a civil investigative demand that “required OkCupid to turn over information requested by the agency.” Although the FTC merely required OkCupid and Match to be honest with users about data practices and did not extract a financial penalty, the agency talked tough about the enforcement action in its press release.

“The FTC enforces the privacy promises that companies make,” said Christopher Mufarrige, director of the FTC’s Bureau of Consumer Protection. “We will investigate, and where appropriate, take action against companies that promise to safeguard your data but fail to follow through—even if that means we have to enforce our Civil Investigative Demands in court.”



...

Read the original on arstechnica.com »

8 368 shares, 25 trendiness

A Dot a Day Keeps the Clutter Away — Scott Lawson

Walk into my lab and the first thing you’ll no­tice is the dots. The walls are lined with clear boxes, each one la­beled, dated, and cov­ered in dot stick­ers. Some boxes are buried in dots of every color. Others have a few. Others are bare. You don’t know what they mean yet, but you can see the pat­tern. That’s the sys­tem. It costs three dol­lars, has no soft­ware, and I’ve been us­ing it for four years.

I’ve been col­lect­ing elec­tronic com­po­nents since uni­ver­sity in 2011. Resistors, ca­pac­i­tors, mi­cro­con­trollers, mo­tors, dri­vers, DC-DC con­vert­ers, dis­plays, am­pli­fiers, ser­vos, LEDs, con­nec­tors. The usual tra­jec­tory of some­one who keeps find­ing new pro­jects. At first, my col­lec­tion was small. A few tool­boxes held every­thing. Then I grad­u­ated, kicked it into high gear, and by 2017 the col­lec­tion had out­grown every con­tainer I owned.

I was stuck in an awk­ward mid­dle ground. Too many parts for no sys­tem at all, but I was still one per­son. I did­n’t have the prob­lems that DigiKey or Mouser have, where they need bar­codes on every­thing and a vast com­put­er­ized in­ven­tory. I was look­ing for some­thing sim­ple that made sense for the scale I was work­ing at.

The first thing I did was get rid of every opaque con­tainer I owned. Every tool­box, every parts or­ga­nizer with lit­tle pock­ets, any­thing I could­n’t see through. I re­placed every­thing with stan­dard­ized 4L clear boxes from Superstore.

I learned this les­son early and it stuck: if I can’t see what’s in a box, I for­get it ex­ists. Clear boxes fixed that. I started sort­ing parts into cat­e­gories that emerged nat­u­rally over time. A box for ca­pac­i­tors, a box for re­sis­tors, a box for mo­tors, a box for LEDs.

The parts or­ga­niz­ers with in­di­vid­ual pock­ets were the first to go. They seem like a good idea when your col­lec­tion is small, but as you keep adding parts, the fixed com­part­ments be­come a prob­lem. Components out­grow the pock­ets, and even­tu­ally you run out of pock­ets. The whole or­ga­nizer be­comes a con­straint in­stead of solv­ing the prob­lem. Clear boxes don’t have this prob­lem and the sys­tem can scale by sim­ply buy­ing more boxes.

As I worked on pro­jects over months and years, I started to build an in­tu­ition about which boxes I was reach­ing for and which ones were col­lect­ing dust. My box of bat­ter­ies was al­ways on my desk. My box of fuses had­n’t been opened in my en­tire mem­ory. But it was just a feel­ing. I could­n’t quan­tify it. I could­n’t tell you whether I opened my LED box twenty times last year or five. My mem­ory is not good enough to track us­age pat­terns across years of dif­fer­ent pro­jects.

And mean­while, I had a con­stant in­flux of new parts. I’d work on an LED pro­ject, then move on to some­thing that needed pneu­matic com­po­nents, so I’d or­der pumps and fit­tings. Then I’d get in­ter­ested in piezo­electrics and or­der a bunch of piezos. Parts kept be­ing added to my col­lec­tion but my avail­able space did not in­crease.

As Kirchhoff’s cur­rent law states, the cur­rent into a node must equal the cur­rent out. If I kept ac­quir­ing parts at this pace with­out get­ting rid of any­thing, I would even­tu­ally drown. I needed a way to fig­ure out what was worth keep­ing and what should go, so the sys­tem can reach a steady state.

I con­sid­ered RFID tags, bar­code scan­ners, a spread­sheet. All of them felt like too much. Then I found the sim­plest pos­si­ble so­lu­tion on AliExpress for a few dol­lars.

I or­dered sheets of col­ored dot stick­ers. Six mil­lime­ters in di­am­e­ter. Hundreds of them for al­most noth­ing.

Every box al­ready had a la­bel on the front with its cat­e­gory and the date I cre­ated the box. The new rule was sim­ple: every time I open a box, I place one col­ored dot sticker near the la­bel. That’s it. Use the box, add a dot.

I quickly re­al­ized that on days when I’m deep in a pro­ject, I might open the same box five or ten times. Tracking every sin­gle open­ing would be noise. So I re­fined the rule: one dot per box per day. If I open my LED box ten times on a Tuesday, it still gets one dot. What I ac­tu­ally care about is how many days per year I use a box.

Then, be­cause I had all of these dif­fer­ent col­ors, I de­cided to as­sign one color per year. I have over ten col­ors, so the sys­tem works for at least a decade. A piece of pa­per in my tech­ni­cal ref­er­ence binder maps each color to its year so I never for­get.

That’s the en­tire sys­tem. Sticker sheets cost a few dol­lars, and there is no data­base, no server, and no app. The sys­tem that works is the one sim­ple enough to do every day for four years.
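The bookkeeping behind the dots is simple enough to fit in a few lines. Here is a minimal sketch in Python of the two rules described above (one dot per box per day, one color per year); the box names and the color-to-year mapping are made up for illustration:

```python
from datetime import date

# Hypothetical color-per-year mapping, like the paper sheet in the binder.
YEAR_COLORS = {2021: "red", 2022: "blue", 2023: "green", 2024: "yellow"}

dots = set()  # {(box, day)} -- a set enforces "one dot per box per day"

def open_box(box: str, day: date) -> None:
    """Record a box opening; repeat openings on the same day are no-ops."""
    dots.add((box, day))

def dot_color(day: date) -> str:
    """The color of the dot applied on a given day."""
    return YEAR_COLORS[day.year]

def days_used(box: str) -> int:
    """How many distinct days a box was used -- the metric the dots track."""
    return sum(1 for b, _ in dots if b == box)

# Opening the LED box ten times on one day still yields a single dot.
for _ in range(10):
    open_box("LEDs", date(2023, 5, 2))
open_box("LEDs", date(2023, 5, 3))
open_box("fuses", date(2021, 1, 15))
```

The set makes the daily deduplication automatic, which is exactly what the "one dot per box per day" refinement does on paper.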

I won­dered at first whether I’d ac­tu­ally keep up with it. Would I for­get? Would it be an­noy­ing to find a sticker sheet every time I opened a box?

Both prob­lems solved them­selves. I keep sheets of stick­ers in mul­ti­ple lo­ca­tions around the lab, so I’m al­ways within ar­m’s reach of one. Applying a dot is mus­cle mem­ory at this point. And for­get­ting turns out to be hard, be­cause the dots are their own re­minder. Even if the box I just opened has no dots, the neigh­bor­ing boxes are cov­ered in them. The vi­sual prompt is every­where.

Visitors al­ways ask about the dots as they’re im­pos­si­ble to miss. When I ex­plain the sys­tem and show how I add a dot when­ever I use a box, there’s usu­ally a pause, and then it clicks. A sin­gle dot­ted box does­n’t mean much on its own. It’s see­ing a whole shelf of them, some cov­ered and some bare, that makes it ob­vi­ous this is a sys­tem.

After four years, the data is hard to ar­gue with. Walk into my lab and you can read the shelves like a dash­board. Some boxes are cov­ered in dots of every color, used year af­ter year, pro­ject af­ter pro­ject. Others have a clus­ter of one color from a sin­gle pro­ject and noth­ing since. Others are com­pletely bare.

The biggest surprise was which parts turned out to be essential. It wasn’t sensors, even though I had many different kinds, it wasn’t specialized components or “cool” things. The most-dotted boxes are:

Glue. Tape. Stickers. General-purpose con­nec­tors. Batteries. Magnets. LEDs. DC-DC power con­vert­ers. USB-C to bar­rel jack ca­bles. Capacitors. Resistors. Mechanical tools like files, drill bits, and cut­ters. Calipers. SD cards and USB dri­ves. Rubber feet. Fasteners.

In ret­ro­spect, it makes a lot of sense. All of these things are cross-cut­ting con­cerns. Power com­po­nents like bat­ter­ies, DC-DC con­vert­ers, and USB-C ca­bles ap­pear in nearly every pro­ject. Connection com­po­nents like glue, tape, mag­nets, fas­ten­ers, and gen­eral-pur­pose con­nec­tors bridge dif­fer­ent sys­tems to­gether. Rubber feet show up when­ever any­thing needs to sit on a desk. These aren’t the ex­cit­ing parts. They’re the com­mon com­po­nents that nearly every pro­ject shares.

Even within a cat­e­gory, the dots re­veal pat­terns. My met­ric fas­tener boxes tell a clear story: M3 is by far the most used, with two boxes ded­i­cated to it. M6 is next be­cause I use it for op­ti­cal bread­boards. M2.5 barely gets dot­ted be­cause it’s spe­cial­ized for things like Raspberry Pi mount­ing holes.

Meanwhile, sen­sors barely got dot­ted. Fuses, piezo­elec­tric mod­ules, spe­cial­ized con­nec­tors: too ap­pli­ca­tion-spe­cific to be core. Discrete LCD mod­ules went un­used af­ter I started buy­ing mi­cro­con­trollers with in­te­grated dis­plays and but­tons. I use ca­pac­i­tors and re­sis­tors con­stantly, but in­duc­tors got used maybe twice in four years.

And then there were the tools I thought were es­sen­tial. My os­cil­lo­scope, func­tion gen­er­a­tor, and logic an­a­lyzer are com­monly rec­om­mended as must-have tools for any elec­tron­ics lab. Five dots on the os­cil­lo­scope in four years. I was gen­uinely sur­prised. I know for some peo­ple, in fields like RF, these tools are in­dis­pens­able. But in my work, they’re not. I would­n’t have had the con­fi­dence to say that with­out the data.

As I con­sol­i­dated boxes and in­tro­duced larger sizes, find­ing spe­cific parts in­side a box be­came frus­trat­ing. I went through three gen­er­a­tions of bags: zi­plock bags from the gro­cery store, then clear logo-free bags from AliExpress (which wrin­kled), then thick-walled clear bags that were more ex­pen­sive but worth it. If you’re start­ing from scratch, skip the first two and go straight to thick clear bags.

I started see­ing the whole sys­tem like a file sys­tem on a com­puter. Boxes are top-level di­rec­to­ries. Bags are sub­di­rec­to­ries. Parts are files. Bags can con­tain other bags. The Johnny Decimal sys­tem rec­om­mends no more than ten items per cat­e­gory. I don’t fol­low that rigidly, but I agree with the spirit: in­side a box, aim for roughly ten bags. Inside a bag, aim for roughly ten sub-bags max. When things get too crowded, sub­di­vide.

Every bag gets a hand­writ­ten la­bel with its con­tents and the cur­rent date. I put dates on every­thing. Time turns out to be a great uni­ver­sal or­ga­nizer, just like how a photo col­lec­tion is won­der­fully or­ga­nized by date more than by any other sin­gle di­men­sion.

Eventually my lab over­flowed and I had to make real de­ci­sions about what stays and what goes. The dots helped me make those de­ci­sions.

I set up three tiers. My most-dotted boxes stay within fifteen feet of my desk. Less frequent boxes go in a closet in the lab. Boxes with no dots for a long time go to a separate storage shed outside of my lab, which I think of as “cold storage”.

Cold stor­age ex­am­ples: a box of liq­uid pumps (ink pumps, peri­staltic pumps, air pumps). A box of piezo ac­tu­a­tors and piezo mo­tors. I find piezos fas­ci­nat­ing, but I’ve re­luc­tantly come to ad­mit over time that they’re just not that use­ful to me. A set of Parker lin­ear mo­tors I bought as lab sur­plus on eBay. Cool hard­ware, but the soft­ware for the ViX servo dri­ves only works on Windows XP, and I did­n’t have much need for lin­ear mo­tors. Zero dots for two years and moved it to the shed.

Sometimes things come back. When I started build­ing a pick-and-place ma­chine, my pneu­matic com­po­nents came right out of cold stor­age. That’s not a fail­ure, I ex­pect that some things will come back, just not very many things. Cold stor­age is like a stag­ing area, not a grave­yard. If a box sits there long enough un­touched, the next step is do­nat­ing or sell­ing.

This closes a loop. When you con­stantly ac­quire new parts but have lim­ited space, you need a sys­tem that tells you what should go out the door as new things come in. The dots pro­vide that sig­nal. A lot of peo­ple hoard things they don’t need. Seeing clear ev­i­dence that a box has zero dots is what helps me over­come the hes­i­ta­tion to fi­nally let go of it.

Principles I’ve learned over four years of the dot sys­tem.

Clear boxes, same size and shape. Having a com­mon form fac­tor is like hav­ing a com­mon soft­ware in­ter­face. Lids be­come in­ter­change­able. If a box breaks you can re­place it. You’ll prob­a­bly need a few dif­fer­ent sizes. Pick sizes where each jump is roughly dou­ble the last. I use four sizes to­tal.

Labels on the front, not the lid. You will re­gret lid la­bels the mo­ment you stack boxes.

Date every­thing. Every la­bel, every bag. It feels un­nec­es­sary at first but it pays off over time. It’s also a kind of time cap­sule for your­self.

Thick clear bags. Take the time to la­bel them. A per­ma­nent marker works fine. I use name tag sized white la­bels.

Keep sticker sheets near your boxes. If ap­ply­ing a dot takes more than two sec­onds, you’ll stop do­ing it. I put sticker sheets in half a dozen places around the lab near my boxes.

Everything needs a home. If only some things are in the sys­tem, the value is di­min­ished. Everything you want to track needs to be­long some­where.

Don’t dot the ob­vi­ous. I put dots on my sol­der­ing iron, calipers, and iso­propyl al­co­hol bot­tle but it was point­less. I al­ready knew these tools were cor­ner­stones of my lab. The dots are most valu­able for things where us­age is gen­uinely am­bigu­ous.

Curate cat­e­gories. A box of ran­dom mis­cel­la­neous parts teaches you noth­ing. Boxes of parts that are used to­gether yield high-qual­ity sig­nal.

And then give it time. A year in, you’ll start see­ing pat­terns. Two years in, you’ll trust them enough to know how to refac­tor your col­lec­tion.

The dot sys­tem does­n’t have to be fig­ured out all at once. Mine evolved through three gen­er­a­tions of bags and two ma­jor re­or­ga­ni­za­tions. My in­ter­ests changed, my do­main of ex­per­tise grew, my col­lec­tion ex­panded. The sys­tem evolved along with me. I like that it is a liv­ing, fluid sys­tem.

Walk into my lab and the dots will tell you every­thing you need to know. They told me too. It just took four years and a $3 pack of stick­ers. I’m still adding dots.

...

Read the original on scottlawsonbc.com »

9 340 shares, 14 trendiness

Experimental Web Version

SolveSpace is de­vel­oped pri­mar­ily as nor­mal desk­top soft­ware. It’s com­pact enough that it runs sur­pris­ingly well when com­piled with em­scripten for the browser, though. There is some speed penalty and there are many re­main­ing bugs, but with smaller mod­els the ex­pe­ri­ence is of­ten highly us­able.

In keeping with the experimental status of this target, the version below is built from our latest development branch. You are likely to encounter issues that don’t exist in the normal desktop targets, but feel free to report bugs in the usual way.

This web version has no network dependencies after loading. To host your own copy, build and host the output like any other static web content.

...

Read the original on solvespace.com »

10 308 shares, 12 trendiness

Anthropic admits Claude Code quotas running out too fast

Users of Claude Code, Anthropic’s AI-powered cod­ing as­sis­tant, are ex­pe­ri­enc­ing high to­ken us­age and early quota ex­haus­tion, dis­rupt­ing their work.

Anthropic has acknowledged the issue, stating that “people are hitting usage limits in Claude Code way faster than expected. We’re actively investigating… it’s the top priority for the team.”

A user on the Claude Pro subscription ($200 annually) said on the company’s Discord forum that “it’s maxed out every Monday and resets at Saturday and it’s been like that for a couple of weeks… out of 30 days I get to use Claude 12.”

The Anthropic forum on Reddit is buzzing with complaints. “I used up Max 5 in 1 hour of working, before I could work 8 hours,” said one developer today. The Max 5 plan costs $100 per month.

There are several possible factors in the change. Last week, Anthropic said it was reducing quotas during peak hours, a change that engineer Thariq Shihipar said would affect around 7 percent of users, while also claiming that “we’ve landed a lot of efficiency wins to offset this.”

March 28 was also the last day of a Claude pro­mo­tion that dou­bled us­age lim­its out­side a six-hour peak win­dow.

A third factor is that Claude Code may have bugs that increase token usage. A user claimed that after reverse engineering the Claude Code binary, they found “two independent bugs that cause prompt cache to break, silently inflating costs by 10-20x.” Some users confirmed that downgrading to an older version helped. “Downgrading to 2.1.34 made a very noticeable difference,” said one.

The documentation on prompt caching says that the cache “significantly reduces processing time and costs for repetitive tasks or prompts with consistent elements.” That said, the cache has only a five-minute lifetime, which means stopping for a short break, or not using Claude Code for a few minutes, results in higher costs on resumption.

Developers can upgrade the cache lifetime to one hour, but “1-hour cache write tokens are 2 times the base input tokens price,” the documentation states. A cache read token is 0.1 times the base price, so this is a key area for optimization.
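To see why this matters, here is a rough break-even calculation using only the multipliers quoted above (a 1-hour cache write at 2x the base input-token price, cache reads at 0.1x). This is a simplification: real billing also involves new input and output tokens on every turn, and the exact pricing tiers may differ.

```python
# Cost of reusing the same cached prompt N times, in units of the base
# input-token price, versus reprocessing it from scratch each time.
WRITE_MULT = 2.0   # 1-hour cache write, per the documentation quoted above
READ_MULT = 0.1    # cache read, per the documentation quoted above

def cost_with_cache(uses: int) -> float:
    """First use writes the cache; later uses read it."""
    return WRITE_MULT + (uses - 1) * READ_MULT

def cost_without_cache(uses: int) -> float:
    """Every use reprocesses the full prompt at the base price (1x)."""
    return float(uses)

for n in range(1, 5):
    print(n, cost_with_cache(n), cost_without_cache(n))
```

Under these numbers a single use costs twice as much with caching, two uses are roughly a wash, and the cache wins from the third use of the same prompt onward, which is why losing the cache (through bugs or through the five-minute expiry) inflates costs so sharply.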

Anthropic does not state the exact usage limits for its plans. For example, the Pro plan promises only “at least five times the usage per session compared to our free service.” The Standard Team plan promises “1.25x more usage per session than the Pro plan.” This makes it hard for developers to know what their usage limits are, other than by examining their dashboard showing how much quota they have consumed.

Problems like this are not un­usual. Earlier this month, users of Google Antigravity were protest­ing about sim­i­lar is­sues.

Bugs aside, what we are see­ing is an im­plicit ne­go­ti­a­tion be­tween users and providers over what is an ac­cept­able pric­ing and us­age model for AI de­vel­op­ment. Users want to con­trol costs and providers need to make a profit. There is also a dis­con­nect be­tween ven­dor mar­ket­ing that urges de­vel­op­ers to in­sert AI into every process, in­clud­ing in some cases au­to­mated work­flows, and a quota sys­tem that can cause AI tools to stop re­spond­ing.

“For folks running Claude Code in automated workflows: rate-limit errors need to be caught explicitly — they look like generic failures and will silently trigger retries. One session in a loop can drain your daily budget in minutes,” observed one user. ®
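The commenter’s advice can be sketched as a small guard around whatever runs a session in the loop. Everything here is hypothetical: the `run_session` callable, the error text matched, and the retry policy are illustrative, not Claude Code’s actual interface.

```python
import re

# Stop on anything that looks like a quota error instead of retrying it.
RATE_LIMIT_PATTERN = re.compile(r"rate.?limit|usage limit", re.IGNORECASE)

class QuotaExhausted(Exception):
    """Raised when a failure looks like a rate/usage limit, not a transient error."""

def run_with_guard(run_session, max_retries: int = 3):
    """Retry generic failures a bounded number of times, but bail out
    immediately when the failure text matches a rate-limit pattern.
    `run_session` is assumed to return (ok, output)."""
    for _ in range(max_retries):
        ok, output = run_session()
        if ok:
            return output
        if RATE_LIMIT_PATTERN.search(output):
            # Retrying here would silently burn the remaining budget.
            raise QuotaExhausted(output)
    raise RuntimeError("failed after %d attempts" % max_retries)
```

The key design point is the asymmetry: transient failures get a bounded retry, while anything matching the limit pattern halts the loop so one stuck session cannot drain the day’s quota.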

...

Read the original on www.theregister.com »


If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.