10 interesting stories served every morning and every evening.




1 2,193 shares, 86 trendiness

Claude Opus 4.6

The new Claude Opus 4.6 improves on its predecessor's coding skills. It plans more carefully, sustains agentic tasks for longer, can operate more reliably in larger codebases, and has better code review and debugging skills to catch its own mistakes. And, in a first for our Opus-class models, Opus 4.6 features a 1M token context window in beta. Opus 4.6 can also apply its improved abilities to a range of everyday work tasks: running financial analyses, doing research, and using and creating documents, spreadsheets, and presentations. Within Cowork, where Claude can multitask autonomously, Opus 4.6 can put all these skills to work on your behalf.

The model's performance is state-of-the-art on several evaluations. For example, it achieves the highest score on the agentic coding evaluation Terminal-Bench 2.0 and leads all other frontier models on Humanity's Last Exam, a complex multidisciplinary reasoning test. On GDPval-AA, an evaluation of performance on economically valuable knowledge work tasks in finance, legal, and other domains,¹ Opus 4.6 outperforms the industry's next-best model (OpenAI's GPT-5.2) by around 144 Elo points,² and its own predecessor (Claude Opus 4.5) by 190 points. Opus 4.6 also performs better than any other model on BrowseComp, which measures a model's ability to locate hard-to-find information online.

As we show in our extensive system card, Opus 4.6 also shows an overall safety profile as good as, or better than, any other frontier model in the industry, with low rates of misaligned behavior across safety evaluations.

Opus 4.6 is state-of-the-art on real-world work tasks across several professional domains.

Opus 4.6 gets the highest score in the industry for deep, multi-step agentic search.

In Claude Code, you can now assemble agent teams to work on tasks together. On the API, Claude can use compaction to summarize its own context and perform longer-running tasks without bumping up against limits. We're also introducing adaptive thinking, where the model can pick up on contextual clues about how much to use its extended thinking, and new effort controls to give developers more control over intelligence, speed, and cost. We've made substantial upgrades to Claude in Excel, and we're releasing Claude in PowerPoint in a research preview. This makes Claude much more capable for everyday work.

Claude Opus 4.6 is available today on claude.ai, our API, and all major cloud platforms. If you're a developer, use claude-opus-4-6 via the Claude API. Pricing remains the same at $5/$25 per million tokens; for full details, see our pricing page.

We cover the model, our new product updates, our evaluations, and our extensive safety testing in depth below.

We build Claude with Claude. Our engineers write code with Claude Code every day, and every new model first gets tested on our own work. With Opus 4.6, we've found that the model brings more focus to the most challenging parts of a task without being told to, moves quickly through the more straightforward parts, handles ambiguous problems with better judgment, and stays productive over longer sessions.

Opus 4.6 often thinks more deeply and more carefully revisits its reasoning before settling on an answer. This produces better results on harder problems, but can add cost and latency on simpler ones. If you're finding that the model is overthinking on a given task, we recommend dialing effort down from its default setting (high) to medium. You can control this easily with the /effort parameter.

Here are some of the things our Early Access partners told us about Claude Opus 4.6, including its propensity to work autonomously without hand-holding, its success where previous models failed, and its effect on how teams work:

Claude Opus 4.6 is the strongest model Anthropic has shipped. It takes complicated requests and actually follows through, breaking them into concrete steps, executing, and producing polished work even when the task is ambitious. For Notion users, it feels less like a tool and more like a capable collaborator.

Early testing shows Claude Opus 4.6 delivering on the complex, multi-step coding work developers face every day, especially agentic workflows that demand planning and tool calling. This starts unlocking long-horizon tasks at the frontier.

Claude Opus 4.6 is a huge leap for agentic planning. It breaks complex tasks into independent subtasks, runs tools and subagents in parallel, and identifies blockers with real precision.

Claude Opus 4.6 is the best model we've tested yet. Its reasoning and planning capabilities have been exceptional at powering our AI Teammates. It's also a fantastic coding model; its ability to navigate a large codebase and identify the right changes to make is state of the art.

Claude Opus 4.6 reasons through complex problems at a level we haven't seen before. It considers edge cases that other models miss and consistently lands on more elegant, well-considered solutions. We're particularly impressed with Opus 4.6 in Devin Review, where it's increased our bug catching rates.

Claude Opus 4.6 feels noticeably better than Opus 4.5 in Windsurf, especially on tasks that require careful exploration like debugging and understanding unfamiliar codebases. We've noticed Opus 4.6 thinks longer, which pays off when deeper reasoning is needed.

Claude Opus 4.6 represents a meaningful leap in long-context performance. In our testing, we saw it handle much larger bodies of information with a level of consistency that strengthens how we design and deploy complex research workflows. Progress in this area gives us more powerful building blocks to deliver truly expert-grade systems professionals can trust.

Across 40 cybersecurity investigations, Claude Opus 4.6 produced the best results 38 of 40 times in a blind ranking against Claude 4.5 models. Each model ran end to end on the same agentic harness with up to 9 subagents and 100+ tool calls.

Claude Opus 4.6 is the new frontier on long-running tasks from our internal benchmarks and testing. It's also been highly effective at reviewing code.

Claude Opus 4.6 achieved the highest BigLaw Bench score of any Claude model at 90.2%. With 40% perfect scores and 84% above 0.8, it's remarkably capable for legal reasoning.

Claude Opus 4.6 autonomously closed 13 issues and assigned 12 issues to the right team members in a single day, managing a ~50-person organization across 6 repositories. It handled both product and organizational decisions while synthesizing context across multiple domains, and it knew when to escalate to a human.

Claude Opus 4.6 is an uplift in design quality. It works beautifully with our design systems and it's more autonomous, which is core to Lovable's values. People should be creating things that matter, not micromanaging AI.

Claude Opus 4.6 excels in high-reasoning tasks like multi-source analysis across legal, financial, and technical content. Box's eval showed a 10% lift in performance, reaching 68% vs. a 58% baseline, and near-perfect scores in technical domains.

Claude Opus 4.6 generates complex, interactive apps and prototypes in Figma Make with an impressive creative range. The model translates detailed designs and multi-layered tasks into code on the first try, making it a powerful starting point for teams to explore and build ideas.

Claude Opus 4.6 is the best Anthropic model we've tested. It understands intent with minimal prompting and went above and beyond, exploring and creating details I didn't even know I wanted until I saw them. It felt like I was working with the model, not waiting on it.

Both hands-on testing and evals show Claude Opus 4.6 is a meaningful improvement for design systems and large codebases, use cases that drive enormous enterprise value. It also one-shotted a fully functional physics engine, handling a large multi-scope task in a single pass.

Claude Opus 4.6 is the biggest leap I've seen in months. I'm more comfortable giving it a sequence of tasks across the stack and letting it run. It's smart enough to use subagents for the individual pieces.

Claude Opus 4.6 handled a multi-million-line codebase migration like a senior engineer. It planned up front, adapted its strategy as it learned, and finished in half the time.

We only ship models in v0 when developers will genuinely feel the difference. Claude Opus 4.6 passed that bar with ease. Its frontier-level reasoning, especially with edge cases, helps v0 to deliver on our number-one aim: to let anyone elevate their ideas from prototype to production.

The performance jump with Claude Opus 4.6 feels almost unbelievable. Real-world tasks that were challenging for Opus [4.5] suddenly became easy. This feels like a watershed moment for spreadsheet agents on Shortcut.

Across agentic coding, computer use, tool use, search, and finance, Opus 4.6 is an industry-leading model, often by a wide margin. The table below shows how Claude Opus 4.6 compares to our previous models and to other industry models on a variety of benchmarks.

Opus 4.6 is much better at retrieving relevant information from large sets of documents. This extends to long-context tasks, where it holds and tracks information over hundreds of thousands of tokens with less drift, and picks up buried details that even Opus 4.5 would miss.

A common complaint about AI models is "context rot," where performance degrades as conversations exceed a certain number of tokens. Opus 4.6 performs markedly better than its predecessors: on the 8-needle 1M variant of MRCR v2, a needle-in-a-haystack benchmark that tests a model's ability to retrieve information "hidden" in vast amounts of text, Opus 4.6 scores 76%, whereas Sonnet 4.5 scores just 18.5%.
This is a qualitative shift in how much context a model can actually use while maintaining peak performance.

All in all, Opus 4.6 is better at finding information across long contexts, better at reasoning after absorbing that information, and has substantially better expert-level reasoning abilities in general.

Finally, the charts below show how Claude Opus 4.6 performs on a variety of benchmarks that assess its software engineering skills, multilingual coding ability, long-term coherence, cybersecurity capabilities, and its life sciences knowledge.

Opus 4.6 maintains focus over time and earns $3,050.53 more than Opus 4.5 on Vending-Bench 2.

Opus 4.6 finds real vulnerabilities in codebases better than any other model.

Opus 4.6 performs better than Opus 4.5 on nearly all computational biology, structural biology, organic chemistry, and phylogenetics tests.

These intelligence gains do not come at the cost of safety. On our automated behavioral audit, Opus 4.6 showed a low rate of misaligned behaviors such as deception, sycophancy, encouragement of user delusions, and cooperation with misuse. Overall, it is just as well-aligned as its predecessor, Claude Opus 4.5, which was our most-aligned frontier model to date. Opus 4.6 also shows the lowest rate of over-refusals (where the model fails to answer benign queries) of any recent Claude model.

The overall misaligned behavior score for each recent Claude model on our automated behavioral audit (described in full in the Claude Opus 4.6 system card).

For Claude Opus 4.6, we ran the most comprehensive set of safety evaluations of any model, applying many different tests for the first time and upgrading several that we've used before. We included new evaluations for user wellbeing, more complex tests of the model's ability to refuse potentially dangerous requests, and updated evaluations of the model's ability to surreptitiously perform harmful actions. We also experimented with new methods from interpretability, the science of the inner workings of AI models, to begin to understand why the model behaves in certain ways and, ultimately, to catch problems that standard testing might miss.

A detailed description of all capability and safety evaluations is available in the Claude Opus 4.6 system card.

We've also applied new safeguards in areas where Opus 4.6 shows particular strengths that might be put to dangerous as well as beneficial uses. In particular, since the model shows enhanced cybersecurity abilities, we've developed six new cybersecurity probes (methods of detecting harmful responses) to help us track different forms of potential misuse.

We're also accelerating the cyberdefensive uses of the model, using it to help find and patch vulnerabilities in open-source software (as we describe in our new cybersecurity blog post). We think it's critical that cyberdefenders use AI models like Claude to help level the playing field.
Cybersecurity moves fast, and we'll be adjusting and updating our safeguards as we learn more about potential threats; in the near future, we may institute real-time intervention to block abuse.

We've made substantial updates across Claude, Claude Code, and the Claude Developer Platform to let Opus 4.6 perform at its best.

On the API, we're giving developers better control over model effort and more flexibility for long-running agents. To do so, we're introducing the following features:

* Adaptive thinking. Previously, developers only had a binary choice between enabling or disabling extended thinking. Now, with adaptive thinking, Claude can decide when deeper reasoning would be helpful. At the default effort level (high), the model uses extended thinking when useful, but developers can adjust the effort level to make it more or less selective.

* Effort. There are now four effort levels to choose from: low, medium, high (default), and max. We encourage developers to experiment with different options to find what works best.

* Context compaction (beta). Long-running conversations and agentic tasks often hit the context window. Context compaction automatically summarizes and replaces older context when the conversation approaches a configurable threshold, letting Claude perform longer tasks without hitting limits.

* 1M token context (beta). Opus 4.6 is our first Opus-class model with 1M token context. Premium pricing applies for prompts exceeding 200k tokens ($10/$37.50 per million input/output tokens).

* 128k output tokens. Opus 4.6 supports outputs of up to 128k tokens, which lets Claude complete larger-output tasks without breaking them into multiple requests.

* US-only inference. For workloads that need to run in the United States, US-only inference is available at 1.1× token pricing.

Across Claude and Claude Code, we've added features that allow knowledge workers and developers to tackle harder tasks with more of the tools they use every day.

We've introduced agent teams in Claude Code as a research preview. You can now spin up multiple agents that work in parallel as a team and coordinate autonomously; this is best for tasks that split into independent, read-heavy work like codebase reviews. You can take over any subagent directly using Shift+Up/Down or tmux.

Claude now also works better with the office tools you already use. Claude in Excel handles long-running and harder tasks with improved performance, and can plan before acting, ingest unstructured data and infer the right structure without guidance, and handle multi-step changes in one pass. Pair that with Claude in PowerPoint, and you can first process and structure your data in Excel, then bring it to life visually in PowerPoint. Claude reads your layouts, fonts, and slide masters to stay on brand, whether you're building from a template or generating a full deck from a description. Claude in PowerPoint is now available in research preview for Max, Team, and Enterprise plans.
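
For developers who want to try this, here is a minimal sketch of a request to Opus 4.6 using the Anthropic Python SDK. The model ID and the $5/$25-per-million-token pricing come from the announcement above; the effort value passed via extra_body is an assumption about how the new effort control is spelled, not a confirmed parameter, so check the current API reference before relying on it.

```python
# Minimal sketch of calling Claude Opus 4.6 with the Anthropic Python SDK.
# The model ID comes from the announcement above; the "effort" field below is
# an assumption about how the new effort controls are exposed, so verify the
# exact parameter name against the current API docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=2048,
    extra_body={"effort": "medium"},  # hypothetical knob: low | medium | high | max
    messages=[{"role": "user", "content": "Review this function for bugs: ..."}],
)
print(response.content[0].text)
```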

...

Read the original on www.anthropic.com »

2 1,211 shares, 51 trendiness

I miss thinking hard.

Before you read this post, ask your­self a ques­tion: When was the last time you truly thought hard?

By "thinking hard," I mean encountering a specific, difficult problem and spending multiple days just sitting with it to overcome it.

a) All the time. b) Never. c) Somewhere in be­tween.

If your an­swer is (a) or (b), this post is­n’t for you. But if, like me, your re­sponse is (c), you might get some­thing out of this, if only the feel­ing that you aren’t alone.

First, a dis­claimer: this post has no an­swers, not even sug­ges­tions. It is sim­ply a way to vent some­thing I’ve been feel­ing for the last few months.

I be­lieve my per­son­al­ity is built on two pri­mary traits:

The Builder (The de­sire to cre­ate, ship, and be prag­matic).

The Thinker (The need for deep, pro­longed men­tal strug­gle).

The Builder is pretty self-explanatory: it's motivated by velocity and utility. It is the part of me that craves the transition from "idea" to "reality." It loves the dopamine hit of a successful deploy, the satisfaction of building systems to solve real problems, and the knowledge that someone, somewhere, is using my tool.

To explain the Thinker, I need to go back to my university days studying physics. Every now and then, we would get homework problems that were significantly harder than average. Even if you had a decent grasp of the subject, just coming up with an approach was difficult.

I ob­served that stu­dents fell into three cat­e­gories when fac­ing these prob­lems (well, four, if you count the 1% of ge­niuses for whom no prob­lem was too hard).

* Type 1: The ma­jor­ity. After a few tries, they gave up and went to the pro­fes­sor or a TA for help.

* Type 2: The Researchers. They went to the li­brary to look for sim­i­lar prob­lems or in­sights to make the prob­lem ap­proach­able. They usu­ally suc­ceeded.

I fell into the third cat­e­gory, which, in my ex­pe­ri­ence, was al­most as rare as the ge­nius 1%. My method was sim­ply to think. To think hard and long. Often for sev­eral days or weeks, all my non-I/​O brain time was re­lent­lessly chew­ing on pos­si­ble ways to solve the prob­lem, even while I was asleep.

This method never failed me. I al­ways felt that deep pro­longed think­ing was my su­per­power. I might not be as fast or nat­u­rally gifted as the top 1%, but given enough time, I was con­fi­dent I could solve any­thing. I felt a deep sat­is­fac­tion in that process.

That sat­is­fac­tion is why soft­ware en­gi­neer­ing was ini­tially so grat­i­fy­ing. It hit the right bal­ance. It sat­is­fied The Builder (feeling pro­duc­tive and prag­matic by cre­at­ing use­ful things) and The Thinker (solving re­ally hard prob­lems). Thinking back, the pro­jects where I grew the most as an en­gi­neer were al­ways the ones with a good num­ber of re­ally hard prob­lems that needed cre­ative so­lu­tions.

But re­cently, the num­ber of times I truly pon­der a prob­lem for more than a cou­ple of hours has de­creased tremen­dously.

Yes, I blame AI for this.

I am currently writing much more, and more complicated, software than ever, yet I feel I am not growing as an engineer at all. When I started meditating on why I felt "stuck," I realized I am starving The Thinker.

"Vibe coding" satisfies the Builder. It feels great to see an idea pass into reality in a fraction of the time it would otherwise take. But it has drastically cut the number of times I need to come up with creative solutions to technical problems. I know many people who are purely Builders; for them, this era is the best thing that ever happened. But for me, something is missing.

I know what you might be thinking: "If you can 'vibe code' your way through it, the problem wasn't actually hard."

I think that misses the point. It's not that AI is good at hard problems; it's not even that good at easy problems. I'm confident that my third manual rewrite of a module would be much better than anything the AI can output. But I am also a pragmatist.

If I can get a so­lu­tion that is close enough” in a frac­tion of the time and ef­fort, it is ir­ra­tional not to take the AI route. And that is the real prob­lem: I can­not sim­ply turn off my prag­ma­tism.

At the end of the day, I am a Builder. I like build­ing things. The faster I build, the bet­ter. Even if I wanted to re­ject AI and go back to the days where the Thinker’s needs were met by cod­ing, the Builder in me would strug­gle with the in­ef­fi­ciency.

Even though the AI almost certainly won't come up with a 100% satisfying solution, the 70% solution it achieves usually hits the "good enough" mark.

To be hon­est, I don’t know. I am still fig­ur­ing it out.

I’m not sure if my two halves can be sat­is­fied by cod­ing any­more. You can al­ways aim for harder pro­jects, hop­ing to find prob­lems where AI fails com­pletely. I still en­counter those oc­ca­sion­ally, but the num­ber of prob­lems re­quir­ing deep cre­ative so­lu­tions feels like it is di­min­ish­ing rapidly.

I have tried to get that feel­ing of men­tal growth out­side of cod­ing. I tried get­ting back in touch with physics, read­ing old text­books. But that was­n’t suc­cess­ful ei­ther. It is hard to jus­tify spend­ing time and men­tal ef­fort solv­ing physics prob­lems that aren’t rel­e­vant or state-of-the-art when I know I could be build­ing things.

My Builder side won’t let me just sit and think about un­solved prob­lems, and my Thinker side is starv­ing while I vibe-code. I am not sure if there will ever be a time again when both needs can be met at once.

"Now we have the right to give this being the well-known name that always designates what no power of imagination, no flight of the boldest fantasy, no intently devout heart, no abstract thinking however profound, no enraptured and transported spirit has ever attained: God. But this basic unity is of the past; it no longer is. It has, by changing its being, totally and completely shattered itself. God has died and his death was the life of the world."

- Philipp Mainländer

...

Read the original on www.jernesto.com »

3 1,118 shares, 46 trendiness

Owning a $5M data center

These days it seems you need a tril­lion fake dol­lars, or lunch with politi­cians to get your own data cen­ter. They may help, but they’re not re­quired. At comma we’ve been run­ning our own data cen­ter for years. All of our model train­ing, met­rics, and data live in our own data cen­ter in our own of­fice. Having your own data cen­ter is cool, and in this blog post I will de­scribe how ours works, so you can be in­spired to have your own data cen­ter too.

If your busi­ness re­lies on com­pute, and you run that com­pute in the cloud, you are putting a lot of trust in your cloud provider. Cloud com­pa­nies gen­er­ally make on­board­ing very easy, and off­board­ing very dif­fi­cult. If you are not vig­i­lant you will sleep­walk into a sit­u­a­tion of high cloud costs and no way out. If you want to con­trol your own des­tiny, you must run your own com­pute.

Self-reliance is great, but there are other benefits to running your own compute. It inspires good engineering. Maintaining a data center is much more about solving real-world challenges: the cloud requires expertise in company-specific APIs and billing systems, while a data center requires knowledge of Watts, bits, and FLOPs. I know which one I'd rather think about.

Avoiding the cloud for ML also cre­ates bet­ter in­cen­tives for en­gi­neers. Engineers gen­er­ally want to im­prove things. In ML many prob­lems go away by just us­ing more com­pute. In the cloud that means im­prove­ments are just a bud­get in­crease away. This locks you into in­ef­fi­cient and ex­pen­sive so­lu­tions. Instead, when all you have avail­able is your cur­rent com­pute, the quick­est im­prove­ments are usu­ally speed­ing up your code, or fix­ing fun­da­men­tal is­sues.

Finally, there's cost: owning a data center can be far cheaper than renting in the cloud, especially if your compute or storage needs are fairly consistent, which tends to be true if you are in the business of training or running models. In comma's case, I estimate we've spent ~$5M on our data center, and we would have spent $25M+ had we done the same things in the cloud.

Our data center is pretty simple. It's maintained and built by only a couple of engineers and technicians. Your needs may be slightly different, but our implementation should provide useful context.

To run servers you need power. We cur­rently use about 450kW at max. Operating a data cen­ter ex­poses you to many fun en­gi­neer­ing chal­lenges, but procur­ing power is not one of them. San Diego power cost is over 40c/kWh, ~3x the global av­er­age. It’s a ripoff, and over­priced sim­ply due to po­lit­i­cal dys­func­tion. We spent $540,112 on power in 2025, a big part of the data cen­ter cost. In a fu­ture blog post I hope I can tell you about how we pro­duce our own power and you should too.

Data cen­ters need cool dry air. Typically this is achieved with a CRAC sys­tem, but they are power-hun­gry. San Diego has a mild cli­mate and we opted for pure out­side air cool­ing. This gives us less con­trol of the tem­per­a­ture and hu­mid­ity, but uses only a cou­ple dozen kW. We have dual 48” in­take fans and dual 48” ex­haust fans to keep the air cool. To en­sure low hu­mid­ity (

The ma­jor­ity of our cur­rent com­pute is 600 GPUs in 75 TinyBox Pro ma­chines. They were built in-house, which saves us money and en­sures they suit our needs. Our self-built ma­chines fail at a sim­i­lar rate to pre-built ma­chines we’ve bought, but we’re ca­pa­ble of fix­ing them our­selves quickly. They have 2 CPUs and 8 GPUs each, and work as both train­ing ma­chines and gen­eral com­pute work­ers.

For data stor­age we have a few racks of Dell ma­chines (R630 and R730). They are filled with SSDs for a to­tal of ~4PB of stor­age. We use SSDs for re­li­a­bil­ity and speed. Our main stor­age ar­rays have no re­dun­dancy and each node needs to be able to sat­u­rate the net­work band­width with ran­dom ac­cess reads. For the stor­age ma­chines this means read­ing up to 20Gbps of each 80TB chunk.

Other than stor­age and com­pute ma­chines we have sev­eral one-off ma­chines to run ser­vices. This in­cludes a router, cli­mate con­troller, data in­ges­tion ma­chine, stor­age mas­ter servers, met­ric servers, re­dis servers, and a few more.

Running the net­work re­quires switches, but at this scale we don’t need to bother with com­pli­cated switch topolo­gies. We have 3 100Gbps in­ter­con­nected Z9264F switches, which serve as the main eth­er­net net­work. We have two more in­fini­band switches to in­ter­con­nect the 2 tiny­box pro groups for train­ing all-re­duce.

To ef­fec­tively use all these com­pute and stor­age ma­chines you need some in­fra. At this scale, ser­vices don’t need re­dun­dancy to achieve 99% up­time. We use a sin­gle mas­ter for all ser­vices, which makes things pretty sim­ple.

All servers get Ubuntu installed via PXE boot and are managed by Salt.

All of our stor­age ar­rays use mkv. The main ar­ray is 3PB of non-re­dun­dant stor­age host­ing our dri­ving data we train on. We can read from this ar­ray at ~1TB/s, which means we can train di­rectly on the raw data with­out caching. Redundancy is not needed since no spe­cific data is crit­i­cal.

We have an additional ~300TB non-redundant array to cache intermediate processed results. And lastly, we have a redundant mkv storage array to store all of our trained models and training metrics. Each of these 3 arrays has a separate single master server.

We use Slurm to manage the compute nodes and compute jobs. We schedule two types of distributed compute: PyTorch training jobs and miniray workers.

To train mod­els across mul­ti­ple GPU nodes we use torch.dis­trib­uted FSDP. We have 2 sep­a­rate train­ing par­ti­tions, each in­tra-con­nected with Infiniband for train­ing across ma­chines. We wrote our own train­ing frame­work which han­dles the train­ing loop boil­er­plate, but it’s mostly just py­torch.
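
comma's training framework isn't shown in the post, but the basic shape of a multi-GPU FSDP job is simple. Here is a minimal sketch, assuming a placeholder model and synthetic data rather than their actual driving models:

```python
# Bare-bones FSDP training loop; launch with `torchrun --nproc_per_node=8 train.py`.
# The model, data, and objective are stand-ins, not comma's real framework.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 1024)
    ).cuda()
    model = FSDP(model)  # shards parameters, gradients, and optimizer state across ranks
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):
        x = torch.randn(32, 1024, device="cuda")   # placeholder batch
        loss = model(x).pow(2).mean()               # placeholder objective
        loss.backward()
        opt.step()
        opt.zero_grad()
        if rank == 0 and step % 10 == 0:
            print(f"step {step} loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```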

We have a cus­tom model ex­per­i­ment track­ing ser­vice (similar to wandb or ten­sor­board). It pro­vides a dash­board for track­ing ex­per­i­ments, and shows cus­tom met­rics and re­ports. It is also the in­ter­face for the mkv stor­age ar­ray that hosts the model weights. The train­ing runs store the model weights there with a uuid, and they are avail­able to down­load for who­ever needs to run them. The met­rics and re­ports for our lat­est mod­els are also open.

Besides train­ing we have many other com­pute tasks. This can be any­thing from run­ning tests, run­ning mod­els, pre-pro­cess­ing data, or even run­ning agent roll­outs for on-pol­icy train­ing. We wrote a light­weight open-source task sched­uler called mini­ray that al­lows you to run ar­bi­trary python code on idle ma­chines. This is a sim­pler ver­sion of dask, with a fo­cus on ex­treme sim­plic­ity. Slurm will sched­ule any idle ma­chine to be an ac­tive mini­ray worker, and ac­cept pend­ing tasks. All the task in­for­ma­tion is hosted in a cen­tral re­dis server.
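
miniray itself is open source and has its own interface; purely to illustrate the idea of arbitrary Python tasks coordinated through a central Redis server, a toy version might look like the sketch below. The queue names and functions are invented for this example and are not miniray's actual API.

```python
# Toy illustration of a Redis-backed task queue in the spirit of miniray.
# All names here are made up for the example; see the miniray repo for the real API.
import pickle
import redis

r = redis.Redis(host="redis-master", port=6379)  # central server; hostname is a placeholder

def submit(fn, *args):
    """Producer side: serialize a callable and its args onto a shared list.
    fn must be importable on workers, i.e. everyone has the same monorepo checked out."""
    r.rpush("tasks", pickle.dumps((fn, args)))

def worker_loop():
    """Worker side: run on idle machines, pop tasks and push results back."""
    while True:
        _, payload = r.blpop("tasks")            # blocks until a task is available
        fn, args = pickle.loads(payload)
        r.rpush("results", pickle.dumps(fn(*args)))
```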

Miniray work­ers with GPUs will spin up a tri­ton in­fer­ence server to run model in­fer­ence with dy­namic batch­ing. A mini­ray worker can thus eas­ily and ef­fi­ciently run any of the mod­els hosted in the model mkv stor­age ar­ray.
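
On the client side, querying such a Triton server is a short call. The sketch below uses the standard tritonclient HTTP API; the model name, tensor names, and shapes are placeholders rather than comma's real deployment.

```python
# Query a Triton inference server (dynamic batching is configured server-side).
# Model name, tensor names, and shapes are placeholders.
import numpy as np
import tritonclient.http as triton

client = triton.InferenceServerClient(url="localhost:8000")

frame = np.random.rand(1, 3, 256, 512).astype(np.float32)   # stand-in camera frame
inp = triton.InferInput("input", list(frame.shape), "FP32")
inp.set_data_from_numpy(frame)

result = client.infer(model_name="driving_model", inputs=[inp])
print(result.as_numpy("output").shape)
```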

Miniray makes it ex­tremely easy to scale par­al­lel tasks to hun­dreds of ma­chines. For ex­am­ple, the con­trols chal­lenge record was set by just hav­ing ~1hr of ac­cess to our data cen­ter with mini­ray.

All our code is in a monorepo that we have cloned on our work­sta­tions. This monorepo is kept small (

The most complex thing we do at comma is train driving models on-policy. These training runs require training data to be generated during training by running simulated driving rollouts with the most recent model weights. Here's a real-world command we just used to train such a model. This training run uses all of the infrastructure described above. While only this small command is needed to kick everything off, it orchestrates a lot of moving parts.

Does all this stuff sound ex­cit­ing? Then build your own dat­a­cen­ter for your­self or your com­pany! You can also come work here.

...

Read the original on blog.comma.ai »

4 1,100 shares, 43 trendiness

France dumps Zoom and Teams as Europe seeks digital autonomy from the US


LONDON (AP) — In France, civil ser­vants will ditch Zoom and Teams for a home­grown video con­fer­ence sys­tem. Soldiers in Austria are us­ing open source of­fice soft­ware to write re­ports af­ter the mil­i­tary dropped Microsoft Office. Bureaucrats in a German state have also turned to free soft­ware for their ad­min­is­tra­tive work.

Around Europe, governments and institutions are seeking to reduce their use of digital services from U.S. Big Tech companies and turning to domestic or free alternatives. The push for "digital sovereignty" is gaining attention as the Trump administration strikes an increasingly belligerent posture toward the continent, highlighted by recent tensions over Greenland that intensified fears that Silicon Valley giants could be compelled to cut off access.

Concerns about data pri­vacy and wor­ries that Europe is not do­ing enough to keep up with the United States and Chinese tech lead­er­ship are also fu­el­ing the drive.

The French gov­ern­ment ref­er­enced some of these con­cerns when it an­nounced last week that 2.5 mil­lion civil ser­vants would stop us­ing video con­fer­ence tools from U. S. providers — in­clud­ing Zoom, Microsoft Teams, Webex and GoTo Meeting — by 2027 and switch to Visio, a home­grown ser­vice.

"The objective is to put an end to the use of non-European solutions, to guarantee the security and confidentiality of public electronic communications by relying on a powerful and sovereign tool," the announcement said.

"We cannot risk having our scientific exchanges, our sensitive data, and our strategic innovations exposed to non-European actors," David Amiel, a civil service minister, said in a press release.

Microsoft said it continues to "partner closely with the government in France and respect the importance of security, privacy, and digital trust for public institutions."

The company said it is "focused on providing customers with greater choice, stronger data protection, and resilient cloud services, ensuring data stays in Europe, under European law, with robust security and privacy protections."

Zoom, Webex and GoTo Meeting did not re­spond to re­quests for com­ment.

French President Emmanuel Macron has been pushing digital sovereignty for years. But "there's now a lot more political momentum behind this idea now that we need to de-risk from U.S. tech," said Nick Reiners, senior geotechnology analyst at the Eurasia Group.

"It feels kind of like there's a real zeitgeist shift," Reiners said.

It was a hot topic at the World Economic Forum's annual meeting of global political and business elites last month in Davos, Switzerland. The European Commission's official for tech sovereignty, Henna Virkkunen, told an audience that Europe's reliance on others can be "weaponized against us."

"That's why it's so important that we are not dependent on one country or one company when it comes to very critical fields of our economy or society," she said, without naming countries or companies.

A de­ci­sive mo­ment came last year when the Trump ad­min­is­tra­tion sanc­tioned the International Criminal Court’s top pros­e­cu­tor af­ter the tri­bunal, based in The Hague, Netherlands, is­sued an ar­rest war­rant for Israeli Prime Minister Benjamin Netanyahu, an ally of President Donald Trump.

The sanctions led Microsoft to cancel Khan's ICC email, a move that was first reported by The Associated Press and sparked fears of a "kill switch" that Big Tech companies can use to turn off service at will.

Microsoft maintains it kept in touch with the ICC "throughout the process that resulted in the disconnection of its sanctioned official from Microsoft services. At no point did Microsoft cease or suspend its services to the ICC."

Microsoft President Brad Smith has repeatedly sought to strengthen trans-Atlantic ties, the company's press office said, and pointed to an interview he did last month with CNN in Davos in which he said that jobs, trade and investment, as well as security, would be affected by a rift over Greenland.

"Europe is the American tech sector's biggest market after the United States itself. It all depends on trust. Trust requires dialogue," Smith said.

Other in­ci­dents have added to the move­ment. There’s a grow­ing sense that re­peated EU ef­forts to rein in tech gi­ants such as Google with block­buster an­titrust fines and sweep­ing dig­i­tal rule books haven’t done much to curb their dom­i­nance.

Billionaire Elon Musk is also a fac­tor. Officials worry about re­ly­ing on his Starlink satel­lite in­ter­net sys­tem for com­mu­ni­ca­tions in Ukraine.

Washington and Brussels wran­gled for years over data trans­fer agree­ments, trig­gered by for­mer National Security Agency con­trac­tor Edward Snowden’s rev­e­la­tions of U. S. cy­ber-snoop­ing.

With on­line ser­vices now mainly hosted in the cloud through data cen­ters, Europeans fear that their data is vul­ner­a­ble.

U.S. cloud providers have responded by setting up so-called "sovereign cloud" operations, with data centers located in European countries, owned by European entities and with physical and remote access only for staff who are European Union residents.

"The idea is that only Europeans can take decisions so that they can't be coerced by the U.S.," Reiners said.

The German state of Schleswig-Holstein last year mi­grated 44,000 em­ployee in­boxes from Microsoft to an open source email pro­gram. It also switched from Microsoft’s SharePoint file shar­ing sys­tem to Nextcloud, an open source plat­form, and is even con­sid­er­ing re­plac­ing Windows with Linux and tele­phones and video­con­fer­enc­ing with open source sys­tems.

"We want to become independent of large tech companies and ensure digital sovereignty," Digitalization Minister Dirk Schrödter said in an October announcement.

The French city of Lyon said last year that it’s de­ploy­ing free of­fice soft­ware to re­place Microsoft. Denmark’s gov­ern­ment and the cities of Copenhagen and Aarhus have also been try­ing out open-source soft­ware.

"We must never make ourselves so dependent on so few that we can no longer act freely," Digital Minister Caroline Stage Olsen wrote on LinkedIn last year. "Too much public digital infrastructure is currently tied up with very few foreign suppliers."

The Austrian mil­i­tary said it has also switched to LibreOffice, a soft­ware pack­age with word proces­sor, spread­sheet and pre­sen­ta­tion pro­grams that mir­rors Microsoft 365’s Word, Excel and PowerPoint.

The Document Foundation, a nonprofit based in Germany that's behind LibreOffice, said the military's switch "reflects a growing demand for independence from single vendors." Reports also said the military was concerned that Microsoft was moving file storage online to the cloud (the standard version of LibreOffice is not cloud-based).

Some Italian cities and regions adopted the software years ago, said Italo Vignoli, a spokesman for The Document Foundation. Back then, the appeal was not needing to pay for software licenses. Now, the main reason is to avoid being locked into a proprietary system.

"At first, it was: we will save money and by the way, we will get freedom," Vignoli said. "Today it is: we will be free and by the way, we will also save some money."

Associated Press writer Molly Quell in The Hague, Netherlands con­tributed to this re­port.

This ver­sion cor­rects the con­tri­bu­tion line to Molly Quell in­stead of Molly Hague.

...

Read the original on apnews.com »

5 931 shares, 37 trendiness

Voxtral transcribes at the speed of sound.

Today, we’re re­leas­ing Voxtral Transcribe 2, two next-gen­er­a­tion speech-to-text mod­els with state-of-the-art tran­scrip­tion qual­ity, di­ariza­tion, and ul­tra-low la­tency. The fam­ily in­cludes Voxtral Mini Transcribe V2 for batch tran­scrip­tion and Voxtral Realtime for live ap­pli­ca­tions. Voxtral Realtime is open-weights un­der the Apache 2.0 li­cense.

We’re also launch­ing an au­dio play­ground in Mistral Studio to test tran­scrip­tion in­stantly, pow­ered by Voxtral Transcribe 2, with di­ariza­tion and time­stamps.

Voxtral Mini Transcribe V2: State-of-the-art tran­scrip­tion with speaker di­ariza­tion, con­text bi­as­ing, and word-level time­stamps in 13 lan­guages.

Voxtral Realtime: Purpose-built for live tran­scrip­tion with la­tency con­fig­urable down to sub-200ms, en­abling voice agents and real-time ap­pli­ca­tions.

Best-in-class ef­fi­ciency: Industry-leading ac­cu­racy at a frac­tion of the cost, with Voxtral Mini Transcribe V2 achiev­ing the low­est word er­ror rate, at the low­est price point.

Open weights: Voxtral Realtime ships un­der Apache 2.0, de­ploy­able on edge for pri­vacy-first ap­pli­ca­tions.

Voxtral Realtime is pur­pose-built for ap­pli­ca­tions where la­tency mat­ters. Unlike ap­proaches that adapt of­fline mod­els by pro­cess­ing au­dio in chunks, Realtime uses a novel stream­ing ar­chi­tec­ture that tran­scribes au­dio as it ar­rives. The model de­liv­ers tran­scrip­tions with de­lay con­fig­urable down to sub-200ms, un­lock­ing a new class of voice-first ap­pli­ca­tions.

Word er­ror rate (lower is bet­ter) across lan­guages in the FLEURS tran­scrip­tion bench­mark.

At 2.4 sec­onds de­lay, ideal for sub­ti­tling, Realtime matches Voxtral Mini Transcribe V2, our lat­est batch model. At 480ms de­lay, it stays within 1-2% word er­ror rate, en­abling voice agents with near-of­fline ac­cu­racy.

The model is na­tively mul­ti­lin­gual, achiev­ing strong tran­scrip­tion per­for­mance in 13 lan­guages, in­clud­ing English, Chinese, Hindi, Spanish, Arabic, French, Portuguese, Russian, German, Japanese, Korean, Italian, and Dutch. With a 4B pa­ra­me­ter foot­print, it runs ef­fi­ciently on edge de­vices, en­sur­ing pri­vacy and se­cu­rity for sen­si­tive de­ploy­ments.

We’re re­leas­ing the model weights un­der Apache 2.0 on the Hugging Face Hub.

Average di­ariza­tion er­ror rate (lower is bet­ter) across five English bench­marks (Switchboard, CallHome, AMI-IHM, AMI-SDM, SBCSAE) and the TalkBank mul­ti­lin­gual bench­mark (German, Spanish, English, Chinese, Japanese).

Average word er­ror rate (lower is bet­ter) across the top-10 lan­guages in the FLEURS tran­scrip­tion bench­mark.

Voxtral Mini Transcribe V2 de­liv­ers sig­nif­i­cant im­prove­ments in tran­scrip­tion and di­ariza­tion qual­ity across lan­guages and do­mains. At ap­prox­i­mately 4% word er­ror rate on FLEURS and $0.003/min, Voxtral of­fers the best price-per­for­mance of any tran­scrip­tion API. It out­per­forms GPT-4o mini Transcribe, Gemini 2.5 Flash, Assembly Universal, and Deepgram Nova on ac­cu­racy, and processes au­dio ap­prox­i­mately 3x faster than ElevenLabs’ Scribe v2 while match­ing on qual­ity at one-fifth the cost.

Generate tran­scrip­tions with speaker la­bels and pre­cise start/​end times. Ideal for meet­ing tran­scrip­tion, in­ter­view analy­sis, and multi-party call pro­cess­ing. Note: with over­lap­ping speech, the model typ­i­cally tran­scribes one speaker.

Provide up to 100 words or phrases to guide the model to­ward cor­rect spellings of names, tech­ni­cal terms, or do­main-spe­cific vo­cab­u­lary. Particularly use­ful for proper nouns or in­dus­try ter­mi­nol­ogy that stan­dard mod­els of­ten miss. Context bi­as­ing is op­ti­mized for English; sup­port for other lan­guages is ex­per­i­men­tal.

Generate pre­cise start and end time­stamps for each word, en­abling ap­pli­ca­tions like sub­ti­tle gen­er­a­tion, au­dio search, and con­tent align­ment.

Like Realtime, this model now sup­ports 13 lan­guages: English, Chinese, Hindi, Spanish, Arabic, French, Portuguese, Russian, German, Japanese, Korean, Italian, and Dutch. Non-English per­for­mance sig­nif­i­cantly out­paces com­peti­tors.

Maintains tran­scrip­tion ac­cu­racy in chal­leng­ing acoustic en­vi­ron­ments, such as fac­tory floors, busy call cen­ters, and field record­ings.

Process record­ings up to 3 hours in a sin­gle re­quest.
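
Putting the features above together, a transcription request might look roughly like the sketch below. The endpoint path, field names, and model identifier are assumptions based on the feature descriptions here, not a confirmed API contract; check the Mistral documentation for the exact parameters.

```python
# Sketch of a batch transcription request with diarization, word timestamps,
# and context biasing. Endpoint, field names, and model ID are assumptions.
import os
import requests

with open("meeting.mp3", "rb") as f:
    resp = requests.post(
        "https://api.mistral.ai/v1/audio/transcriptions",    # assumed endpoint
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        files={"file": f},
        data={
            "model": "voxtral-mini-transcribe-v2",           # assumed model ID
            "diarization": "true",                            # speaker labels
            "timestamp_granularities": "word",                # word-level timestamps
            "context": "Voxtral, Mistral Studio, diarization",  # bias terms
        },
        timeout=600,
    )
resp.raise_for_status()
print(resp.json()["text"])
```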

Word er­ror rate (lower is bet­ter) across lan­guages in the FLEURS tran­scrip­tion bench­mark.

Test Voxtral Transcribe 2 di­rectly in Mistral Studio. Upload up to 10 au­dio files, tog­gle di­ariza­tion, choose time­stamp gran­u­lar­ity, and add con­text bias terms for do­main-spe­cific vo­cab­u­lary. Supports .mp3, .wav, .m4a, .flac, .ogg up to 1GB each.

Transcribe mul­ti­lin­gual record­ings with speaker di­ariza­tion that clearly at­trib­utes who said what and when. At Voxtral’s price point, an­no­tate large vol­umes of meet­ing con­tent at in­dus­try-lead­ing cost ef­fi­ciency.

Build con­ver­sa­tional AI with sub-200ms tran­scrip­tion la­tency. Connect Voxtral Realtime to your LLM and TTS pipeline for re­spon­sive voice in­ter­faces that feel nat­ural.

Transcribe calls in real time, en­abling AI sys­tems to an­a­lyze sen­ti­ment, sug­gest re­sponses, and pop­u­late CRM fields while con­ver­sa­tions are still hap­pen­ing. Speaker di­ariza­tion en­sures clear at­tri­bu­tion be­tween agents and cus­tomers.

Generate live mul­ti­lin­gual sub­ti­tles with min­i­mal la­tency. Context bi­as­ing han­dles proper nouns and tech­ni­cal ter­mi­nol­ogy that trip up generic tran­scrip­tion ser­vices.

Monitor and tran­scribe in­ter­ac­tions for reg­u­la­tory com­pli­ance, with di­ariza­tion pro­vid­ing clear speaker at­tri­bu­tion and time­stamps en­abling pre­cise au­dit trails.

Both mod­els sup­port GDPR and HIPAA-compliant de­ploy­ments through se­cure on-premise or pri­vate cloud se­tups.

Voxtral Mini Transcribe V2 is avail­able now via API at $0.003 per minute. Try it now in the new Mistral Studio au­dio play­ground or in Le Chat.

Voxtral Realtime is avail­able via API at $0.006 per minute and as open weights on Hugging Face.

If you’re ex­cited about build­ing world-class speech AI and putting fron­tier mod­els into the hands of de­vel­op­ers every­where, we’d love to hear from you. Apply to join our team.

The next chap­ter of AI is yours.

...

Read the original on mistral.ai »

6 892 shares, 75 trendiness

I Now Assume that All Ads on Apple News Are Scams

In 2024, Apple signed a deal with Taboola to serve ads in its app, no­tably Apple News. John Gruber, writ­ing in Daring Fireball said at the time:

If you told me that the ads in Apple News have been sold by Taboola for the last few years, I'd have said, "Oh, that makes sense." Because the ads in Apple News, at least the ones I see, already look like chumbox Taboola ads. Even worse, they're incredibly repetitious.

I use Apple News to keep up on top­ics that I don’t find in sources I pay for (The Guardian and The New York Times). But there’s no way I’m go­ing to pay the ex­or­bi­tant price Apple wants for Apple News+ — £13 — be­cause, while you get more pub­li­ca­tions, you still get ads.

And those ads have got­ten worse re­cently. Many if not most of them look like and prob­a­bly are scams. Here are a few ex­am­ples from Apple News to­day.

Here are three ads that are scammy; the first two were clearly gen­er­ated by AI, and the third may have been cre­ated by AI.

Why are they scams? When I searched do­main in­for­ma­tion for the do­mains, I found that they were reg­is­tered very re­cently.

This re­cent reg­is­tra­tion does­n’t nec­es­sar­ily mean they are scams, but they don’t in­spire much con­fi­dence.
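
If you want to run the same check yourself, a WHOIS lookup takes a few lines. This sketch uses the python-whois package, and the domain is a stand-in rather than one of the advertisers mentioned here.

```python
# Quick check of how recently a domain was registered, using the python-whois
# package (pip install python-whois). Replace the domain with the one you want
# to inspect; creation_date may be None or a list depending on the registrar.
from datetime import datetime, timezone
import whois

record = whois.whois("example-shop.com")
created = record.creation_date
if isinstance(created, list):
    created = created[0]

age_days = (datetime.now(timezone.utc) - created.replace(tzinfo=timezone.utc)).days
print(f"Registered {age_days} days ago; treat anything only weeks old with suspicion.")
```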

Here's one example. This ad from Tidenox, whose website says "I am retiring," shows a photo of an elderly woman, who says, "For 26 years, Tidenox has been port of your journey in creating earth and comfort at home." The image of the retiring owner is probably made by AI. (Update: someone on Hacker News pointed out the partly masked Google Gemini logo on the bottom right. I hadn't spotted that, in part because I don't use any AI image generation tools.)

These fake "going out of business" ads have been around for a few years, and even the US Better Business Bureau warns about them, as they take people's money and then shut down. Does Apple care? Does Taboola care? Does Apple care that Taboola serves ads like this? My guess: no, no, and no.

Note the reg­is­tra­tion date for the tide­nox.com do­main. It’s nowhere near 26 years old, and it’s reg­is­tered in China:

Shame on Apple for cre­at­ing a hon­ey­pot for scam ads in what they con­sider to be a pre­mium news ser­vice. This com­pany can­not be trusted with ads in its prod­ucts any more.

...

Read the original on kirkville.com »

7 875 shares, 33 trendiness

SpaceX

...

Read the original on www.spacex.com »

8 864 shares, 28 trendiness

Notepad++ Hijacked by State-Sponsored Hackers

...

Read the original on notepad-plus-plus.org »

9 844 shares, 34 trendiness

My AI Adoption Journey

My ex­pe­ri­ence adopt­ing any mean­ing­ful tool is that I’ve nec­es­sar­ily gone through three phases: (1) a pe­riod of in­ef­fi­ciency (2) a pe­riod of ad­e­quacy, then fi­nally (3) a pe­riod of work­flow and life-al­ter­ing dis­cov­ery.

In most cases, I have to force my­self through phase 1 and 2 be­cause I usu­ally have a work­flow I’m al­ready happy and com­fort­able with. Adopting a tool feels like work, and I do not want to put in the ef­fort, but I usu­ally do in an ef­fort to be a well-rounded per­son of my craft.

This is my jour­ney of how I found value in AI tool­ing and what I’m try­ing next with it. In an ocean of overly dra­matic, hyped takes, I hope this rep­re­sents a more nu­anced, mea­sured ap­proach to my views on AI and how they’ve changed over time.

Immediately cease try­ing to per­form mean­ing­ful work via a chat­bot (e.g. ChatGPT, Gemini on the web, etc.). Chatbots have real value and are a daily part of my AI work­flow, but their util­ity in cod­ing is highly lim­ited be­cause you’re mostly hop­ing they come up with the right re­sults based on their prior train­ing, and cor­rect­ing them in­volves a hu­man (you) to tell them they’re wrong re­peat­edly. It is in­ef­fi­cient.

I think every­one’s first ex­pe­ri­ence with AI is a chat in­ter­face. And I think every­one’s first ex­pe­ri­ence try­ing to code with AI has been ask­ing a chat in­ter­face to write code.

While I was still a heavy AI skeptic, my first "oh wow" moment was pasting a screenshot of Zed's command palette into Gemini, asking it to reproduce it with SwiftUI, and being truly flabbergasted that it did it very well. The command palette that ships for macOS in Ghostty today is only very lightly modified from what Gemini produced for me in seconds.

But when I tried to re­pro­duce that be­hav­ior for other tasks, I was left dis­ap­pointed. In the con­text of brown­field pro­jects, I found the chat in­ter­face pro­duced poor re­sults very of­ten, and I found my­self very frus­trated copy­ing and past­ing code and com­mand out­put to and from the in­ter­face. It was very ob­vi­ously far less ef­fi­cient than me do­ing the work my­self.

To find value, you must use an agent. An agent is the industry-adopted term for an LLM that can chat and invoke external behavior in a loop.¹

At a bare min­i­mum, the agent must have the abil­ity to: read files, ex­e­cute pro­grams, and make HTTP re­quests.
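
Stripped to its bones, that loop is only a few lines. The sketch below is illustrative, not any particular product's internals; the tool set and the message format are placeholder assumptions.

```python
# Minimal agent loop: the model proposes a tool call, we run it, feed the result
# back, and repeat until it produces a plain-text answer. Illustrative only;
# real agents wrap this with far more care (sandboxing, schemas, retries, ...).
import json
import subprocess

TOOLS = {
    "read_file": lambda path: open(path).read(),
    "run": lambda cmd: subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout,
}

def agent(task, llm):
    """`llm` is any callable mapping a message history to the model's next reply."""
    messages = [{"role": "user", "content": task}]
    while True:
        reply = llm(messages)                      # model decides: answer, or call a tool
        messages.append({"role": "assistant", "content": reply})
        try:
            call = json.loads(reply)               # e.g. {"tool": "run", "arg": "pytest -x"}
        except json.JSONDecodeError:
            return reply                           # plain text means it's done
        if not isinstance(call, dict):
            return reply
        result = TOOLS[call["tool"]](call["arg"])
        messages.append({"role": "user", "content": f"tool output:\n{result}"})
```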

In the next phase of my journey, I tried Claude Code. I'll cut to the chase: I initially wasn't impressed. I just wasn't getting good results out of my sessions. I felt I had to touch up everything it produced, and this process was taking more time than if I had just done it myself. I read blog posts, watched videos, but just wasn't that impressed.

Instead of giving up, I forced myself to reproduce all my manual commits with agentic ones. I literally did the work twice. I'd do the work manually, and then I'd fight an agent to produce identical results in terms of quality and function (without it being able to see my manual solution, of course).

This was ex­cru­ci­at­ing, be­cause it got in the way of sim­ply get­ting things done. But I’ve been around the block with non-AI tools enough to know that fric­tion is nat­ural, and I can’t come to a firm, de­fen­si­ble con­clu­sion with­out ex­haust­ing my ef­forts.

But, ex­per­tise formed. I quickly dis­cov­ered for my­self from first prin­ci­ples what oth­ers were al­ready say­ing, but dis­cov­er­ing it my­self re­sulted in a stronger fun­da­men­tal un­der­stand­ing.

* Break down sessions into separate, clear, actionable tasks. Don't try to "draw the owl" in one mega session.

* For vague requests, split the work into separate planning vs. execution sessions.

* If you give an agent a way to verify its work, it more often than not fixes its own mistakes and prevents regressions.

More gen­er­ally, I also found the edges of what agents — at the time — were good at, what they weren’t good at, and for the tasks they were good at how to achieve the re­sults I wanted.

All of this led to sig­nif­i­cant ef­fi­ciency gains, to the point where I was start­ing to nat­u­rally use agents in a way that I felt was no slower than do­ing it my­self (but I still did­n’t feel it was any faster, since I was mostly babysit­ting an agent).

The negative space here is worth reiterating: part of the efficiency gains came from understanding when not to reach for an agent. Using an agent for something it'll likely fail at is obviously a big waste of time, and having the knowledge to avoid that completely leads to time savings.²

At this stage, I was find­ing ad­e­quate value with agents that I was happy to use them in my work­flow, but still did­n’t feel like I was see­ing any net ef­fi­ciency gains. I did­n’t care though, I was con­tent at this point with AI as a tool.

To try to find some efficiency, I next started a new pattern: block out the last 30 minutes of every day to kick off one or more agents.

My hy­poth­e­sis was that per­haps I could gain some ef­fi­ciency if the agent can make some pos­i­tive progress in the times I can’t work any­ways. Basically: in­stead of try­ing to do more in the time I have, try to do more in the time I don’t have.

Similar to the pre­vi­ous task, I at first found this both un­suc­cess­ful and an­noy­ing. But, I once again quickly found dif­fer­ent cat­e­gories of work that were re­ally help­ful:

* Deep research sessions where I'd ask agents to survey some field, such as finding all libraries in a specific language with a specific license type and producing multi-page summaries for each on their pros, cons, development activity, social sentiment, etc.

* Parallel agents attempting different vague ideas I had but didn't have time to get started on. I didn't expect them to produce something I'd ever ship here, but perhaps they could illuminate some unknown unknowns when I got to the task the next day.

* Issue and PR triage/review. Agents are good at using gh (the GitHub CLI), so I manually scripted a quick way to spin up a bunch in parallel to triage issues (see the sketch after this list). I would NOT allow agents to respond; I just wanted reports the next day to try to guide me towards high-value or low-effort tasks.
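
For the triage case, the shape of such a script is roughly the sketch below. The agent command is a placeholder, since every harness spells its CLI differently, and this is not the author's actual script.

```python
# Sketch: fan out one read-only triage agent per open issue and collect reports.
# AGENT_CMD is a placeholder for whatever agent CLI you use.
import json
import subprocess
from concurrent.futures import ThreadPoolExecutor

AGENT_CMD = ["my-agent", "--print"]   # placeholder agent CLI

def list_issues(limit=10):
    out = subprocess.run(
        ["gh", "issue", "list", "--limit", str(limit), "--json", "number,title"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def triage(issue):
    prompt = (f"Read issue #{issue['number']} (\"{issue['title']}\") with `gh issue view`, "
              "assess effort and value, and write a short report. Do NOT post any comments.")
    report = subprocess.run(AGENT_CMD + [prompt], capture_output=True, text=True).stdout
    with open(f"triage-{issue['number']}.md", "w") as f:
        f.write(report)

with ThreadPoolExecutor(max_workers=4) as pool:
    pool.map(triage, list_issues())
```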

To be clear, I did not go as far as others, who have agents running in loops all night. In most cases, agents completed their tasks in less than half an hour. But in the latter part of the working day I'm usually tired, coming out of flow, and too personally inefficient, so shifting my effort to spinning up these agents gave me a "warm start" the next morning that got me working more quickly than I would have otherwise.

I was happy, and I was start­ing to feel like I was do­ing more than I was do­ing prior to AI, if only slightly.

By this point, I was get­ting very con­fi­dent about what tasks my AI was and was­n’t great at. I had re­ally high con­fi­dence with cer­tain tasks that the AI would achieve a mostly-cor­rect so­lu­tion. So the next step on my jour­ney was: let agents do all of that work while I worked on other tasks.

More specif­i­cally, I would start each day by tak­ing the re­sults of my prior night’s triage agents, fil­ter them man­u­ally to find the is­sues that an agent will al­most cer­tainly solve well, and then keep them go­ing in the back­ground (one at a time, not in par­al­lel).

Meanwhile, I’d work on some­thing else. I was­n’t go­ing to so­cial me­dia (any more than usual with­out AI), I was­n’t watch­ing videos, etc. I was in my own, nor­mal, pre-AI deep think­ing mode work­ing on some­thing I wanted to work on or had to work on.

Very im­por­tant at this stage: turn off agent desk­top no­ti­fi­ca­tions.

Context switch­ing is very ex­pen­sive. In or­der to re­main ef­fi­cient, I found that it was my job as a hu­man to be in con­trol of when I in­ter­rupt the agent, not the other way around. Don’t let the agent no­tify you. During nat­ural breaks in your work, tab over and check on it, then carry on.

Importantly, I think the "work on something else" part helps counteract the highly publicized Anthropic skill formation paper. Well, you're trading off: not forming skills for the tasks you're delegating to the agent while continuing to form skills naturally in the tasks you continue to work on manually.

At this point I was firmly in the "no way I can go back" territory. I felt more efficient, but even if I wasn't, the thing I liked the most was that I could now focus my coding and thinking on tasks I really loved while still adequately completing the tasks I didn't.

At risk of stat­ing the ob­vi­ous: agents are much more ef­fi­cient when they pro­duce the right re­sult the first time, or at worst pro­duce a re­sult that re­quires min­i­mal touch-ups. The most sure-fire way to achieve this is to give the agent fast, high qual­ity tools to au­to­mat­i­cally tell it when it is wrong.

I don't know if there is a broad industry-accepted term for this yet, but I've taken to calling it "harness engineering." It is the idea that any time you find an agent making a mistake, you take the time to engineer a solution such that the agent never makes that mistake again. I don't need to invent any new terms here; if another one exists, I'll jump on the bandwagon.

This comes in two forms:

Better implicit prompting (AGENTS.md). For simple things, like the agent repeatedly running the wrong commands or finding the wrong APIs, update the AGENTS.md (or equivalent). Here is an example from Ghostty. Each line in that file is based on a bad agent behavior, and it almost completely resolved them all.

Actual, programmed tools. For example, scripts to take screenshots, run filtered tests, and so on. This is usually paired with an AGENTS.md change to let the agent know these tools exist.
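As one concrete example of such a programmed tool (a hypothetical sketch, not something from the post): a tiny filtered test runner the agent can call to get fast, unambiguous feedback. The make test FILTER=... invocation is a placeholder for whatever your project's real test runner is, and you'd mention this script in AGENTS.md so the agent knows it exists.

#!/usr/bin/env python3
# Sketch of a "programmed tool" for agents: run only the tests matching a
# pattern and print a terse pass/fail signal. The `make test FILTER=...`
# invocation is a placeholder; substitute your project's actual test command.
import subprocess
import sys

def run_filtered_tests(pattern: str) -> int:
    result = subprocess.run(
        ["make", "test", f"FILTER={pattern}"],  # placeholder test command
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        print(f"OK: tests matching '{pattern}' passed")
    else:
        # Show only the tail of the output so the agent gets a short, clear signal.
        print(f"FAIL: tests matching '{pattern}' failed")
        print(result.stdout[-2000:])
        print(result.stderr[-2000:])
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_filtered_tests(sys.argv[1] if len(sys.argv) > 1 else ""))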

This is where I’m at to­day. I’m mak­ing an earnest ef­fort when­ever I see an agent do a Bad Thing to pre­vent it from ever do­ing that bad thing again. Or, con­versely, I’m mak­ing an earnest ef­fort for agents to be able to ver­ify they’re do­ing a Good Thing.

Simultaneously with step 5, I'm also operating under the goal of having an agent running at all times. If an agent isn't running, I ask myself "is there something an agent could be doing for me right now?"

I particularly like to combine this with slower, more thoughtful models like Amp's deep mode (which is basically just GPT-5.2-Codex), which can take upwards of 30 minutes to make small changes. The flip side is that it tends to produce very good results.

I’m not [yet?] run­ning mul­ti­ple agents, and cur­rently don’t re­ally want to.

I find hav­ing the one agent run­ning is a good bal­ance for me right now be­tween be­ing able to do deep, man­ual work I find en­joy­able, and babysit­ting my kind of stu­pid and yet mys­te­ri­ously pro­duc­tive ro­bot friend.

The "have an agent running at all times" goal is still just a goal. I'd say right now I'm maybe effective at having a background agent running 10 to 20% of a normal working day. But, I'm actively working to improve that.

And that’s where I’m at to­day.

Through this journey, I've personally reached a point where I'm having success with modern AI tooling, and I believe I'm approaching it with the proper measured view that is grounded in reality. I really don't care one way or the other if AI is here to stay3; I'm a software craftsman who just wants to build stuff for the love of the game.

The whole landscape is moving so rapidly that I'm sure I'll look back at this post very quickly and laugh at my naivete. But, as they say, if you can't be embarrassed by your past self, you're probably not growing. I just hope I'll grow in the right direction!

I have no skin in the game here4, and there are of course other reasons beyond utility to avoid using AI. I fully respect anyone's individual decisions regarding it. I'm not here to convince you! For those interested, I just wanted to share my personal approach to navigating these new tools and give a glimpse of how I approach new tools in general, regardless of AI.

...

Read the original on mitchellh.com »

10 807 shares, 32 trendiness

Defeating a 40-year-old copy protection dongle


That’s right — this lit­tle de­vice is what stood be­tween me and the abil­ity to run an even older piece of soft­ware that I re­cently un­earthed dur­ing an ex­pe­di­tion of soft­ware ar­chae­ol­ogy.

For a bit more back­ground, I was re­cently in­volved in help­ing a friend’s ac­count­ing firm to move away from us­ing an ex­tremely legacy soft­ware pack­age that they had locked them­selves into us­ing for the last four decades.

This software was built using a programming language called RPG ("Report Program Generator"), which is older than COBOL (!), and was used with IBM's midrange computers such as the System/3, System/32, and all the way up to the AS/400. Apparently, RPG was subsequently ported to MS-DOS, so that the same software tools built with RPG could run on personal computers, which is how we ended up here.

This accounting firm was actually using a Windows 98 computer (yep, in 2026) and running the RPG software inside a DOS console window. And it turned out that, in order to run this software, a special hardware copy-protection dongle must be attached to the computer's parallel port! This was a relatively common practice in those days, particularly with "enterprise" software vendors who wanted to protect their very important™ software from unauthorized use.

Sadly, most of the text and markings on the dongle's label have been worn or scratched off, but we can make out several clues:

The words Stamford, CT, and what's very likely the logo of a company called "Software Security Inc". The only evidence for the existence of this company is this record of them exhibiting their wares at SIGGRAPH conferences in the early 1990s, as well as several patents issued to them, relating to software protection.

A word that seems to say RUNTIME, which will be­come clear in a bit.

My first course of ac­tion was to take a disk im­age of the Windows 98 PC that was run­ning this soft­ware, and get it run­ning in an em­u­la­tor, so that we could see what the soft­ware ac­tu­ally does, and per­haps ex­port the data from this soft­ware into a more mod­ern for­mat, to be used with mod­ern ac­count­ing tools. But of course all of this re­quires the hard­ware don­gle; none of the ac­count­ing tools seem to work with­out it plugged in.

Before do­ing any­thing, I looked through the disk im­age for any ad­di­tional in­ter­est­ing clues, and found plenty of fas­ci­nat­ing (and ar­chae­o­log­i­cally sig­nif­i­cant?) stuff:

We’ve got a com­piler for the RPG II lan­guage (excellent!), made by a com­pany called Software West Inc.

Even bet­ter, there are two ver­sions of the RPG II com­piler, re­leased on var­i­ous dates in the 1990s by Software West.

We've got the complete source code of the accounting software, written in RPG. It looks like the full accounting package consists of numerous RPG modules, with a gnarly combination of DOS batch files for orchestrating them, all set up as a "menu" system for the user to navigate using number combinations. Clearly the author of this accounting system was originally an IBM mainframe programmer, and insisted on bringing those skills over to DOS, with mixed results.

I be­gan by play­ing around with the RPG com­piler in iso­la­tion, and I learned very quickly that it’s the RPG com­piler it­self that re­quires the hard­ware don­gle, and then the com­piler au­to­mat­i­cally in­jects the same copy-pro­tec­tion logic into any ex­e­cuta­bles it gen­er­ates. This ex­plains the text that seems to say RUNTIME on the don­gle.

The compiler consists of a few executable files, notably RPGC.EXE, which is the compiler, and SEU.EXE, which is a source editor ("Source Entry Utility"). Here's what we get when we launch SEU without the dongle, after a couple of seconds:

A bit rude, but this gives us an im­por­tant clue: this pro­gram must be try­ing to com­mu­ni­cate over the par­al­lel port over the course of a few sec­onds (which could give us an op­por­tu­nity to pause it for de­bug­ging, and see what it’s do­ing dur­ing that time), and then ex­its with a mes­sage (which we can now find in a dis­as­sem­bly of the pro­gram, and trace how it gets there).

A great tool for dis­as­sem­bling ex­e­cuta­bles of this vin­tage is Reko. It un­der­stands 16-bit real mode ex­e­cuta­bles, and even at­tempts to de­com­pile them into read­able C code that cor­re­sponds to the dis­as­sem­bly.

And so, looking at the decompiled/disassembled code in Reko, I expected to find in and out instructions, which would be the telltale sign of the program trying to communicate with the parallel port through the PC's I/O ports. However… I didn't see an in or out instruction anywhere! But then I noticed something: Reko disassembled the executable into two "segments": 0800 and 0809, and I was only looking at segment 0809.

If we look at seg­ment 0800, we see the smok­ing gun: in and out in­struc­tions, mean­ing that the copy-pro­tec­tion rou­tine is def­i­nitely here, and best of all, the en­tire code seg­ment is a mere 0x90 bytes, which sug­gests that the en­tire rou­tine should be pretty easy to un­ravel and un­der­stand. For some rea­son, Reko was not able to de­com­pile this code into a C rep­re­sen­ta­tion, but it still pro­duced a dis­as­sem­bly, which will work just fine for our pur­poses. Maybe this was a prim­i­tive form of ob­fus­ca­tion from those early days, which is now con­fus­ing Reko and pre­vent­ing it from as­so­ci­at­ing this chunk of code with the rest of the pro­gram… who knows.

Here is a GitHub Gist with the dis­as­sem­bly of this code, along with my an­no­ta­tions and notes. My x86 as­sem­bly knowl­edge is a lit­tle rusty, but here is the gist of what this code does:

It's definitely a single self-contained routine, intended to be called using a "far" CALL instruction, since it returns with a RETF instruction.

It be­gins by de­tect­ing the ad­dress of the par­al­lel port, by read­ing the BIOS data area. If the com­puter has more than one par­al­lel port, the don­gle must be con­nected to the first par­al­lel port (LPT1).

It per­forms a loop where it writes val­ues to the data reg­is­ter of the par­al­lel port, and then reads the sta­tus reg­is­ter, and ac­cu­mu­lates re­sponses in the BH and BL reg­is­ters.

At the end of the routine, the "result" of the whole procedure is stored in the BX register (BH and BL together), which will presumably be "verified" by the caller of the routine.

Very importantly, there doesn't seem to be any "input" into this routine. It doesn't pop anything from the stack, nor does it care about any register values passed into it. Which can only mean that the result of this routine is completely constant! No matter what complicated back-and-forth it does with the dongle, the result of this routine should always be the same.
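To make that structure concrete, here is a rough Python model of the routine as described above. This is not a translation of the actual disassembly: the challenge values and the way responses are folded into BL are placeholders. The one real hardware detail is that the LPT1 base I/O address lives in the BIOS data area at 0040:0008h.

# Rough model of the copy-protection routine's structure (not the real code).
# The challenge values and the BL accumulation below are placeholders; only the
# BIOS data area detail is standard, and BH = 76h is what the post later finds.

def read_bios_lpt1_base() -> int:
    # On real hardware this is a 16-bit read from segment 0040h, offset 0008h.
    # Here we just return the common default LPT1 base address.
    return 0x378

def outb(port: int, value: int) -> None:
    pass  # stand-in for the OUT instruction (writes the parallel port data register)

def inb(port: int) -> int:
    return 0  # stand-in for the IN instruction (reads the parallel port status register)

def dongle_check() -> int:
    # Note: no arguments at all, mirroring the observation that the routine takes no input.
    base = read_bios_lpt1_base()
    data_reg, status_reg = base, base + 1
    bh, bl = 0x76, 0x00              # the post later finds BH is hard-coded to 76h
    for value in range(8):           # placeholder challenge sequence
        outb(data_reg, value)        # write a value to the data register
        response = inb(status_reg)   # read the dongle's response from the status register
        bl = (bl + response) & 0xFF  # placeholder accumulation into BL
    return (bh << 8) | bl            # the routine's "result" comes back in BX = BH:BL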

With the knowl­edge that this rou­tine must exit with some magic value stored in BX, we can now patch the first few bytes of the rou­tine to do just that! Not yet know­ing which value to put in BX, let’s start with 1234:

BB 34 12 MOV BX, 1234h

CB RETF

Only the first four bytes need patching — set BX to our desired value, and get out of there (RETF). Running the patched executable with these new bytes still fails (expectedly) with the same message of "No dongle, no edit", but it fails immediately, instead of after several seconds of talking to the parallel port. Progress!

Stepping through the dis­as­sem­bly more closely, we get an­other ma­jor clue: The only value that BH can be at the end of the rou­tine is 76h (this is hard-coded into the rou­tine). So, our to­tal value for the magic num­ber in BX must be of the form 76xx. In other words, only the BL value re­mains un­known:

BB __ 76 MOV BX, 76__h

CB RETF

Since BL is an 8-bit reg­is­ter, it can only have 256 pos­si­ble val­ues. And what do we do when we have 256 com­bi­na­tions to try? Brute force it! I whipped up a script that plugs a value into that par­tic­u­lar byte (from 0 to 255) and pro­gram­mat­i­cally launches the ex­e­cutable in DosBox, and ob­serves the out­put. Lo and be­hold, it worked! The brute forc­ing did­n’t take long at all, be­cause the cor­rect num­ber turned out to be… 6. Meaning that the to­tal magic num­ber in BX should be 7606h:

BB 06 76 MOV BX, 7606h

CB RETF

Bingo!
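For the curious, here's a rough sketch of what such a brute-force harness could look like. The post doesn't show the actual script, so the byte offset of the patch, the DOSBox invocation, and the failure check ("No dongle" appearing in redirected output) are all assumptions.

#!/usr/bin/env python3
# Sketch of a brute-force harness for the unknown BL byte (assumptions noted above).
import subprocess
from pathlib import Path

workdir = Path("work")
workdir.mkdir(exist_ok=True)
out_file = workdir / "OUT.TXT"

# A copy of SEU.EXE already patched to BB xx 76 CB at the start of the routine.
original = Path("SEU_PATCHED.EXE").read_bytes()
PATCH_OFFSET = 0x1234  # hypothetical file offset of the unknown immediate byte (xx)

for bl in range(256):
    exe = bytearray(original)
    exe[PATCH_OFFSET] = bl
    (workdir / "SEU.EXE").write_bytes(exe)
    out_file.unlink(missing_ok=True)

    # Run the candidate inside DOSBox, redirecting its output to a file.
    try:
        subprocess.run([
            "dosbox",
            "-c", f"mount c {workdir}",
            "-c", "c:",
            "-c", "SEU.EXE > OUT.TXT",
            "-c", "exit",
        ], check=False, timeout=60)
    except subprocess.TimeoutExpired:
        pass  # SEU may have launched its editor and is waiting for input

    output = out_file.read_text(errors="replace") if out_file.exists() else ""
    if "No dongle" not in output:
        print(f"Candidate found: BL = {bl:#04x} (BX = 0x76{bl:02X})")
        break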

And then, pro­ceed­ing to ex­am­ine the other ex­e­cutable files in the com­piler suite, the par­al­lel port rou­tine turns out to be ex­actly the same. All of the ex­e­cuta­bles have the ex­act same copy pro­tec­tion logic, as if it was rub­ber-stamped onto them. In fact, when the com­piler (RPGC.EXE) com­piles some RPG source code, it seems to copy the par­al­lel port rou­tine from it­self into the com­piled pro­gram. That’s right: the patched ver­sion of the com­piler will pro­duce ex­e­cuta­bles with the same patched copy pro­tec­tion rou­tine! Very con­ve­nient.

I must say, this copy pro­tec­tion mech­a­nism seems a bit… sim­plis­tic? A hard­ware don­gle that just passes back a con­stant num­ber? Defeatable with a four-byte patch? Is this re­ally wor­thy of a patent? But who am I to pass judg­ment. It’s pos­si­ble that I haven’t fully un­der­stood the logic, and the copy pro­tec­tion will some­how re-sur­face in an­other way. It’s also pos­si­ble that the cre­ators of the RPG com­piler (Software West, Inc) did­n’t take proper ad­van­tage of the hard­ware don­gle, and used it in a way that is so eas­ily by­passed.

In any case, Software West’s RPG II com­piler is now free from the con­straint of the par­al­lel port don­gle! And at some point soon, I’ll work on purg­ing any PII from the com­piler di­rec­to­ries, and make this com­piler avail­able as an ar­ti­fact of com­put­ing his­tory. It does­n’t seem to be avail­able any­where else on the web. If any­one read­ing this was as­so­ci­ated with Software West Inc, feel free to get in touch — I have many ques­tions!

...

Read the original on dmitrybrant.com »
