10 interesting stories served every morning and every evening.




1 648 shares, 41 trendiness

JavaScript™

Update 2025/02/04: Oracle asks the USPTO to dis­miss our pe­ti­tion. Read more

Update 2024/11/22: We’ve filed a pe­ti­tion to can­cel with the USPTO. Read more


You have long ago abandoned the JavaScript trademark, and it is causing widespread, unwarranted confusion and disruption.

JavaScript is the world’s most popular programming language, powering websites everywhere. Yet, few of the millions who program in it realize that JavaScript is a trademark you, Oracle, control. The disconnect is glaring: JavaScript has become a general-purpose term used by countless individuals and companies, independent of any Oracle product.

Oracle’s hold on the JavaScript trademark clearly fits the legal definition of trademark abandonment. A previous blog post addressed this issue, requesting that you, Oracle, release the trademark. Unsurprisingly, the request was met with silence. It is therefore time to take active steps to bring the JavaScript trademark into the public domain, where it belongs.

A mark shall be deemed to be “abandoned” if either of the following occurs:

(1) When its use has been discontinued with intent not to resume such use. Intent not to resume may be inferred from circumstances. Nonuse for 3 consecutive years shall be prima facie evidence of abandonment. “Use” of a mark means the bona fide use of such mark made in the ordinary course of trade, and not made merely to reserve a right in a mark.

(2) When any course of conduct of the owner, including acts of omission as well as commission, causes the mark to become the generic name for the goods or services on or in connection with which it is used or otherwise to lose its significance as a mark. Purchaser motivation shall not be a test for determining abandonment under this paragraph.

In the case of JavaScript, both criteria apply.

The JavaScript trademark is currently held by Oracle America, Inc. (US Serial Number: 75026640, US Registration Number: 2416017). How did this come to be?

In 1995, Netscape partnered with Sun Microsystems to create interactive websites. Brendan Eich famously spent only 10 days creating the first version of JavaScript, a dynamic programming language with a rough syntactic lineage from Sun’s Java language. As a result of this partnership, Sun held the JavaScript trademark. In 2009, Oracle acquired Sun Microsystems and, with it, the JavaScript trademark.

The trademark is simply a relic of this acquisition. Neither Sun nor Oracle has ever built a product using the mark. Legal staff, year after year, have renewed the trademark without question. It’s likely that only a few within Oracle even know they possess the JavaScript trademark, and even those who do probably don’t understand the frustration it causes within the developer community.

Oracle has abandoned the JavaScript trademark through nonuse.

Oracle has never seriously offered a product called JavaScript. In the 1990s and early 2000s, Netscape Navigator, which supported JavaScript as a browser feature, was a key player. However, Netscape’s usage and influence faded by 2003, and the browser saw its final release in 2008. JavaScript, meanwhile, evolved into a widely used, independent programming language, embedded in multiple browsers, entirely separate from Oracle.

The most recent specimen, filed with the USPTO in 2019, references nodejs.org (a project created by Ryan Dahl, the author of this letter) and Oracle’s JavaScript Extension Toolkit (JET). But Node.js is not an Oracle product, and JET is merely a set of JavaScript libraries for Oracle services, particularly Oracle Cloud. There are millions of JavaScript libraries; JET is not special.

Oracle also offers GraalVM, a JVM that can execute JavaScript, among other languages. But GraalVM is far from a canonical JavaScript implementation; engines like V8, JavaScriptCore, and SpiderMonkey hold that role. GraalVM’s product page doesn’t even mention “JavaScript”; you must dig into the documentation to find its support.

Oracle’s use of JavaScript in GraalVM and JET does not reflect genuine use of the trademark. These weak connections do not satisfy the requirement for consistent, real-world use in trade.

A mark can also be considered abandoned if it becomes a generic term.

In 1996, Netscape announced a meeting of the ECMA International standards organization to standardize the JavaScript programming language. Sun (now Oracle) refused to give up the “JavaScript” mark for this use though, so it was decided that the language would be called “ECMAScript” instead. (Microsoft happily offered up “JScript”, but no one else wanted that.) Brendan Eich, the creator of JavaScript and a co-signatory of this letter, wrote in 2006 that ECMAScript was always “an unwanted trade name that sounds like a skin disease.”

Ecma International formed TC39, a technical steering committee, which publishes ECMA-262, the specification for JavaScript. This committee includes participants from all major browsers, like Google’s Chrome, Apple’s Safari, and Mozilla’s Firefox, as well as representatives from server-side JavaScript runtimes like Node.js and Deno.

Oracle’s ownership of the JavaScript trademark only causes confusion. The term “JavaScript” is used freely by millions of developers, companies, and organizations around the world, with no interference from Oracle. Oracle has done nothing to assert its rights over the JavaScript name, likely because they do not believe their claim to the mark would hold up in court. Unlike typical trademark holders who protect their trademarks by extracting licensing fees or enforcing usage restrictions, Oracle has allowed the JavaScript name to be used by anyone. This inaction further supports the argument that the trademark has lost its significance and has become generic.

Programmers working with JavaScript have formed innumerable community organizations. These organizations, like the standards bodies, have been forced to painstakingly avoid naming the programming language they are built around—for example, JSConf. Sadly, without risking a legal trademark challenge against Oracle, there can be no “JavaScript Conference” nor a “JavaScript Specification.” The world’s most popular programming language cannot even have a conference in its name.

There is a vast misalignment between the trademark’s ownership and its widespread, generic use.

By law, a trademark is abandoned if it is either not used or becomes a generic term. Both apply to JavaScript.

It’s time for the USPTO to cancel the JavaScript trademark and recognize it as a generic name for the world’s most popular programming language, which has multiple implementations across the industry.

Oracle, you likely have no real business interest in the mark. It’s renewed simply because legal staff are obligated to renew all trademarks, regardless of their relevance or use.

We urge you to release the mark into the public domain. However, asking nicely has been tried before, and it was met with silence. If you do not act, we will challenge your ownership by filing a petition for cancellation with the USPTO.

...

Read the original on javascript.tm »

2 488 shares, 28 trendiness

How Elites Could Shape Mass Preferences as AI Reduces Persuasion Costs

...

Read the original on arxiv.org »

3 459 shares, 64 trendiness

Why are 38 percent of Stanford students saying they're disabled?

The stu­dents at America’s elite uni­ver­si­ties are sup­posed to be the smartest, most promis­ing young peo­ple in the coun­try. And yet, shock­ing per­cent­ages of them are claim­ing aca­d­e­mic ac­com­mo­da­tions de­signed for stu­dents with learn­ing dis­abil­i­ties.

In an ar­ti­cle pub­lished this week in The Atlantic, ed­u­ca­tion re­porter Rose Horowitch lays out some shock­ing num­bers. At Brown and Harvard, 20 per­cent of un­der­grad­u­ate stu­dents are dis­abled. At Amherst College, that’s 34 per­cent. At Stanford University, it’s a galling 38 per­cent. Most of these stu­dents are claim­ing men­tal health con­di­tions and learn­ing dis­abil­i­ties, like anx­i­ety, de­pres­sion, and ADHD.

Obviously, some­thing is off here. The idea that some of the most elite, se­lec­tive uni­ver­si­ties in America—schools that re­quire 99th per­centile SATs and ster­ling es­says—would be ed­u­cat­ing large num­bers of gen­uinely learn­ing dis­abled stu­dents is clearly bo­gus. A stu­dent with real cog­ni­tive strug­gles is much more likely to end up in com­mu­nity col­lege, or not in higher ed­u­ca­tion at all, right?

The professors Horowitch interviewed largely back up this theory. “You hear ‘students with disabilities’ and it’s not kids in wheelchairs,” one professor told Horowitch. “It’s just not. It’s rich kids getting extra time on tests.” Talented students get to college, start struggling, and run for a diagnosis to avoid bad grades. Ironically, the very schools that cognitively challenged students are most likely to attend—community colleges—have far lower rates of disabled students, with only three to four percent of such students getting accommodations.

To be fair, some of the stu­dents re­ceiv­ing these ac­com­mo­da­tions do need them. But the cur­rent lan­guage of the Americans with Disabilities Act (ADA) al­lows stu­dents to get ex­pan­sive ac­com­mo­da­tions with lit­tle more than a doc­tor’s note.

While some students are no doubt seeking these accommodations as semi-conscious cheaters, I think most genuinely identify with the mental health condition they’re using to get extra time on tests. Over the past few years, there’s been a rising push to see mental health and neurodevelopmental conditions as not just a medical fact, but an identity marker. Will Lindstrom, the director of the Regents’ Center for Learning Disorders at the University of Georgia, told Horowitch that he sees a growing number of students with this perspective. “It’s almost like it’s part of their identity,” Lindstrom told her. “By the time we see them, they’re convinced they have a neurodevelopmental disorder.”

What’s dri­ving this trend? Well, the way con­di­tions like ADHD, autism, and anx­i­ety get talked about on­line—the place where most young peo­ple first learn about these con­di­tions—is prob­a­bly a con­tribut­ing fac­tor. Online cre­ators tend to paint a very broad pic­ture of the con­di­tions they de­scribe. A quick scroll of TikTok re­veals cre­ators la­bel­ing every­thing from al­ways wear­ing head­phones, to be­ing bad at man­ag­ing your time, to doo­dling in class as a sign that some­one may have a di­ag­nos­able con­di­tion. According to these videos, who is­n’t dis­abled?

The result is a deeply distorted view of “normal.” If ever struggling to focus or experiencing boredom is a sign you have ADHD, the implication is that a “normal,” nondisabled person has essentially no problems. A “neurotypical” person, the thinking goes, can churn out a 15-page paper with no hint of procrastination, maintain perfect focus during a boring lecture, and never experience social anxiety or awkwardness. This view is buffeted by the current way many of these conditions are diagnosed. As Horowitch points out, when the latest issue of the DSM, the manual psychiatrists use to diagnose patients, was released in 2013, it significantly lowered the bar for an ADHD diagnosis. When the definition of these conditions is set so liberally, it’s easy to imagine a highly intelligent Stanford student becoming convinced that any sign of academic struggle proves they’re learning disabled, and any problems making friends are a sign they have autism.

Risk-aversion, too, seems like a com­pelling fac­tor dri­ving bright stu­dents to claim learn­ing dis­abil­i­ties. Our na­tion’s most promis­ing stu­dents are also its least as­sured. So afraid of fail­ure—of bad grades, of a poorly-re­ceived es­say—they take any sign of strug­gle as a di­ag­nos­able con­di­tion. A few decades ago, a stu­dent who en­tered col­lege and found the ma­te­r­ial harder to mas­ter and their time less eas­ily man­aged than in high school would have been seen as rel­a­tively nor­mal. Now, every time she picks up her phone, a bar­rage of in­flu­encers is clam­or­ing to tell her this is a sign she has ADHD. Discomfort and dif­fi­culty are no longer per­ceived as typ­i­cal parts of grow­ing up.

In this context, it’s easy to read the rise of academic accommodations among the nation’s most intelligent students as yet another manifestation of the risk-aversion endemic in the striving children of the upper middle class. For most of the elite-college students who receive them, academic accommodations are a protection against failure and self-doubt. Unnecessary accommodations are a two-front form of cheating—they give you an unjust leg-up on your fellow students, but they also allow you to cheat yourself out of genuine intellectual growth. If you mask learning deficiencies with extra time on tests, soothe social anxiety by forgoing presentations, and neglect time management skills with deadline extensions, you might forge a path to better grades. But you’ll also find yourself less capable of tackling the challenges of adult life.

...

Read the original on reason.com »

4 408 shares, 32 trendiness

Why I Ignore The Spotlight as a Staff Engineer

Lately I’ve been read­ing Sean Goedecke’s es­says on be­ing a Staff+ en­gi­neer. His work (particularly Software en­gi­neer­ing un­der the spot­light and It’s Not Your Codebase) is ra­zor-sharp and feels painfully fa­mil­iar to any­one in Big Tech.

On pa­per, I fit the mold he de­scribes: I’m a Senior Staff en­gi­neer at Google. Yet, read­ing his work left me with a lin­ger­ing sense of un­ease. At first, I dis­missed this as cyn­i­cism. After re­flect­ing, how­ever, I re­al­ized the prob­lem was­n’t Sean’s writ­ing but my read­ing.

Sean is­n’t be­ing bleak; he is ac­cu­rately de­scrib­ing how to deal with a world where en­gi­neers are fun­gi­ble as­sets and pri­or­i­ties shift quar­terly. But my job looks noth­ing like that and I know deep down that if I tried to op­er­ate in that en­vi­ron­ment or in the way he de­scribed I’d burn out within months.

Instead I’ve fol­lowed an al­ter­nate path, one that op­ti­mizes for sys­tems over spot­lights, and stew­ard­ship over fun­gi­bil­ity.

The foun­da­tional rea­son for our di­verg­ing paths is that Sean and I op­er­ate in en­tirely dif­fer­ent worlds with dif­fer­ent laws gov­ern­ing them.

From Sean’s resume, my understanding is that he has primarily worked in product teams building for external customers. Business goals pivot quarterly, and success is measured by revenue or MAU. Optimizing for “the Spotlight” makes complete sense in this environment. Product development at big tech scale is a crowded room: VPs, PMs and UX designers all have strong opinions. To succeed, you have to be agile and ensure you are working specifically on what executives are currently looking at.

On the other hand, I’ve spent my en­tire ca­reer much more be­hind the scenes: in de­vel­oper tools and in­fra teams.

My team’s customers are thousands of engineers in Android, Chrome, and throughout Google. End users of Google products don’t even know we exist; our focus is on making sure developers have the tools to collect product and performance metrics and debug issues using detailed traces.

In this environment, our relationship with leadership is very different. We’re never “the hot project everyone wants,” so execs are not fighting to work with us. In fact, my team has historically struggled to hire PMs. The PM career ladder at Google incentivizes splashy external launches so we cannot provide good “promotion material” for them. Also, our feedback comes directly from engineers. Adding a PM in the middle causes a loss in translation, slowing down a tight, high-bandwidth feedback loop.

All of this together means our team operates “bottom-up”: instead of execs telling us “you should do X”, we figure out what we think will have the most impact for our customers and work on building those features and tools. Execs ensure that we’re actually solving these problems by considering our impact on more product-facing teams.

In the prod­uct en­vi­ron­ments Sean de­scribes, where goals pivot quar­terly and fea­tures are of­ten ex­per­i­men­tal, speed is the ul­ti­mate cur­rency. You need to ship, it­er­ate, and of­ten move on be­fore the mar­ket shifts. But in Infrastructure and Developer Experience, con­text is the cur­rency.

Treating en­gi­neers as fun­gi­ble as­sets de­stroys con­text. You might gain fresh eyes, but you lose the im­plicit knowl­edge of how sys­tems ac­tu­ally break. Stewardship, stay­ing with a sys­tem long-term, un­locks com­pound­ing re­turns that are im­pos­si­ble to achieve on a short ro­ta­tion.

The first is efficiency via pattern matching. When you stay in one domain for years, new requests are rarely truly “new.” I am not just debugging code; I am debugging the intersection of my tools and hundreds of diverse engineering teams. When a new team comes to me with a “unique” problem, I can often reach back in time: “We tried this approach in 2021 with the Camera team; here is exactly why it failed, and here is the architecture that actually works”.

But the more pow­er­ful re­turn is sys­temic in­no­va­tion. If you ro­tate teams every year, you are lim­ited to solv­ing acute bugs that are vis­i­ble right now. Some prob­lems, how­ever, only re­veal their shape over long hori­zons.

Take Bigtrace, a pro­ject I re­cently led; it was a so­lu­tion that emerged solely be­cause I stuck around long enough to see the shape of the prob­lem:

* Start of 2023 (Observation): I be­gan notic­ing a pat­tern. Teams across Google were col­lect­ing ter­abytes or even petabytes of per­for­mance traces, but they were strug­gling to process them. Engineers were writ­ing brit­tle, cus­tom pipelines to parse data, of­ten com­plain­ing about how slow and painful it was to it­er­ate on their analy­sis.

* Most of 2023 (Research): I did­n’t jump to build a pro­duc­tion sys­tem. Instead, I spent the best part of a year pro­to­typ­ing qui­etly in the back­ground while work­ing on other pro­jects. I gath­ered feed­back from these same en­gi­neers who had com­plained and be­cause I had es­tab­lished long-term re­la­tion­ships, they gave me hon­est and in­tro­spec­tive feed­back. I learned what sort of UX, la­tency and through­put re­quire­ments they had and fig­ured out how I could meet them.

* End of 2023 to Start of 2024 (Execution): We built and launched Bigtrace, a dis­trib­uted big data query en­gine for traces. Today, it processes over 2 bil­lion traces a month and is a crit­i­cal part of the daily work­flow for 100+ en­gi­neers.

If I had followed the advice to “optimize for fungibility” (i.e. if I had switched teams in 2023 to chase a new project) Bigtrace would not exist.

Instead, I would have left during the research phase and my successor would have seen the same “noise” of engineers complaining. But without the historical context to recognize a missing puzzle piece, I think they would have struggled to build something like Bigtrace.

One of the most seductive arguments for chasing “the Spotlight” is that it guarantees resources and executive attention. But that attention is a double-edged sword.

High-visibility pro­jects are of­ten volatile. They come with shift­ing ex­ec­u­tive whims, po­lit­i­cal ma­neu­ver­ing, and of­ten end up in sit­u­a­tions where long-term qual­ity is sac­ri­ficed for short-term sur­vival. For some en­gi­neers, nav­i­gat­ing this chaos is a thrill. For those of us who care about sys­tem sta­bil­ity, it feels like a trap.

The advantage of stewardship is that it generates a different kind of capital: trust. When you have spent years delivering reliable tools, you earn the political capital to say “No” to the spotlight when it threatens the product.

Recently, the spotlight has been on AI. Every team is under pressure to incorporate it. We have been asked repeatedly: “Why don’t you integrate LLMs into Perfetto?” If I were optimizing for visibility, the answer would be obvious: build an LLM wrapper, demo it to leadership, and claim we are “AI-first.” It would be an easy win for my career.

But as a steward of the system, I know that one of Perfetto’s core values is precision. When a kernel developer is debugging a race condition, they need exact timestamps, not a hallucination. Users trust that when we tell them “X is the problem”, it actually is the problem and they’re not going to go chasing their tail for the next week, debugging an issue which doesn’t exist.

But it’s important not to take this too far: skepticism shouldn’t become obstructionism. With AI, it’s not “no forever” but “not until it can be done right”.

A spot­light-seek­ing en­gi­neer might view this ap­proach as a missed op­por­tu­nity; I view it as pro­tect­ing what makes our prod­uct great: user trust.

The most common fear engineers have about leaving “the Spotlight” is career stagnation. The logic goes: “If I’m not launching flashy features at Google I/O, and my work isn’t on my VP’s top 5 list, how will I ever get promoted to Staff+?”

It is true that you lose the currency of “Executive Visibility.” But in infrastructure, you gain two alternate currencies that are just as valuable, and potentially more stable.

In a prod­uct or­ga­ni­za­tion, you of­ten need to im­press your man­ager’s man­ager. In an in­fra­struc­ture or­ga­ni­za­tion, you need to im­press your cus­tomers’ man­agers.

I call this the Shadow Hierarchy. You don’t need your VP to un­der­stand the in­tri­ca­cies of your code. You need the Staff+ Engineers in other crit­i­cal or­ga­ni­za­tions to need your tools.

When a Senior Staff Engineer in Pixel tells their VP, “We literally cannot debug the next Pixel phone without Perfetto”, that statement carries immense weight. It travels up their reporting chain, crosses over at the Director/VP level, and comes back down to your manager.

This kind of advocacy is powerful because it is technical, not political. It is hard to fake. When you are a steward of a critical system, your promotion packet is filled with testimonials from the most respected engineers in the company saying, “This person’s work enabled our success”.

While prod­uct teams might be por­ing over daily ac­tive users or rev­enue, we rely on met­rics track­ing en­gi­neer­ing health:

* Utility: Every bug fixed us­ing our tools is an en­gi­neer find­ing us use­ful. It is the purest mea­sure of util­ity.

* Criticality: If the Pixel team uses Perfetto to de­bug a launch-block­ing stut­ter, or Chrome uses it to fix a mem­ory leak, our im­pact is im­plic­itly tied to their suc­cess.

* Ubiquity: Capturing a significant percentage of the engineering population proves you’ve created a technical “lingua franca”. This becomes especially obvious when you see disconnected parts of the company collaborating with each other, using shared Perfetto traces as “a reference everyone understands”.

* Scale: Ingesting petabytes of data or pro­cess­ing bil­lions of traces proves ar­chi­tec­tural re­silience bet­ter than any de­sign doc.

When you com­bine Criticality (VIP teams need this) with Utility (bugs are be­ing fixed), you cre­ate a pro­mo­tion case that is im­mune to ex­ec­u­tive re­or­ga­ni­za­tions.

I am far from the first to notice that “there are multiple ways to be a staff software engineer”. In his book Staff Engineer, Will Larson categorizes Staff-plus engineers into four distinct archetypes.

Sean de­scribes the Solver or the Right Hand: en­gi­neers who act as agents of ex­ec­u­tive will, drop­ping into fires and mov­ing on once the prob­lem is sta­bi­lized. I am de­scrib­ing the Architect or the Tech Lead: roles de­fined by long-term own­er­ship of a spe­cific do­main and deep tech­ni­cal con­text.

I can hear the criticism already: “You just got lucky finding your team. Most of us don’t have that luxury.”

There are two caveats to all my ad­vice in this post. First, the strat­egy I have em­ployed so far re­quires a com­pany prof­itable enough to sus­tain long-term in­fra­struc­ture. This path gen­er­ally does not ex­ist in star­tups or early growth com­pa­nies; it is op­ti­mized for Big Tech.

Second, luck does play a role in land­ing on a good team. It is very hard to ac­cu­rately eval­u­ate team and com­pany cul­ture from the out­side. But while find­ing the team might have in­volved luck, stay­ing there for al­most a decade was a choice.

And, at least in my experience, my team is not particularly special: I can name five other teams in Android alone. Sure, they might have a director change here or a VP change there, but the core mission and the engineering team have remained stable.

The reason these teams seem rare is not that they don’t exist, but that they are often ignored. Because they don’t offer the rapid, visible “wins” of a product launch nor are they working on the shiny “cool features”, they attract less competition. If you are motivated by “shipping to billions of users” or seeing your friends and family use something you built, you won’t find that satisfaction here. That is the price of admission.

But if you want to build long-term sys­tems and are will­ing to trade ex­ter­nal val­i­da­tion for deep tech­ni­cal own­er­ship, you just need to look be­hind the cur­tain.

The tech in­dus­try loves to tell you to move fast. But there is an­other path. It is a path where lever­age comes from depth, pa­tience, and the quiet sat­is­fac­tion of build­ing the foun­da­tion that oth­ers stand on.

You don’t have to chase the spot­light to have a mean­ing­ful, high-im­pact ca­reer at a big com­pany. Sometimes, the most am­bi­tious thing you can do is stay put, dig in, and build some­thing that lasts. To sit with a prob­lem space for years un­til you un­der­stand it well enough to build a Bigtrace.

...

Read the original on lalitm.com »

5 368 shares, 34 trendiness

Microsoft drops AI sales targets in half after salespeople miss their quotas

Microsoft has low­ered sales growth tar­gets for its AI agent prod­ucts af­ter many sales­peo­ple missed their quo­tas in the fis­cal year end­ing in June, ac­cord­ing to a re­port Wednesday from The Information. The ad­just­ment is re­port­edly un­usual for Microsoft, and it comes af­ter the com­pany missed a num­ber of am­bi­tious sales goals for its AI of­fer­ings.

AI agents are specialized implementations of AI language models designed to perform multistep tasks autonomously rather than simply responding to single prompts. So-called “agentic” features have been central to Microsoft’s 2025 sales pitch: At its Build conference in May, the company declared that it has entered “the era of AI agents.”

The com­pany has promised cus­tomers that agents could au­to­mate com­plex tasks, such as gen­er­at­ing dash­boards from sales data or writ­ing cus­tomer re­ports. At its Ignite con­fer­ence in November, Microsoft an­nounced new fea­tures like Word, Excel, and PowerPoint agents in Microsoft 365 Copilot, along with tools for build­ing and de­ploy­ing agents through Azure AI Foundry and Copilot Studio. But as the year draws to a close, that promise has proven harder to de­liver than the com­pany ex­pected.

According to The Information, one US Azure sales unit set quo­tas for sales­peo­ple to in­crease cus­tomer spend­ing on a prod­uct called Foundry, which helps cus­tomers de­velop AI ap­pli­ca­tions, by 50 per­cent. Less than a fifth of sales­peo­ple in that unit met their Foundry sales growth tar­gets. In July, Microsoft low­ered those tar­gets to roughly 25 per­cent growth for the cur­rent fis­cal year. In an­other US Azure unit, most sales­peo­ple failed to meet an ear­lier quota to dou­ble Foundry sales, and Microsoft cut their quo­tas to 50 per­cent for the cur­rent fis­cal year.

...

Read the original on arstechnica.com »

6 363 shares, 31 trendiness

Transparent Leadership Beats Servant Leadership

Parenting and leadership are similar. Teach a man to fish, etc.

I spent a couple of years managing a team, and I entered that role — like many — without knowing anything about how to do it. I tried to figure out how to be a good manager, and in doing so I ended up reading a lot about servant leadership. It never quite sat right with me, though. Servant leadership seems to me a lot like curling parenting: the leader/parent anticipates problems and sweeps the way for their direct reports/children.

To be clear, this probably feels very good (initially, anyway) for the direct reports/children. But the servant leader/curling parent quickly becomes an overworked single point of failure, and once they leave there is nobody else who knows how to handle the obstacles the leader moved out of the way for everyone. In the worst cases, they leave behind a group of people who have been completely isolated from the rest of the organisation, and have no idea what their purpose is or how to fit in with the rest of the world.

I would like to invent my own buzzword: transparent leadership. In my book, a good leader

* explains values and principles embraced by the organisation to aid direct reports in making aligned decisions on their own,

* creates direct links between supply and demand (instead of deliberately making themselves a middle man), and

* allows their direct reports career growth by gradually taking over leadership responsibilities.

The middle manager who doesn’t perform any useful work is a fun stereotype, but I also think it’s a good target to aim for. The difference lies in what to do once one has rendered oneself redundant. A common response is to invent new work, ask for status reports, and add bureaucracy.

A bet­ter re­sponse is to go back to work­ing on tech­ni­cal prob­lems. This keeps the man­ager’s skills fresh and gets them more re­spect from their re­ports. The man­ager should turn into a high-pow­ered spare worker, rather than a pa­per-shuf­fler.

...

Read the original on entropicthoughts.com »

7 359 shares, 19 trendiness

Self-host and scale web apps without Kubernetes complexity

Self-host and scale web apps without the complexity

Take your Docker Compose apps to pro­duc­tion with zero-down­time de­ploy­ments, au­to­matic HTTPS, and cross-ma­chine scal­ing. No Kubernetes re­quired.

# Start with any cloud VM or your own server
$ uc machine init [email protected]

# Deploy your app with automatic HTTPS
$ uc run --name my-app -p app.example.com:8000/https app-image:latest
✨ Your app is available at https://app.example.com

# Achieve high availability by adding more machines and scaling the app
$ uc machine add [email protected]
$ uc scale my-app 2

PaaS-like workflow on your own servers

Deploy with the simplicity of Heroku or Fly.io while keeping full control over your infrastructure.

Full control over your servers and data

SSH into machines and debug with standard tools

Build, push, and deploy with one command

No control plane or quorum to manage

Uncloud replaces complex clusters with a simple network of machines working seamlessly together — no maintenance overhead, just reliable infrastructure.

Each machine joins a WireGuard mesh network with automatic peer discovery and NAT traversal. Containers get unique IPs and can communicate directly across machines.

Unlike traditional orchestrators, there's no central control plane to maintain. Each machine maintains a synchronised copy of the cluster state through peer-to-peer communication, keeping cluster operations functional even if some machines go offline.

Control your entire infrastructure using intuitive Docker-like commands from anywhere. Deploy, monitor, and scale applications across all your machines while the CLI only needs SSH access to a single machine.

Run your apps on any Linux machine — from cloud VMs and dedicated servers to bare metal at your office or home.

Get free TLS certificates and automatic HTTPS for your domains with zero configuration using the built-in Caddy reverse proxy.

Distribute traffic across container replicas running on different machines for improved reliability and performance.

Access any service by its name from any container using the built-in DNS that automatically tracks services across your network.

Define your entire app stack in a familiar Docker Compose file. No need to learn a new config format.

Mix cloud providers and your own hardware freely to optimise costs and performance, without changing how you deploy or manage apps.

Deploy a highly available web app with automatic HTTPS across multiple regions and on-premises in just a couple of minutes.
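As a sketch of the Compose-based workflow mentioned above, a plain Docker Compose file for the app from the quickstart might look like this (the service name and image are placeholders, and any Uncloud-specific extensions are omitted; this is just standard Compose syntax):

```yaml
# compose.yaml: a minimal, standard Compose stack (names are placeholders)
services:
  web:
    image: app-image:latest   # your application image
    ports:
      - "8000"                # the port your app listens on
    deploy:
      replicas: 2             # run two copies for availability
```

Publishing the service under a domain with automatic HTTPS would then be handled by the tooling, as the `uc run -p app.example.com:8000/https` command in the quickstart shows.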

...

Read the original on uncloud.run »

8 358 shares, 0 trendiness

Anthropic taps IPO lawyers as it races OpenAI to go public

Roula Khalaf, Editor of the FT, selects her favourite stories in this weekly newsletter.


Anthropic has tapped law firm Wilson Sonsini to begin work on one of the largest initial public offerings ever, which could come as soon as 2026, as the artificial intelligence start-up races OpenAI to the public market.

The maker of the Claude chatbot, which is in talks for a private funding round that would value it at more than $300bn, chose the US west coast law firm in recent days, according to two people with knowledge of the decision.

The start-up, led by chief executive Dario Amodei, had also discussed a potential IPO with big investment banks, according to multiple people with knowledge of those talks. The people characterised the discussions as preliminary and informal, suggesting that the company was not close to picking its IPO underwriters.

Nonetheless, these moves represent a significant step up in Anthropic's preparations for an IPO that would test the appetite of public markets to back the massive, lossmaking research labs at the heart of the AI boom.

Wilson Sonsini has advised Anthropic since 2022, including on commercial aspects of multibillion-dollar investments from Amazon, and has worked on high-profile tech IPOs such as Google, LinkedIn and Lyft.

Its investors are enthusiastic about an IPO, arguing that Anthropic can seize the initiative from its larger rival OpenAI by listing first.

Anthropic could be prepared to list in 2026, according to one person with knowledge of its plans. Another person close to the company cautioned that an IPO so soon was unlikely.

“It’s fairly standard practice for companies operating at our scale and revenue level to effectively operate as if they are publicly traded companies,” said an Anthropic spokesperson. “We haven’t made any decisions about when or even whether to go public, and don’t have any news to share at this time.”

OpenAI was also undertaking preliminary work to ready itself for a public offering, according to people with knowledge of its plans, though they cautioned it was too soon to set even an approximate date for a listing.

But both companies may also be hampered by the fact that their rapid growth and the astronomical costs of training AI models make their financial performance difficult to forecast.

The pair will also be attempting IPOs at valuations that are unprecedented for US tech start-ups. OpenAI was valued at $500bn in October. Anthropic received a $15bn commitment from Microsoft and Nvidia last month, which will form part of a funding round expected to value the group between $300bn and $350bn.

Anthropic had been working through an internal checklist of changes required to go public, according to one person familiar with the process.

The San Francisco-headquartered start-up hired Krishna Rao, who worked at Airbnb for six years and was instrumental in that company's IPO, as chief financial officer last year.

Wilson Sonsini did not respond to a request for comment.

...

Read the original on www.ft.com »

9 347 shares, 24 trendiness

RAM is so expensive, Samsung won't even sell it to Samsung



Due to rising prices from the AI bubble, Samsung Semiconductor reportedly refused a RAM order for new Galaxy phones from Samsung Electronics.

The price of eggs has nothing on the price of computer memory right now. Thanks to a supply crunch from the AI bubble, RAM chips are the new gold, with prices on consumer PC memory kits ballooning out of control. In an object lesson in the ridiculousness of an economic bubble, Samsung won't even sell its memory to… Samsung.

Here's the situation. Samsung makes everything from refrigerators to supermassive oil tankers. Getting all that stuff made requires an organization that's literally dozens of affiliated companies and subsidiaries, which don't necessarily work as closely or harmoniously as you might assume. For this story, we're talking about Samsung Electronics, which makes Galaxy phones, tablets, laptops, watches, etc., and Samsung Semiconductor Global, which manufactures memory and other chips and supplies the global market. That global market includes both Samsung subsidiaries and their competitors—laptops from Samsung, Dell, and Lenovo sitting on a Best Buy store shelf might all have Samsung-manufactured memory sitting in their RAM slots.

Samsung subsidiaries are, naturally, going to look to Samsung Semiconductor first when they need parts. Such was reportedly the case for Samsung Electronics, in search of memory supplies for its newest smartphones as the company ramps up production for 2026 flagship designs. But with so much RAM hardware going into new AI data centers—and those companies willing to pay top dollar for their hardware—memory manufacturers like Samsung, SK Hynix, and Micron are prioritizing data center suppliers to maximize profits.

The end result, according to a report from SE Daily spotted by SamMobile, is that Samsung Semiconductor rejected the original order for smartphone DRAM chips from Samsung Electronics' Mobile Experience division. The smartphone manufacturing arm of the company had hoped to nail down pricing and supply for another year. But reports say that due to “chipflation,” the phone-making division must renegotiate quarterly, with a long-term supply deal rejected by its corporate sibling. A short-term deal, with higher prices, was reportedly hammered out.

Assuming that this information is accurate—and to be clear, we can't independently confirm it—consumers will see prices rise for Samsung phones and other mobile hardware. But that's hardly a surprise. Finished electronics probably won't see the same meteoric rise in prices as consumer-grade RAM modules, but this rising tide is flooding all the boats. Raspberry Pi, which strives to keep its mod-friendly electronics as cheap as possible, has recently had to bring prices up and called out memory costs as the culprit. Lenovo, the world's largest PC manufacturer, is stockpiling memory supplies as a bulwark against the market.

But if you're hoping to see prices lower in 2026, don't hold your breath. According to a forecast from memory supplier TeamGroup, component prices have tripled recently, causing finished modules to jump in price by as much as 100 percent in a month. Absent some kind of disastrous market collapse, prices are expected to continue rising into next year, and supply could remain constrained well into 2027 or later.

Michael is a 10-year veteran of technology journalism, covering everything from Apple to ZTE. On PCWorld he's the resident keyboard nut, always using a new one for a review and building a new mechanical board or expanding his desktop “battlestation” in his off hours. Michael's previous bylines include Android Police, Digital Trends, Wired, Lifehacker, and How-To Geek, and he's covered events like CES and Mobile World Congress live. Michael lives in Pennsylvania where he's always looking forward to his next kayaking trip.


...

Read the original on www.pcworld.com »

10 314 shares, 32 trendiness

The RAM Shortage Comes for Us All

Memory price inflation comes for us all, and if you're not affected yet, just wait.

I was building a new PC last month using some parts I had bought earlier this year. The 64 gigabyte T-Create DDR5 memory kit I used cost $209 then. Today? The same kit costs $650!

Just in the past week, we found out Raspberry Pi is increasing its single board computer prices. Micron is killing the Crucial brand of RAM and storage devices completely, meaning there's gonna be one fewer consumer memory manufacturer. Samsung can't even buy RAM from itself to build its own smartphones, and small vendors like Libre Computer and Mono are seeing RAM prices double, triple, or even worse, and they're not even buying the latest RAM tech!

I think PC builders might be the first crowd to get impacted across the board—just look at these insane graphs from PC Parts Picker, showing RAM prices going from like $30 to $120 for DDR4, or like $150 to five hundred dollars for 64 gigs of DDR5.
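To put the figures quoted above in perspective, here's a quick back-of-the-envelope calculation using only the prices mentioned in this post:

```python
# Back-of-the-envelope increases from the prices quoted in this post.

def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old price to new price."""
    return (new - old) / old * 100

# 64 GB T-Create DDR5 kit: $209 earlier this year, $650 today
print(f"64 GB DDR5 kit: {pct_increase(209, 650):.0f}% increase")  # ~211%

# Rough PC Parts Picker figures: DDR4 ~$30 -> ~$120,
# 64 GB of DDR5 ~$150 -> ~$500
print(f"DDR4: {120 / 30:.1f}x")        # 4.0x
print(f"DDR5 64 GB: {500 / 150:.1f}x")  # ~3.3x
```

In other words, a kit bought early in the year has roughly tripled in price, which matches the "tripled" component costs mentioned later in the post.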

But the impacts are only just starting to hit other markets.

Libre Computer mentioned on Twitter a single 4 gigabyte module of LPDDR4 memory costs $35. That's more expensive than every other component on one of their single board computers combined! You can't survive selling products at a loss, so once the current production batches are sold through, either prices will be increased, or certain product lines will go out of stock.

The smaller the company, the worse the price hit will be. Even Raspberry Pi, who I'm sure has a little more margin built in, already raised SBC prices (and introduced a 1 GB Pi 5—maybe a good excuse for developers to drop JavaScript frameworks and program for lower memory requirements again?).

Cameras, gaming consoles, tablets, almost anything that has memory will get hit sooner or later.

I can't believe I'm saying this, but compared to the current market, Apple's insane memory upgrade pricing is… actually in line with the rest of the industry.

The reason for all this, of course, is AI datacenter buildouts. I have no clue if there's any price fixing going on like there was a few decades ago—that's something conspiracy theorists can debate—but the problem is there's only a few companies producing all the world's memory supplies.

And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.

So they're shutting down their consumer memory lines, and devoting all production to AI.

Even companies like GPU board manufacturers are getting shafted; Nvidia's not giving memory to them along with their chips like they used to, basically telling them “good luck, you're on your own for VRAM now!”

Which is especially rich, because Nvidia's profiting obscenely off of all this stuff.

That's all bad enough, but some people see a silver lining. I've seen some people say, “well, once the AI bubble bursts, at least we'll have a ton of cheap hardware flooding the market!”

And yes, in past decades, that might be one outcome.

But the problem here is the RAM they're making: a ton of it is either integrated into specialized GPUs that won't run on normal computers, or being fitted into special types of memory modules that don't work on consumer PCs, either. (See: HBM.)

That, and the GPUs and servers being deployed now don't even run on normal power and cooling; they're part of massive systems that would take a ton of effort to get running in even the most well-equipped homelabs. It's not like the classic Dell R720 that just needs some air and a wall outlet to run.

That is to say, we might be hitting a weird era where the PC building hobby is gutted, SBCs get prohibitively expensive, and anyone who didn't stockpile parts earlier this year is, pretty much, in a lurch.

Even Lenovo admits to stockpiling RAM, making this like the toilet paper situation back in 2020, except for massive corporations. Not enough supply, so companies who can afford to get some will buy it all up, hoping to stave off the shortages that will probably last longer, partly because of that stockpiling.

I don't think it's completely outlandish to think some companies will start scavenging memory chips (à la dosdude1) off other systems for stock, especially if RAM prices keep going up.

It's either that, or just stop making products. There are some echoes of the global chip shortages that hit in 2021-2022, and that really shook up the market for smaller companies.

I hate to see it happening again, but somehow, here we are a few years later, except this time, the AI bubble is to blame.

Sorry for not having a positive note to end this on, but I guess… maybe it's a good time to dig into that pile of old projects you never finished instead of buying something new this year.

How long will this last? That's anybody's guess. But I've already put off some projects I was gonna do for 2026, and I'm sure I'm not the only one.

...

Read the original on www.jeffgeerling.com »
