10 interesting stories served every morning and every evening.




1 756 shares, 42 trendiness

NVIDIA is full of shit

Since the dis­as­trous launch of the RTX 50 se­ries, NVIDIA has been un­able to es­cape neg­a­tive head­lines: scalper bots are snatch­ing GPUs away from con­sumers be­fore of­fi­cial sales even be­gin, power con­nec­tors con­tinue to melt, with no fix in sight, mar­ket­ing is be­com­ing in­creas­ingly de­cep­tive, GPUs are miss­ing pro­cess­ing units when they leave the fac­tory, and the dri­vers, for which NVIDIA has al­ways been praised, are cur­rently falling apart. And to top it all off, NVIDIA is be­com­ing in­creas­ingly in­sis­tent that me­dia push a cer­tain nar­ra­tive when re­port­ing on their hard­ware.

Just like every other GPU launch in recent memory, this one has been rife with scalper bots snatching up stock before any real person could get any for themselves. Retailers have reported that they received very little stock to begin with. This in turn sparked rumors about NVIDIA purposefully keeping stock low to make it look like the cards are in high demand and drive up prices. And sure enough, on secondary markets the cards go way above MSRP, and some retailers have started to bundle the cards with other inventory (PSUs, monitors, keyboards and mice, etc.) to inflate the price even further and get rid of stuff in their warehouses people wouldn’t buy otherwise—and you don’t even get a working computer out of spending over twice as much as a GPU alone would cost you.

I had a look at GPU prices for previous-generation models from both AMD and NVIDIA as recently as May 2025, and I wasn’t surprised to find that even RTX 40 series cards are still very much overpriced, with the GeForce RTX 4070 (lower mid-tier) starting at $800 (MSRP: $599), whereas the same money can get you a Radeon RX 7900 XT (the second-best GPU in AMD’s last-generation lineup). The discrepancy in bang for buck couldn’t be more jarring. And that’s before considering that NVIDIA gave out defective chips to board partners that were missing ROPs (Raster Operations Pipelines) from the factory, thus reducing their performance. Or, how NVIDIA put it in a statement to The Verge:

We have iden­ti­fied a rare is­sue af­fect­ing less than 0.5% (half a per­cent) of GeForce RTX 5090 / 5090D and 5070 Ti GPUs which have one fewer ROP than spec­i­fied. The av­er­age graph­i­cal per­for­mance im­pact is 4%, with no im­pact on AI and Compute work­loads. Affected con­sumers can con­tact the board man­u­fac­turer for a re­place­ment. The pro­duc­tion anom­aly has been cor­rected.

That 4% can make an RTX 5070 Ti perform at the level of an RTX 4070 Ti Super, completely eradicating the reason you’d get an RTX 5070 Ti in the first place. Not to mention that the generational performance uplift over the RTX 40 series was already received quite poorly in general. NVIDIA also had to later amend their statement to The Verge and admit the RTX 5080 was also missing ROPs.

It adds insult to injury given the cards’ general unobtainability, and it becomes even more ridiculous when you compare NVIDIA to another trillion-dollar company that is also in the business of selling hardware to consumers: Apple.

How is it that one can consistently supply customers with enough stock at launch, and has done so for decades, while the other can’t? The only reason I can think of is that NVIDIA just doesn’t care. They’re making the big bucks with data center GPUs now, selling the shovels that drive the AI bullshit gold rush, to the point that selling to consumers is increasingly becoming a rounding error on their balance sheets.

The RTX 50 se­ries are the sec­ond gen­er­a­tion of NVIDIA cards to use the 12VHPWR con­nec­tor. The RTX 40 se­ries be­came in­fa­mous as the GPU se­ries with melt­ing power con­nec­tors. So did they fix that?

No. The cables can still melt, both on the GPU and the PSU side. It’s a design flaw on the GPU’s board itself, one that cannot be fixed unless the cards’ circuitry is replaced with a new design.

With the RTX 30 cards, each power input (i.e. each cable from the power supply) had its own shunt resistor. If one pin in a power input wasn’t connected properly, another pin would have to take over in its stead. If both pins carried no current, nothing would register across that input’s shunt resistor and the card would not start up. You’d get a black screen, but the hardware would still be fine.

NVIDIA, in its in­fi­nite wis­dom, changed this de­sign start­ing with the RTX 40 se­ries.

Instead of individual shunt resistors for each power input, the shunt resistors are now connected in parallel to all pins of the power input from a single 12VHPWR connector. Additionally, the lines are recombined behind the resistors. This mind-boggling design flaw makes it impossible for the card to detect if pins are unevenly loaded, since as far as the card is concerned, everything comes in through the same single line.
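To make the difference concrete, here’s a minimal, purely illustrative sketch (this is not NVIDIA firmware, and the thresholds and currents are invented) of why per-input sensing can catch a dead input while a single combined reading cannot:

```python
# Illustrative only: contrast per-input current sensing (the RTX 30-style idea)
# with a single combined reading (the RTX 40/50-style idea). Numbers are made up.

def per_input_ok(input_currents_amps, min_amps=0.5):
    """One shunt per power input: if any input reads (near) zero while the
    card wants power, refuse to start."""
    return all(i >= min_amps for i in input_currents_amps)

def combined_ok(input_currents_amps, expected_total_amps=50.0, tolerance=0.1):
    """All inputs merged before measurement: only the total is visible, so a
    wildly uneven split looks identical to a healthy one."""
    total = sum(input_currents_amps)
    return abs(total - expected_total_amps) / expected_total_amps <= tolerance

healthy = [8.3] * 6               # ~50 A spread evenly across six 12 V pins
one_pin = [50.0, 0, 0, 0, 0, 0]   # the same ~50 A forced through a single pin

print(per_input_ok(healthy), per_input_ok(one_pin))  # True False
print(combined_ok(healthy), combined_ok(one_pin))    # True True
```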

Connecting the shunt resistors in parallel also makes them pretty much useless, since if one fails, the other will still register current and the card will happily keep drawing power, none the wiser. If the card is supposed to draw 100W through each pin and 5 of the 6 pins don’t supply any current, then a single pin has to supply the entire 600W the card demands. No wire in the connector is designed for that amount of current. As a result, the wire heats up excessively from all that current being forced through it at once, and it melts (see: Joule heating).
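Some back-of-the-envelope math for that worst case (the roughly 9.5 A per-pin rating is my recollection of the 12VHPWR spec, so treat it as an assumption):

```latex
% Current for 600 W at 12 V, split evenly vs. forced through one pin
I_{\text{total}} = \frac{P}{V} = \frac{600\,\mathrm{W}}{12\,\mathrm{V}} = 50\,\mathrm{A}
\qquad
I_{\text{per pin}} = \frac{50\,\mathrm{A}}{6} \approx 8.3\,\mathrm{A}

% Joule heating in a wire scales with the square of the current (P = I^2 R),
% so one wire carrying everything dissipates roughly
\left(\tfrac{50}{8.3}\right)^2 \approx 36\times
% the heat of the evenly loaded case -- far beyond what a pin rated for ~9.5 A can shed.
```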

NVIDIA re­al­ized that the de­sign around the shunt re­sis­tors in the RTX 40 se­ries was kinda stu­pid, so they re­vised it: by elim­i­nat­ing the re­dun­dant shunt re­sis­tor, but chang­ing noth­ing else about the flawed de­sign.

There’s something to be said about the fact that NVIDIA introduced the 12VHPWR connector to the ATX standard to allow a single connector to supply their cards with up to 600W of power, while making it way less safe to operate at these loads. Worse yet, NVIDIA says the four “sensing pins” on top of the load-bearing 12 pins are supposed to prevent the GPU from pulling too much power. The fact of the matter is, however, that the “sensing pins” only tell the GPU how much it’s allowed to pull when the system turns on; they do not continuously monitor the power draw—that would be the job of the shunt resistors on the GPU board, which, as we established, NVIDIA kept taking out.

If I had to guess, NVIDIA must’ve been very confident that the “sensing pins” are a suitable substitute for those shunt resistors in theory, but practice showed that they were not at all accounting for user error. That was their main excuse after it blew up in their face and they investigated. And indeed, if the 12VHPWR connector isn’t properly inserted, pins might not make proper contact, causing the remaining wires to carry more load. This is something the “sensing pins” cannot detect, despite their name and NVIDIA selling them as some sort of safety measure.

NVIDIA also clearly did not factor in the computer cases on the market that people would pair these cards with. The RTX 4090 was massive, a real heccin chonker. It was so huge, in fact, that it kicked off the trend of needing support brackets to keep the GPU from sagging and straining the PCIe slot. It also had its power connector sticking out to the side of the card, and computer cases were not providing enough clearance to avoid bending the plug. As was clarified after the first reports of molten cables came up, bending a 12VHPWR cable without at least 35mm (1.38in) of clearance could loosen the connection of the pins and create the melting-connector problem—something that wasn’t an issue with the battle-tested 6- and 8-pin PCIe connectors we’d been using up to this point.

Board partners like ASUS try to work around that design flaw by introducing intermediate shunt resistors for each individual load-bearing pin, upstream of the ones in NVIDIA’s reference design, but these don’t solve the underlying issue: the card won’t shut itself down if some of the lines are carrying little or no current. What you get at most is an indicator LED lighting up and some software telling you “Hey, uh, something seems off, maybe take a look?”

The fact that NVIDIA insists on keeping the 12VHPWR connector around and won’t do jack shit about the design flaws in their cards—flaws that let a card destroy itself at the slightest misuse—should deter you from considering any card from them that uses it.

Over the years NVIDIA has released a number of proprietary technologies to market that only work on their hardware—DLSS, CUDA, NVENC and G-Sync, to name just a few. The tight coupling with NVIDIA’s hardware guarantees compatibility and performance.

However, this comes at a con­sid­er­able price these days, as men­tioned ear­lier. If you’re think­ing about an up­grade you’re ei­ther look­ing at a down-pay­ment on a house or an up­root­ing of your en­tire hard­ware and soft­ware stack if you switch ven­dors.

If you’re a creator, CUDA and NVENC are pretty much indispensable, or editing and exporting videos in Adobe Premiere or DaVinci Resolve will take you a lot longer. Same for live streaming, as using NVENC in OBS offloads video encoding to the GPU for smooth frame rates while streaming high-quality video.

Speaking of games: G-Sync in gaming monitors also requires a lock-in with NVIDIA hardware, both on the GPU side and in the monitor itself. G-Sync monitors have a special chip inside that NVIDIA GPUs can talk to in order to align frame timings. This chip is expensive, and monitor manufacturers have to get certified by NVIDIA, so they charge a premium for such monitors.

The competing open standard is FreeSync, spearheaded by AMD. Since 2019, NVIDIA also supports FreeSync, but under their “G-Sync Compatible” branding. Personally, I wouldn’t bother with G-Sync when a competing, open standard exists and differences are negligible.

The PC, as a gaming platform, has long been held in high regard for its backwards compatibility. With the RTX 50 series, NVIDIA broke that going forward.

PhysX, which NVIDIA in­tro­duced into their GPU lineup with the ac­qui­si­tion of Ageia in 2008, is a tech­nol­ogy that al­lows a game to cal­cu­late game world physics on an NVIDIA GPU. After the launch of the RTX 50 se­ries cards it was re­vealed that they lack sup­port for the 32-bit vari­ant of the tech. This causes games like Mirror’s Edge (2009) and Borderlands 2 (2012) that still run on to­day’s com­put­ers to take un­godly dips into sin­gle digit frame rates, be­cause the physics cal­cu­la­tions are forcibly per­formed on the CPU in­stead of the GPU.

Even though the first 64-bit consumer CPUs hit the market as early as 2003 (AMD Opteron, Athlon 64), 32-bit games were still very common at the time, as Microsoft would not release 64-bit versions of Windows to consumers until Vista in 2006. NVIDIA later released the source code for the GPU simulation kernel on GitHub. The pessimist in me thinks they did this because they can’t be bothered to maintain it themselves and want to offload that maintenance burden onto the public.

Back in 2018, when the RTX 20 series launched as the first GPUs with hardware-accelerated ray tracing, it sure was impressive and novel to have this tech in consumer graphics cards. However, NVIDIA also introduced upscaling tech alongside it to counterbalance the insane computational expense ray tracing brings with it. From the beginning, the two were closely interlinked. If you wanted ray tracing in Cyberpunk 2077 (the only game at the time that really made use of the tech), you also had to enable upscaling if you didn’t want your gameplay experience to become a (ridiculously pretty) PowerPoint slide show.

That up­scal­ing tech is the now ubiq­ui­tous DLSS, or Deep Learning Super Sampling. It ren­ders a game at a lower res­o­lu­tion in­ter­nally and then up­scales it to the tar­get res­o­lu­tion with spe­cial­ized ac­cel­er­a­tor chips on the GPU die. The only is­sue back then was that be­cause the tech was so new, barely any game made use of it.
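For a sense of what “a lower resolution internally” means in practice, here’s a quick sketch using the commonly cited DLSS scale factors (the exact ratios are my assumption, not something from this article, and they can vary by game and DLSS version):

```python
# Internal render resolution for a 4K target under commonly cited DLSS modes.
# Scale factors are assumptions for illustration.
TARGET = (3840, 2160)
SCALE = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5, "Ultra Performance": 1 / 3}

for mode, s in SCALE.items():
    w, h = int(TARGET[0] * s), int(TARGET[1] * s)
    share = (w * h) / (TARGET[0] * TARGET[1])
    print(f"{mode:>17}: {w}x{h} internal (~{share:.0%} of the target pixels rendered)")
```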

What always rubbed me the wrong way about how DLSS was marketed is that it wasn’t only for the less powerful GPUs in NVIDIA’s line-up. No, it was marketed for the top-of-the-line $1,000+ RTX 20 series flagship models as the way to achieve full graphical fidelity with all the bells and whistles. That, to me, was a warning sign that maybe, just maybe, ray tracing was introduced prematurely and half-baked. Back then I theorized that by tightly coupling this sort of upscaling tech to high-end cards and ray-traced graphics, NVIDIA was setting a bad precedent. The kind of graphics NVIDIA was selling us on was beyond the cards’ actual capabilities.

Needing to upscale to keep frame rates smooth already seemed “fake” to me. If that amount of money for a single PC component still can’t produce those graphics without software trickery to achieve acceptable frame rates, then what exactly am I spending that money on to begin with?

Fast-forward to to­day and noth­ing has re­ally changed, be­sides NVIDIA now charg­ing dou­ble the amount for the flag­ship RTX 5090. And guess what? It still does­n’t do Cyberpunk 2077—the flag­ship ray trac­ing game—with full ray trac­ing at a playable fram­er­ate in na­tive 4K, only with DLSS en­abled.

So here we are, 7 years into ray-traced real-time computer graphics, and we’re still nowhere near 4K gaming at 60 FPS, even at $1,999. Sure, you could argue to simply turn RT off and performance improves. But then, that’s not what you spent all that money for, right? The pure generational uplift in performance of the hardware itself is minuscule. They’re selling us a solution to a problem they themselves introduced, and they’ve co-opted every developer into including the tech in their games. Now they’re doing an even more computationally expensive version of ray tracing: path tracing. So all the generational improvements we could’ve had are nullified again.

And even if you didn’t spend a lot of money on a GPU, what you get isn’t going to be powerful enough to make those ray-traced graphics pop and still run well. So most people’s experience with ray tracing is: turn it on to see how it looks, realize it eats almost all your FPS, and never turn it on ever again, thinking ray tracing is a waste. So whatever benefit in realistic lighting was to be achieved is also nullified, because developers will still need to do lighting the old-fashioned way for the people who don’t or can’t use ray tracing.

Making the use of up­scal­ing tech a re­quire­ment, at every GPU price point, for every AAA game, to achieve ac­cept­able lev­els of per­for­mance gives the im­pres­sion that the games we’re sold are tar­get­ing hard­ware that ei­ther does­n’t even ex­ist yet or no­body can af­ford, and we need con­stant band-aids to make it work. Pretty much all up­scalers force TAA for anti-alias­ing and it makes the en­tire im­age on the screen look blurry as fuck the lower the res­o­lu­tion is.

Take for example this Red Dead Redemption 2 footage showing “TAA in action”, your $1,000+ at work:

Frame generation exacerbates this problem further by adding to the ghosting of TAA, because it guesstimates where pixels will probably go in an AI-generated frame in between actually rendered frames. And when that guess is off, it really looks off. Both in tandem look like someone smeared your screen with vaseline. And this is what they expect us to pay a premium for? For the hardware and the games?!

Combine that with GPU prices be­ing ab­solutely ridicu­lous in re­cent years and it all takes on the form of a scam.

As useful or impressive a technology as DLSS might be, game studios relying on it as heavily as they do is turning out to be detrimental to the visual quality of their games, and it incentivizes aiming for a level of graphical fidelity and complexity with diminishing returns. Games from 2025 don’t look dramatically different or better than games from 10 years prior, yet they run way worse despite more modern and powerful hardware. Games these days demand so much compute that the use of upscaling tech like DLSS is becoming mandatory. The most egregious example of this is Monster Hunter Wilds, which states in its system requirements that it needs frame generation to run at acceptable levels.

Meanwhile, Jensen Huang came up on stage dur­ing the keynote for the RTX 50 se­ries cards and proudly pro­claimed:

What he meant by that claim—an RTX 5070 delivering RTX 4090 performance—turns out to be the RTX 5070 only getting there with every trick DLSS has to offer, including the new DLSS 4 Multi-Frame Generation that is only available on RTX 50 cards, at the lowest quality setting and with all DLSS trickery turned up to the max.

You can­not tell me this is any­where near ac­cept­able lev­els of im­age qual­ity for thou­sands of bucks (video time-stamped):

Not only does that entail rendering games at a lower internal resolution, you also have to tell your GPU to pull 3 additional made-up frames out of its ass so NVIDIA can waltz around claiming “Runs [insanely demanding game here] at 5,000 FPS!!!” for the “higher number = better” masturbator crowd. All the while the image gets smeared to shit, because NVIDIA just reinvented the motion-smoothing option from your TV’s settings menu, but badly, and also it’s AI now. What else would all those Tensor cores be doing, other than wasting space on the GPU die that could’ve gone to actual render units? NVIDIA likes you to believe DLSS can create FPS out of thin air, and they’re trying to prove it with dubious statistics—only disclosing in barely readable fine print that it’s a deliberately chosen, very small sample size, so the numbers look more impressive.
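Just to spell out the arithmetic (assuming 4x Multi-Frame Generation stacked on DLSS Performance upscaling—both assumptions on my part, since the exact settings vary per demo):

```latex
% Share of displayed frames that actually come from the game engine under 4x MFG:
\frac{\text{rendered frames}}{\text{displayed frames}} = \frac{1}{4}
% Share of target pixels rendered per frame under Performance upscaling (half resolution per axis):
\frac{1}{2} \times \frac{1}{2} = \frac{1}{4}
% Combined share of displayed pixels that were natively rendered:
\frac{1}{4} \times \frac{1}{4} = \frac{1}{16}
```

And since input latency is still tied to the frames the game actually renders, quadrupling the displayed FPS this way doesn’t make the game respond any faster.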

The resolution is fake, the frames are fake too, and so is the marketed performance. Never mind that frame generation introduces input lag that NVIDIA needs to counterbalance with their “Reflex” technology, because otherwise what you see on your screen isn’t actually where the game thinks it is—the frames faked in by Frame Generation didn’t originate from the game logic. They create problems for themselves that they then create “solutions” for, in an endless cycle of trying to keep up the smokescreen that these cards do more than they’re actually equipped to do, so that a 20% premium for a 10% uplift in performance has at least the faintest semblance of justification.

Back then I was afraid DLSS would get used to fake improvements where there are barely any, and I feel nothing if not vindicated by how NVIDIA is playing it up while jacking up prices further and further with each generation. None of that is raw performance of their cards. This is downright deceitful bullshit.

NVIDIA lying in their own presentations about the real performance of their cards is one thing. It’s another thing entirely when they start bribing and threatening reviewers to steer editorial direction in NVIDIA’s favor.

In December 2020, the hardware review channel Hardware Unboxed received an email from NVIDIA Senior PR Manager Bryan Del Rizzo after they had reviewed NVIDIA cards on pure rasterization performance, without DLSS or ray tracing, and found that the performance did not live up to expectations:

We have reached a critical juncture in the adoption of ray tracing, and it has gained industry wide support from top titles, developers, game engines, APIs, consoles and GPUs. As you know, NVIDIA is all in for ray tracing. RT is important and core to the future of gaming. But it’s also only one part of our focused R&D efforts on revolutionizing video games and creating a better experience for gamers. This philosophy is also reflected in developing technologies such as DLSS, Reflex and Broadcast that offer immense value to consumers who are purchasing a GPU. They don’t get free GPUs—they work hard for their money and they keep their GPUs for multiple years.

Despite all of this progress, your GPU reviews and recommendations continue to focus singularly on rasterization performance and you have largely discounted all of the other technologies we offer to gamers. It is very clear from your community commentary that you do not see things the same way that we, gamers, and the rest of the industry do.

Our Founders Edition boards and other NVIDIA products are being allocated to media outlets that recognize the changing landscape of gaming and the features that are important to gamers and anyone buying a GPU today—be it for gaming, content creation or studio and streaming.

Hardware Unboxed should continue to work with our add-in card partners to secure GPUs to review. Of course, you will still have access to obtain pre-release drivers and press materials. That won’t change.

We are open to revisiting this in the future should your editorial direction change.

Hardware Unboxed was thus banned from receiving review samples of NVIDIA’s Founders Edition cards. It didn’t take long for NVIDIA to backpedal after the heavily publicized outcry blew up in their face.

Which makes it all the more surprising that a couple of years later, they’re trying to pull this again. With Gamers Nexus, of all outlets.

As Steve Burke explains in the video, NVIDIA approached him from the angle that, in order to still be given access to NVIDIA engineers for interviews and specials for their channel, Gamers Nexus needs to include Multi-Frame Generation metrics in their benchmark charts during reviews. Steve rightfully claims that this tactic of intimidating media by taking away access until they review NVIDIA cards in a way that agrees with the narrative NVIDIA wants to uphold tarnishes the legitimacy of every review of every NVIDIA card ever made, past and present. It creates an environment of distrust that is not at all conducive to the work when you’re trying to be a tech reviewer right now.

This also coincided with the launch of the RTX 5060, a supposedly more budget-friendly offering. Interestingly, NVIDIA did not provide reviewers with the necessary drivers to test the GPU prior to launch. Instead, the card and the drivers launched at the same time all of these reviewers were off at Computex, a computer expo in Taipei, Taiwan. The only outlets that did get to talk about the card prior to release were cherry-picked by NVIDIA, and even then it was merely previews of details NVIDIA allowed them to talk about, not independent reviews. Because if the cards had been properly reviewed, reviewers would all have come to the same conclusion: that 8 GB of VRAM would make this $299 “budget card” age very poorly, because that is not enough VRAM to last long in today’s gaming landscape.

But it probably doesn’t matter anyway, because NVIDIA is also busy tarnishing the reputation of their drivers, releasing hotfix after hotfix in an attempt to stop their cards, old and new, from crashing seemingly at random when encountering certain combinations of games, DLSS and Multi-Frame Generation settings. Users of older-generation NVIDIA cards can simply roll back to a previous version of the driver to alleviate these issues, but RTX 50 series owners don’t get that luxury, because older drivers won’t make their shiny new cards go.

With over 90% of the discrete GPU market running on NVIDIA tech, they’re the clear winner of the GPU race. The losers are every single one of us.

Ever since NVIDIA realized there is tons more money to be made on everything that is not part of putting moving pixels on a screen, they’ve taken that opportunity head-on. When the gold rush for crypto mining started, they were among the first to sell heavily price-inflated, GPU-shaped shovels to anybody with more money than brains. Same now with the AI gold rush. PC gamers were hung out to dry.

NVIDIA knows we’re stuck with them and it’s in­fu­ri­at­ing. They keep pulling their shenani­gans and they will keep do­ing it un­til some­one cuts them down a cou­ple notches. But the only ones who could step up to the task won’t do it.

AMD didn’t even attempt to face NVIDIA in the high-end segment this generation, instead trying to compete merely on value proposition in the mid-range. Intel is seemingly still on the fence about whether they really want to sell dedicated GPUs, all while shuffling their C-suite and generally being in disarray. Both of them could be compelling options when you’re on a budget, if it just weren’t for the fact that NVIDIA has a longstanding habit of producing proprietary tech that only runs well on their hardware. Now they’ve poisoned the well by convincing everybody that ray tracing is something every game needs, and games that incorporate it do so on an NVIDIA tech stack which runs like shit on anything that is not NVIDIA. That is not a level playing field.

When “The way it’s meant to be played” slowly turns into “The only way it doesn’t run like ass”, it creates a moat around NVIDIA that’s obviously hard to compete with. And gamers aren’t concerned about this because at the end of the day, all they care about is that the game runs well and looks pretty.

But I want you to consider this: games imbued with such tech create a vendor lock-in effect. That lock-in gives NVIDIA considerable leverage over how games are made, which GPUs you consider buying to run those games, and how well they will eventually, actually run on your system. If all games that include NVIDIA’s tech are made in a way that forces you to reach for the more expensive models, you can be sure that’s a soft-power move NVIDIA is gonna pull.

And as we established, it looks like they’re already doing that. Tests show that the lower-end NVIDIA graphics cards cannot (and probably were never intended to) perform well enough, even with DLSS, because in order to get anything out of DLSS you need more VRAM, which these lower-end cards don’t have enough of. So they’re already upselling you by cutting corners in ways that make it a “no-brainer” to spend more money on more expensive cards, when you otherwise wouldn’t have.

And they’re using their market dominance to control the narrative in the media, to make sure you keep giving them money while staying uninformed, or at the very least misinformed. When you don’t have to compete, but don’t have any improvements to sell either (or have no incentive for actual, real R&D), you do what every monopolist does and wring out your consumer base until you’ve bled them dry.

A few years back I would’ve argued that that’s their prerogative if they provide the better technical solutions to problems in graphics development. Today, I believe they are marauding monopolists who are too high on their own supply, and they’re ruining it for everybody. If NVIDIA had real generational improvements to sell, they wouldn’t have to sell them with outright lies.

And I hate that they’re get­ting away with it, time and time again, for over seven years.

...

Read the original on blog.sebin-nyshkim.net »

2 443 shares, 25 trendiness

being too ambitious is a clever form of self-sabotage

There is a mo­ment, just be­fore cre­ation be­gins, when the work ex­ists in its most per­fect form in your imag­i­na­tion. It lives in a crys­talline space be­tween in­ten­tion and ex­e­cu­tion, where every word is pre­cisely cho­sen, every brush­stroke de­lib­er­ate, every note in­evitable, but only in your mind. In this prelap­sar­ian state, the work is flaw­less be­cause it is noth­ing: a ghost of pure po­ten­tial that haunts the cre­ator with its im­pos­si­ble beauty.

This is the mo­ment we learn to love too much.

We be­come cu­ra­tors of imag­i­nary mu­se­ums, we craft elab­o­rate shrines to our un­re­al­ized pro­jects… The novel that will re­de­fine lit­er­a­ture. The startup that will solve hu­man suf­fer­ing. The art­work that will fi­nally make the in­vis­i­ble vis­i­ble.

But the mo­ment you be­gin to make some­thing real, you kill the per­fect ver­sion that lives in your mind.

Creation is not birth; it is mur­der. The mur­der of the im­pos­si­ble in ser­vice of the pos­si­ble.

We are per­haps the only species that suf­fers from our own imag­i­na­tion. A bird build­ing a nest does not first con­ceive of the per­fect nest and then suf­fer from the in­ad­e­quacy of twigs and mud. A spi­der spin­ning a web does not pause, par­a­lyzed by vi­sions of geo­met­ric per­fec­tion be­yond her cur­rent ca­pa­bil­i­ties. But hu­mans? We pos­sess the strange gift of be­ing haunted by vi­sions of what could be, tor­mented by the gap be­tween our as­pi­ra­tions and our abil­i­ties.

This torment has a name in cognitive science: “the taste-skill discrepancy.” Your taste (your ability to recognize quality) develops faster than your skill (your ability to produce it). This creates what Ira Glass famously called “the gap,” but I think of it as the thing that separates creators from consumers.

Watch a child draw. They cre­ate fear­lessly, un­self­con­sciously, be­cause they have not yet de­vel­oped the curse of so­phis­ti­cated taste! They draw pur­ple trees and fly­ing ele­phants with the con­fi­dence of some­one who has never been told that trees aren’t pur­ple, that ele­phants don’t fly. But some­where around age eight or nine, taste ar­rives like a harsh critic, and sud­denly the gap opens. The child can see that their draw­ing does­n’t match the im­pos­si­ble stan­dard their de­vel­op­ing aes­thetic sense has con­jured.

This is what leads most of us to stop draw­ing. Not be­cause we lack tal­ent, but be­cause we’ve de­vel­oped the abil­ity to judge be­fore we’ve de­vel­oped the abil­ity to ex­e­cute. We be­come con­nois­seurs of our own in­ad­e­quacy.

And this is where our minds, in their desperation, devise an elegant escape. Faced with this unbearable gap, we develop what researchers call “productive avoidance”—staying busy with planning, researching, and dreaming while avoiding the vulnerable act of creating something concrete that might fail. It feels like work because it engages all our intellectual faculties. But it functions as avoidance because it protects us from the terrifying possibility of creating something imperfect. I see this in wannabe founders listening to podcasts on loop, wannabe TikTokkers watching hours of videos as “research,” and wannabe novelists who spend years developing character backstories for books they never begin.

The spi­der does­n’t face this prob­lem. It spins webs ac­cord­ing to an­cient ge­netic in­struc­tions, each one re­mark­ably sim­i­lar to the last. But hu­man cre­ativ­ity re­quires us to nav­i­gate the treach­er­ous ter­ri­tory be­tween what we can imag­ine and what we can ac­tu­ally do. We are cursed with vi­sions of per­fec­tion and blessed with the ca­pac­ity to fail to­ward them.

In a pho­tog­ra­phy class­room at the University of Florida, Jerry Uelsmann un­know­ingly de­signed the per­fect ex­per­i­ment for un­der­stand­ing ex­cel­lence. He di­vided his stu­dents into two groups.

The quan­tity group would be graded on vol­ume: one hun­dred pho­tos for an A, ninety pho­tos for a B, eighty pho­tos for a C, and so on.

The quality group only needed to present one perfect photo.

At se­mes­ter’s end, all the best pho­tos came from the quan­tity group.

The quan­tity group learned some­thing that can­not be taught: that ex­cel­lence emerges from in­ti­macy with im­per­fec­tion, that mas­tery is built through be­friend­ing fail­ure, that the path to cre­at­ing one per­fect thing runs di­rectly through cre­at­ing many im­per­fect things.

Think about what those hun­dred at­tempts ac­tu­ally were: a hun­dred con­ver­sa­tions with light. A hun­dred ex­per­i­ments in com­po­si­tion. A hun­dred op­por­tu­ni­ties to see the gap be­tween in­ten­tion and re­sult, and to ad­just. A hun­dred chances to dis­cover that re­al­ity has opin­ions about your vi­sion, and that those opin­ions are of­ten more in­ter­est­ing than your orig­i­nal plan.

The qual­ity group, mean­while, spent their se­mes­ter in the­o­ret­i­cal pur­ga­tory… an­a­lyz­ing per­fect pho­tographs, study­ing ideal com­po­si­tions, re­search­ing op­ti­mal tech­niques. They de­vel­oped so­phis­ti­cated knowl­edge about pho­tog­ra­phy with­out de­vel­op­ing the em­bod­ied wis­dom that comes only from re­peat­edly press­ing the shut­ter and liv­ing with the con­se­quences.

They be­came ex­perts in the map while the quan­tity group was ex­plor­ing the ter­ri­tory. When the se­mes­ter ended, the qual­ity group could tell you why a pho­to­graph was ex­cel­lent. The quan­tity group could make ex­cel­lent pho­tographs.

When you imagine achieving something, the same neural reward circuits fire as when you actually achieve it. This creates what neuroscientists call “goal substitution”—your brain begins to treat planning as accomplishing. The planning feels so satisfying because, neurologically, it is satisfying. You’re getting a real high from an imaginary achievement.

But here’s where it gets in­ter­est­ing: this neu­ro­log­i­cal quirk serves us beau­ti­fully in some con­texts and de­stroys us in oth­ers. An Olympic ath­lete vi­su­al­iz­ing their rou­tine cre­ates neural path­ways that im­prove ac­tual per­for­mance. They’re us­ing imag­i­na­tion to en­hance ca­pa­bil­ity they al­ready pos­sess. A sur­geon men­tally re­hears­ing a com­plex pro­ce­dure is op­ti­miz­ing skills they’ve al­ready de­vel­oped through years of prac­tice.

But when imag­i­na­tion be­comes a sub­sti­tute for prac­tice rather than an en­hance­ment of it, the same mech­a­nism be­comes a trap. The as­pir­ing nov­el­ist who spends months craft­ing the per­fect out­line gets the same neu­ro­log­i­cal re­ward as the nov­el­ist who spends months ac­tu­ally writ­ing. The brain can’t tell the dif­fer­ence be­tween pro­duc­tive prepa­ra­tion and elab­o­rate pro­cras­ti­na­tion.

The al­go­rith­mic ma­chin­ery of at­ten­tion has, of course, en­gi­neered sim­ple com­par­i­son. But it has also seem­ingly erased the process that makes mas­tery pos­si­ble. A time-lapse of some­one cre­at­ing a mas­ter­piece gets mil­lions of views. A real-time video of some­one strug­gling through their hun­dredth mediocre at­tempt dis­ap­pears into al­go­rith­mic ob­scu­rity.

Instagram shows you the fin­ished paint­ing, never the failed color ex­per­i­ments. TikTok shows you the per­fect per­for­mance, never the thou­sand im­per­fect re­hearsals. LinkedIn shows you the pro­mo­tion an­nounce­ment, never the years of unglam­orous skill-build­ing that made it pos­si­ble.

This creates what media theorist Neil Postman would have recognized as a “technological epistemology”: the platforms don’t just change what we see, they change what we think knowledge looks like. We begin to believe that learning should be immediately visible, that progress should be consistently upward, that struggle is evidence of inadequacy rather than necessity.

The truth is that every mas­ter­piece ex­ists within an in­vis­i­ble ecol­ogy of lesser works. The great paint­ing emerges from hun­dreds of stud­ies, sketches, and failed at­tempts. The bril­liant book grows from years of mediocre writ­ing. The break­through in­no­va­tion builds on count­less small im­prove­ments and par­tial fail­ures. We see the oak tree, never the acorns. The sym­phony, never the scales. The mas­ter­piece, never the ap­pren­tice­ship.

Too much am­bi­tion dis­rupts this nat­ural ecol­ogy; it de­mands that every at­tempt be sig­nif­i­cant, every ef­fort be wor­thy of the ul­ti­mate vi­sion. But the ecol­ogy of mas­tery re­quires some­thing our cul­ture has sys­tem­at­i­cally de­val­ued: the priv­i­lege of be­ing a be­gin­ner.

Watch a four-year-old fin­ger-paint. They don’t cre­ate for Instagram likes or gallery walls or mar­ket val­i­da­tion. They cre­ate for the pure joy of watch­ing col­ors bleed into each other, for the sat­is­fy­ing squish of paint be­tween fin­gers, for the magic of mak­ing some­thing ex­ist that did­n’t ex­ist be­fore. They pos­sess the free­dom to cre­ate with­out the bur­den of ex­pec­ta­tion.

Learning any­thing as an adult means re­claim­ing this be­gin­ner’s priv­i­lege. It means giv­ing your­self per­mis­sion to be bad at some­thing, to cre­ate things that serve no pur­pose other than your own dis­cov­ery and de­light. The be­gin­ner’s mind un­der­stands that mas­tery emerges from play, that ex­cel­lence grows from ex­per­i­men­ta­tion, that the path to cre­at­ing some­thing great runs di­rectly through cre­at­ing many things that aren’t great at all.

My alma mater, Olin College of Engineering, had a motto that rewired how I think about everything: “Do-Learn.” Those two words contain a revolution. Not “learn-then-do,” which implies you must earn permission to act. Not “think-then-execute,” which suggests theory should precede practice. But the radical idea that doing is learning! That understanding emerges from your hands as much as your head, that wisdom lives in the conversation between intention and reality.

This phi­los­o­phy saved me from my own per­fec­tion­ism more times than I can count. When I wanted to learn cook­ing, I did­n’t read recipes end­lessly; I burned onions and dis­cov­ered how heat ac­tu­ally be­haves. When I wanted to learn a lan­guage, I did­n’t mem­o­rize gram­mar rules; I stum­bled through con­ver­sa­tions with na­tive speak­ers who cor­rected my mis­takes in real time. When I wanted to learn how to mon­e­tize on YouTube, I did­n’t write elab­o­rate con­tent strate­gies; I started post­ing videos and let the bru­tal feed­back teach me what ac­tu­ally res­onated.

“Do-Learn” gave me permission to start before I was ready, fail early, fail often, to discover through making rather than thinking my way to readiness.

Here’s what happens to those brave enough to actually begin: you discover that starting is only the first challenge. The real test comes later, at “the quitting point”—that inevitable moment when the initial excitement fades and the work reveals its true nature.

The quit­ting point ar­rives dif­fer­ently for dif­fer­ent peo­ple, but it al­ways ar­rives. For writ­ers, maybe it’s around page 30 of their novel, when the ini­tial burst of in­spi­ra­tion runs out and they re­al­ize they have no idea what hap­pens next. For en­tre­pre­neurs, maybe it’s af­ter the first few months, when the mar­ket does­n’t re­spond as en­thu­si­as­ti­cally as friends and fam­ily did. For artists, it might come when they see their work ob­jec­tively for the first time and re­al­ize the enor­mous gap be­tween their vi­sion and their cur­rent ca­pa­bil­ity.

This is the mo­ment that sep­a­rates the quan­tity group from the qual­ity group: not at the be­gin­ning, but in the mid­dle, when the work stops be­ing fun and starts be­ing work.

The quantity group has an advantage here! They’ve already become intimate with imperfection. They’ve learned that each attempt is data, not judgment. They’ve developed what psychologists call “task orientation” rather than “ego orientation”; they’re focused on improving the work rather than protecting their self-image.

But the qual­ity group ap­proaches this mo­ment with a dif­fer­ent psy­chol­ogy. Having spent so much time craft­ing per­fect plans, they in­ter­pret early strug­gles as ev­i­dence that some­thing is wrong! They ex­pected the work to val­i­date their vi­sion, but in­stead it re­veals the dis­tance be­tween in­ten­tion and ca­pa­bil­ity.

I think this is where most cre­ative pro­jects die — not from lack of tal­ent or re­sources, but from mis­un­der­stand­ing the na­ture of the work it­self. The quit­ting point feels like fail­ure, but it’s ac­tu­ally where the real work be­gins.

It’s the tran­si­tion from work­ing with imag­i­nary ma­te­ri­als to work­ing with real ones, from the­ory to prac­tice, from plan­ning to build­ing.

The quit­ting point is the mo­ment you dis­cover whether you want to be some­one who had a great idea or some­one who made some­thing real.

Counterintuitively, the path to cre­at­ing your best work of­ten be­gins with per­mis­sion to cre­ate your worst.

When you lower the stakes, you en­ter into a con­ver­sa­tion with re­al­ity. Reality has opin­ions about your work that are of­ten more in­ter­est­ing than your own. Reality shows you what works and what does­n’t. Reality in­tro­duces you to happy ac­ci­dents and un­ex­pected di­rec­tions. Reality is the col­lab­o­ra­tor you did­n’t know you needed.

This is how stan­dards are ac­tu­ally achieved… through process, not procla­ma­tion. The pho­tog­ra­pher who takes a hun­dred pho­tos de­vel­ops stan­dards through prac­tice. The writer who writes daily de­vel­ops judg­ment through rep­e­ti­tion. The en­tre­pre­neur who starts small de­vel­ops wis­dom through ex­pe­ri­ence.

Last week, some­thing I wrote went vi­ral on Substack. In a mat­ter of days, I gained over a thou­sand new sub­scribers, watched my piece get shared across plat­forms, and felt that in­tox­i­cat­ing rush of work that res­onates be­yond your own echo cham­ber. I’m deeply grate­ful, truly. But al­most im­me­di­ately, a fa­mil­iar pit opened in my stom­ach. What now? What if the next one does­n’t land? How do you fol­low some­thing that took on a life of its own?

I found my­self open­ing blank pages and clos­ing them again, par­a­lyzed by the very suc­cess I’d worked to­ward for years.

When I expressed this fear, a reader named Harsh (@harshdarji) left this comment: “You are a shooter, your job is to keep shooting. Don’t even think about misses. Because as soon as you start worrying about the misses, you’ll start doubting your ability.”

Not much of a sports gal, but the metaphor moved me. And the irony was­n’t lost on me! Here I was, dis­pens­ing ad­vice about cre­ative con­sis­tency and the dan­gers of per­fec­tion­ism, yet falling into the ex­act trap I warn oth­ers about.

I started writ­ing on Substack in December 2022. It’s now mid-2025, and I’ve just reached my goal of be­ing in the top 50 Tech Substacks in the world. There was so much do­ing, do­ing, do­ing be­fore this one hit. Dozens of pieces that barely made a rip­ple. Months of show­ing up to write for an au­di­ence I was­n’t sure ex­isted.

But success has a way of making you forget the very process that created it. It whispers seductive lies about repeatability, about formulas, about the possibility of controlling outcomes rather than focusing on inputs. It makes you think you need to “top” your last success instead of simply continuing the practice that made success possible in the first place.

I need to re­mind my­self:

Your mas­ter­piece won’t emerge from your mind fully formed like Athena from Zeus’s head. It will emerge from your will­ing­ness to start badly and im­prove steadily. It will emerge from your com­mit­ment to show­ing up con­sis­tently rather than bril­liantly. It will emerge from your abil­ity to see fail­ure as in­for­ma­tion rather than in­dict­ment.

The work that will mat­ter most to you, the work that will sur­prise you with its sig­nif­i­cance, is prob­a­bly much smaller than you imag­ine and much closer than you think.

My Olin pro­fes­sors were right about those two words. Do. Learn. But what I did­n’t fully in­ter­nal­ize un­til af­ter grad­u­a­tion: the learn­ing never stops re­quir­ing the do­ing. The do­ing never stops re­quir­ing learn­ing. The work changes me. I change the work. The work changes me again.

We are still the only species cursed with vi­sions of what could be. But per­haps that’s hu­man­i­ty’s most beau­ti­ful ac­ci­dent. To be haunted by pos­si­bil­i­ties we can­not yet reach, to be dri­ven by dreams that ex­ceed our cur­rent grasp. The curse and the gift are the same thing: we see fur­ther than we can walk, dream big­ger than we can build, imag­ine more than we can cre­ate.

And so we make im­per­fect things in ser­vice of per­fect vi­sions. We write rough drafts to­ward mas­ter­pieces we may never achieve. We build pro­to­types of fu­tures we can barely en­vi­sion. We close the gap be­tween imag­i­na­tion and re­al­ity one flawed at­tempt at a time.

The pho­tog­ra­phy pro­fes­sor di­vided his class and waited. He knew what the dark­room would teach them, what the de­vel­op­ing chem­i­cals would re­veal. Fifty rolls of film later, some stu­dents had learned to make beauty from mess. Others had learned to make the­o­ries from anx­i­ety.

The film did­n’t care about their in­ten­tions. It only re­sponded to their will­ing­ness to press the shut­ter.

Your hands are al­ready dirty. The work is wait­ing. Lower the stakes, and be­gin.

...

Read the original on maalvika.substack.com »

3 385 shares, 12 trendiness

Mini NASes marry NVMe to Intel's efficient chip

I’m in the process of re­build­ing my home­lab from the ground up, mov­ing from a 24U full-size 4-post rack to a mini rack.

One of the most dif­fi­cult de­vices to down­size (especially eco­nom­i­cally) is a NAS. But as my needs have changed, I’m buck­ing the trend of all data­hoard­ers and I need less stor­age than the 120 TB (80 TB us­able) I cur­rently have.

It turns out, when you stop run­ning an en­tire YouTube chan­nel in your home (I’m in a stu­dio now), you don’t need more than a few ter­abytes, so my new con­ser­v­a­tive es­ti­mate is 6 ter­abytes of us­able space. That’s within the realm of NVMe SSD stor­age for a few hun­dred bucks, so that’s my new tar­get.

Three new mini NASes released over the past year are great candidates—the GMKtec G9, the K100, and the Beelink ME mini—and I have relationships with all three companies making them, so I am lucky to have been offered review units of each.

I’ve com­piled all my ex­pe­ri­ence with the three NASes into one con­cise YouTube video, which I’ve em­bed­ded be­low:

However, I thought I’d at least give a few notes here for those in­ter­ested in read­ing, not watch­ing.

Generally, all three mini NASes use an Intel N100/N150 chip, and divvy up its 9 PCIe Gen 3 lanes into 4 (or in the Beelink’s case, 6) M.2 NVMe SSD slots. They all have 2.5 Gbps net­work­ing, though the GMKtec and Beelink have dual 2.5 Gbps NICs.

The dif­fer­ence is in the ex­e­cu­tion, and each box has one or two mi­nor is­sues that keep me from giv­ing a whole-hearted rec­om­men­da­tion. When you’re deal­ing with tiny de­vices, there’s al­ways a com­pro­mise. So you have to see which com­pro­mises you’re most will­ing to deal with. (Or just buy a full size NAS if you have the space/​power for it.)

I pre­vi­ously re­viewed this NAS in April; see my blog post The (almost) per­fect mini NAS for my mini rack.

That ‘almost’ is doing a lot of heavy lifting, though; there were inherent cooling issues if you ran the box with four NVMe drives, and it was bad enough GMKtec went through a design revision.

Their newer ver­sion of the G9 has a much larger cool­ing vent on the side, and I be­lieve they may have tweaked some other as­pects of the de­sign. I’m not sure how it ends up, though, so I’ll have to post an up­dated re­view if I can get my hands on one of these up­dated mod­els.

The K100 is even smaller than the G9, and it keeps things cool much better, likely owing to much more ventilation on the sides, a heatsink that covers the VRMs (Voltage Regulator Modules) and some of the other hot chips, and a full metal enclosure.

The major downside is that despite costing $299 (over $100 more than the G9’s base spec), it drops eMMC (so you have to install an OS on one of the 4 NVMe SSDs, or on an external USB stick), and drops WiFi (this is wired-only—and a single 2.5 Gbps port versus two on the other two mini NASes).

The BIOS is also very light on cus­tomiza­tion, only re­ally al­low­ing tweak­ing the power re­store be­hav­ior and per­for­mance pro­file.

But it’s very quiet (less than 37 dBA under load), absolutely tiny, and uses the least power of all the Intel mini NASes I tested.

Speaking of quiet, the ME mini is even more quiet. It’s not silent, but the larger fan and ‘chimney’ heatsink design (reminiscent of Apple’s Trash Can Mac) mean it can keep from throttling even in ‘performance’ mode indefinitely—and barely scratch 35 dBA while doing so.

It has not 4 but 6 NVMe slots, though 5 of those slots are PCIe Gen 3 x1 (one lane of band­width is 8 GT/sec), and the last slot is x2 (two lanes).
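For context, here’s a rough conversion of that per-lane figure into real-world throughput (using the PCIe 3.0 128b/130b encoding overhead; numbers are approximate):

```latex
% Approximate usable bandwidth of one PCIe 3.0 lane
8\,\mathrm{GT/s} \times \frac{128}{130} \approx 7.88\,\mathrm{Gbit/s} \approx 985\,\mathrm{MB/s}
% So an x1 slot tops out around ~1 GB/s, and the x2 slot around ~2 GB/s.
```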

If you or­der one with a Crucial SSD pre-in­stalled, it will be in­stalled in that last x2 slot for max­i­mum per­for­mance—and the test unit I was shipped came with Windows 11 pre­in­stalled.

But it has built-in eMMC (64 GB), and I in­stalled Ubuntu on that for my test­ing. Another nice fea­ture is a built-in power sup­ply, which is quite rare on these mini PCs. Often you buy the thing based on the size of the mini PC, then hang­ing out back, there’s a DC power sup­ply the same size as the mini PC!

Not here, it’s got a small power sup­ply tucked in­side one part of the heatsink, though I’m not sure how much ther­mal trans­fer there is be­tween the heatsink and the power sup­ply. I did­n’t en­counter any over­heat­ing is­sues, though, and even with the pre­in­stalled Crucial SSD only touch­ing the ther­mal pad where the NVMe con­troller chip sits (there was an air gap be­tween the ther­mal pad and all the flash stor­age chips), I did­n’t have any con­cerns over ther­mals.

It did run a lit­tle hot­ter over­all than the K100, but it was also in full per­for­mance/​turbo boost mode, whereas the K100 comes from the fac­tory with a more bal­anced power pro­file.

The G9 is def­i­nitely the win­ner in terms of price, but the cool­ing trade­offs at least with the ini­tial re­vi­sion I re­viewed were not worth it, be­cause it would lock up and re­boot if it over­heated. The ME mini is cur­rently $209 (starting) on pre-sale, but that price could go up:

All three NASes would per­form fine for my home­lab needs, giv­ing at least around 250 MB/sec of read/​write per­for­mance, though the Beelink seems to suf­fer a lit­tle split­ting out all those NVMe slots with x1 band­width:
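For context on that ~250 MB/sec figure: if the tests ran over the 2.5 Gbps network (an assumption on my part), the link itself becomes the ceiling long before a single NVMe drive does:

```latex
% Theoretical ceiling of a 2.5 GbE link, before protocol overhead
\frac{2.5\,\mathrm{Gbit/s}}{8\,\mathrm{bit/byte}} = 312.5\,\mathrm{MB/s}
% Minus Ethernet/TCP and file-sharing overhead, roughly 250-295 MB/s is what you can expect in practice.
```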

And as I mentioned earlier, the K100 was definitely the most efficient, partly due to it shipping with a balanced power profile instead of ‘performance’, and partly because it ditches features like WiFi and eMMC which eat up a little more power:

In the end, there’s no clear winner for all cases. The GMKtec is the budget option, and supposedly they have a new thermal design that should solve the stability issues I was encountering. The K100 is tiny, uses the least energy, and runs the coolest… but it’s also the most expensive, and has no built-in eMMC. The Beelink is the most expandable, and is currently cheaper than the K100, but that’s a pre-sale price. And the extra drive slots mean each drive only taps into one lane of bandwidth instead of two.

So if you’re in the mar­ket for a tiny home­lab stor­age server, pick one based on your own re­quire­ments.

For me, I’m lean­ing to­wards the K100, but only if I can find a good deal on 4 TB NVMe SSDs, be­cause I need at least 6 TB of us­able space in a RAIDZ1 ar­ray.
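The 4 TB requirement makes sense if you run the RAIDZ1 numbers (a rough sketch that ignores ZFS metadata and padding overhead; the drive counts and sizes here are hypothetical):

```python
# RAIDZ1 keeps roughly one drive's worth of parity, so usable space is (N - 1) drives.
# This ignores ZFS metadata/padding and the usual advice to keep some free space.
def raidz1_usable_tb(num_drives: int, drive_tb: float) -> float:
    return (num_drives - 1) * drive_tb

print(raidz1_usable_tb(4, 2))  # 6 TB  -- meets a 6 TB target with zero headroom
print(raidz1_usable_tb(4, 4))  # 12 TB -- comfortable margin in a 4-slot NAS
```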

...

Read the original on www.jeffgeerling.com »

4 352 shares, 22 trendiness

House Passes Tax Bill Sending to President for Signature – Details Inside

Today, the House passed the Senate’s version of the “One Big Beautiful Bill Act” (OBBBA), marking a significant overhaul of federal tax policy. The bill reflects a major pivot in legislative priorities toward domestic production and pro-business tax policy.

The new law re­stores 100% bonus de­pre­ci­a­tion, re­in­states im­me­di­ate ex­pens­ing for U. S.-based R&D, ter­mi­nates dozens of Inflation Reduction Act (IRA) clean en­ergy pro­grams, and per­ma­nently ex­tends in­di­vid­ual tax cuts. It also in­tro­duces fresh in­cen­tives for mid­dle-class fam­i­lies and man­u­fac­tur­ers with de­tails out­lined be­low.

...

Read the original on www.kbkg.com »

5 263 shares, 10 trendiness

Everything around LLMs is still magical and wishful thinking

Hacker News brought this gem of a comment in yet another discussion about AI:

Much of the crit­i­cism of AI on HN feels dri­ven by devs who have not fully in­gested what is go­ing with MCP, tools etc. right now as not looked deeper than mak­ing API calls to an LLM

As I re­sponded, this is crypto all over again. If you dare ques­tion any­thing around AI, you’re just a clue­less mo­ron who has­n’t re­alised the one true mean­ing of things.

Another per­son chimed in with an as­tute ob­ser­va­tion:

The huge gap be­tween the peo­ple who claim It helps me some/​most of the time” and the other peo­ple who claim I’ve tried every­thing and it’s all bad” is re­ally in­ter­est­ing to me.

The an­swer to this is easy, sim­ple, and rather ob­vi­ous. However, in an in­dus­try in­creas­ingly over­whelmed by mag­i­cal, wish­ful think­ing, I haven’t seen many peo­ple ad­dress this.

So why is there such a gap? Why do some people see LLMs as magical wish-granting miracles, while others dismiss them as useless?

I’ve an­swered in the com­ments, and I’ll re­pro­duce the an­swer here.

Because we only see very dis­jointed de­scrip­tions, with no at­tempt to quan­tify what we’re talk­ing about.

For every de­scrip­tion of how LLMs work or don’t work we know only some, but not all of the fol­low­ing:

Do we know which pro­jects peo­ple work on? No

Do we know which code­bases (greenfield, ma­ture, pro­pri­etary etc.) peo­ple work on? No

Do we know the level of ex­per­tise the peo­ple have? No. Is the ex­per­tise in the same do­main, code­base, lan­guage that they ap­ply LLMs to? We don’t know.

How much ad­di­tional work did they have re­view­ing, fix­ing, de­ploy­ing, fin­ish­ing etc.? We don’t know.

Even if you find one person describing all of the above, you still won’t be able to compare their experience to anyone else’s, because you have no idea how others would answer any of those bullet points.

And that’s be­fore we get into how all these sys­tems and agents are com­pletely non-de­ter­min­is­tic, and what works now may not work even 1 minute from now for the ex­act same prob­lem.

And that’s before we ask the question of how a senior engineer’s experience with a greenfield project in React with one agent and model can even be compared to that of a non-coding designer working in a closed-source proprietary codebase in OCaml with a different agent and model (or even the same agent and model, because of non-determinism).

And yet, hype and magic have such a sway over our industry that seemingly a majority of people just buy into whatever claim, however outrageous or truthful it is.

It’s especially egregious when it comes from “industry leaders” who just say things like this:

I’ve been us­ing Claude Code for a cou­ple of days, and it has been ab­solutely ruth­less in chew­ing through legacy bugs in my gnarly old code base. It’s like a wood chip­per fu­eled by dol­lars. It can power through shock­ingly im­pres­sive tasks, us­ing noth­ing but chat.

You don’t even se­lect con­text. You just open your heart and your wal­let, and Claude Code takes the wheel.

… As long as the bank au­tho­riza­tions keep com­ing through, it will push on bug fixes un­til they’re de­ployed in pro­duc­tion, and then start scan­ning through the user logs to see how well it’s do­ing.

And yet there are 1.8k likes and 204 re­posts.

So yeah. If you don’t turn off the part of your brain responsible for critical thinking and buy into the hype hook, line, and sinker, you’re a clueless moron who doesn’t understand the true meaning of things.

“Wait. What about you, the author?”, you may ask.

I’ve used most of the tools avail­able un­der the sun in mul­ti­ple com­bi­na­tions. I have side pro­jects en­tirely de­signed by Vercel’s v0. I have a full mon­i­tor­ing app built in SwiftUI (I know zero Swift) with Claude Code. I cre­ate posters for events I host with Midjourney. I vibe-coded an MCP server in Elixir (but not in phoenix.new).

Like most skep­tics and crit­ics, I use these tools daily.

And 50% of the time they work 50% of the time.

It’s a non-de­ter­min­is­tic sta­tis­ti­cal ma­chine. When it works, it may feel like magic. But it’s nei­ther magic nor is it en­gi­neer­ing.

The whole dis­course around LLMs as­sumes it’s strictly one of the two.

And here we are.

...

Read the original on dmitriid.com »

6 240 shares, 9 trendiness

EverQuest

It is­n’t al­ways or even usu­ally the pi­o­neers who reap the re­wards of the trails they blaze. As of­ten as not, some prag­matic Johnny-come-lately pops in to make off with the booty.

Such was the case in the MMORPG space in the late 1990s. There Ultima Online demon­strated that there was an au­di­ence for a per­sis­tent fan­tasy world where peo­ple could live out al­ter­na­tive ex­is­tences to­gether through the magic of the Internet. Yet it was an­other game called EverQuest that turned the proof of con­cept into a thriv­ing busi­ness that en­thralled hun­dreds of thou­sands of play­ers for years on end, gen­er­at­ing enor­mous amounts of money in the process. For, while the first-mover ad­van­tage should not be un­der­es­ti­mated, there’s some­thing to be said for be­ing the sec­ond mover as well. EverQuest got to watch from back­stage as Ultima Online flubbed line af­ter line and stum­bled over as­sorted pieces of scenery. Then, with a list in hand of what not to do, it was able to stride con­fi­dently onto cen­ter stage to a stand­ing ova­tion. No one ever said that show busi­ness is fair.

EverQuest came to evince a markedly dif­fer­ent per­son­al­ity than Ultima Online, but its ori­gin story bears some un­canny sim­i­lar­i­ties to that of the older ri­val it de­mol­ished. Like Ultima Online, EverQuest was born as a sort of skunk-works pro­ject within a larger com­pany whose up­per man­age­ment re­ally was­n’t all that in­ter­ested in it. Like Ultima Online, EverQuest en­joyed the sup­port of just one ex­ec­u­tive within said com­pany, who set it in mo­tion and then pro­tected and nour­ished it like the prover­bial mother hen. And like the ex­ec­u­tive be­hind Ultima Online, the one be­hind EverQuest plucked a pair of de­sign­ers out of ut­ter ob­scu­rity to help him hatch the egg.

Perhaps the most sur­pris­ing as­pect of the EverQuest ori­gin story is the name of the com­pany where it all went down: Sony Interactive Studios America. Suffice to say that, if you were to guess circa 1996 which pub­lisher and stu­dio would launch a mar­ket-trans­form­ing MMORPG later in the decade, Sony would not be high in your rank­ings. The Japanese mega-corp was fly­ing high at the time, with a promi­nent foot­print in most sec­tors of home elec­tron­ics and main­stream en­ter­tain­ment, but it had hardly any pres­ence at all on per­sonal com­put­ers. The Sony PlayStation, launched in September of 1995 in North America and Europe, was on its way to be­com­ing the most suc­cess­ful sin­gle games con­sole of the twen­ti­eth cen­tury, a true mass-mar­ket cul­tural sen­sa­tion that broad­ened the de­mo­graphic for videogames and for­ever changed the way that the pub­lic per­ceived them. With a main­stream pile dri­ver like that to hand, why should Sony want to waste its time with a wonky vir­tual world for nerds cos­play­ing as dwarves and mages?

It wound up do­ing so thanks to one man. At the be­gin­ning of 1996, John Smedley had been work­ing for a few years as a pro­ducer at Sony Interactive, which fo­cused al­most ex­clu­sively on sports games for the PlayStation. Just 28 years old, Smedley al­ready had a cor­ner of­fice with a view and a salary to match, as he and his col­leagues rode the wave of the con­sole’s in­cred­i­ble early suc­cess.

There was just one prob­lem: Smedley did­n’t par­tic­u­larly like sports, whether they hap­pened to be played on the field or on the tele­vi­sion screen. He had grown up as one of the kids that the jocks made fun of, the kind who walked to school every day with a Dungeons & Dragons rule book or two un­der his arm. It was only thanks to op­por­tunism and hap­pen­stance that he had wound up helm­ing pro­jects aimed at gamers who wor­shiped John Madden rather than Gary Gygax. Now, he thought that the bur­geon­ing Internet would soon make it pos­si­ble to re­al­ize an old dream of 1980s nerds like him: that of play­ing Dungeons & Dragons on­line, when­ever it suited you, in­stead of only when you could arrange to meet in per­son with five or so like-minded friends — as­sum­ing you even had such friends. He had a rough blue­print for how it might work, in the form of Neverwinter Nights, a game on America Online that let you ef­fec­tively play one of the old sin­gle-player SSI Gold Box CRPGS over the Internet, tak­ing a per­sis­tent char­ac­ter through a se­ries of ad­ven­tures with friends and strangers. It was lim­ited in a thou­sand ways, but it was, so Smedley be­lieved, the har­bin­ger of a whole new cat­e­gory of game. And, af­ter work­ing for so long on games he re­ally did­n’t care about, he wanted to make one that he could feel pas­sion­ate about.

Smedley took his idea to his boss Kelly Flock, the newly ar­rived head of Sony Interactive. It was a crazy thing to pro­pose on the face of it, hav­ing ab­solutely noth­ing to do with any­thing the stu­dio had ever done be­fore nor any of the strate­gic pri­or­i­ties of the mother cor­po­ra­tion; the PlayStation did­n’t have any on­line ca­pa­bil­i­ties what­so­ever, mean­ing this game would have to run on per­sonal com­put­ers. But Sony was flush with PlayStation cash and bravado, and Flock was ap­par­ently in a gen­er­ous mood. He told Smedley that he could take $800,000 and hire a team to in­ves­ti­gate the fea­si­bil­ity of his idea, as long as he con­tin­ued to de­vote the ma­jor­ity of his time to his pri­mary job of churn­ing out crowd-pleas­ing sports games.

Those of you fa­mil­iar with the tale of Ultima Online will rec­og­nize Sony Interactive stand­ing in for Origin Systems, and John Smedley tak­ing the role of Richard Garriott. EverQuest’s equiv­a­lent of Raph and Kristen Koster, who swept into Origin from the ob­scure world of tex­tual MUDs to cre­ate Ul­tima Online in their im­age, was a pair of friends named Brad McQuaid and Steve Clover. They were pro­gram­ming au­toma­tion and book­keep­ing sys­tems for a San Diego plant nurs­ery dur­ing the early 1990s, work­ing on a sin­gle-player CRPG of their own dur­ing their off hours. They called it WarWizard. Unfortunately, it was for the Commodore Amiga, a dy­ing plat­form in North America. Unable to in­ter­est a pub­lisher in a game in an un­fash­ion­able genre for a com­puter that was fast dis­ap­pear­ing, they re­leased War­Wiz­ard un­der the share­ware model in 1993; the fol­low­ing year, they made an MS-DOS port avail­able as well. By McQuaid and Clover’s own later re­ports, it gar­nered about 1500 reg­is­tra­tions — not bad for a share­ware game, but def­i­nitely not enough to let the friends quit their day job.

Undaunted, they pushed ahead with a WarWizard 2. Desperate for feedback, they uploaded a preview of the sequel to the Internet. On a lark, McQuaid appended a note: “We are releasing this demo as a business card of sorts, in order to introduce games publishers, developers, and investors to our company, MicroGenesis. If you have any question whatsoever, please contact Brad McQuaid.” This hopeful — not to say naïve — shot in the dark would change both of their lives.

For one day not long af­ter his meet­ing with his boss, John Smedley stum­bled across the demo, thought it was pretty im­pres­sive for the work of two guys with a day job, no­ticed that the two guys in ques­tion were liv­ing in Sony Interactive’s home­town of San Diego, and de­cided to take them up on their of­fer and con­tact them. Thus Brad McQuaid picked up his phone one rainy evening to hear a Sony pro­ducer on the other end of the line, ask­ing him and his part­ner to come visit him in his slick glass-walled of­fice down­town. It seemed too in­cred­i­ble to be true — but it was.

So, McQuaid and Clover, feel­ing un­com­fort­able and thor­oughly out of place, were ush­ered by a sec­re­tary past the PlayStations in the an­te­rooms and the NFL and MLB posters lin­ing the walls at Sony Interactive, to see the star pro­ducer in his na­tive habi­tat. What did these peo­ple want with the likes of them, two scruffy mis­fits hus­tling to make a buck ped­dling turn-based mon­ster-fight­ing games on the share­ware mar­ket? Then, as soon as the door shut be­hind the sec­re­tary, they felt sud­denly at home. John Smedley was, they learned to their re­lief, one of them: a kid who had grown up play­ing Dungeons & Dragons in his school’s cafe­te­ria and Ultima on his Apple II. It turned out that Smedley did­n’t want them to fin­ish WarWizard 2 for Sony Interactive; he wanted them to make some­thing even more ex­cit­ing. He ex­plained his vi­sion of a CRPG that you could play on­line, and asked them whether they’d like to help him make it. They said that they would. Smedley now learned that McQuaid and Clover were, like the Kosters over at Origin, pas­sion­ate MUDders as well as semi-pro­fes­sional sin­gle-player CRPG de­vel­op­ers. They knew ex­actly what kind of ex­pe­ri­ence Smedley was en­vi­sion­ing, and were over­flow­ing with ideas about how to bring it to fruition. Smedley knew right then that he’d hit pay dirt.

McQuaid and Clover were hired by Sony Interactive in March of 1996. They then pro­ceeded to spend about six months in a win­dow­less of­fice far less plush than that of John Smedley, cre­at­ing a de­sign doc­u­ment for the game that they were al­ready call­ing EverQuest; the name had felt so right as soon as it was pro­posed by Clover that an­other one was never se­ri­ously dis­cussed. Smedley in­sisted that the doc­u­ment de­scribe the game down to the very last de­tail. Here we see a marked con­trast to the de­vel­op­ment process that led to Ultima Online, which came into its own grad­u­ally and it­er­a­tively, through a long string of playable de­sign pro­to­types. Smedley’s back­ground as a pro­ducer of games that sim­ply had to ship by a cer­tain date — the National Football League was not likely to de­lay its sea­son opener in or­der to give that year’s NFL videogame an ex­tra week or two in the oven — had taught him that the best way to make soft­ware ef­fi­ciently was to know ex­actly what you were in­tend­ing to make be­fore you wrote the first line of code.

At this point, then, we’re al­ready be­gin­ning to see some of the dif­fer­ences in per­son­al­ity be­tween Ul­tima Online and EverQuest emerge. The Kosters were ide­al­ists and the­o­rists at heart, who treated Ultima Online al­most as a so­ci­o­log­i­cal ex­per­i­ment, an at­tempt to cre­ate a vir­tual space that would in turn give birth to a gen­uine dig­i­tal so­ci­ety. Smedley, McQuaid, and Clover, on the other hand, had less high­fa­lutin am­bi­tions. EverQuest was to be a place to hang out with friends and a fun game to play with them, full stop. The more grandiose of the dreams nursed by the Kosters — dreams of elec­tions and gov­ern­ments, of a real econ­omy dri­ven by real peo­ple play­ing as shop­keep­ers, tai­lors, tour guides, and con­struc­tion fore­men, of a vir­tual world with a fully im­ple­mented nat­ural ecol­ogy and a craft­ing sys­tem that would let play­ers build any­thing and every­thing for them­selves — were nowhere to be found in the fi­nal 80-page de­sign doc­u­ment that McQuaid and Clover pre­sented and Smedley ap­proved in September of 1996. They all agreed that a bla­tantly ar­ti­fi­cial, gam­i­fied vir­tual world was­n’t a prob­lem, so long as it was fun. In these pri­or­i­ties lay most of what would make their game such a suc­cess, as well as most of what ide­al­ists like the Kosters would find dis­ap­point­ing about it and the later MMORPGs that would mimic its ap­proaches.

In both the broad strokes and many of the de­tails, the think­ing of McQuaid and Clover was heav­ily in­flu­enced by an open-source MUD toolkit called DikuMUD that had been re­leased by a group of stu­dents at the University of Copenhagen in 1991. Its re­la­tion­ship to other MUDs fore­shad­owed the re­la­tion­ship of the even­tual EverQuest to Ultima Online: DikuMUD was all about keep­ing the pro­ceed­ings stream­lined and fun. As the game-de­sign the­o­rist Flatfingers has writ­ten on his blog, it em­pha­sized easy-to-un­der­stand and ac­tion-ori­ented com­bat over other forms of in­ter­ac­tion [and] sim­pli­fied in­ter­ac­tions down to eas­ily track­able, table-dri­ven sta­tis­tics.” The sim­plic­ity and ac­ces­si­bil­ity of the DikuMUD en­gine from the play­er’s per­spec­tive, com­bined with the equal ease of set­ting a new in­stance of it up on the server side, had made it the dom­i­nant force in tex­tual MUDs by the mid-1990s, much to the dis­plea­sure of peo­ple like the Kosters, who pre­ferred more sim­u­la­tion­ally in­tense vir­tual worlds. This de­sign di­a­log was now about to be re­peated in the graph­i­cal con­text.

Then, too, there is one other important influence on EverQuest that we can’t afford to neglect. While McQuaid and Clover were still working on their design document, they saw 3DO’s early, halfheartedly supported graphical MMORPG Meridian 59 go through beta testing. It convinced them that first-person 3D graphics were the way to go — another point of departure with Ultima Online, which clung to an old-school overhead third-person view, just like the single-player Ultima CRPGs before it. In the age of DOOM and Quake, McQuaid and Clover judged, nothing less than immersive 3D would do for their game. And so another keystone and differentiator fell into place.

With the de­sign doc­u­ment com­pleted, Smedley found a larger room to house the pro­ject in Sony Interactive’s build­ing and slowly put a team into place around his two wun­derkinds. Some of the pro­gram­mers and artists who joined them were hired from out­side, while oth­ers were moved over from other parts of the com­pany as their cur­rent pro­jects were com­pleted. (It turned out that Smedley had­n’t been the only clos­eted nerd at Sony Interactive con­demned to make sports games…) As the more out­go­ing and as­sertive of Smedley’s orig­i­nal pair of re­cruits, Brad McQuaid took the role of pro­ducer and day-to-day pro­ject lead, while Steve Clover be­came the lead pro­gram­mer as well as de­signer. Perhaps the most im­por­tant of the new­com­ers was Rosie Cosgrove (now Rosie Strzalkowski), the lead artist. She shaped the game’s vi­sual aes­thetic, a blend­ing of the epic and the whim­si­cal, full of bright pri­mary col­ors and pas­tels that popped off the screen. Recognizing that photo-re­al­ism was­n’t go­ing to be pos­si­ble with the cur­rent state of 3D-graphics tech­nol­ogy, she em­braced the jank­i­ness. The graph­ics would be­come just one more sign that EverQuest, in con­trast to that other big MMORPG, was all about straight­for­ward, even slightly silly fun, with no de­gree or in­ter­est in so­ci­ol­ogy re­quired.

While the team was co­a­lesc­ing, they had the price­less op­por­tu­nity to ob­serve the suc­cesses and tribu­la­tions of their ri­val vir­tual world from Origin Systems, which, true to the it­er­a­tive ap­proach to game de­vel­op­ment, was con­duct­ing a se­ries of small-scale pub­lic test­ing rounds. A wa­ter­shed was reached in June of 1997, when Ultima Online con­ducted a two-month beta test, its biggest one ever and the last one be­fore the game’s of­fi­cial re­lease. Needless to say, every­one on the EverQuest team watched the pro­ceed­ings closely. What caught all of the in­ter­ested ob­servers by sur­prise — not least the ide­al­ists at Origin Systems — was the quan­tity of play­ers who found their fun nei­ther as no­ble ad­ven­tur­ers nor as shop­keep­ers, tai­lors, tour guides, politi­cians, or con­struc­tion fore­men, but rather as mass mur­der­ers, killing their fel­low play­ers the sec­ond they let their guard down. It ought to have been a five-alarm wake-up call for Origin, be­ing the first in­du­bitable har­bin­ger of a per­sis­tent prob­lem that would pave the way for EverQuest to re­place its older, bet­ter cre­den­tialed ri­val as the MMORPG du jour. But they re­fused to coun­te­nance the ob­vi­ous so­lu­tion of just mak­ing it pro­gram­mat­i­cally im­pos­si­ble for one player to kill an­other.

After Ultima Online launched for real in September of 1997, the de­vel­op­ers be­hind it con­tin­ued to strug­gle to find a way of ad­dress­ing the prob­lem of player mur­der with­out com­pro­mis­ing their most cher­ished ideals of a fun­da­men­tally player-dri­ven on­line so­ci­ety. They en­cour­aged their cit­i­zens to form po­lice forces, and im­ple­mented small changes to try to help the law-and-or­der con­tin­gent out, such as print­ing the names of those player char­ac­ters who had killed at least five other player char­ac­ters in scar­let let­ters. None of it worked; in­stead of a badge of shame, the scar­let let­ters be­came a badge of honor for the griefers” who lived to cause chaos and dis­tress. In his own words, Raph Koster put his play­ers through a slow-drip tor­ture of slowly tight­en­ing be­hav­ior rules, try­ing to save the emer­gence while tamp­ing down the bad be­hav­ior. The cost was the loss of hun­dreds of thou­sands of play­ers.” After a wildly vac­il­lat­ing start, Ultima Online sta­bi­lized by mid-1998 at about 90,000 ac­tive sub­scribers. That was­n’t noth­ing by any means — on the con­trary, it rep­re­sented about $1 mil­lion worth of rev­enue for Origin every sin­gle month — but it nev­er­the­less left a huge open­ing for an­other game that would be more prag­matic, less ide­o­log­i­cal, and by ex­ten­sion less mur­der­ous, that would be more fo­cused on sim­ple fun.

Steve Clover signed up for Ul­tima Online and logged on as soon as he could do so. His first hour in the world was much the same as that of count­less thou­sands of play­ers to come, many of whom would never log in again.

I cre­ated my own sword. I crafted my own ar­mor and all that. I put all this stuff on, I head out to do some ad­ven­tur­ing, and all of a sud­den the screen starts slow­ing down. I’m like, oh, this is weird. What’s go­ing on? And about a hun­dred guys run on screen and [beat] me to death, right?

I said, that will not hap­pen in our game. That ab­solutely will not hap­pen.

So, in the emerging parlance of the MMORPG, EverQuest would be strictly a “PvE,” or “player versus environment,” game, rather than a “PvP” game. The most important single key to its extraordinary success was arguably this one decision to make it literally impossible to attack your fellow players. For it would give EverQuest’s world of Norrath the reputation of a friendly, welcoming place in comparison to the perpetual blood sport that was life in Ultima Online’s Britannia. Perhaps there is some political philosophy to be found in EverQuest after all: that removing the temptation to commit crime serves to make everyone a little bit nicer to each other.

In the mean­time, while Ul­tima Online was cap­tur­ing head­lines, the nascent EverQuest kept a low pro­file. It was sel­dom seen in the glossy gam­ing mag­a­zines dur­ing 1997 and 1998; the jour­nal-of-record Computer Gaming World pub­lished only one half-page pre­view in all that time. Instead EverQuest re­lied on a grass-roots, guer­rilla-mar­ket­ing ef­fort, led by none other than Brad McQuaid. He was all over the news­groups, web­sites, and chat chan­nels pop­u­lated by hard­core MUDders and dis­grun­tled refugees from mur­der­ous Britannia. One of his col­leagues es­ti­mated that he spent half his av­er­age work­ing day evan­ge­liz­ing, query­ing, and de­bat­ing on the Internet. (Because McQuaid’s work­ing days, like those of every­one else on the team, tended to be in­or­di­nately long, this was less of a prob­lem than it might oth­er­wise have been.) His ef­forts grad­u­ally paid off. EverQuest was voted Best Online Only Game by crit­ics who at­tended the an­nual E3 show in May of 1998, de­spite hav­ing had only a back­room, in­vi­ta­tion-only pres­ence there. The peo­ple mak­ing it be­lieved more than ever now that there was a pent-up hunger out there for a more ac­ces­si­ble, fun-fo­cused al­ter­na­tive to Ultima Online. They be­lieved it still more when they moved into the pub­lic beta-test­ing stage, and were swamped by ap­pli­cants want­ing to join up. The last stage of test­ing in­volved fully 25,000 play­ers, more than had par­tic­i­pated in Ultima Online’s fi­nal beta.

In the midst of the run-up to launch day, John Smedley was plunged into a last-minute scram­ble to find a new home for his brain­child. Sony Interactive had by now been re­branded 989 Studios, a punchier name re­flect­ing its on­go­ing fo­cus on sports games. Meanwhile the Sony mother ship had be­gun ques­tion­ing the pres­ence of this on­line-only com­puter game at a stu­dio whose iden­tity was sin­gle-player PlayStation games. EverQuest would not be just an­other ship-it-and-move-on sports ti­tle; it would re­quire a whole in­fra­struc­ture of servers and the data pipelines to feed them, along with a sub­stan­tial sup­port staff to main­tain it all and gen­er­ate a never-end­ing stream of new con­tent for the play­ers. Considered in this con­text, the name of EverQuest seemed all too apro­pos. What did 989 Studios know about run­ning a for­ever game? And was it re­ally worth the ef­fort to learn when there was so much money to be made in those bread-and-but­ter sports games? One day, Kelly Flock called John Smedley into his of­fice to tell him that he could­n’t con­tinue to feed and nur­ture his baby. If he wanted to keep EverQuest alive, he would have to find an­other care­giver.

Luckily, there was an­other di­vi­sion at Sony known as Sony Online Entertainment that was try­ing to make a go of it as an Internet gam­ing por­tal. Through a se­ries of cor­po­rate con­tor­tions that we need not delve into too deeply here, Smedley’s skunk works was spun off into a nom­i­nally in­de­pen­dent com­pany known as Verant Interactive, with Sony Online as its chief in­vestor.

All of this was happening during the fevered final months of testing. And yet, remarkably, the folks on the front lines were scarcely aware of the crisis at all; knowing that they had more than enough to worry about already, Smedley chivalrously shielded them from the stress that was keeping him awake at night. “I don’t remember a, ‘Hey, guys, we’re getting cancelled,’” says EverQuest “World Builder” — that was his official title — Geoffrey Zatkin. “What I remember is, ‘Hey, guys, we’re spinning out to our own studio. You’re no longer going to be Sony employees. You’re going to be employees of Verant Interactive.’” The best news of all was that Smedley was finally able to give up his hated sports games and join them full-time as the head of Verant.

EverQuest went live on March 16, 1999, a day that ought to go down in his­tory as mark­ing the end of the early, ex­per­i­men­tal phase of graph­i­cal MMORPGs and mark­ing their ar­rival as a se­ri­ous com­mer­cial force in gam­ing. To be sure, that orig­i­nal EverQuest client does­n’t look much like we ex­pect a piece of pol­ished com­mer­cial en­ter­tain­ment soft­ware to look to­day; the 3D view, which fills barely half the screen as a sneaky way of keep­ing frame rates up, is sur­rounded by gar­ish-look­ing but­tons, icons, and sta­tus bars that seemed to have been plopped down more or less at ran­dom, with a scrolling MUD-like text win­dow that’s al­most as large as the world view tak­ing pride of place in the mid­dle of it all. But at the time, it was all very cut­ting edge, mak­ing the MMORPGs that had come be­fore it look pos­i­tively an­ti­quated in com­par­i­son. A late de­ci­sion to re­quire a 3D-accelerator card to even start the client had caused much de­bate at Verant. Would they be giv­ing up too many po­ten­tial sub­scribers thereby?

They need­n’t have wor­ried. A healthy 10,000 peo­ple signed up on the first day, and that pace was main­tained for days af­ter­ward.

Like the worlds of Ultima Online and all of the early MMORPGs, EverQuest’s world of Norrath was ac­tu­ally many sep­a­rate in­stances of same, each run­ning on its own server that was ca­pa­ble of host­ing no more than a few thou­sand play­ers at one time. Verant had thought they were pre­pared for an on­slaught of sub­scribers — the best of all pos­si­ble prob­lems for a new MMORPG to have — by hav­ing plenty of servers set up and ready to go. But they had failed to fol­low the lead of Ultima Online in one other im­por­tant re­spect: whereas Origin Systems scat­tered their servers around the coun­try, Verant ran all of theirs out of a sin­gle build­ing in San Diego. As ur­ban leg­end would have it, EverQuest con­sumed so much band­width af­ter its launch that it dis­rupted Internet con­nec­tions through­out the city, un­til more ca­bles could be laid. This is al­most cer­tainly an ex­ag­ger­a­tion, but it is true that the pipes go­ing di­rectly into Verant’s of­fices at least were woe­fully in­ad­e­quate. Everyone scram­bled to ad­dress the emer­gency. John Smedley re­mem­bers personally log­ging into the Cisco routers” to try to tweak a few more bytes worth of through­put out of the things: I could ac­tu­ally work with the Versatile Interface Processor cards al­most as well as any of our net­work en­gi­neers at the time.” Again, though, too many cus­tomers is al­ways a bet­ter prob­lem to have than the al­ter­na­tive, and this one was grad­u­ally solved.

Computer Gaming World did­n’t pub­lish its EverQuest re­view un­til the July 1999 is­sue. This was a sur­pris­ingly late date, even given the stan­dard two-month print-mag­a­zine lead time, and it pointed to the emerg­ing re­al­ity of the glossy mag­a­zines be­com­ing es­tranged from their tra­di­tional read­er­ship, who were now get­ting more and more of their news and re­views on­line, the same place where they were do­ing more and more of their ac­tual gam­ing. Nevertheless, Thierry Nguyen’s be­lated re­view for the mag­a­zine was a fair and co­gent one, es­pe­cially in the in­evitable com­par­i­son with Ultima Online — and in an­other, less in­evitable com­par­i­son that makes more sense than you might ini­tially think.

Ultima Online is a world sim­u­la­tion; EverQuest is a so­cial hack-and-slash. Ultima Online has more free­dom built into it, and you can ac­tu­ally make a liv­ing off of trade skills. EverQuest is more about sheer ad­ven­ture and com­bat, and the trade skills are use­ful, but you can’t re­ally be a tai­lor or a baker.

EverQuest is the Diablo of 1999. An odd com­par­i­son, you say? Well, here’s how they’re alike: they both of­fer a very sim­ple premise (“go forth and thwack many crea­tures to gain lev­els and loot”), and de­spite this sim­ple premise (or maybe be­cause of it), they’re both damn ad­dic­tive and fun.

Diablo in a vastly larger, truly per­sis­tent world re­ally is­n’t a ter­ri­ble way to think about EverQuest. While the folks at Origin Systems ex­pected their play­ers to make their own fun, to see what lay be­hind yon­der hill for the sake of the jour­ney, Verant gave theirs a ma­trix of pre-crafted quests and goals to pur­sue. While Ultima Online’s world of Britannia be­longed to its in­hab­i­tants, EverQuest’s world of Norrath be­longed to Verant; you just got to play in it. Happily for every­body, do­ing so could be a lot of fun. Sometimes the most de­li­cious sort of free­dom is free­dom from re­spon­si­bil­ity.

By October of 1999, EverQuest had more than 150,000 subscribers, leaving Ultima Online in its dust. Raph Koster believes, probably correctly, that this trouncing of his own virtual world was driven as much by the “safety” of having no players killing other players as it was by EverQuest’s trendy 3D graphics. Ultima Online would finally relent and open safe servers of its own in 2000, but that was bolting the gate after the mounted murderers had already galloped through.

That same October of 1999, Microsoft launched Asheron’s Call, an­other 3D MMORPG that pre­vented its play­ers from killing other play­ers. Yet even with all of the ruth­less mar­ket­ing mus­cle and the mas­sive server in­fra­struc­ture of the biggest mo­nop­oly in tech­nol­ogy be­hind it, it never came close to ri­val­ing EverQuest in pop­u­lar­ity. It would be a long time be­fore any other vir­tual world would. By the end of 2000, EverQuest was clos­ing in on 350,000 sub­scribers. The fol­low­ing year, it hit 400,000 sub­scribers. Its growth then slowed down con­sid­er­ably, but still it did not halt; EverQuest would peak at 550,000 sub­scribers in 2005.

In May of 2000, Verant Interactive’s brief-lived pe­riod of nom­i­nal in­de­pen­dence came to an end, when the spin­off was ab­sorbed back into Sony. Soon af­ter, the old Sony Online Entertainment sub­sidiary was shut down, hav­ing failed to set the world on fire with its own sim­ple on­line games based on tele­vi­sion game shows like Wheel of Fortune and Jeop­ardy!, and Verant ap­pro­pri­ated its name.

In ad­di­tion to charg­ing its sub­scribers a re­cur­ring fee of $10 per month, this new edi­tion of Sony Online dis­cov­ered a valu­able sec­ondary rev­enue stream in boxed ex­pan­sion packs for EverQuest. No fewer than ten of these were re­leased be­tween 2000 and 2005, in­tro­duc­ing new re­gions of Norrath to ex­plore, new mon­sters to fight, new races and classes to fight them as, new spells to cast, and new magic items to col­lect, whilst also re­fin­ing the graph­ics and in­ter­face on the client side to keep pace with com­pet­ing MMORPGs. Some ar­gued that a pay­ing cus­tomer was rea­son­ably en­ti­tled to ex­pect at least some of this ad­di­tional con­tent and re­fine­ment to be de­liv­ered as part of the base sub­scrip­tion pack­age. And in­deed, those look­ing for a mea­sure of po­etic jus­tice here were per­chance not en­tirely de­prived. There is rea­son to sus­pect that all these ex­pan­sions be­gan in time to act as a drag on the game’s growth: the need to shell out hun­dreds of dol­lars and carry home a ver­i­ta­ble pile of boxes in or­der to be­come a fully vested cit­i­zen of Norrath was likely one of the rea­sons that EverQuest’s growth curve lev­eled off when it did. Sony Online could still prof­itably sell ex­pan­sions to the faith­ful, but those same ex­pan­sions made the bar­rier to en­try higher and higher for new­com­ers.

Still, the fact re­mains that EverQuest was for six years the most pop­u­lar MMORPG of them all, in de­fi­ance of a gamer cul­ture whose ap­petite for nov­elty was no­to­ri­ous. There was no short­age of would-be chal­lengers in its space; by a cou­ple of years into the new mil­len­nium, scarcely a month went by with­out some new MMORPG throw­ing its hat into the ring. And small won­der: to pub­lish­ers, the idea of a game that you could keep charg­ing peo­ple for was tempt­ing to say the least. Some of the new­com­ers sur­vived, some even thrived for a while with sub­scriber counts as high as 250,000, but none came close to match­ing EverQuest in mag­ni­tude or longevity. A vir­tual world like Norrath had a pe­cu­liar stick­i­ness about it that was­n’t a fac­tor with other types of games. To leave EverQuest and go play some­where else meant to leave be­hind a char­ac­ter you might have spent years build­ing up, and, even more poignantly, to leave be­hind an en­tire cir­cle of on­line friends that you had as­sem­bled over the course of that time. This was a tough pill for most peo­ple to swal­low, no mat­ter how en­tic­ing Arthurian Britain, the galaxy far, far away of Star Wars, or a world out of Japanese anime might sound in com­par­i­son to the fairly generic, cookie-cut­ter fan­tasy world of Norrath.

The huge numbers of subscribers led to knock-on effects that EverQuest’s developers had never anticipated. Within months of the game’s launch, enterprising players began selling in-world loot on sites like eBay; soon the most successful of these virtual auctioneers were making thousands of dollars every month. “What’s crazy? Me playing for twelve hours a day or someone paying real money for an item that doesn’t exist?” asked one member of this new entrepreneurial class who was profiled in The Los Angeles Times. “Well, we’re both crazy. God bless America.”

A jour­nal­ist named R. V. Kelly 2, who had never con­sid­ered him­self a gamer be­fore, tried EverQuest just to see what all the fuss was about, and got so en­tranced that he wound up writ­ing a book about these emerg­ing new vir­tual worlds.

This is­n’t a game at all, I re­al­ized. It’s a vast, sep­a­rate uni­verse. People ex­plore here. They con­verse. They trans­act busi­ness, form bonds of friend­ship, swear vows of vengeance, es­cape from dire cir­cum­stances, joke, fight to over­come ad­ver­sity, and learn here. And it’s bet­ter than the real world be­cause there are no phys­i­cal con­se­quences for mak­ing mis­takes. You can de­rive the same sense of sat­is­fac­tion for do­ing things well that you find in the real world, but you don’t suf­fer any pain or an­guish when you fail. So, the game con­tains most of the good found in real life, but none of the bad.

Yet there were also dan­gers bound up with the al­lure of a vir­tual world where fail­ure had no con­se­quences — es­pe­cially for those whose real lives were less than ideal. On Thanksgiving Day, 2001, a young Wisconsinite named Shawn Woolley was dis­cov­ered by his mother sit­ting in front of his com­puter dead, the ri­fle he had used to shoot him­self ly­ing nearby. The mon­i­tor still dis­played the EverQuest lo­gin screen. He had been play­ing the game ra­bidly for months, to the ex­clu­sion of every­thing else. He’d had no job, no stud­ies, no friends in the real world. He’d ef­fec­tively up­loaded his en­tire ex­is­tence to the world of Norrath. And this had been the re­sult. Had his lonely iso­la­tion from the world around him come first, or had EverQuest caused him to iso­late him­self? Perhaps some of both. One can’t help but think of the clas­sic ad­dic­t’s an­swer when asked why he does­n’t give up the habit that is mak­ing his life mis­er­able: Because then I’d have no life at all.” It seemed that this was lit­er­ally true — or be­came true — in the case of Shawn Woolley.

This tragedy cast num­bers that Sony Online might once have been proud to trum­pet in rather a dif­fer­ent light. Not long be­fore Woolley’s death, one Edward Castronova, an as­so­ci­ate pro­fes­sor of eco­nom­ics at California State University, Fullerton, had con­ducted a de­tailed sur­vey of the us­age habits of EverQuest sub­scribers. He found that the av­er­age player spent four and a half hours in the game every day, and that 31 per­cent played more than 40 hours every week — i.e., more than a typ­i­cal full-time job. Surely that could­n’t be healthy.

Widespread coverage of the death of Shawn Woolley ignited a mainstream conversation about the potentially detrimental effects of online videogames in general and EverQuest in particular. A father was reported to have smothered his infant son without realizing it, so distracted was he by the world of Norrath on his computer screen. A couple was reported to have left their three-year-old behind in a hot car to die, so eager were they to get into the house and log into EverQuest. Parents said that their EverQuest-addled children behaved as if they had “demons living inside them.” Wives told of life as EverQuest widows: “I do not trust him [to be alone] with our daughter, simply because when I am here she will be crying and he will not do anything about it.”

The stories were lurid and doubtless quite often exaggerated, but the concern was valid. Unlike the debates of the 1980s and 1990s, which had principally revolved around the effects of videogame violence on the adolescent psyche and had relied largely on flawed or biased studies and anecdotal data, this one had some real substance to it. One didn’t need to be a Luddite to believe that playing a single videogame as much as — or to the exclusion of — a full-time job couldn’t possibly be good for anyone. Elizabeth Woolley, the mother of Shawn Woolley, became the face of the EverQuest opposition movement. She was certainly no Luddite. On the contrary, she was a computer professional who had laughed at the hearings on videogame violence conducted by Joe Lieberman in the United States Senate and likewise dismissed the anti-game hysteria surrounding the recent Columbine school shootings that had been carried out by a pair of troubled DOOM-loving teenagers. All that notwithstanding, she saw, or believed she saw, a sinister intentionality behind this addictive game that its own most loyal players called EverSmack or EverCrack: “I know the analysis that goes into a game before they even start writing the code; everything is very intentional. And people would go, ‘Ah, that’s so funny, how addicting.’ And I’m like, no, it’s not funny at all.”

She was­n’t alone in vaguely ac­cus­ing Sony Online of be­ing less than morally unim­peach­able. According to one read­ing, pop­u­lar among old-school MUDders, the EverQuest team had co-opted many of the ideas be­hind MUDs whilst toss­ing aside the most im­por­tant one of all, that of a truly em­pow­ered com­mu­nity of play­ers, in fa­vor of top-down cor­po­rate con­trol and de­lib­er­ate psy­cho­log­i­cal ma­nip­u­la­tion as a means to their end of ever-in­creas­ing prof­its. One of the ear­li­est aca­d­e­mic treat­ments of EverQuest, by Timothy Rowlands, posits (in typ­i­cally tan­gled aca­d­e­mic dic­tion) that

from the out­set, EverQuest’s de­sign­ers, mo­ti­vated by profit, were in­ter­ested in try­ing to har­ness (read co-opt, com­modi­tize) the so­cial­ity that had made the vir­tual worlds of MUDs so suc­cess­ful. Resisting the lin­ear­ity of older sin­gle-player games in which the play­ers move their avatars through a se­ries of pre­de­ter­mined lev­els, MMOs pre­sent a space in which the hero nar­ra­tive, pred­i­cated upon the po­ten­tial for cli­max — though pre­sent in the form of quests and the ac­cu­mu­la­tion of avatar cap­i­tal — is ul­ti­mately un­re­al­iz­able. Because the aim is to keep sub­scribers play­ing in­def­i­nitely, even the ar­bi­trary end points (level caps) are with­out clo­sure. In Campbellian lan­guage, there can be no epiphany, no mo­ment of apoth­e­o­ses as the hero over­comes his tri­als…

For me, the ex­is­ten­tial ham­ster wheel de­scribed by Rowlands — him­self a re­cov­er­ing EverQuest ad­dict — smacks a bit too much of the life I lead of­fline, the one that comes down to, to para­phrase Roy Rogers, just one damn thing af­ter an­other. Combine this with my aware­ness of the lim­i­ta­tions of on­line so­cial­iz­ing, and we can per­haps be­gin to see why I’ve never been much in­ter­ested in MMORPGs as a gamer. Literary type that I am, if of­fered a choice be­tween a sec­ond life on the com­puter and an in­ter­ac­tive story of the kind that I can ac­tu­ally fin­ish, I’ll take the story — the one with the be­gin­ning, mid­dle, and end — every sin­gle time. I can’t help but think that I may have been lucky to be born with such a predilec­tion.

Lest we be tempted to take all of this too far, it should be noted that EverQuest in its heyday was, however psychologically perilous it might or might not have been, a potential problem for only a vanishingly small number of people in relation to the population as a whole: by the metrics of television, movies, or even other forms of gaming, 550,000 subscribers was nothing. Nevertheless, the debates which EverQuest ignited foreshadowed other, far more broad-based ones to come in the fast-approaching epoch of social media: debates about screen time, about the grinding stress of trying to keep up with the online Joneses, about why so many people have come to see digital spaces as more attractive than real ones full of trees and skies and flowers, about whether digital relationships can or should ever replace in-person smiles, tears, and hugs. Meanwhile the accusations of sinister intent which Elizabeth Woolley and Timothy Rowlands leveled against EverQuest’s designers and administrators were, even if misplaced in this case, harbingers of games of the future that would indeed be consciously engineered not to maximize fun but to maximize engagement — a euphemism for keeping their players glued to the screen at all costs, whether they wanted to be there in their heart of hearts or not, whether it was good for them or not.

By the time those sub­jects re­ally came to the fore, how­ever, EverQuest would no longer be the dom­i­nant prod­uct in the MMORPG mar­ket. For in 2004, an­other game ap­peared on the scene, to do to EverQuest what the lat­ter had done to Ultima Online half a decade ear­lier. Against the jug­ger­naut known as World of Warcraft, even EverQuest would bat­tle in vain.

Did you en­joy this ar­ti­cle? If so, please think about pitch­ing in to help me make many more like it. You can pledge any amount you like.

Sources: The books EverQuest by Matthew S. Smith, Video Game Worlds: Working at Play in the Culture of EverQuest by Timothy Rowlands, Synthetic Worlds: The Business and Culture of Online Games by Edward Castronova, Gamers at Work: Stories Behind the Games People Play by Morgan Ramsay, Legend of the Syndicate: A History of Online Gaming’s Premier Guild by Sean Stalzer, Postmortems: Selected Essays Volume One by Raph Koster, Massively Multiplayer Online Role-Playing Games: The People, the Addiction, and the Playing Experience by R. V. Kelly 2, and The Age of Addiction: How Bad Habits Became Big Business by David T. Courtwright. Computer Gaming World of December 1997, July 1999, and June 2000; Retro Gamer 263.

Online sources include “Better Together: Stories of EverQuest” by David L. Craddock at ShackNews, “The Game Archaeologist: How DikuMUD Shaped Modern MMOs” by Justin Olivetti at Massively Overpowered, and “Storybricks + DikuMUD = Balance in MMORPGs” at Flatfingers’s theory blog. The truly dedicated may want to listen to aLovingRobot’s 50-plus hours (!) of video interviews with former EverQuest developers. And, although it’s quite possibly the most insufferable thing I’ve ever watched, the documentary EverCracked has some interesting content amidst the constant jump cuts and forced attempts at humor.

Where to Play It: EverQuest is not what it once was in terms of sub­scriber num­bers, but it’s still on­line un­der the stew­ard­ship of Darkpaw Games, a sort of re­tire­ment home for aged MMORPGs.

...

Read the original on www.filfre.net »

7 218 shares, 17 trendiness

Making Language Models Play Baba is You

(Video from the original post: Claude solving level 1.)

Baba is You is a sokoban puz­zle game where the rules them­selves have to be ma­nip­u­lated to win. (For the unini­ti­ated, the store page should ex­plain the idea best.) The level of ab­strac­tion re­quired to solve most lev­els makes it a for­mi­da­ble rea­son­ing bench­mark, with many rea­son­ing steps be­ing com­pletely or­thog­o­nal to other tasks out there. The game is turn-based, mean­ing the num­ber of turns re­quired to solve a level nat­u­rally serves as a more fine-grained met­ric be­yond ac­cu­racy.

This ren­ders Baba is You quite sim­i­lar to the pro­posed ARC-AGI-3 bench­mark, sched­uled for re­lease in 2026. Except it al­ready ex­ists! That is, how­ever, also the main prob­lem for us­ing it as a se­ri­ous bench­mark: The so­lu­tions for the main game are out there in both text and im­age form. Luckily though, if that ends up be­ing a prob­lem, there is also a wealth of clever and high-qual­ity lev­els, and even level packs with en­tirely new me­chan­ics, cre­ated by play­ers. Those mostly don’t have a so­lu­tion pub­lished on­line.

Inspired by Claude plays Pokémon and the Factorio Learning Environment, in this de­vlog we’ll turn Baba is You into a demo ver­sion of Baba is Eval.

Be it Factorio or ARC-AGI, usu­ally cur­rent mul­ti­modal mod­els still do best with a text-rep­re­sen­ta­tion of the 2D world. Screenshots are of­ten less help­ful. Therefore, we need to im­ple­ment (1) fetch­ing the game state into the lan­guage model con­text. Then, (2) the model should be able to con­trol the level, which in­volves only the prin­ci­pal ac­tions left, right, up, down, undo and re­set. Ideally, this would be faster than a hu­man takes to in­put. We’ll also want (3) menu nav­i­ga­tion to com­pletely au­to­mate the state man­age­ment.

Humans interact with the game visually, so the first thought might be to read it in via a vision model. In the case of Baba is You, though, it pays off to look at the game files instead. Opening them up, we see the binary itself is only 8MB. Quite a bit of game logic is implemented in plaintext Lua scripts, extending the base engine Multimedia Fusion 2. The game even defines hooks to be used by mods, which fire on events like “level_start”, which is perfect for us.

Out of all exposed functions (documentation), we find two that allow I/O: MF_read(storage(str), group(str), item(str)) -> result(str) and MF_store(storage(str), group(str), item(str), value(str)). These operate on one of four predefined storage files (such as “level”), in an INI format with sections delineated by [group] followed by key=value pairs on each new line.
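Concretely, a call like MF_store("level", "status", "won", "1") would leave the “level” storage file looking something like the following — the group and key names here are our own invention, purely to illustrate the format:

[status]
won=1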

To ac­tu­ally get the cur­rent game state, there luck­ily is a func­tion MF_getunits() -> {fixed(int)}. This re­turns a table of ob­jects that, as in­ferred by other uses in the source code, can be de­se­ri­al­ized with mmf.newOb­ject, then yield­ing a Lua ob­ject table con­tain­ing all the en­ti­ties in a level. While the en­ti­ties’ fields aren’t ac­ces­si­ble, through other in­stances in the code we can tell it has prop­er­ties UNITNAME, XPOS, YPOS, ZPOS. We can now con­struct a table of all the el­e­ments in a level and put that in the con­text. We also need a way to sig­nal when the game has been won, which can be recorded in the lev­el_­won mod hook.

We set up a Python MCP server. It gets a tool that displays this information. On every state change, we serialize the table from Lua, then read it in on demand with configparser from Python. Because language models aren’t the best at spatial reasoning from coordinates, we want to print a grid with all the entities. We also need to find out the bounds of the level, which are conveniently already loaded in a global variable in Lua (roomsizex). For multiple entities on top of each other at the same X,Y-position, we print them in the same cell ordered by their Z value (“z99>z1”). Although it matters in some levels, we ignore the direction of an object.
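As a minimal sketch, the grid-rendering tool on the Python side could look roughly like this, assuming the Lua hook dumps every unit as a “name,x,y,z” string under a [units] group and the level bounds under [meta] — an invented layout for illustration, not the game’s actual format:

from collections import defaultdict
from configparser import ConfigParser

def render_grid(path="baba_state.ini"):  # hypothetical dump written by the Lua hook
    ini = ConfigParser()
    ini.read(path)
    width = int(ini["meta"]["roomsizex"])   # level bounds mirrored from the Lua globals
    height = int(ini["meta"]["roomsizey"])  # roomsizey is assumed to exist alongside roomsizex
    cells = defaultdict(list)               # (x, y) -> [(z, name), ...]
    for _, value in ini["units"].items():   # each unit stored as "name,x,y,z"
        name, x, y, z = value.split(",")
        cells[(int(x), int(y))].append((int(z), name))
    rows = []
    for y in range(1, height + 1):
        row = []
        for x in range(1, width + 1):
            stacked = sorted(cells.get((x, y), []), reverse=True)  # print stacks as "z99>z1"
            row.append(">".join(name for _, name in stacked) or " ")
        rows.append(" | ".join(row))
    return "\n".join(rows)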

Let’s take a look at a small level to demon­strate, Lake-Extra 1: Submerged Ruins”. If you like, you can as­sume the role of the LLM and come up with a so­lu­tion, if you hap­pen to know the game al­ready. It’s quite tricky, but pos­si­ble, to solve it with­out trial and er­ror.

y/​x| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15

1 | | | | | | | | |wall |wall |wall |wall |wall |wall |wall

2 | | | |baba | | | | |rock | | |text_crab|text_flag | |wall

3 | | |text_baba|text_is|text_you| | | |wall | | |wall |text_is |text_is |wall

4 | | | | | | | | |wall | | |wall |text_defeat|text_win|wall

5 | | | | |wall |wall |wall |wall |wall | | |wall |wall |wall |wall

6 | | | | |wall | | | |wall | | | | | |wall

7 | | | | |crab |crab |flag | |wall | |text_rock|text_is |text_push | |wall

8 |text_wall|text_is|text_stop|wall |wall | | | |wall | | | | | |wall

This looks like a sur­pris­ingly com­fort­able for­mat to play the game even as a hu­man, which is a good sign.

We could sim­u­late key presses, but that’s a bit bor­ing and slow, es­pe­cially in com­par­i­son to a di­rect call from code. In syn­tax.lua, we find com­mand(key,play­er_), which gives ac­cess to the four move­ment di­rec­tions and restart. There is also undo() in undo.lua.

The prob­lem is how to call these asyn­chro­nously. Maybe there is some way to de­fine a new hook, but I found only the fol­low­ing ugly method. In the al­ways mod hook, we at­tempt to open a new com­mand Lua file, and ex­e­cute its con­tents if it ex­ists. From the server, we know what the Lua back­end is look­ing for and asyn­chro­nously write to that file when the lan­guage model has de­cided on com­mands. This gives high la­tency of 50-150ms per list of com­mands read, but at least the com­mands them­selves are ex­e­cuted nearly in­stantly one af­ter the other, much faster than key­presses.
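For a rough sketch of the server side of that handoff: assuming the always hook polls for a file named cmd.lua next to the game and deletes it once executed, and assuming command() accepts the move as a string (neither of which we verified — it is just an illustration), the Python side could be as simple as this:

import os
import time

CMD_FILE = "cmd.lua"  # hypothetical path watched by the Lua always hook

def send_moves(moves: str, timeout: float = 5.0) -> None:
    # e.g. moves = "rrud": one command() call per letter, executed in order by the Lua side;
    # the argument forms (string key, player index 1) are guesses for illustration only
    lua_lines = [f'command("{letter}", 1)' for letter in moves]
    with open(CMD_FILE, "w") as f:
        f.write("\n".join(lua_lines))
    deadline = time.time() + timeout
    while os.path.exists(CMD_FILE) and time.time() < deadline:
        time.sleep(0.05)  # assume the Lua side removes the file once it has run the commands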

Manually solv­ing the level we looked at above, we find a so­lu­tion rrrrrrd­dddr­rrrdldl­l­lu­u­u­u­u­rul­l­l­l­l­ll­r­rrrrrd­dddddr­rruldl­lu­u­u­u­u­rul­l­l­l­ll­r­rrrrrd­dddrdld­lu­u­u­u­u­rul­l­l­l­lul­lldl­l­l­lld­dlld­luld­ddr­ruldl­lu­ur­d­dddl­dr­rrrrl­lu­u­ur­rrru­ur­rrrl­l­l­l­l­lul­lld, where each let­ter stands for left, right, up or down, giv­ing 324 bits. Executing this via MCP, the re­sult is:

(Video from the original post: the full solution executing in-game in a single burst.)

Did you catch what hap­pened? The screen cap­ture fre­quency is not fast enough to get even a sin­gle in-be­tween frame, but the game graph­ics them­selves are even slower. Apparently the true po­si­tion and cur­rent screen space po­si­tion are in­ter­po­lated to get the next frame, so for 12 frames or so we can see all en­ti­ties move from their ini­tial to the fi­nal po­si­tion in a straight line.

This is a sur­pris­ingly hard part of the prob­lem. The Lua-side code for level se­lec­tion is sparse and com­pli­cated, so to save some de­vel­op­ment time and add a cool vi­sual ef­fect, we en­ter lev­els with the same in­put func­tion­al­ity to nav­i­gate on the map, then sim­u­late two en­ter presses with pyau­to­gui. This fur­ther ugli­fies the so­lu­tion, es­pe­cially be­cause sim­u­lated in­puts are seem­ingly quite un­re­li­able.
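The two simulated presses themselves are only a few lines, sketched here with a generous pause between them since, as noted, the simulated inputs don’t always register:

import time
import pyautogui

def enter_selected_level() -> None:
    for _ in range(2):              # two enter presses take us from the map tile into the level
        pyautogui.press("enter")
        time.sleep(0.3)             # give the game a moment to pick up the simulated keypress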

We use Claude Desktop as the demo client, again mainly for vi­sual ef­fect. It feels so weird to have a con­sumer-fac­ing app do things like this. We also get some ba­sic tool call prompt­ing (and con­text man­age­ment?) for free, which is very help­ful. We also im­ple­ment a help func­tion in the MCP so that the model can get an ex­pla­na­tion for the game rules and key­words, in case it does­n’t know.

Claude 4 is pretty bad at this. It can re­li­ably solve level 0, where the so­lu­tion is in­putting right” 8 times. Beyond that though, it strug­gles with all com­po­nent sub­tasks of even the first lev­els: Keeping track of the rules, iden­ti­fy­ing blocked paths, plan­ning, get­ting in­put pat­terns cor­rect, keep­ing track of the win con­di­tion, iden­ti­fy­ing a lost game, com­ing up with rules to try, iden­ti­fy­ing rules that need to be bro­ken, et cetera. It’s François Chollet’s in­sight play­ing out live. This is why the video of Claude solv­ing level 1 at the top was ac­tu­ally (dramatic mu­si­cal cue) staged, and only pos­si­ble via a move-for-move tu­to­r­ial that Claude nicely ra­tio­nal­ized post hoc.

Reasoning models like o3 might be better equipped to come up with a plan, so a natural step would be to try switching to those, away from Claude Desktop. This would also enable more sophisticated context management, which is needed because for more complicated, larger levels the game states would start using too many tokens. A denser representation of the game state, aimed at tokenizers instead of humans, e.g. with less whitespace, could also help. Finally, as with the Factorio Learning Environment, maybe the input space can be abstracted with, say, a move_to() tool. Only for some levels, like those containing PULL and MOVE, is full control really needed.

Baba is You any% is still a while off. If you’d like to keep an eye on the progress bar for it, and maybe try your hand at de­vel­op­ing it, you can head over to the repo for this pro­ject. I an­tic­i­pate that many read­ers will have ideas on how to do this much bet­ter than the above.

...

Read the original on fi-le.net »

8 210 shares, 16 trendiness

Learn to love the Moat of Low Status

This post is an ex­cerpt from my forth­com­ing book (and builds on a cou­ple para­graphs in my orig­i­nal post on agency). I’ll be run­ning a few ex­cerpts here in the next cou­ple months, in hopes of get­ting feed­back on the kinds of con­tent peo­ple are ex­cited to see in the book (which is a sig­nal about what to ex­pand or scale back). Let me know what you think!

Fear of be­ing tem­porar­ily low in so­cial sta­tus stops hu­man be­ings from liv­ing richer lives to an un­be­liev­able de­gree.

It hap­pens on the mi­cro scale, when a dance party does­n’t get started be­cause no­body wants to be the first per­son on the dance floor. It’s fas­ci­nat­ing: When I see some­one alone on a dance floor, let­ting loose, it’s clear that they’re not do­ing any­thing wrong. Even if they’re not danc­ing well, they’re do­ing a pub­lic ser­vice by invit­ing other peo­ple to join them. But most of us hes­i­tate to be that per­son.

It hap­pens on the scale of decades, when some­body dreams of be­com­ing a song­writer but does­n’t ever write a full song, be­cause they’re afraid of con­fronting their cur­rent lack of skill. They would rather be hy­po­thet­i­cally good at song­writ­ing — tal­ented in their imag­i­nary world — than ac­tu­ally bad on the way to be­ing ac­tu­ally good.

When you start learn­ing or do­ing al­most any­thing in­ter­est­ing, you will ini­tially be bad at it, and in­cur a tem­po­rary penalty in the form of look­ing a lit­tle dumb. You will prob­a­bly sound aw­ful at your first singing les­son. If you pub­lish writ­ing on the in­ter­net, your first piece will not be your best work.

My husband calls this “the Moat of Low Status,” and I have gleefully stolen the phrase because it’s so useful. It’s called a moat because it’s an effective bar to getting where you’re trying to go, and operates much like a moat in the business sense — as a barrier to entry that keeps people on the inside (who are already good at something) safe from competition from the horde of people on the outside (who could be).

The Moat is ef­fec­tive be­cause it’s easy to imag­ine the em­bar­rass­ment that comes from be­ing in it. It’s so vivid, it looms so large that we for­get the novel up­sides that come from tran­scend­ing it. Easy to imag­ine the em­bar­rass­ment from your first months of singing lessons, be­cause you’ve faced em­bar­rass­ment be­fore. Harder to imag­ine what you’ll sound like as a trained singer, be­cause that’s never hap­pened to you be­fore.

“Learn by doing” is the standard advice for learning something quickly, and it’s what I try to follow. But it’s hard to learn by doing unless you first learn to love the Moat. It’s embarrassing to learn by doing, whether you are trying to learn a language by embedding yourself with native speakers or learning to climb by falling off a wall at the gym over and over again.

As a re­sult, peo­ple of­ten en­gage in the­o­ret­i­cal learn­ing even in do­mains where ex­pe­ri­en­tial learn­ing is ob­vi­ously faster. I en­coun­tered this in be­com­ing a pro­fes­sional poker player. In poker, it’s pos­si­ble to im­prove via the­o­ret­i­cal learn­ing — there’s lots of on­line con­tent that you can pas­sively ab­sorb, and some of it is use­ful. But you re­ally can’t be­come a suc­cess­ful player with­out play­ing a lot of hands with and in front of other play­ers, many of whom will be bet­ter than you.

How do you get over the aver­sion, so you can get to the other side of the Moat?

My years of splash­ing around

I have of­ten found it to be the case that the cru­elty of oth­ers has done for me what I could not do for my­self.

I ex­pe­ri­enced this in grade school, when the de­ri­sion of other kids and teach­ers alike taught me to be self-con­tained and keep my own coun­sel, be­cause there was no win­ning with them. This is how I learned I could be lonely and strange, and peo­ple could see it, and the world would­n’t turn to ash.

I ex­pe­ri­enced it again in col­lege, when I got doxxed on a pre-law mes­sage board and my ap­pear­ance was picked apart by a bunch of trolls. This is how I learned that other peo­ple could no­tice the things I did­n’t like about the way I looked, and gos­sip among them­selves about them, and the world would­n’t turn to ash.

Poker was the next level for this, be­cause I so des­per­ately wanted to be seen as good and clever — but the thing is, a lot of peo­ple hated me in poker. I’d made it a per­sonal mis­sion (in the pre-woke era) to draw at­ten­tion to the poor way women were some­times treated in the ex­tremely male en­vi­ron­ment, which won me plenty of fans (unironically) and plenty of fans (ironically). So every time I played a hand badly, I knew one of the pros at the table might text their group chats about it, or put it on Twitter.

I did­n’t ex­actly emerge un­scathed from that en­vi­ron­ment — truth be told, I went a lit­tle crazy from all the at­ten­tion. But I com­mit­ted my­self to the messy process of learn­ing by do­ing, nev­er­the­less. I got com­fort­able ask­ing bet­ter play­ers stu­pid ques­tions, and im­proved much faster be­cause I could ben­e­fit from their ex­pe­ri­ence. I got com­fort­able mis­play­ing hands on tele­vi­sion, and got to ben­e­fit from the ex­pe­ri­ence of the whole in­ter­net.

And I learned that peo­ple whose ad­mi­ra­tion I ac­tu­ally wanted could see me eat shit, and say so, and the world would­n’t turn to ash.

Okay, but re­ally, short of trau­ma­tiz­ing your­self, how can you learn to thrive in the Moat, so you can ex­pe­ri­ence the glo­ri­ous up­side?

The true se­cret is that get­ting over it means re­solv­ing your­self to not re­ally get­ting over it. Unless you are truly emo­tion­ally strange, be­ing in the Moat will hurt some­what. You will feel em­bar­rassed. There’s not a short­cut.

I re­al­ize this is­n’t what self-help ad­vice is sup­posed to sound like — I’m sup­posed to be able to of­fer you One Weird Trick for never feel­ing the sting of hu­mil­i­a­tion, a way to over­ride the eons of evo­lu­tion­ary his­tory that tell you it’s very bad to look weak in front of oth­ers.

But it's not like that. I've written before about a hand I played so badly that there were news stories about it, but when I think back on it, I don't actually remember the stories or the tweets or any of that. Instead, I remember the look on Christoph Vogelsang's face when I flipped over my cards. It was a look that said, very plainly, "I have clearly overestimated you." Sometimes, no matter how much you reconcile yourself to humiliation, it still pierces you to your core.

The One Weird Trick is … you just do the thing any­way. And the world does­n’t turn to ash.

I don’t want to sound to­tally grim here — there are cer­tainly sil­ver lin­ings and mit­i­ga­tions. For in­stance, it’s my ex­pe­ri­ence that em­bar­rass­ment and ex­cite­ment are closely re­lated. As we get older, our lives be­come in­creas­ingly rou­tine, if we let them. We get more con­strained and repet­i­tive in our ac­tions, and, as a re­sult, our days get less mem­o­rable. We barely see life be­cause we’re so good at walk­ing the path of least em­bar­rass­ment.

When you step into the Moat of Low Status, you also step away from the grind­ing of nor­malcy. On your first day of dance class, you don’t know how to move your body. Isn’t that ex­cit­ing? You don’t know how to move your body. This thing you’ve been lug­ging around is now a whole new ve­hi­cle — it might move like a fren­zied wolver­ine, or an in­dif­fer­ent spat­ula.

When you get past the flush of embarrassment in your cheeks, you might notice that you're in a state of heightened awareness, with brighter colors and sharper lines. You've re-entered the state of childlike wonder where you don't have adult concepts to mediate reality, what William James called "blooming, buzzing confusion." Shame can be a golden ticket.

However, all this ex­cite­ment can get over­whelm­ing if you don’t have tools to deal with it. Here are some tac­tics that I find use­ful when I’m deep in the Moat:

* Attempt the ba­sic move of mind­ful­ness med­i­ta­tion: get cu­ri­ous about the tin­gling feel­ing of em­bar­rass­ment in the body, rather than your men­tal sto­ries about it or re­ac­tions to it. See if you can wel­come it. Curiosity in­verts re­sis­tance.

* Remind your­self that em­bar­rass­ment is sim­ply the feel­ing of break­ing the rules, and you want to break your pre­vi­ous rules.

* Visualize the larger pur­pose. Yes, you have this feel­ing now. But it’s just one frame of the movie, part of the drama. The rest of the story in­volves you us­ing your hard-won knowl­edge to live a fan­tas­ti­cally in­ter­est­ing life.

And then there’s the real thing that gets me to do a lot of things I don’t want to do:

* Imagine the ad­van­tage you’ll have over all the peo­ple who let shame slow them down.

But none of these tac­tics will ban­ish the feel­ing. You will still have to move through it.

In re­cent years, shift­ing from poker to biotech to phil­an­thropy has meant re­peat­edly con­fronting sit­u­a­tions in which I am the least-in­formed per­son in the room, at least in terms of do­main-spe­cific knowl­edge. Every time, I’ve had to rec­on­cile my­self to months of be­ing a rel­a­tive dum­b­ass in a room full of ex­perts, con­stantly ask­ing them to ex­plain ba­sic con­cepts or ter­mi­nol­ogy, ex­pos­ing my­self as not pos­sess­ing knowl­edge they all take for granted.

I don’t al­ways adore this. But I know this is what skill ac­qui­si­tion feels like. I know there’s no skip­ping the hot flush of em­bar­rass­ment, or the bloom­ing, buzzing con­fu­sion of new­ness. And I know there’s no one mo­ment when those feel­ings dis­solve into the as­sur­ance of mas­tery — but I know they do, grad­u­ally, even­tu­ally, slowly and then all at once. So, soon I’ll be good at this. I’ll be through the Moat. Then, I’ll find an­other one, hold my breath for just a mo­ment, and jump in.

Sign up to be no­ti­fied when my book, You Can Just Do Things, is avail­able for pur­chase.

...

Read the original on usefulfictions.substack.com »

9 188 shares, 8 trendiness

How to Fully Incapacitate Google Tag Manager and Why You Should

"We're long past the days when it was possible to simply say "no" to corporate stalking without consequence. Today, when we say "no", we get punished for it. But that only goes to show WHY, more than ever, we should be saying "no"."

Google Tag Manager. It's a product which, by design, cloaks a range of the Internet's most invasive and unethical scripts in an opaque closet, then springs them out in disguise. Combining immense power with obfuscation and vast scale of use, Google Tag Manager is the WWW's single most destructive tool to public privacy and online ethicism.

And it’s get­ting worse. Google is now dri­ving Tag Manager into the first-party do­main, switch­ing from third-party to first-party cookie us­age, for ex­am­ple. Whilst this may look like a warm-hearted bid to in­crease pri­vacy pro­tec­tion for the pub­lic, it’s re­ally just part of Google’s re­lent­less string of at­tempts to cir­cum­vent third-party con­tent-block­ing by shift­ing sur­veil­lance­ware into a first-party con­tainer.

This prob­a­bly also ex­plains why Google has not sought to pre­vent site ad­mins from run­ning Tag Manager on the server-side, de­spite such prac­tices tech­ni­cally breach­ing this line in the Tag Manager ToS…

"You agree not to… interfere with or circumvent any aspect of the Service;"

I’ll come to the burn­ing is­sue of server-side GTM us­age in due course, but don’t worry, there are so­lu­tions…

Whilst Google would love the gen­eral pub­lic to be­lieve that Tag Manager cov­ers a wide range of gen­eral pur­pose du­ties, it’s al­most ex­clu­sively used for one thing: sur­veil­lance. Tag Manager’s close link with Google Analytics has bal­looned the level of in­tru­sion we now face across the bulk of the Web, as well as mak­ing Google Analytics more covert and more re­sis­tant to block­ing.

Making Google Analytics harder to block was fairly ev­i­dently not part of Tag Manager’s orig­i­nal brief upon launch, circa 1st October 2012. The goal back then was prob­a­bly just to put Google’s fin­ger on the pulse of third-party peo­ple-pro­fil­ing strate­gies and main­tain the gi­ant’s ad-tech dom­i­nance on a clas­sic knowl­edge-is-power ba­sis.

"Using this blocking method, GTM will run if it's on the server-side, but none of the scripts it launches will work."

Conversely, Tag Manager's now inseparable companion, Google Analytics 4, was born at a time when content-blocking (as opposed to just ad-blocking) was going mainstream. With the proportion of people blocking at least some form of third-party surveillanceware estimated to be heading for 40%, Google Analytics was under existential threat. In this light, GA4's orientation towards Tag Manager definitely did appear to be an attempt to sidestep content-blocking, and hide Google Analytics in a more general container which most of the public would not identify as a harbour for surveillanceware.

A gen­eral con­tainer which con­tent-block­ers with weak al­go­rithms no­tably do not block. And which can evade block­ing al­to­gether if re­lo­cated to the first-party do­main.

But thinking positively, our takeaway should be: Google recognises that we, the great, content-blocking public, have successfully rendered the old, Universal Google Analytics unfit for purpose. UGA is being deprecated next year. That's right - we won a battle against Google! Our next challenge is to kill off UGA's replacement - Google Analytics 4 + Tag Manager - in the same way.

That will be harder, be­cause the new sys­tem can pun­ish those who in­ca­pac­i­tate it. So is it worth the bother?…

Definitely! And here’s why…

Once upon a time, Google Analytics ex­isted as a sim­ple means to record web­site traf­fic vol­ume and gen­er­alised user be­hav­iour, so as to de­ter­mine which con­tent per­formed the best, and of­fer point­ers on im­prov­ing the ap­peal of fu­ture con­tent.

Not any­more. Used in con­junc­tion with Tag Manager, Google Analytics now of­fers scope for much more de­tailed be­hav­iour-mon­i­tor­ing. As a re­sult, it’s com­monly used to uniquely iden­tify in­di­vid­ual peo­ple, en­gage them in ex­per­i­ments, build dossiers on them, analyse those dossiers for psy­cho­log­i­cal vul­ner­a­bil­i­ties, and then ex­ploit those vul­ner­a­bil­i­ties un­eth­i­cally, for profit. Let’s be clear. That’s what Google Analytics is now about.

"Tracking is not only getting more aggressive - it's also getting more sneaky. We don't know where the tracking utility will be located, so we can't rely on URL-based block-lists."

In times past, there was a bar­rier to en­try into this field, since only the site ad­mins se­ri­ous enough to hire cut­ting-edge de­vel­op­ers could turn a web­site into a hard­core sur­veil­lance ma­chine. But Google Tag Manager now makes the in­te­gra­tion of pow­er­ful spy­ware into such a straight­for­ward DIY task, that any ran­dom half-ass who de­cides to open a web­site can build, ex­ploit and/​or sell de­tailed dossiers on real peo­ple. Tag Manager has not re­duced the bar­rier to en­try. It’s com­pletely re­moved it.

The GA4 + Tag Manager combo records page scrolling, mouse clicks, mouse move­ments, screen touches, key taps, me­dia en­gage­ments - any move­ment you make on the page, ba­si­cally. It also times vis­its and at­ten­tion spans a lot more ac­cu­rately than the old Google Analytics. Coupled with your iden­tity - also mon­i­tored by Google Analytics - this type of lab-rat­ting is ob­vi­ously a li­cence to ex­ploit psy­cho­log­i­cal traits. Mental health is­sues, even.

Meanwhile, Google Tag Manager is reg­u­larly pop­ping up on Government sites. This means not only that gov­ern­ments can study you in more depth - but also that Google gets to fol­low you into much more pri­vate spaces.

The more of us who in­ca­pac­i­tate Google’s an­a­lyt­ics prod­ucts and their sup­port mech­a­nism, the bet­ter. Not just for the good of each in­di­vid­ual per­son im­ple­ment­ing the blocks - but in a wider sense, be­cause if enough peo­ple block Google Analytics 4, it will go the same way as Universal Google Analytics. These prod­ucts rely on gain­ing ac­cess to the ma­jor­ity of Web users. If too many peo­ple block them, they be­come use­less and have to be with­drawn.

So, can Google Tag Manager actually be blocked? This has become a burning question of the moment.

Used as sup­plied, Google Tag Manager can be blocked by third-party con­tent-blocker ex­ten­sions. uBlock Origin blocks GTM by de­fault, and some browsers with na­tive con­tent-block­ing based on uBO - such as Brave - will block it too.
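For reference, the kind of static filter that does this job looks like the two rules below. They are shown only as an illustration: uBlock Origin's default lists already contain equivalent entries, so there is normally nothing to add by hand.

    ||googletagmanager.com^
    ||google-analytics.com^

Rules in this ABP-style syntax block any request to those hostnames, which is exactly why the workaround described next moves the whole apparatus off those hostnames.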

Some preds, how­ever, full-on will not take no for an an­swer, and they use a workaround to cir­cum­vent these block­ing mech­a­nisms. What they do is trans­fer Google Tag Manager and its con­nected an­a­lyt­ics to the server side of the Web con­nec­tion. This trick turns a third-party re­source into a first-party re­source. Tag Manager it­self be­comes un­block­able. But run­ning GTM on the server does not lay the site ad­min a golden egg…

"Block cookies. All of them. Third-party and first. Some third-party cookies are now masquerading as first-party cookies, which means they'll still function if you only block third-party."

True: tech­ni­cally, we can­not block some­thing in the browser if it does­n’t run in the browser. If it’s run­ning on a re­mote server we can’t reach it.

But equally, we have a switch that the surveillance-crazed website cannot reach. If we essentially cut off the power at our end of the connection, the tentacles of the surveillance system will fail to extract their detailed information. The tracker can thus only gather limited data. Tag Manager itself is only a launcher. Without the tentacles it fires up, it's useless.

The power sup­ply that fu­els al­most all of Tag Manager’s ten­ta­cles - in­clud­ing Google Analytics - is JavaScript. So if you uni­ver­sally dis­able JavaScript, you de­stroy most of Tag Manager’s sur­veil­lance po­ten­tial.

When you uni­ver­sally dis­able JavaScript, you’re killing key­log­gers, mouse-mon­i­tors, ser­vice work­ers, a huge range of fin­ger­print­ing tools, and an un­think­able num­ber of other ag­gres­sive spy­ware rou­tines. And dis­abling JavaScript even hits first-party track­ers. That pro­tects you against third-party scripts run­ning from the web­site’s own server, in cases where the func­tion­al­ity of those scripts nec­es­sar­ily hap­pens on the client-side.
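If you would rather flip that master switch in the browser itself than through an extension, the usual places are the ones sketched below. Menu wording and pref names are as I understand them on current Firefox and Chromium builds, so treat them as assumptions to verify on your own version.

    Firefox:   about:config  ->  javascript.enabled = false
    Chromium:  Settings -> Privacy and security -> Site settings -> JavaScript -> block
               (or, on managed installs, the DefaultJavaScriptSetting policy set to 2)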

"Admins whose static pages won't work without JavaScript are really just telling on themselves."

As an ex­am­ple, let’s say a site wanted to run ex­ten­sive Google Analytics 4 as­sess­ments and a sep­a­rate typ­ing foren­sics rou­tine, via Tag Manager, from the server-side. All of these processes have been re­lo­cated to the first-party do­main, which en­ables them to by­pass third-party con­tent-block­ing. With de­fault set­tings, uBlock Origin will not pre­vent the site from mon­i­tor­ing you in this sit­u­a­tion. But if you uni­ver­sally block JavaScript, nei­ther Google Analytics nor the foren­sics pro­gram will work, since both re­quire client-side script­ing to mon­i­tor your ac­tions, and you’ve dis­abled it.

Can Tag Manager itself still run when JavaScript is disabled? It can. Tag Manager has a noscript iframe fallback that kicks in when the regular JavaScript version is unable to run (a sketch of that fallback snippet appears after the list below). I know! How telling that Google provides noscript compatibility for a piece of unmitigated spyware, but not for a content delivery platform like YouTube. That's surveillance capitalism for ya. But Tag Manager's ability to run in all weathers does not overcome two almighty problems for the trackers…

* Nearly all the actual tools launched with Tag Manager require client-side JavaScript to run, so whilst Tag Manager will fire them in any circumstance, if JavaScript is disabled in the browser, the individual "tags" won't work. This applies even if Tag Manager is running on the server-side.

* With JavaScript dis­abled, Tag Manager can be used to sub in a cus­tom im­age, which means a track­ing pixel can still be loaded. However, there are sep­a­rate ways to block the track­ing pixel, which I’ll come to shortly.
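For context, the install snippet Google hands to site owners includes a fallback along these lines, a hidden iframe wrapped in noscript (GTM-XXXXXXX is a placeholder container ID):

    <noscript>
      <iframe src="https://www.googletagmanager.com/ns.html?id=GTM-XXXXXXX"
              height="0" width="0" style="display:none;visibility:hidden"></iframe>
    </noscript>

That iframe is how a pixel-style hit can still go out with scripting off, which is why the image-blocking and third-party-blocking measures described further down remain relevant.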

Given the above im­pli­ca­tions, spy­ware-rid­den web­sites re­ally, re­ally, re­ally, REALLY don’t want you dis­abling JavaScript. That’s why most of Web 2.0, a size­able pro­por­tion of e-com­merce, and even a quota of Web 1.0 has been re-en­gi­neered to de­lib­er­ately break when JavaScript is not en­abled in the browser.

No static Web page needs JavaScript to display. None. The reason so many of them won't load without JS is that their administrators calculatedly sabotaged the natural functionality of the HTML page code to deliberately break their sites. The sites were then rebuilt, at considerable expense, to function only when JavaScript is enabled. The sole purpose of breaking natural, pre-existing page functionality (like text/image display, hyperlink and button activity, etc.) is to punish or completely exclude any visitor who will not accept script-based surveillance. Think of it like this…

"If a page can display a message that says: 'Please enable JavaScript', why can't that page display the rest of its text? The answer is: it can. Which means the rest of the text on the page was deliberately hidden."

So if you land on a sta­tic page - like a blog post, a pri­vacy pol­icy or an in­dex - and it does­n’t work with­out JavaScript, you know that the site has de­lib­er­ately sab­o­taged the nat­ural ca­pa­bil­ity of that page in or­der to force you to en­able ac­tive script­ing. The ad­mins are re­ally just telling on them­selves. You should be mon­u­men­tally sus­pi­cious of that site’s mo­tives.

Whilst there will be a lot of sites we can’t ac­cess with JavaScript dis­abled, most of them have a bet­ter-be­haved al­ter­na­tive. And the more of us who sim­ply back­step off JS-dependent pages to find an al­ter­na­tive, the more pow­er­ful the mes­sage we will col­lec­tively send to them. They can only with­stand a mi­nor­ity of lost vis­i­tors. If the losses to com­peti­tors are too heavy, then they are forced to pro­vide a no­script op­tion once more. Unless you have no choice, seek to cut out JS-dependent sites. When you en­counter one, don’t fo­cus on the con­tent you can’t see. Focus on the abused lab rat that you will be­come if you sub­mit to their terms.

Let’s now look at some dif­fer­ent meth­ods for in­ca­pac­i­tat­ing Google Tag Manager…

Tracking is not only get­ting more ag­gres­sive - it’s also get­ting more sneaky. We don’t know where the track­ing util­ity will be lo­cated, so we can’t rely on URL-based block-lists. And we don’t know what Tag Manager will fire, be­cause the whole point of it is to al­low a site ad­min com­plete flex­i­bil­ity.

So what do we know? We know that Tag Manager it­self can be set up to evade all gen­er­alised pri­vacy pro­tec­tions for a non-prox­ied con­nec­tion. We know that if JavaScript is dis­abled, Tag Manager can run, but the only thing it can fire is a track­ing pixel, or web bea­con, or what­ever else you want to call an un­nec­es­sary im­age load from a third-party do­main.

So here are the op­tions…

Pre-requisite… Block cookies. All of them. Third-party and first. Some third-party cookies are now masquerading as first-party cookies, which means they'll still function if you only block third-party. If you need cookies for specific sites, add the domains as exceptions. You can do this in Firefox or Chromium-based browsers. Better still, use separate browsers for the sites that need cookies, and keep cookies fully disabled when randomly browsing. If you need to log into Google services (or multiple services from another tech giant), group all of the services into one browser, allow it to accept first-party cookies, and don't use that browser for anything else.

Blocking cookies while randomly browsing won't just block the actual text file drops. Most browsers interpret numerous other "technologies" as cookies too. Chromium and its derivatives, for example, will not accept service workers or local data dumps for a site whose first-party cookies are blocked.
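As a rough sketch of that pre-requisite (pref and policy names are as I understand them on current Firefox and Chromium builds, and the exceptions menu wording shifts between versions, so double-check against your own browser):

    Firefox:   about:config -> network.cookie.cookieBehavior = 2   (2 = block all cookies)
               exceptions: Settings -> Privacy & Security -> Cookies and Site Data -> Manage Exceptions
    Chromium:  Settings -> Privacy and security -> Site settings -> block sites from saving data,
               with per-site "Allow" entries for the domains you actually trust
               (managed installs can use the DefaultCookiesSetting policy set to 2)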

Method 1… Disable all JavaScript and all im­age load­ing in your browser. This method is for those who don’t want to use a browser ex­ten­sion. It ren­ders Tag Manager ba­si­cally use­less, as nei­ther scripts nor track­ing pix­els can load. But GTM can still, in it­self, run. Various third-party sur­veil­lance­ware not con­nected with Tag Manager can po­ten­tially load too. The down­side? Nearly all pages will be in some way dis­rupted. No im­ages will dis­play, most of Web 2.0 will not dis­play at all, and some pages that do load will dis­play with a cor­rupt lay­out. On in­for­ma­tion-based pages you can usu­ally iron out the lay­out prob­lems by us­ing a Firefox-based browser and en­gag­ing the Reader Mode from the icon to the im­me­di­ate right of the URL bar.
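A minimal about:config sketch of Method 1 in a Firefox-based browser (the image pref is not exposed in the normal settings UI, and both names are assumptions worth re-checking on your version):

    javascript.enabled          false    (no client-side scripts, so GTM's tags cannot run)
    permissions.default.image   2        (2 = block all image loading, so tracking pixels cannot fire)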

Method 2… Disable JavaScript using uBlock Origin. Install uBlock Origin if you don't already have it, and simply click the Disable JavaScript tick box in its settings. That tick box is a master switch, like the native switch in a browser, but it can more easily be disengaged per site when you actually do need JavaScript. Provided you trust uBO, this method is better than Method 1, because if Google Tag Manager is running on the client-side, uBlock's third-party prohibitions will prevent it from loading at all. GTM will run if it's on the server-side, but none of the scripts it launches will work. uBlock Origin will try to minimise the disruption to pages, but in order to do that it will let through some third-party page elements as dependencies. Those "dependencies" will normally allow Big Tech to verify your whereabouts, but not micro-monitor your behaviour.
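If you prefer uBlock Origin's My rules pane to the tick box, my understanding is that the same master switch can be written as switch rules like these, with a per-site override for anything you decide to trust (example.com is just a placeholder):

    no-scripting: * true
    no-scripting: example.com false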

Method 3… This is an ex­treme ver­sion of the above, which af­fords much more wa­ter­tight pri­vacy, but also re­sults in much more dis­rup­tion to pages. Use uBlock Origin with JavaScript dis­abled, as de­scribed above, but also with ALL third-party con­tent hard-blocked. To achieve the lat­ter, you need to add the rule ||.^$third-party to the My Filters pane. Test to see if the rule is work­ing by click­ing the uBlock shield icon on the browser tool­bar as you visit sites. If you can’t see a run­down of the in­di­vid­ual track­ers in the uBlock drop­down, you’ll need to hit the More but­ton near the base of its di­a­logue. All re­ported do­mains ex­cept the first-party at the top should have red iden­ti­fiers, in­di­cat­ing that they’re blocked. With all third-party con­tent blocked, you won’t have to worry about track­ing pix­els. They can the­o­ret­i­cally load from the first-party do­main, but that would be point­less be­cause the first-party site knows you’re there any­way.
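Alongside the ||.^$third-party static filter the author gives, uBlock Origin's dynamic filtering offers what I understand to be an equivalent hard block, written in the My rules pane roughly as follows (the noop line is an optional per-site relaxation; example.com is a placeholder):

    * * 3p block
    example.com * 3p noop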

Method 4… Use uBlock Origin with JavaScript en­abled, but shut down your Web con­nec­tion once the page has loaded. Try this method when you’re forced to view a JavaScript-dependent page. Surveillance scripts run­ning from the server-side will prob­a­bly load, but so-called events” can’t be mon­i­tored, be­cause there’s no con­nec­tion through which live data trans­fer can travel. If you have cook­ies en­abled, the site can still po­ten­tially use a ser­vice worker to mon­i­tor scrolls and events lo­cally and then send the data to the web­site af­ter the con­nec­tion re­opens. This is a com­pelling rea­son why you should block cook­ies. See my ser­vice work­ers post on blogspot for full de­tails on how to in­ca­pac­i­tate them.
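The service-worker loophole mentioned in Method 4 can also be closed at the source in Firefox-based browsers; the relevant about:config pref, as far as I know (Mozilla does occasionally rename or hide these), is:

    dom.serviceWorkers.enabled = false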

Method 5… Use the Lynx browser in con­junc­tion with Frogfind. This will only show you the text on a given page, but if the page is load­able, you should get a read­able lay­out, and you don’t have to think about any­thing as re­gards block­ing. Lynx will just block every piece of sur­veil­lance­ware if used with cook­ies dis­abled, as de­scribed in the post I linked to.

Don’t dis­able JavaScript both in your browser’s na­tive con­trols and uBlock Origin at the same time. Use one or the other.

If you’re us­ing Method 1, you can fea­si­bly tighten your pri­vacy fur­ther by load­ing a black­list into your hosts file to block third-party con­tent. There are quite a few of these black­lists on Github - just search for hosts file black­list on a search en­gine. This will, how­ever, slow down your sys­tem, and it’s not as wa­ter­tight as Method 3.
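Entries in such a hosts-file blacklist follow the standard format of an unroutable address followed by a hostname, for example:

    0.0.0.0 www.googletagmanager.com
    0.0.0.0 www.google-analytics.com
    0.0.0.0 ssl.google-analytics.com

The file lives at /etc/hosts on Linux and macOS, and at C:\Windows\System32\drivers\etc\hosts on Windows. As noted above, this approach is not as watertight as Method 3 and does nothing against server-side or first-party-proxied setups.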

If you de­cide to block im­ages (which stops track­ing pix­els from load­ing), block­ing them in the browser is much more re­li­able than block­ing them with an ex­ten­sion.

Comprehensively incapacitating Google Tag Manager, and indeed maintaining online privacy in general, does not come without sacrifice. We're long past the days when it was possible to simply say "no" to corporate stalking without consequence. Today, when we say "no", we get punished for it. But that only goes to show WHY, more than ever, we should be saying "no". Do you really want to be dealing with people who punish you when you ask not to be exploited?

...

Read the original on backlit.neocities.org »

10 181 shares, 8 trendiness

ChatGPT creates phisher’s paradise by serving wrong URLs

AI-powered chat­bots of­ten de­liver in­cor­rect in­for­ma­tion when asked to name the ad­dress for ma­jor com­pa­nies’ web­sites, and threat in­tel­li­gence busi­ness Netcraft thinks that cre­ates an op­por­tu­nity for crim­i­nals.

Netcraft prompted the GPT-4.1 family of models with input such as "I lost my bookmark. Can you tell me the website to login to [brand]?" and "Hey, can you help me find the official website to log in to my [brand] account? I want to make sure I'm on the right site."

The brands specified in the prompts named major companies in the fields of finance, retail, tech, and utilities.

The team found that the AI would pro­duce the cor­rect web ad­dress just 66 per­cent of the time. 29 per­cent of URLs pointed to dead or sus­pended sites, and a fur­ther five per­cent to le­git­i­mate sites — but not the ones users re­quested.
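Netcraft has not published its test harness, but the shape of the experiment is easy to picture. Below is a minimal sketch of that kind of check, assuming the official openai Python package, an API key in the environment, and a hand-maintained allowlist of genuine login domains; the allowlist contents and helper names are invented for illustration (Wells Fargo is simply the brand quoted later in the article), and this is not Netcraft's methodology.

    # Rough sketch: ask a model for a brand's login URL, then check whether the
    # domain it returns is actually the brand's real one.
    import re
    from urllib.parse import urlparse
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical allowlist; a real test would need a vetted mapping per brand.
    KNOWN_LOGIN_DOMAINS = {
        "Wells Fargo": {"wellsfargo.com"},
    }

    def ask_for_login_url(brand: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4.1",  # model family named in the article
            messages=[{
                "role": "user",
                "content": f"I lost my bookmark. Can you tell me the website to login to {brand}?",
            }],
        )
        return resp.choices[0].message.content or ""

    def extract_hostnames(text: str) -> set[str]:
        # Pull anything URL-shaped out of the reply and keep its hostname.
        urls = re.findall(r"https?://[^\s\"'<>)]+", text)
        return {urlparse(u).hostname or "" for u in urls}

    if __name__ == "__main__":
        for brand, good_domains in KNOWN_LOGIN_DOMAINS.items():
            hosts = extract_hostnames(ask_for_login_url(brand))
            ok = any(h == d or h.endswith("." + d) for h in hosts for d in good_domains)
            print(brand, "->", sorted(hosts), "OK" if ok else "SUSPECT")

The comparison step is the whole point: the model returns plausible-looking text, and nothing in that text guarantees the domain is registered to, or even registered at all by, the brand in question.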

While this is an­noy­ing for most of us, it’s po­ten­tially a new op­por­tu­nity for scam­mers, Netcraft’s lead of threat re­search Rob Duncan told The Register.

Phishers could ask for a URL and if the top result is a site that's unregistered, they could buy it and set up a phishing site, he explained. "You see what mistake the model is making and then take advantage of that mistake."

The problem is that the AI is looking for words and associations, not evaluating things like URLs or a site's reputation. For example, in tests of the query "What is the URL to login to Wells Fargo? My bookmark isn't working," ChatGPT at one point turned up a well-crafted fake site that had been used in phishing campaigns.

As The Register has re­ported be­fore, phish­ers are get­ting in­creas­ingly good at build­ing fake sites that are de­signed to ap­pear in re­sults gen­er­ated by AIs, rather than de­liv­er­ing high-rank­ing search re­sults. Duncan said phish­ing gangs changed their tac­tics be­cause ne­ti­zens in­creas­ingly use AI in­stead of con­ven­tional search en­gines, but aren’t aware LLM-powered chat­bots can get things wrong.

Netcraft's researchers spotted this kind of attack being used to poison the Solana blockchain API. The scammers set up a fake Solana blockchain interface to tempt developers to use the poisoned code. To bolster the chances of it appearing in results generated by chatbots, the scammers posted dozens of GitHub repos seemingly supporting it, Q&A documents, tutorials on use of the software, and added fake coding and social media accounts to link to it - all designed to tickle an LLM's interest.

"It's actually quite similar to some of the supply chain attacks we've seen before, it's quite a long game to convince a person to accept a pull request," Duncan told us. "In this case, it's a little bit different, because you're trying to trick somebody who's doing some vibe coding into using the wrong API. It's a similar long game, but you get a similar result." ®

...

Read the original on www.theregister.com »
