10 interesting stories served every morning and every evening.




1 768 shares, 68 trendiness

Laws of Software Engineering

Conway's Law: Organizations design systems that mirror their own communication structure.

Knuth's Optimization Principle: Premature optimization is the root of all evil.

Hyrum's Law: With a sufficient number of API users, all observable behaviors of your system will be depended on by somebody.

The Boy Scout Rule: Leave the code better than you found it.

YAGNI (You Aren't Gonna Need It): Don't add functionality until it is necessary.

Brooks's Law: Adding manpower to a late software project makes it later.

Gall's Law: A complex system that works is invariably found to have evolved from a simple system that worked.

The Law of Leaky Abstractions: All non-trivial abstractions, to some degree, are leaky.

Tesler's Law: Every application has an inherent amount of irreducible complexity that can only be shifted, not eliminated.

The CAP Theorem: A distributed system can guarantee only two of: consistency, availability, and partition tolerance.

The Second-System Effect: Small, successful systems tend to be followed by overengineered, bloated replacements.

The Fallacies of Distributed Computing: A set of eight false assumptions that new distributed system designers often make.

Zawinski's Law: Every program attempts to expand until it can read mail.

Dunbar's Number: There is a cognitive limit of about 150 stable relationships one person can maintain.

Price's Law: The square root of the total number of participants does 50% of the work.

Putt's Law: Those who understand technology don't manage it, and those who manage it don't understand it.

The Peter Principle: In a hierarchy, every employee tends to rise to their level of incompetence.

The Bus Factor: The minimum number of team members whose loss would put the project in serious trouble.

The Dilbert Principle: Companies tend to promote incompetent employees to management to limit the damage they can do.

Parkinson's Law: Work expands to fill the time available for its completion.

The Ninety-Ninety Rule: The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.

Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.

Goodhart's Law: When a measure becomes a target, it ceases to be a good measure.

Gilb's Law: Anything you need to quantify can be measured in some way that is better than not measuring it at all.

Murphy's Law: Anything that can go wrong will go wrong.

Postel's Law: Be conservative in what you do, be liberal in what you accept from others.

Technical Debt: Everything that slows us down when developing software.

Linus's Law: Given enough eyeballs, all bugs are shallow.

Kernighan's Law: Debugging is twice as hard as writing the code in the first place.

The Testing Pyramid: A project should have many fast unit tests, fewer integration tests, and only a small number of UI tests.

The Pesticide Paradox: Repeatedly running the same tests becomes less effective over time.

Lehman's Laws of Software Evolution: Software that reflects the real world must evolve, and that evolution has predictable limits.

Sturgeon's Law: 90% of everything is crap.

Amdahl's Law: The speedup from parallelization is limited by the fraction of work that cannot be parallelized.

Gustafson's Law: It is possible to achieve significant speedup in parallel processing by increasing the problem size.
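The two parallel-speedup statements above are Amdahl's law and Gustafson's law, and each reduces to a one-line formula. A quick sketch (function names are mine):

```python
def amdahl_speedup(p, n):
    """Amdahl: speedup with n workers when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Gustafson: scaled speedup when the problem size grows with n workers."""
    return (1.0 - p) + p * n

# With 95% parallel work, even 4096 cores cannot beat a 20x speedup...
print(round(amdahl_speedup(0.95, 4096), 2))    # → 19.91
# ...but growing the problem with the machine recovers near-linear gains.
print(round(gustafson_speedup(0.95, 4096), 2)) # → 3891.25
```

The serial fraction (1 - p) is the whole story in Amdahl's formula: as n grows, the speedup ceiling is simply 1 / (1 - p).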

Metcalfe's Law: The value of a network is proportional to the square of the number of users.

DRY (Don't Repeat Yourself): Every piece of knowledge must have a single, unambiguous, authoritative representation.

KISS (Keep It Simple): Designs and systems should be as simple as possible.

The SOLID Principles: Five main guidelines that enhance software design, making code more maintainable and scalable.

The Law of Demeter: An object should only interact with its immediate friends, not strangers.

The Principle of Least Astonishment: Software and interfaces should behave in a way that least surprises users and other developers.

The Dunning-Kruger Effect: The less you know about something, the more confident you tend to be.

Hanlon's Razor: Never attribute to malice that which is adequately explained by stupidity or carelessness.

Occam's Razor: The simplest explanation is often the most accurate one.

The Sunk Cost Fallacy: Sticking with a choice because you've invested time or energy in it, even when walking away would serve you better.

The Map Is Not the Territory: Our representations of reality are not the same as reality itself.

Confirmation Bias: A tendency to favor information that supports our existing beliefs or ideas.

Amara's Law: We tend to overestimate the effect of a technology in the short run and underestimate the impact in the long run.

The Lindy Effect: The longer something has been in use, the more likely it is to continue being used.

First Principles Thinking: Breaking a complex problem into its most basic blocks and then building up from there.

Inversion: Solving a problem by considering the opposite outcome and working backward from it.

The Pareto Principle: 80% of the problems result from 20% of the causes.

Cunningham's Law: The best way to get the correct answer on the Internet is not to ask a question, it's to post the wrong answer.

...

Read the original on lawsofsoftwareengineering.com »

2 710 shares, 145 trendiness

Intel Core Ultra 3 & LPCAMM2

When you’re ready for more per­for­mance, you can up­grade in­di­vid­ual com­po­nents in­stead of re­plac­ing your en­tire lap­top. Install a new Mainboard for gen­er­a­tional proces­sor up­grades, add mem­ory to han­dle heav­ier work­loads, or ex­pand your stor­age to in­crease ca­pac­ity or en­able dual boot­ing. The Framework Marketplace makes it easy to find the com­pat­i­ble parts you need.

...

Read the original on frame.work »

3 285 shares, 29 trendiness

Tim Cook’s Impeccable Timing

It’s the na­ture of busi­ness that the eu­logy for a chief ex­ec­u­tive does­n’t hap­pen when they die, but when they re­tire, or, in the case of Apple CEO Tim Cook, an­nounce that they will step up to the role of Executive Chairman on September 1. The one mor­bid ex­cep­tion is when a CEO dies on the job — or quits be­cause they are dy­ing — and the truth of the mat­ter is that that is where any hon­est re­count­ing of Cook’s in­cred­i­bly suc­cess­ful tenure as Apple CEO, par­tic­u­larly from a fi­nan­cial per­spec­tive, has to be­gin.

The num­bers, to be clear, are ex­tra­or­di­nary. Cook be­came CEO of Apple on August 24, 2011, and in the in­ter­ven­ing 15 years rev­enue has in­creased 303%, profit 354%, and the value of Apple has gone from $297 bil­lion to $4 tril­lion, a stag­ger­ing 1,251% in­crease.

The rea­son for Cook’s ac­ces­sion in 2011 be­came clear a mere six weeks later, when Steve Jobs passed away from can­cer on October 5, 2011. Jobs’ death is­n’t the rea­son Cook was cho­sen — Cook had al­ready served as in­terim CEO while Jobs un­der­went treat­ment in 2009 — but I think the tim­ing played a ma­jor role in mak­ing Cook ar­guably the great­est non-founder CEO of all time.

Peter Thiel in­tro­duced the con­cept of Zero To One thusly:

When we think about the fu­ture, we hope for a fu­ture of progress. That progress can take one of two forms. Horizontal or ex­ten­sive progress means copy­ing things that work — go­ing from 1 to n. Horizontal progress is easy to imag­ine be­cause we al­ready know what it looks like. Vertical or in­ten­sive progress means do­ing new things — go­ing from 0 to 1. Vertical progress is harder to imag­ine be­cause it re­quires do­ing some­thing no­body else has ever done. If you take one type­writer and build 100, you have made hor­i­zon­tal progress. If you have a type­writer and build a word proces­sor, you have made ver­ti­cal progress.

Steve Jobs made 0 to 1 prod­ucts, as he re­minded the au­di­ence in the in­tro­duc­tion to his most fa­mous keynote:

Every once in a while, a rev­o­lu­tion­ary prod­uct comes along that changes every­thing. First of all, one’s very for­tu­nate if one gets to work on one of these in your ca­reer. Apple’s been very for­tu­nate: it’s been able to in­tro­duce a few of these into the world.

In 1984, we in­tro­duced the Macintosh. It did­n’t just change Apple, it changed the whole com­puter in­dus­try. In 2001, we in­tro­duced the first iPod. It did­n’t just change the way we all lis­ten to mu­sic, it changed the en­tire mu­sic in­dus­try.

Well, to­day we’re in­tro­duc­ing three rev­o­lu­tion­ary prod­ucts of this class. The first one: a widescreen iPod with touch con­trols. The sec­ond: a rev­o­lu­tion­ary mo­bile phone. And the third is a break­through Internet com­mu­ni­ca­tions de­vice. Three things…are you get­ting it? These are not three sep­a­rate de­vices. This is one de­vice, and we are call­ing it iPhone.

Steve Jobs would, three years later, also in­tro­duce the iPad, which makes four dis­tinct prod­uct cat­e­gories if you’re count­ing. Perhaps the most im­por­tant 0 to 1 prod­uct Jobs cre­ated, how­ever, was Apple it­self, which raises the ques­tion: what makes Apple Apple?

“What Makes Apple Apple” isn’t a new question; it was the central question of Apple University, the internal training program the company launched in 2008. Apple University was hailed on the outside as a Steve Jobs creation, but while I’m sure he green-lit the concept, it was clear to me as an intern on the Apple University team in 2010 that the program’s driving force was Tim Cook.

The core of the pro­gram, at least when I was there, was what be­came known as The Cook Doctrine:

We be­lieve that we’re on the face of the Earth to make great prod­ucts, and that’s not chang­ing.

We be­lieve in the sim­ple, not the com­plex.

We be­lieve that we need to own and con­trol the pri­mary tech­nolo­gies be­hind the prod­ucts we make, and par­tic­i­pate only in mar­kets where we can make a sig­nif­i­cant con­tri­bu­tion.

We be­lieve in say­ing no to thou­sands of pro­jects so that we can re­ally fo­cus on the few that are truly im­por­tant and mean­ing­ful to us.

We be­lieve in deep col­lab­o­ra­tion and cross-pol­li­na­tion of our groups, which al­low us to in­no­vate in a way that oth­ers can­not.

And frankly, we don’t set­tle for any­thing less than ex­cel­lence in every group in the com­pany, and we have the self-hon­esty to ad­mit when we’re wrong and the courage to change.

And I think, re­gard­less of who is in what job, those val­ues are so em­bed­ded in this com­pany that Apple will do ex­tremely well.

Cook ex­plained this on Apple’s January 2009 earn­ings call, dur­ing Jobs’ first leave of ab­sence, in re­sponse to a ques­tion about how Apple would fare with­out its founder. It’s a bril­liant state­ment, but it is — as the last para­graph makes clear — ul­ti­mately about main­tain­ing, nur­tur­ing, and grow­ing what Jobs built.

That is why I started this Article by high­light­ing the tim­ing of Cook’s as­cent to the CEO role. The chal­lenge for CEOs fol­low­ing iconic founders is that the per­son who took the com­pany from 0 to 1 usu­ally sticks around for 2, 3, 4, etc.; by the time they step down the only way for­ward is of­ten down. Jobs, how­ever, by virtue of leav­ing the world too soon, left Apple only a few years af­ter its most im­por­tant 0 to 1 prod­uct ever, mean­ing it was Cook who was in charge of grow­ing and ex­pand­ing Apple’s most rev­o­lu­tion­ary de­vice yet.

Cook, to be clear, man­aged this bril­liantly. Under his watch the iPhone not only got bet­ter every year, but ex­panded its mar­ket to every car­rier in ba­si­cally every coun­try, and ex­panded the line from one model in two col­ors to five mod­els in a plethora of col­ors sold at the scale of hun­dreds of mil­lions of units a year.

Cook was, with­out ques­tion, an op­er­a­tional ge­nius. Moreover, this was clearly the case even be­fore he scaled the iPhone to unimag­in­able scale. When Cook joined Apple in 1998 the com­pa­ny’s op­er­a­tions — cen­tered on Apple’s own fac­to­ries and ware­houses — were a mas­sive drag on the com­pany; Cook me­thod­i­cally shut them down and shifted Apple’s man­u­fac­tur­ing base to China, cre­at­ing a just-in-time sup­ply chain that year-af­ter-year co­or­di­nated a world­wide net­work of sup­pli­ers to de­liver Apple’s ever-ex­pand­ing prod­uct line to cus­tomers’ doorsteps and a fleet of beau­ti­ful and brand-ex­pand­ing stores. There was not, un­der Cook’s lead­er­ship, a sin­gle sig­nif­i­cant prod­uct is­sue or re­call.

Cook also oversaw the introduction of major new products, most notably AirPods and Apple Watch; the “Wearables, Home, and Accessories” category delivered $35.4 billion in revenue last year, which would rank 128th on the Fortune 500. Still, both products are derivative of the iPhone; Cook’s signature 0 to 1 product, the Apple Vision Pro, is more of a 0.5.

Cook’s more mo­men­tous con­tri­bu­tion to Apple’s top line was the el­e­va­tion of Services. The Google search deal ac­tu­ally orig­i­nated in 2002 with an agree­ment to make Google the de­fault search ser­vice for Safari on the Mac, and was ex­tended to the iPhone in 2007; Google’s mo­ti­va­tion was to en­sure that Apple never com­peted for their core busi­ness, and Cook was happy to take an ever in­creas­ing amount of pure profit.

The App Store also predated Cook; Steve Jobs said during the App Store’s introduction that “we keep 30 [percent] to pay for running the App Store”, and called it “the best deal going to distribute applications to mobile platforms”. It’s important to note that, in 2008, this was true! The App Store really was a great deal.

Three years later, in a July 28, 2011 email — less than a month be­fore Cook of­fi­cially be­came CEO — Phil Schiller won­dered if Apple should lower its take once they were mak­ing $1 bil­lion a year in profit from the App Store. John Gruber, writ­ing on Daring Fireball in 2021, won­dered what might have been had Cook fol­lowed Schiller’s ad­vice:

In my imag­i­na­tion, a world where Apple had used Phil Schiller’s memo above as a game plan for the App Store over the last decade is a bet­ter place for every­one to­day: de­vel­op­ers for sure, but also users, and, yes, Apple it­self. I’ve of­ten said that Apple’s pri­or­i­ties are con­sis­tent: Apple’s own needs first, users’ sec­ond, de­vel­op­ers’ third. Apple, for ob­vi­ous rea­sons, does not like to talk about the Apple-first part of those pri­or­i­ties, but Cook made ex­plicit dur­ing his tes­ti­mony dur­ing the Epic trial that when user and de­vel­oper needs con­flict, Apple sides with users. (Hence App Tracking Transparency, for ex­am­ple.)

These pri­or­i­ties are as they should be. I’m not com­plain­ing about their or­der. But putting de­vel­oper needs third does­n’t mean they should be ne­glected or over­looked. A large base of de­vel­op­ers who are ex­perts on de­vel­op­ing and de­sign­ing for Apple’s pro­pri­etary plat­forms is an in­cred­i­ble as­set. Making those de­vel­op­ers happy — happy enough to keep them want­ing to work and fo­cus on Apple’s plat­forms — is good for Apple it­self.

I want to agree with Gruber — I was crit­i­ciz­ing Apple’s App Store poli­cies within weeks of start­ing Stratechery, years be­fore it be­came a ma­jor is­sue — but from a share­holder per­spec­tive, i.e. Cook’s ul­ti­mate bosses, it’s hard to ar­gue with Apple’s un­com­pro­mis­ing ap­proach. Last year Apple Services gen­er­ated 26% of Apple’s rev­enue and 41% of the com­pa­ny’s profit; more im­por­tantly, Services con­tin­ues to grow year-over-year, even as iPhone growth has slowed from the go-go years.

Another way to frame the Services ques­tion is to say that Gruber is con­cerned about the long-term im­por­tance of some­thing that is some­what in­ef­fa­ble — de­vel­oper will­ing­ness and de­sire to sup­port Apple’s plat­forms — which is, at least in Gruber’s mind, es­sen­tial for Apple’s long-term health. Cook, in this cri­tique, pri­or­i­tized Apple’s fi­nan­cial re­sults and share­holder re­turns over what was best for Apple in the long run.

This is­n’t the only part of Apple’s busi­ness where this cri­tique has va­lid­ity. Cook’s great­est tri­umph was, as I noted above, com­pletely over­haul­ing and sub­se­quently scal­ing Apple’s op­er­a­tions, which first and fore­most meant de­vel­op­ing a heavy de­pen­dence on China. This de­pen­dence was not in­evitable: Patrick McGee ex­plained in Apple In China, which I con­sider one of the all-time great books about the tech in­dus­try, how Apple made China into the man­u­fac­tur­ing be­he­moth it be­came. McGee added in a Stratechery Interview:

Let me just refer back to something that you wrote I think a few months ago when you called the last 20, 25 years, like the golden age for companies like Apple and Silicon Valley focused on software and Chinese taking care of the hardware manufacturing. That is a perfect partnership, and if we were living in a simulation and it ended tomorrow, you’d give props to Apple for taking advantage of the situation better than anybody else.

The prob­lem is we’re prob­a­bly not liv­ing in the sim­u­la­tion and things go on, and I’ve got this rather dis­qui­et­ing con­clu­sion where, look, Apple’s still re­ally good prob­a­bly, they’re not as good as they once were un­der Jony Ive, but they’re still good at in­dus­trial de­sign and prod­uct de­sign, but they don’t do any op­er­a­tions in our own coun­try. That’s all de­pen­dent on China. You’ve called this in fact the biggest vi­o­la­tion of the Tim Cook doc­trine to own and con­trol your des­tiny, but the Chinese aren’t just do­ing the op­er­a­tions any­more, they also have in­dus­trial de­sign, prod­uct de­sign, man­u­fac­tur­ing de­sign.

It re­ally is ironic: Tim Cook built what is ar­guably Apple’s most im­por­tant tech­nol­ogy — its abil­ity to build the world’s best per­sonal com­puter prod­ucts at as­tro­nom­i­cal scale — and did so in a way that leaves Apple more vul­ner­a­ble than any­one to the de­te­ri­o­rat­ing re­la­tion­ship be­tween the United States and China. China was cer­tainly good for the bot­tom line, but was it good for Apple’s long-run sus­tain­abil­ity?

This same cri­tique — of fa­vor­ing a fi­nan­cially op­ti­mal strat­egy over long-term sus­tain­abil­ity — may also one day be levied on the biggest ques­tion Cook leaves his suc­ces­sor: what im­pact will AI have on Apple? Apple has, to date, avoided spend­ing hun­dreds of bil­lions of dol­lars on the AI build­out, and there is one po­ten­tial fu­ture where the com­pany prof­its from AI by sell­ing the de­vices every­one uses to ac­cess com­modi­tized mod­els; there is an­other fu­ture where AI be­comes the means by which Apple’s 50 Years of Integration is fi­nally dis­rupted by com­pa­nies that ac­tu­ally in­vested in the tech­nol­ogy of the fu­ture.

If Tim Cook’s tim­ing was for­tu­nate in terms of when in Apple’s life­cy­cle he took the reins, then I would call his tim­ing in terms of when in Apple’s life­cy­cle he is step­ping down as be­ing pru­dent, both for his legacy and for Apple’s fu­ture.

Apple is, in terms of its tra­di­tional busi­ness model, in a bet­ter place than it has ever been. The iPhone line is fan­tas­tic, and sell­ing at a record pace; the Mac, mean­while, is poised to mas­sively ex­pand its mar­ket share as Apple Silicon — an­other Jobs ini­tia­tive, ap­pro­pri­ately in­vested in and nur­tured by Cook — makes the Mac the com­puter of choice for both the high end (thanks to Apple Silicon’s per­for­mance and uni­fied mem­ory ar­chi­tec­ture) and the low end (the iPhone chip-based MacBook Neo sig­nif­i­cantly ex­pands Apple’s ad­dress­able mar­ket). Meanwhile, the Services busi­ness con­tin­ues to grow. Cook is step­ping down af­ter Apple’s best-ever quar­ter, a mile­stone that very much cap­tures his tenure, for bet­ter and for worse.

At the same time, the AI ques­tion looms — and it sug­gests that Something Is Rotten in the State of Cupertino. The new Siri still has­n’t launched, and when it does, it will be with Google’s tech­nol­ogy at the core. That was, as I wrote in an Update, a mo­men­tous de­ci­sion for Apple’s fu­ture:

Apple’s plans are a bit like the alcoholic who admits that they have a drinking problem, but promises to limit their intake to social occasions. Namely, how exactly does Apple plan on replacing Gemini with its own models when (1) Google has more talent, (2) Google spends far more on infrastructure, and (3) Gemini will be continually increasing from the current level, where it is far ahead of Apple’s efforts? Moreover, there is now a new factor working against Apple: if this white-labeling effort works, then the bar for “good enough” will be much higher than it is currently. Will Apple, after all of the trouble they are going through to fix Siri, actually be willing to tear out a model that works so that they can once again roll their own solution, particularly when that solution hasn’t faced the market pressure of actually working, while Gemini has?

In short, I think Apple has made a good de­ci­sion here for short term rea­sons, but I don’t think it’s a short-term de­ci­sion: I strongly sus­pect that Apple, whether it has ad­mit­ted it to it­self or not, has just com­mit­ted it­self to de­pend­ing on 3rd-parties for AI for the long run.

As I noted above and in that Update, this de­ci­sion may work out; if it does­n’t, how­ever, the sting will be felt long af­ter Cook is gone. To that end, I cer­tainly hope that John Ternus, the new CEO, was heav­ily in­volved in the de­ci­sion; truth­fully, he should have made it.

To that end, it’s right that Cook is stepping down now. Jobs might have been responsible for taking Apple from 0 to 1, but it was Cook who took Apple from 1 to $436 billion in revenue and $118 billion in profit last year. It’s a testament to his capabilities and execution that Apple didn’t suffer any sort of post-founder hangover; only time will tell if, along the way, Cook created the conditions for a crash-out by himself forgetting The Cook Doctrine and what makes Apple Apple.

...

Read the original on stratechery.com »

4 278 shares, 6 trendiness

how a roblox cheat and one AI tool brought down vercel's entire platform

I read the Trend Micro report on my phone at 1am last night and haven’t been able to stop thinking about it since. The timeline is genuinely absurd.

February 2026. An em­ployee at Context.ai down­loads a Roblox cheat. A Roblox cheat. Lumma Stealer comes bun­dled with it, grabs ses­sion cook­ies, cre­den­tials, every­thing. That em­ployee had ac­cess to in­ter­nal sys­tems at a com­pany that han­dles OAuth in­te­gra­tions for en­ter­prise cus­tomers.

March 2026. The attacker uses Context.ai’s compromised infrastructure to pivot into a Vercel employee’s Google Workspace account. This Vercel employee had signed up for Context.ai’s “AI Office Suite” with their enterprise credentials and granted broad OAuth permissions. A Vercel engineer gave a third-party AI tool access to their corporate Google account because the onboarding flow asked for it and they clicked through.

April 19. Guillermo Rauch confirms everything. Non-sensitive environment variables were accessed and exfiltrated. A threat actor using the ShinyHunters name is asking $2 million for the data, though the actual ShinyHunters group says they’re not involved. Vercel published their incident bulletin the same day.

Okay, I need to correct something I got wrong in my initial read of this. My first reaction was “they stored env vars in plaintext??” but that’s not exactly what’s happening. All Vercel env vars are encrypted at rest. The “sensitive” checkbox doesn’t toggle encryption on and off. What it does is change how the decryption works.

Non-sensitive vars can be decrypted by the dashboard backend. You can view them, edit them, copy them from the UI. Sensitive vars can only be decrypted at build time. Write-only. Once you set them you can’t see the value again; only the app can read them at runtime.

So when the at­tacker got into Vercel’s in­ter­nal sys­tems, they could ac­cess the back­end that de­crypts the non-sen­si­tive vars. The sen­si­tive ones ap­pear to be safe. Vercel says they have no ev­i­dence the sen­si­tive vars were ac­cessed.

This is actually worse than a simple “plaintext” screwup because it’s more subtle. The encryption existed. The infrastructure was there. But the default was set to the less protected option, and most developers never changed it, because why would you? You see a text field, you paste your API key, you hit save. Nobody is hunting for a checkbox that changes the decryption scope of their environment variable. You just assume the platform handles that.

Vercel has since changed the default to “sensitive”. Which is an admission that the old default was wrong. But every env var created before that change is still sitting there in the less protected state unless someone manually went back and toggled each one.
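To make the distinction concrete, here is a toy model of the two decryption scopes. Everything in it — the class, the method names, the base64 stand-in for real encryption — is mine for illustration, not Vercel’s actual implementation:

```python
import base64

def _encrypt(value):
    # Stand-in for real encryption at rest; base64 is NOT encryption.
    return base64.b64encode(value.encode())

def _decrypt(blob):
    return base64.b64decode(blob).decode()

class EnvVar:
    def __init__(self, name, value, sensitive=True):
        # Post-incident default: sensitive, i.e. build-time-only decryption.
        self.name = name
        self.sensitive = sensitive
        self._blob = _encrypt(value)  # always encrypted at rest, either way

    def read_from_dashboard(self):
        # Non-sensitive vars are decryptable by the dashboard backend —
        # exactly the path an attacker inside internal systems can reach.
        if self.sensitive:
            raise PermissionError(f"{self.name} is write-only outside builds")
        return _decrypt(self._blob)

    def read_at_build_time(self):
        # Both kinds are readable where the app actually needs them.
        return _decrypt(self._blob)

api_key = EnvVar("STRIPE_KEY", "sk_live_x", sensitive=False)  # the old default
print(api_key.read_from_dashboard())  # → sk_live_x — this is what leaked
```

The point of the sketch: the ciphertext is identical in both cases; the only difference is which backend is allowed to hold the decryption path.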

I’ve been watching the AI tooling space for two years and there’s a pattern that bugs me. Every AI productivity tool requires broad access to function. That’s the whole point. They need your docs, your emails, your code, your workspace. The value proposition is the access.

Every AI tool you plug into your workflow is an attack surface multiplier. Context.ai wasn’t some obscure outfit. It was a Y Combinator company. Enterprise customers. SOC 2 compliance, supposedly. And one employee downloading game cheats on a work machine turned the whole thing into a supply chain weapon.

I went through about a dozen AI tools I’ve per­son­ally au­tho­rized in the last year af­ter read­ing this. Nine of them have Google Workspace OAuth per­mis­sions that in­clude read­ing all emails and ac­cess­ing all Drive files. Nine. I au­tho­rized every one of them with­out read­ing the per­mis­sions be­cause the on­board­ing flow asked and I was in a hurry.

Actually, I started counting how many OAuth apps I had authorized total and stopped at 23 because it was getting depressing. I don’t even remember what half of them do. A meeting summarizer I used twice in January still has full email access. That’s on me, but it’s also on every OAuth dialog ever designed, because they’re all terrible.
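One rough way to triage a pile of grants like that is to flag any app holding account-wide scopes. The scope URIs below are real Google OAuth scopes; the audit logic and the app list are hypothetical:

```python
# Scopes that grant account-wide read access — worth revoking on sight.
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/gmail.readonly",   # read all mail
    "https://mail.google.com/",                         # full mail access
    "https://www.googleapis.com/auth/drive",            # all Drive files
    "https://www.googleapis.com/auth/drive.readonly",
}

def flag_overbroad(grants):
    """Map each app to the broad scopes it holds, keeping only flagged apps."""
    return {app: scopes & BROAD_SCOPES
            for app, scopes in grants.items()
            if scopes & BROAD_SCOPES}

# Hypothetical grants, copied by hand from the account permissions page.
grants = {
    "meeting-summarizer": {"https://www.googleapis.com/auth/gmail.readonly"},
    "calendar-widget": {"https://www.googleapis.com/auth/calendar.readonly"},
}
print(flag_overbroad(grants))  # only the summarizer is flagged
```

Google has no end-user API for listing your own grants, so the input here has to come from the permissions page itself; the value of the exercise is the allowlist, not the plumbing.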

Vercel’s incident page says “limited customer credentials” were compromised. BleepingComputer says the attacker is actively selling data. Crypto developers are scrambling because wallet infrastructure ran through Vercel env vars. The immediate damage is bad enough.

But the part I keep coming back to is the trust cost. Every developer on Vercel now has to go through every env var they ever set, figure out which ones weren’t marked sensitive, rotate every credential, and decide if they still trust the platform. That’s hundreds of thousands of projects. Some people are reporting it took them 6+ hours just to rotate everything on a single project.

Multiply that by the active Vercel userbase and you’re looking at millions of developer-hours spent on credential rotation this week. Nobody at Vercel wants anyone doing that math right now.
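The back-of-envelope math goes something like this (the project count is my own placeholder, not a Vercel figure):

```python
hours_per_project = 6          # reported rotation time for a single project
affected_projects = 500_000    # placeholder order of magnitude, not a real figure
total = hours_per_project * affected_projects
print(f"{total:,} developer-hours")  # → 3,000,000 developer-hours
```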

Honestly? Probably not much changes for most people. I’ve watched this pattern enough times. Breach happens. Posts get written. Keys get rotated for about a week. Then everyone goes back to pasting secrets into platform dashboard text fields because it’s convenient and the alternatives require actual work.

AWS Secrets Manager. HashiCorp Vault. SOPS + age. Self-hosted infrastructure. Real options that real teams use. All require more setup than a text field. The gap between knowing what’s secure and doing what’s secure is measured entirely in convenience.

I started looking into how many YC companies have had security incidents tied to… actually, that’s a different rabbit hole for a different post.

The one thing I am doing differently is what I’m calling the 12x audit. For every AI tool I authorize, I’m spending 12x the time I used to spend clicking “Allow” on actually reading what it requests. That’s still only about two minutes per tool, since I was spending roughly ten seconds before. But two minutes would have caught the exact permission pattern that made this whole chain possible. Ten seconds didn’t.

A Roblox cheat brought down one of the biggest deployment platforms on the internet. Not a zero-day. Not a nation-state. A game cheat that a Context.ai employee probably downloaded for their kid. The attack surface wasn’t sophisticated. It was convenient. And convenience is the only product the entire AI tooling industry is actually selling.

...

Read the original on webmatrices.com »

5 257 shares, 18 trendiness

MNT Reform

MNT Reform is an open hard­ware lap­top, de­signed and as­sem­bled in Berlin, Germany.

2023.04.17: mnt re­form #000120 is now be­ing of­fered as a loaner by sdf.org.

The track­ball can press against the screen when the lid is closed, caus­ing a small mark to ap­pear on the screen.

Lid, screen bezel, key­board frame, and wrist rest are made from milled alu­minium. Side pan­els and trans­par­ent bot­tom panel are made from acrylic.

Screws in the LCD bezel are not cov­ered, and over time the one in the cen­ter can start to rub the paint off of the wrist rest.

My friend kindly sent me a pair of metal re­place­ment side pan­els. First I tried paint­ing them with a paint brush and a bot­tle of Vanta Black. This flaked off eas­ily, so I sanded them down and re­painted them with black spray­paint (satin fin­ish). Managed to chip that as well dur­ing in­stal­la­tion. I don’t know what I’m do­ing.

2022.03.03 Update: MNT has now made avail­able steel re­place­ment side pan­els.

2022.04.27 Update: I ended up just stretch­ing the orig­i­nal molex an­tenna down un­der the track­ball, which im­proved re­cep­tion even more than buy­ing an ex­pen­sive new an­tenna. Because of its shape and the ori­en­ta­tion of its ca­bles, the Laird an­tenna would­n’t quite reach.

io­gear gwu637 eth­er­net to wifi n adapter - for op­er­at­ing sys­tems where wifi does­n’t (yet) work

piña­tex sleeve - note: pull tabs broke off in the first week

2022.02.22 Update: MNT sent me a re­place­ment sleeve with new, all-metal zip­per pulls that are now stan­dard equip­ment on the sleeve.

2022.07.16 Update: One of the all-metal zip­per pulls shat­tered as I tried to un­zip the sleeve.

mbk-col­ors: 1u and 1.5u hom­ing - re­place­ment key caps, some with raised edges to help with ac­cli­mat­ing to the non-stan­dard key­board lay­out

void linux - sdcard image (does not boot on my machine)

By default, the speaker output of MNT Reform is a bit quiet, and changing the volume with PulseAudio won’t dramatically change it. There’s one more knob you can turn up that is only accessible via ALSA.

Open a Terminal and type alsamixer. Then press F6 and select the wm8960-audio card. Navigate with Cursor keys to the Playback slider and turn it up.

Well, there is no wm8960-au­dio listed on my sys­tem, only (default). And Master is al­ready cranked to 100. Investigating, I no­ticed:

sl@re­form:~$ dmesg | grep 8960

[ 3.613559] wm8960 2-001a: Failed to is­sue re­set

Usually a reboot gets the audio going for me if I see “failed to issue reset” (happens on booting from power off). Lukas speculates on a fix here[1] and another person[2] provided this line in order to rebind the device without a reboot:

echo 2-001a > /sys/bus/i2c/drivers/wm8960/bind

I was able to replicate the issue and test the above line out just now. I had to “sudo su” first. Then the audio device showed up in alsamixer again just fine.

This worked for me, as well.

Update 2022.06.20: After nu­mer­ous up­dates, sound no longer works for me in Alpine Linux.

echo 0 > /sys/class/leds/ath9k-phy0/brightness # needs root per­mis­sions

...

Read the original on mnt.stanleylieber.com »

6 248 shares, 24 trendiness

Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return

Anthropic announced on Monday that Amazon has agreed to invest a fresh $5 billion, bringing Amazon's total investment in the company to $13 billion. Anthropic, for its part, has agreed to spend over $100 billion on AWS over the next 10 years, obtaining up to 5 GW of new computing capacity to train and run Claude.

The deal echoes an agreement Amazon struck with OpenAI just two months ago, when it joined a $110 billion funding round — contributing $50 billion — that valued the ChatGPT maker at a $730 billion pre-money valuation. That deal, too, was structured partly as cloud infrastructure services rather than straight cash.

At the heart of this deal are Amazon's custom chips: Graviton (a low-power CPU) and Trainium (an Nvidia competitor and AI accelerator chip). The Anthropic deal specifically covers Trainium2 through Trainium4 chips, even though Trainium4 chips are not currently available. The latest chip, Trainium3, was released in December. On top of that, Anthropic has secured the option to buy capacity on future Amazon chips as they become available.

We'll see if this news is a teaser for Anthropic announcing a new funding round. VCs have reportedly been offering the AI company capital in a deal that would value it at $800 billion or more.

...

Read the original on techcrunch.com »

7 242 shares, 19 trendiness

i12bp8/TagTinker: Flipper Zero app for ESL research using IR. All based on https://www.furrtek.org/?a=esl

It is intended only for protocol study, signal analysis, and controlled experiments on hardware you personally own or are explicitly authorized to test.

This repository does not authorize access to, modification of, or interference with any third-party deployment, commercial installation, or retail environment.

TagTinker is a Flipper Zero app for educational research into infrared electronic shelf-label protocols and related display behavior on authorized test hardware.

It is focused on:

This README intentionally avoids deployment-oriented instructions and excludes guidance for interacting with live commercial systems.

Where is the .fap release?

The Flipper app is source-first. Build the .fap yourself from this repository with ufbt so it matches your firmware and local toolchain.
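As a sketch, a source build with ufbt (Flipper's per-app build tool) typically looks like the following; the clone URL is assumed from the repository title, and the exact steps may differ depending on your firmware branch:

```shell
# Install Flipper's micro build tool into the current Python environment
pip install --upgrade ufbt

# Fetch the app source and build it against your local SDK
git clone https://github.com/i12bp8/TagTinker.git   # URL assumed from the title
cd TagTinker
ufbt          # compiles the app into dist/*.fap
ufbt launch   # optional: install and run on a connected Flipper
```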

What if it crashes or behaves oddly?

The maintainer primarily uses TagTinker on Momentum firmware with asset packs disabled and has not had issues in that setup. If you are using a different firmware branch, custom asset packs, or a heavily modified device setup, start by testing from a clean baseline.

What happens if I pull the battery out of the tag?

Many infrared ESL tags store their firmware, address, and display data in volatile RAM (not flash memory) to save cost and energy.

If you remove the battery or let it fully discharge, the tag will lose all programming and become unresponsive (“dead”). It usually cannot be recovered without the original base station.

I found a bug or want to contribute — how can I get in touch?

You can contact me on:

I'm currently traveling, so response times may be slower than usual. Feel free to open issues or Pull Requests anyway — contributions (bug fixes, improvements, documentation, etc.) are very welcome and will help keep the project alive while I'm away.

TagTinker is built around the study of infrared electronic shelf-label communication used by fixed-transmitter labeling systems.

* communication is based on addressed protocol frames containing command, parameter, and integrity fields

* display updates are carried as prepared payloads for supported monochrome graphics formats

* local tooling in this project helps researchers prepare assets and perform controlled experiments on authorized hardware

This project is intended to help researchers understand:

For the underlying reverse-engineering background and deeper protocol research, see:

TagTinker is limited to home-lab and authorized research use, including:

It is not a retail tool, operational tool, or field-use utility.

You are solely responsible for ensuring that any use of this software is lawful, authorized, and appropriate for your environment.

The maintainer does not authorize, approve, or participate in any unauthorized use of this project, and disclaims responsibility for misuse, damage, disruption, legal violations, or any consequences arising from such use.

If you do not own the hardware, or do not have explicit written permission to test it, do not use this project on it.

Any unauthorized use is outside the intended scope of this repository and is undertaken entirely at the user's own risk.

This is an independent research project.

It is not affiliated with, endorsed by, authorized by, or sponsored by any electronic shelf-label vendor, retailer, infrastructure provider, or system operator.

Any references to external research, public documentation, or reverse-engineering work are included strictly for educational and research context.

This project is a port and adaptation of the excellent public reverse-engineering work by furrtek / PrecIR and related community research.

Licensed under the GNU General Public License v3.0 (GPL-3.0).

See the LICENSE file for details.

This software is provided AS IS, without warranty of any kind, express or implied.

In no event shall the authors or copyright holders be liable for any claim, damages, or other liability arising from the use of this software.

This repository is maintained as a narrowly scoped educational research project.

The maintainer does not authorize, encourage, condone, or accept responsibility for use against third-party devices, deployed commercial systems, retail infrastructure, or any environment where the user lacks explicit permission.

...

Read the original on github.com »

8 241 shares, 30 trendiness

Original GrapheneOS responses to WIRED fact checker


WIRED has published an article about GrapheneOS with a history of the project nearly entirely based on fabrications from James Donaldson. Donaldson has spent the past 8 years trying to destroy GrapheneOS and the life of the project's founder, Daniel Micay. Donaldson has heavily engaged in fabrications with an ever-changing story about the history of the project. Copperhead was forced to drop nearly all of their claims in the ongoing lawsuit. Copperhead was also forced to discontinue their closed source fork of GrapheneOS and is a zombie company with no significant operations or revenue. Copperhead lacks any serious basis for the remaining claims in their lawsuit and it isn't a major concern for us anymore. Their claims have been thoroughly debunked at this point and are primarily an issue in the form of an extreme level of fabrications and harassment they started which is carried on without them. James Donaldson has been thoroughly proven to be a serial fabricator, scammer and thief. Despite this, WIRED listened to his tall tales and presented it as a history of GrapheneOS. We weren't given an opportunity to provide an actual history of the project based in fact as we were led to believe it wasn't a major part of the article and were barely asked about it.

Copperhead was propped up by the open source project and heavily held it back. After the split with the company, the project quickly gained a lot more funding via donations and has become highly successful. Instead of having a single full-time developer barely being paid anything, GrapheneOS now has around 10 full-time developers and is in the process of expanding by hiring several more. It's entirely funded by donations and is far more than simply being sustainable that way. Donaldson believes that his past ties to the project he burned down and then spent years trying to destroy entitle him to getting rich from it. That's why he continues misleading people about his involvement and doubled down on a failed lawsuit. He continues causing harm to GrapheneOS and Daniel Micay to this day.

GrapheneOS Foundation is a non-profit and no one is getting rich from it. Daniel solely gets his income via GitHub Sponsors and hasn't paid himself anything from the GrapheneOS Foundation. Donaldson has only ever cared about money. He spent years manipulating and exploiting Daniel with the goal of enriching himself. He eventually decided Daniel was a barrier to him getting rich due to his values and tried to coerce him into handing over ownership and control of his open source project with no basis for it.

As part of the split between the open source project and Copperhead back in 2018, Donaldson stole a large amount of donations from the project. He ultimately ended up stealing around $300,000 worth of Bitcoin donations made to the open source project. Prior to his theft of the donations followed by years of repeatedly forking our project to sell it while falsely claiming to have created it, Donaldson heavily depended on income created by the open source project. Donaldson never funded or supported the project as he claims but rather it was entirely the other way around. He depended on a massive amount of work done by Daniel Micay to provide him with income for a tiny amount of work he was doing himself. He received as much money from device sales and donations as Daniel Micay for a tiny amount of work in comparison. His work was unsuccessful in getting any substantial funding. It didn't make any sense for the open source project to remain tied to a company holding it back. It was entirely the prerogative of the open source project to move on without it. Donaldson could not accept it continuing as an open source project.

Donaldson's claims can be proven false by interviewing numerous people who were around at the time. WIRED made no attempt to verify if anything he said was true prior to publishing it. Copperhead was a company founded by 3 people, not 2, and WIRED could have interviewed Dan McGrady who was the 3rd co-founder. There were many other people around back then they could have interviewed including many people who can confirm they had their donations stolen by James Donaldson. Donaldson serially fabricates things about himself and others. Giving him such a huge platform to mislead people is extremely irresponsible. He has very little to do with the overall history of GrapheneOS. His involvement was as someone leeching off the project for years while failing to deliver what he repeatedly promised. He isn't a hacker as he claims but rather is largely non-technical. GrapheneOS has been enormously successful through entirely funding the project with donations. It was entirely possible to create a successful business based around it but Donaldson was never the right person to do it.

Our community manager @spring-onion (Dave Wilson) handled nearly all of the communications with WIRED over months. He isn't a developer and clearly isn't the same person as Daniel Micay, yet the article makes a completely unsubstantiated claim that it could be the same person. @spring-onion knows languages Daniel doesn't speak including German, has a completely different writing style and a different voice. @spring-onion spent a massive amount of time communicating with them including multiple interviews focused on the GrapheneOS feature set and much more. WIRED repeatedly told us the article would barely cover the history of the project and wouldn't focus on Daniel Micay. Due to this, we weren't given an opportunity to provide them with information and address the claims made by James Donaldson. Despite this, it ended up being the primary focus of the article. We were only given an opportunity to respond to the vast majority of it after the article was already fully written and therefore our response to Donaldson's stories was nearly entirely omitted from the article.

The content below is the questions we were asked by a WIRED fact checker along with the original responses we provided to them, with no modifications. This is what WIRED received from us and should have been much more heavily incorporated into the article.

Do you live in Canada?

Did you meet James Donaldson between 2011 and 2013, when you joined Toronto Crypto?

Micay met Donaldson in late 2014 through Dan McGrady. McGrady knew Micay from his security work on Arch Linux and projects in Rust; McGrady and Micay initially connected via IRC.

Micay was not a member of Toronto Crypto. He did join the Toronto Crypto IRC channel while considering attending events, but did not attend any meetings before beginning the work that later became GrapheneOS.

At the time, were you a security researcher studying techniques used to protect banks and governments?

At that time, Micay was an open-source developer, security engineer, and security researcher; his work did not involve studying techniques used to protect banks or governments.

At the time, did you use your free time to experiment with applying the techniques you were studying to the fast-growing mobile space?

The idea of a hardened mobile OS was not novel; several projects existed or were being discussed. Micay chose to invest his free time in his own open-source implementation after discussions with McGrady. McGrady had a minor, short-lived involvement, but Micay built the initial project alone. This all occurred before Donaldson became involved.

Is it accurate that on one occasion, a troll infiltrated Toronto Crypto's group chat and gave it what they called an “impossible” task of decrypting a series of messages? Did you eagerly accept the challenge and decrypt them with ease?

Micay has no recollection of that event / was not personally involved.

Around 2014, did Donaldson ask you to join him in a venture addressing Android's security problems?

In late 2014, Donaldson and McGrady contacted Micay about forming a company around Micay's existing hardened mobile OS project. While Micay's work was open source, and thus available for anybody to use and improve upon, Donaldson wanted to sell support services around it, as well as telephone handsets with Micay's OS pre-loaded. McGrady and Donaldson proposed calling the company Copperhead, and suggested that Micay market his work as CopperheadOS.

Micay agreed to participate only on the explicit understanding that he would retain control over the open source project's development, licensing, copyrights, social/media accounts (GitHub, Twitter, Reddit), and donations.

Due to conflicts between McGrady and Donaldson between 2014-2015, McGrady stepped away before Copperhead was incorporated in November 2015. Prior to McGrady's departure, Micay had very little contact with Donaldson. At that point Micay's hardened mobile OS project had been launched (as “CopperheadOS”) and was using infrastructure which had been set up for the new company.

Was the plan to split everything equally, with Donaldson as CEO and you as chief technology officer?

The original plan called for the company to be split three ways between Micay, McGrady, and Donaldson. With McGrady's departure, Donaldson appointed himself Copperhead's Chief Executive Officer and sole director upon incorporation. Micay and Donaldson became co-equal 50% shareholders.

Although Donaldson sometimes described Micay as Copperhead's “Chief Technology Officer,” Micay never signed an employment agreement with Copperhead, never accepted a common-law offer of employment, was not paid a regular salary, and did not agree to serve as a fiduciary of the company.

Was your flagship product CopperheadOS? Was it an open source operating system focused on Android hardening? Did CopperheadOS protect mobile data by adding layers of security on top of the stock Android OS?

The project that took the name “CopperheadOS” existed prior to the company and was an open source hardened mobile OS. Eventually CopperheadOS was renamed to the Android Hardening Project, and then GrapheneOS. These were renames, not rewrites or forks - they are all the same project.

Did Donaldson take on a diverse array of IT jobs in the early years of the company? Are some examples of that work fixing printers and recovering hacked WordPress websites? Did this fund your work on the operating system?

Micay's improvements to the underlying Android system influenced or were explicitly adopted by the AOSP, resulting in the payment of bounties from Google to Micay.

More significantly though, Micay's open source project began receiving substantial community donations. Micay intended those donations to fund additional contributors and necessary infrastructure, taking only a minimal amount for personal living expenses.

When the company failed to generate sufficient revenue, Micay agreed to temporarily share a portion of the project's donations with Donaldson, so Donaldson could continue working on the company.

While Donaldson was the face of the operation, were you spending most of your time hunting vulnerabilities in Android and patching them in CopperheadOS?

Donaldson was never the face of Micay's open source project. He was only the face of the company towards businesses. Micay managed the social media account for the open source project and built a following for it. Micay did most of the talking to security engineers / researchers. Micay was also the one writing content about it, helping users and much more.

Did you also spend time troubleshooting for the userbase?

Micay spent a significant portion of his free time answering users' questions and troubleshooting issues.

Did you feel it was your duty to support anyone interested in the project? Is this in part because you believe in the philosophy of open source and helping everyone have free access to mobile security? Did you spend time helping users even at the expense of your own well-being?

Micay cares deeply about his open source project, which is why he put so much time and effort into it, often at the expense of his own health and well-being.

That being said, he did not necessarily feel a sense of duty - Micay also dedicated much time to helping people with Arch Linux and Rust.

Were you a longtime contributor to projects like Linux's grsecurity and Mozilla's Rust programming language?

Micay only made minor contributions to Linux's grsecurity. His main work related to it was packaging and integrating it into Arch Linux, as well as testing and dealing with bugs.

Micay worked on Mozilla's Rust programming language as a full-time volunteer for about a year.

For the first two years of Copperhead's operation, was everything someone needed to download, install, or modify it available online?

Yes, for free, and for any purpose.

At this time, was the goal to make money from selling tech support that prioritized paying users?

The initial goal of the company was to engage in security consulting, with income generated from services unrelated to Micay's open source project.

After those income streams failed to materialize, new approaches were explored, including selling devices preloaded with Micay's OS, as well as offering paid support and contract work tied to it.

Did the proliferation of CopperheadOS knockoffs, combined with your round-the-clock user support efforts, mean that everyone but the two of you were benefitting from the enterprise?

Micay's open source project has been broadly successful and has generated substantial income through donations. It is reasonable to conclude that an open source project with that level of interest could also generate additional revenue through product and service sales or contract work. In fact, today's ecosystem of companies offering products based on GrapheneOS illustrates that potential. Given that, the company's inability to establish a sustainable business model appears to reflect shortcomings in its management and strategic direction under the stewardship of Donaldson.

Did your and Donaldson's values begin to diverge? Was Donaldson more concerned with making money than you were?

Donaldson began to focus on the idea of changing the nature of CopperheadOS from an open source project to “closed source” software. In his view, this would allow Copperhead to sell licenses to CopperheadOS, since support contracts were not lucrative enough for Donaldson's liking.

Micay consistently rejected these proposals. Donaldson's plan had two fundamental flaws. First, the code had already been licensed to the public under open source licenses. There was no way to “claw back” the licenses under which they had already been released and were being used in the wider world. Moreover, Micay had no interest in writing proprietary software, or software for hire. Second, Donaldson's proposal was fundamentally inconsistent with the collaborative, community-based work that had allowed CopperheadOS to develop in the first place.

Despite these problems, to placate Donaldson, Micay temporarily adopted a “source available” license for his future work on CopperheadOS in September 2016. This license did not apply to previous code / work done, or any contributions to that code from third parties. It applied only to the code released under that “source available” license.

In 2018, matters between Micay and Donaldson came to a head over Donaldson's desire to pursue business deals with criminal organizations, and his attempts to compromise the security of CopperheadOS, including by proposing license enforcement and remote updating systems that would allow third parties to have access to users' phones. As part of this process, Donaldson began to demand that Micay provide Donaldson with the “signing keys” - i.e. the credentials required to verify the authenticity of releases of CopperheadOS. Donaldson advised that, in order to secure certain new business, potential customers required access to the keys.

The keys had been in continuous use by Micay, in his personal capacity, since before the incorporation of Copperhead. However, more importantly, any party with the keys could mark malicious software as “authentic”, and thereby infiltrate devices using CopperheadOS.

Micay was unwilling to participate in that kind of security breach. Since Donaldson had control over certain infrastructure for the open source project, he would be able to incorporate (or hire others to incorporate) the privacy-damaging features described above for all future releases of CopperheadOS. Micay therefore deleted the keys permanently and severed ties with Copperhead and Donaldson.

Micay has since carried on his open source work as GrapheneOS, released under an open source license, incorporating all prior code except the aforementioned “source available” code.

Donaldson told WIRED that you both made the decision to move Copperhead from being open source to having a noncommercial license. Is this accurate? Did that mean that users had to purchase a Copperhead phone to access the OS?

This decision was Micay's alone, but was made to placate Donaldson after sustained pressure from Donaldson.

Micay agreed to apply a temporary non-commercial license. During that period, new users needed to purchase a phone with the OS or build the OS from source to use it; existing users continued to receive updates without paying. The change narrowed who could access the project, conflicted with Micay's goals for broader adoption, and failed to generate sustainable income streams - very few phones were purchased.

Is it accurate that when Copperhead relicensed, the project immediately started hearing from Fortune 500 companies?

No, it's not accurate.

Donaldson secured licensing agreements with several companies and nonprofits, but those agreements committed the project to far more work than it could deliver. Micay was the sole developer and the team lacked the capacity to fulfill many of Donaldson's commitments. Ultimately many of the agreements never progressed beyond an early stage.

Were the most lucrative contracts from defense contractors? Was Copperhead's technology only used to protect defense clients from adversaries, and not for mass surveillance?

No. There was an unsuccessful attempt to secure a large contract with a defense contractor, but the commitments made exceeded the project's capacity. Deliverables discussed would have required dedicated builds and special hardware signing that the team could not realistically support given available resources. Donaldson's proposed approach involved converting Micay's public OS into a version tailored to those requirements, which would have required taking control of Micay's project - something Micay retained authority over. Donaldson pursued deals that depended on restricting or monetizing the project in ways inconsistent with Micay's commitment to returning the project to an open source model.

Between licensing the OS and doing business with defense contractors, did you feel the integrity of your code and your decisionmaking role in the partnership were eroding?

There was one defense contractor attempt that failed early.

The larger issue was that Donaldson pursued revenue by promising deliverables the project couldn't meet, which threatened the integrity of the project and undermined Micay's role and values.

Were you bothered by both the facts that CopperheadOS was no longer available to the masses and that it was starting to serve the very people you wanted to protect users from?

Micay had always intended to go back to open source licensing; he had no interest in writing proprietary software, or software for hire.

In the spring of 2018, did you have sole possession of CopperheadOS's signing keys?

Micay had sole possession of his open source project's signing keys. The company had the option to make separate builds signed with separate keys but never did.

Did things between you and Donaldson devolve when he approached you about a compliance audit? Did he tell you that he needed to know how the signing keys were stored?

We understand that Daniel's recollection was not that James wanted to know more information about how the signing keys were stored, but that he wanted direct access to them.

Did you suspect his request was tied to a deal he was brokering with a large defense contractor? Did you believe this would put the entirety of CopperheadOS' user base at risk?

Yes and yes.

In response, did you post a series of tweets from the CopperheadOS X account—the same account you used to offer tech support—accusing Donaldson of being untrustworthy and “in business with criminals”? Did you say that it was your duty to expose this to the users?

The @CopperheadOS account belonged to Micay's open-source project, not the company; a separate account had previously been created for the company.

Did you accuse Donaldson of spreading misinformation about CopperheadOS, while Donaldson accused you of impacting business opportunities?

Did you ban Donaldson from the CopperheadOS subreddit?

Did Donaldson's lawyers send you a letter on May 14, 2018 requesting your termination?

We understand that the May 14 letter was a request to revise Daniel's role at Copperhead, either by demotion or resignation.

Did the letter claim that there is “no written shareholders' agreement in place, nor any written employment agreements or job descriptions for either of you”? Did it say that because Donaldson was “the sole director of the Corporation and the Chief Executive Officer,” he had the authority to deem the status of the company “unsustainable” and mandate your demotion or immediate termination?

At this point, had Donaldson previously given you multiple opportunities to take paid leaves and regroup? Did you decline those offers?

Micay was never an employee of the company and was not offered a “leave”.

In June 2018, did Donaldson file a claim against you to retrieve CopperheadOS's signing keys and nearly half a million Canadian dollars' worth in damages?

We realize that this question is incorrect. We understand that the June 2018 letter was simply ending Daniel's employment, and that after this, James demanded access to the keys. We understand that the suit demanding $400k in damages was filed later, in 2020.

Did Donaldson tell you at the time that you needed to give up the keys so that the customers could keep using their devices?

See the answer to question #17.

Did you view this as Donaldson's last-ditch effort to cash in on your work before you parted ways?

See the answer to question #17.

Is it fair to say you were livid?

Micay was justifiably disappointed with how everything turned out.

Did you destroy the keys?

See the answer to question #17.

In a Reddit post, did you write: “I consider the company and the infrastructure to be compromised”?

Micay did consider the company and the infrastructure to be compromised.

Without the signing keys, could neither you nor Donaldson make changes to CopperheadOS?

Correct. The OS accepts only updates signed with the proper keys.

After Micay ended his relationship with the company and Donaldson, Donaldson hired contractors to fork Micay's open-source project into a closed-source OS; those efforts repeatedly required new forks as they fell behind and did not produce substantial original work, leaving them dependent on Micay's open source project.

...

Read the original on discuss.grapheneos.org »

9 238 shares, 10 trendiness

How To Make a Fast Dynamic Language Interpreter

This post is about optimizing an extremely simple AST-walking interpreter for a dynamic language called Zef that I created for fun to the point where it is competitive with the likes of Lua, QuickJS, and CPython.

Most of what gets written about making language implementations fast focuses on the work you'd do when you already have a stable foundation, like writing yet another JIT (just-in-time) compiler or fine-tuning an already pretty good garbage collector. I've written a lot of posts about crazy optimizations in a mature JS runtime. This post is different. It's about the case where you're starting from scratch, and you're nowhere near writing a JIT and your GC isn't your top problem.

The techniques in this post are easy to understand - there's no SSA, no GC, no bytecodes, no machine code - yet they achieve a massive 16x speed-up (67x if you include the incomplete port to Yolo-C++) and bring my tiny interpreter into the ballpark of QuickJS, CPython, and Lua.

The tech­niques I’ll fo­cus on in this post are:

To evaluate my progress, I created a benchmark suite called ScriptBench1. This has ports of classic language benchmarks to Zef:

These bench­marks are also avail­able in a wide va­ri­ety of other lan­guages. I found ex­ist­ing ports of these bench­marks to JavaScript, Python, and Lua. For Splay, there weren’t ex­ist­ing Python and Lua ports, so I used Claude to port them.

All experiments run on Ubuntu 22.04.5 on an Intel Core Ultra 5 135U with 32GB RAM, using Fil-C++ version 0.677. Lua 5.4.7 is compiled with GCC 11.4.0. QuickJS-ng 0.14.0 is the binary from QuickJS's GitHub releases page. CPython 3.10 is just what came with Ubuntu.

All ex­per­i­ments use the av­er­age of 30 ran­domly in­ter­leaved runs.

To be clear: for most of this post, I’ll be com­par­ing my in­ter­preter com­piled with Fil-C++ to other folks’ in­ter­preters com­piled with Yolo-C com­pil­ers.

This post starts with a high-level de­scrip­tion of the orig­i­nal AST-walking, hashtable-heavy Zef in­ter­preter, fol­lowed by a sec­tion for each op­ti­miza­tion that I landed on my jour­ney to a 16.6x speed-up.

The orig­i­nal Zef in­ter­preter was writ­ten with al­most no re­gard for per­for­mance. Only two per­for­mance-aware choices were made:

* The value representation is a 64-bit tagged value that may hold a double, a 32-bit integer, or an Object*. Doubles are represented by offsetting them by 0x1000000000000 (a technique I learned from JavaScriptCore; the literature has taken to calling this NaN boxing). Integers and pointers are represented natively, and I'm relying on the fact that no pointer will have a value below 0x100000000 (a dangerous choice, but one that you could force to be true; note that I could have represented integers by giving them a high bit tag of 0xffff000000000000 if I was worried about this). This makes it easy to have fast paths for operations on numbers (because you can detect if you have a number, and what kind, with a bit test). Even more importantly, it avoids heap allocations for numbers. If you're building an interpreter from scratch, it's good to start by making good choices about the fundamental value representation, since it's super hard to change later! 32-bit or 64-bit tagged values are a standard place to start if you're implementing a dynamically typed language.
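As an illustration, the scheme described above can be sketched like this. This is a reconstruction for exposition, not Zef's actual Value class, and it ignores edge cases a real implementation must handle (for example, NaN payloads whose raw bits would overflow when the offset is added):

```cpp
#include <cstdint>
#include <cstring>

// Illustrative sketch of the tagged-value scheme described above; not
// Zef's actual code. Doubles are offset by 2^48 so their encoded form
// is always >= 2^48; int32s live in the low 32 bits; pointers are
// assumed to fall in [2^32, 2^48).
struct TaggedValue {
    static constexpr uint64_t kDoubleOffset = 0x1000000000000ull; // 2^48

    uint64_t bits;

    static TaggedValue fromInt32(int32_t i) {
        // Zero-extend so negative ints still stay below 2^32.
        return {static_cast<uint32_t>(i)};
    }
    static TaggedValue fromDouble(double d) {
        uint64_t raw;
        std::memcpy(&raw, &d, sizeof raw);
        return {raw + kDoubleOffset};
    }

    bool isInt32() const   { return bits < (uint64_t(1) << 32); }
    bool isDouble() const  { return bits >= kDoubleOffset; }
    bool isPointer() const { return !isInt32() && !isDouble(); }

    int32_t asInt32() const { return static_cast<int32_t>(bits); }
    double asDouble() const {
        uint64_t raw = bits - kDoubleOffset;
        double d;
        std::memcpy(&d, &raw, sizeof d);
        return d;
    }
};
```

Classifying a value is just two unsigned comparisons, which is what makes the number fast paths cheap.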

* I used some kind of C++. It’s im­por­tant to pick a lan­guage that al­lows me to do all of the op­ti­miza­tions that lan­guage im­ple­men­ta­tions even­tu­ally grow to have, and C++ is such a lan­guage. Notably, I would not pick some­thing like Java, since there’s a ceil­ing to how many low level op­ti­miza­tions you can do. I would also not pick Rust, since a garbage col­lected lan­guage re­quires a heap rep­re­sen­ta­tion that has global mu­ta­ble state and cyclic ref­er­ences (though you could use Rust for some parts of the in­ter­preter, if you were happy with be­ing mul­ti­lin­gual; or you could use Rust if you were happy with lots of un­safe code).

I also made tons of ex­pe­di­ent choices that were wrong from a per­for­mance en­gi­neer­ing stand­point:

* I used Fil-C++. This did al­low me to move very quickly - for ex­am­ple, I get a garbage col­lec­tor for free. Also, it meant that I spent zero time de­bug­ging mem­ory safety is­sues (Fil-C++ re­ports mem­ory safety vi­o­la­tions with a pretty stack trace and lots of di­ag­nos­tics) or un­de­fined be­hav­ior (Fil-C++ does not have un­de­fined be­hav­ior). Fil-C++ costs about 4x per­for­mance typ­i­cally, so I’m start­ing with that 4x hand­i­cap, on top of all of the other sub­op­ti­mal choices.

* Recursive AST walk­ing in­ter­preter. The in­ter­preter is im­ple­mented as a vir­tual Node::evaluate method that gets over­rid­den in a bunch of places.

* Strings every­where. For ex­am­ple, the Get AST node holds a std::string to de­scribe the name of the vari­able that it’s get­ting, and that string is used each time a vari­able is ac­cessed.

* Hashtables every­where. When that Get ex­e­cutes, the string is used as a key to a std::un­ordered_map, which con­tains the vari­able value.

* Chains of re­cur­sive calls to crawl the scope chain. Zef al­lows al­most all con­structs to be nested and nest­ing leads to clo­sures; for ex­am­ple, class A nested in func­tion F nested in class B nested in func­tion G means that mem­ber func­tions of class A can see A’s fields, F’s lo­cals, B’s fields, and G’s lo­cals. The orig­i­nal in­ter­preter achieved this by re­curs­ing in C++ over func­tions that can query dif­fer­ent scope ob­jects.

That said, those choices allowed me to implement an interpreter for a fairly sophisticated language with very little code. The largest module by far is the parser. Everything else is simple and crisp.

This in­ter­preter was 35x slower than CPython 3.10, 80x slower than Lua 5.4.7, and 23x slower than QuickJS-ng 0.14.0. Let’s see how far we can get by im­ple­ment­ing a bunch of op­ti­miza­tions!

The first op­ti­miza­tion is to have the parser gen­er­ate dis­tinct AST nodes for each op­er­a­tor as op­posed to us­ing the DotCall node with the name of the op­er­a­tor.

a + b

Is iden­ti­cal to this:

a.add(b)

So, the original interpreter would parse a + b to DotCall(a, "add") with b as an argument. That led to slow execution, since every math operation involved a string lookup of the operator's method name:

With this op­ti­miza­tion, we have the parser cre­ate Binary<> and Unary<> nodes. With the help of some tem­plate and lambda magic, these nodes have sep­a­rate vir­tual over­rides for Node::evaluate per op­er­a­tor. These call di­rectly into the cor­re­spond­ing Value fast paths for those op­er­a­tors. Hence, do­ing a + b now re­sults in a call to Binary, which then calls Value::add.
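A minimal sketch of this shape (names like Node and Binary follow the post; the lambda plumbing is my guess at the "template and lambda magic", not Zef's actual code):

```cpp
#include <memory>

// Hypothetical sketch of per-operator nodes; Zef's real code differs,
// but the shape is: one Binary<> instantiation per operator, each with
// its own evaluate() override that calls the fast path directly.
struct Value { double num; };

struct Node {
    virtual ~Node() = default;
    virtual Value evaluate() = 0;
};

template <typename Op>
struct Binary : Node {
    std::unique_ptr<Node> lhs, rhs;
    Op op;
    Binary(std::unique_ptr<Node> l, std::unique_ptr<Node> r, Op o)
        : lhs(std::move(l)), rhs(std::move(r)), op(o) {}
    Value evaluate() override {
        // No string lookup of "add": the operator is baked into the node.
        return op(lhs->evaluate(), rhs->evaluate());
    }
};

struct Literal : Node {
    Value v;
    explicit Literal(double n) : v{n} {}
    Value evaluate() override { return v; }
};

// What the parser would emit for '+': the lambda selects the fast path.
inline std::unique_ptr<Node> makeAdd(std::unique_ptr<Node> l,
                                     std::unique_ptr<Node> r) {
    auto add = [](Value a, Value b) { return Value{a.num + b.num}; };
    return std::make_unique<Binary<decltype(add)>>(std::move(l), std::move(r), add);
}
```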

This change is a 17.5% speed-up. At this point, Zef is 30x slower than CPython 3.10, 67x slower than Lua 5.4.7, and 19x slower than QuickJS-ng 0.14.0.

In the pre­vi­ous op­ti­miza­tion, we made op­er­a­tors fast by avoid­ing string com­par­i­son based dis­patch. But that change did­n’t af­fect all op­er­a­tors! The RMW forms of those op­er­a­tors, like:

a += b

still used string based dis­patch. So, the sec­ond op­ti­miza­tion is to have the parser gen­er­ate dis­tinct nodes for each of the RMW cases. What’s hap­pen­ing here is that the parser re­quests LValue nodes to re­place them­selves with an RMW via the mak­eRMW vir­tual call:

* Get - cor­re­sponds to get­ting a vari­able, i.e. just id

Each of these vir­tual calls use the SPECIALIZE_NEW_RMW macro to cre­ate tem­plate spe­cial­ized forms of:

Note that while the rest of the operator specialization (from change #1) uses lambdas to dispatch to the appropriate operator function on Value, for RMWs we use an enumeration. This is a practical choice because of the number of places we have to thread the enum through to handle the fact that we may arrive at an RMW three different ways (get, dot, and subscript). All of this magic then bottoms out in the Value::callRMW<> template function, which dispatches the actual RMW operator call.

This change is a 3.7% speed-up. At this point, Zef is 29x slower than CPython 3.10, 65x slower than Lua 5.4.7, and 18.5x slower than QuickJS-ng 0.14.0. We’re now 1.22x faster than where we started.

The Value fast paths have a small prob­lem: they use isInt(), which uses isIntSlow(), which does a vir­tual call to Object::isInt() to check if we’re re­ally deal­ing with an int.

This is hap­pen­ing be­cause the Zef value rep­re­sen­ta­tion in the orig­i­nal in­ter­preter had four dis­tinct cases:

An IntObject for int64s that cannot be represented as int32s.

In the IntObject case, Value still drove the dispatch for all integer methods, since that allowed the interpreter to have just one implementation of all math operators (and that implementation was always in Value).

This sim­ple op­ti­miza­tion causes Value fast paths to only con­sider in­t32 and dou­ble, and puts all IntObject han­dling in IntObject it­self. Additionally, this change avoids the isInt() call on every method dis­patch.

This is a 1% speed-up. At this point, Zef is 29x slower than CPython 3.10, 65x slower than Lua 5.4.7, and 18x slower than QuickJS-ng 0.14.0. We’re now 1.23x faster than where we started.

The orig­i­nal Zef in­ter­preter uses std::string every­where. Particularly bru­tal cases:

This is un­for­tu­nate be­cause it means that all of these lookups don’t just in­volve hashta­bles - they in­volve hashta­bles keyed by those strings! So we’re hash­ing and com­par­ing strings all the time when ex­e­cut­ing Zef.

This next optimization uses pointers to hash-consed Symbol objects instead of strings for all of those lookups. This is a large change in terms of files impacted, but it's really quite simple:

* There’s a new Symbol class in sym­bol.h and sym­bol.cpp. Symbols can be turned into strings and vice versa. Turning a string into a sym­bol in­volves a global hashtable to per­form hash con­s­ing. This en­sures that pointer equal­ity on Symbol* is a valid way to check if two sym­bols are the same.
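A minimal sketch of hash consing, assuming an intern-table design (the real Symbol class surely differs in its details):

```cpp
#include <string>
#include <unordered_map>

// Sketch of hash-consed symbols. Interning guarantees that equal
// strings map to the same Symbol*, so pointer comparison replaces
// string hashing and comparison on every lookup.
class Symbol {
public:
    static Symbol* intern(const std::string& name) {
        static std::unordered_map<std::string, Symbol*> table;
        auto it = table.find(name);
        if (it != table.end())
            return it->second;
        Symbol* sym = new Symbol(name); // lives forever, like its table entry
        table.emplace(name, sym);
        return sym;
    }
    const std::string& string() const { return name_; }
private:
    explicit Symbol(std::string n) : name_(std::move(n)) {}
    std::string name_;
};
```

Because intern returns the same pointer for equal strings, hashtables keyed by Symbol* hash the pointer instead of hashing and comparing characters.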

* Lots of places where we now refer to pre-cooked symbols instead of string literals, like Symbol::subscript instead of using the string "subscript".

* Lots of places where we just change function signatures to use Symbol* instead of `const std::string&`.

This is an 18% speed-up. At this point, Zef is 24x slower than CPython 3.10, 54x slower than Lua 5.4.7, and 15x slower than QuickJS-ng 0.14.0. We're now 1.46x faster than where we started.

This change delivers a significant win by allowing inlining of important functions.

Almost all of the action in this change is the introduction of the new valueinlines.h header. It is a separate header from value.h because it uses headers that themselves need to include `value.h`.

This is a 2.8% speed-up. At this point, Zef is 24x slower than CPython 3.10, 53x slower than Lua 5.4.7, and 15x slower than QuickJS-ng 0.14.0. We’re now 1.5x faster than where we started.

Sometimes the only way to make your lan­guage im­ple­men­ta­tion bet­ter is to land a mas­sive patch. Don’t let any­one tell you that good en­gi­neer­ing hap­pens in small, easy to di­gest changes. That’s not al­ways the case! It’s cer­tainly not the case if you want to have a fast im­ple­men­ta­tion of a dy­namic lan­guage!

This is a mas­sive change that re­does how Object, ClassObject, and Context work so that ob­jects are cheaper to al­lo­cate and ac­cesses can avoid hashtable lookups. This change com­bines three changes into one:

Previously, each lexical scope allocated a Context object, and each Context object contained a hashtable of name-to-value bindings - i.e. the variables in that scope. Objects were even worse: each object was a hashtable that mapped the classes that the object was an instance of to Context objects. This was necessary because if you have an instance of Bar that descends from Foo, then Bar and Foo could both close over different scopes and they could share the same names for distinct fields (since fields are private by default in Zef). Clearly this is super inefficient! This change introduces the idea of Storage, which holds data at Offsets determined by some Context. So, Contexts still exist, but they are created ahead of time as part of the AST resolve pass; when objects or scopes are created, we just allocate a storage according to the size computed by the corresponding Context.
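Here's a sketch of that split, with hypothetical names mirroring the post (Context, Storage): the resolve pass assigns each variable a fixed slot, so runtime scope creation is one flat allocation with no per-variable hashtable.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

using Value = long long; // stand-in for Zef's real Value

// Built once, during the AST resolve pass. Name lookup happens at
// resolve time; the interpreter only ever uses the resulting offsets.
struct Context {
    std::unordered_map<std::string, int> offsets; // name -> slot

    int declare(const std::string& name) {
        auto it = offsets.emplace(name, static_cast<int>(offsets.size())).first;
        return it->second;
    }
    int offsetOf(const std::string& name) const { return offsets.at(name); }
    size_t size() const { return offsets.size(); }
};

// Allocated at runtime whenever a scope or object is created: one flat
// allocation of the size the Context computed ahead of time.
struct Storage {
    std::vector<Value> slots;
    explicit Storage(const Context& c) : slots(c.size()) {}
};
```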

This is a classic technique that forms the foundation of modern high performance dynamic language implementations. But while this technique is classically discussed in the context of JIT compilers, in this change we'll use it in an interpreter. The idea of inline caches is that given a location in code that does expr.name, we remember the last type that expr dynamically had and the last offset that name resolved to. In this change, the caching is done by placement-constructing a specialized AST node on top of the generic one. There are five parts to this:
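A minimal monomorphic inline cache for a field-access site might look like this (an illustrative sketch, not Zef's placement-constructed nodes):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

using Value = long long; // stand-in for Zef's real Value

struct ClassObject {
    std::unordered_map<std::string, int> fieldOffsets; // slow-path metadata
};

struct Object {
    ClassObject* cls;
    std::vector<Value> storage; // flat, offset-addressed
};

// One cache per "expr.name" site in the AST.
struct GetFieldSite {
    std::string name;
    ClassObject* cachedClass = nullptr;
    int cachedOffset = 0;

    Value load(Object* o) {
        if (o->cls == cachedClass)                  // hit: one compare, one load
            return o->storage[cachedOffset];
        int offset = o->cls->fieldOffsets.at(name); // miss: hashtable lookup
        cachedClass = o->cls;                       // remember for next time
        cachedOffset = offset;
        return o->storage[offset];
    }
};
```

As long as the same class keeps flowing through the site, every access after the first skips the hashtable entirely.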

Say that we have a class Foo inside a lexical scope that has a variable x, and one of Foo's methods wants to access x. And let's say that there are no functions or variables called x inside Foo. We should be able to access x without any checks, right? Well, not quite - someone could subclass Foo and add a getter called x, in which case that access should resolve to the getter, not the outer x. The way that inline caches handle this is by setting Watchpoints within the runtime. In this example, it's the "was the name overridden" watchpoint.

Each of these three fea­tures is large. I chose to im­ple­ment all of them at once be­cause:

* A new object model would not be meaningfully better unless it allowed inline caching to work well. So, I co-developed the object model and inline caches.

* Inline caches would­n’t pro­vide mean­ing­ful ben­e­fit un­less I also had watch­points, be­cause so many cacheable con­di­tions re­quire watch­points.

* The new ob­ject model and watch­points have to work great to­gether.

I started this change by writ­ing a dumb ver­sion of CacheRecipe along with what ended up be­ing the mostly fi­nal ver­sion of Storage and Offsets.

Some of the hardest work involved replacing the old style of intrinsic classes with a new style. Take arrays as an example. Previously, ArrayObject::tryCallMethod implemented all ArrayObject methods by simply intercepting the virtual Object::tryCallMethod call. But in the new object model, Object has no vtable and no virtual methods; instead Object::tryCallMethod forwards to object->classObject()->tryCallMethod(object, ...). So, for Array to have methods, we need to create a class for Array that has those methods. Hence, this change shifts a lot of intrinsic functionality from being spread throughout the implementation to being focused inside makerootcontext.cpp. This is a good outcome, because it means that all of the inline caching machinery just works for native/intrinsic functions on objects!

This mas­sive change has a mas­sive win: 4.55x faster! At this point, Zef is 5.2x slower than CPython 3.10, 11.7x slower than Lua 5.4.7, and 3.3x slower than QuickJS-ng 0.14.0. In other words, Zef com­piled with Fil-C++’s mar­gin of loss against those other in­ter­preters is right around what Fil-C costs (those other in­ter­preters are com­piled with Yolo-C).

We’re now 6.8x faster than where we started.

Before this change, the Zef interpreter would pass arguments to functions using a const std::optional wrapping a vector of argument values. The optional was needed because in some corner cases we have to distinguish between:

o.get­ter

o.func­tion()

In most cases, in Zef, these two things are the same: they are a func­tion call. Here’s an ex­cep­tion:

o.Nest­ed­Class

o.Nest­ed­Class()

The first case gets the NestedClass ob­ject, while the sec­ond case in­stan­ti­ates it.

Therefore, we need to tell if we’re pass­ing an empty ar­gu­ments ar­ray be­cause this is a func­tion call with zero ar­gu­ments, or an empty ar­gu­ments ar­ray be­cause this was a get­ter-like call.

In any case, this is wildly in­ef­fi­cient be­cause it means that the caller is al­lo­cat­ing a vec­tor and then the callee is al­lo­cat­ing an ar­gu­ments scope that is a copy of that vec­tor.

This change in­tro­duces the Arguments type, which is shaped ex­actly like the ar­gu­ments scope that the callee would have al­lo­cated. So, now we have the caller al­lo­cate these di­rectly. This more than halves the num­ber of al­lo­ca­tions needed to make a call:

* Even in Yolo-C++, we’d be halv­ing the al­lo­ca­tions be­cause we’d no longer have to mal­loc the back­ing store of the vec­tor.

* In Fil-C++, the std::op­tional needs to be heap al­lo­cated. Even if we did­n’t have a std::op­tional, pass­ing a const std::vec­tor<>& would be an al­lo­ca­tion be­cause any­thing stack al­lo­cated is heap al­lo­cated.

* It so happened that the callers would reallocate the vector multiple times rather than presizing it.

A lot of this change is just chang­ing func­tion sig­na­tures to take Arguments* in­stead of the op­tional vec­tors.
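A sketch of the idea, with assumed names (the real Arguments layout differs): the caller allocates the object in exactly the shape the callee's argument scope needs, and an explicit flag replaces the optional.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

using Value = long long; // stand-in for Zef's real Value

// The caller builds this in the shape the callee's argument scope
// needs, so the callee copies nothing; the flag replaces the old
// std::optional for telling o.member apart from o.member().
struct Arguments {
    bool explicitCall = false; // true for o.member(), false for o.member
    std::vector<Value> slots;  // becomes the callee's argument scope

    static std::unique_ptr<Arguments> make(bool explicitCall, std::size_t argCount) {
        auto a = std::make_unique<Arguments>();
        a->explicitCall = explicitCall;
        a->slots.resize(argCount); // presized once, never reallocated
        return a;
    }
};
```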

This is a 1.33x speed-up. At this point, Zef is 3.9x slower than CPython 3.10, 8.8x slower than Lua 5.4.7, and 2.5x slower than QuickJS-ng 0.14.0. We’re now 9.05x faster than where we started.

Like Ruby and many other ob­ject ori­ented lan­guages, Zef has pri­vate in­stance fields by de­fault. They are pri­vate in the sense that only that in­stance can see them. Take this code:

class Foo {
    my f
    fn (inF) f = inF
}

This is a class Foo that takes a value for f in its con­struc­tor, and stores it to a lo­cal vari­able scoped just to in­stances. For ex­am­ple, this would­n’t work:

class Foo {
    my f
    fn (inF) f = inF
    fn nope(o) o.f
}

println(Foo(42).nope(Foo(666)))

The o.f ex­pres­sion in nope can­not ac­cess o’s f even though o is of the same type. This is just an out­come of the fact that fields work by ap­pear­ing in the scope chain of class mem­bers. When we do some­thing like o.f, we’re ask­ing to call a method called f. Hence, we get lots of code like:

class Foo {
    my f
    fn (inF) f = inF
    fn f f # method called f that returns local variable f
}

class Foo {
    readable f # shorthand for `my f` and `fn f f`
    fn (inF) f = inF
}

Hence, lots of method calls end up be­ing calls to get­ters. It’s su­per waste­ful to have all of those calls eval­u­ate the AST of the get­ter along with every­thing this en­tails!

So the next change is to spe­cial­ize get­ters.

The heart of this change is in UserFunction, which uses the new Node::inferGetter method to in­fer whether the body of the func­tion is just a get­ter. The im­por­tant bits of this are:
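A sketch of what such inference could look like (names follow the post, but the logic is simplified): a function whose entire body is a single field read reports that field, so calls to it can become direct field loads.

```cpp
#include <string>

// Simplified sketch of getter inference. A body that is just "read one
// field" reports that field's name; anything else defeats the inference.
struct Node {
    virtual ~Node() = default;
    virtual const std::string* inferGetter() const { return nullptr; }
};

struct Get : Node { // reads a variable/field via the scope chain
    std::string name;
    explicit Get(std::string n) : name(std::move(n)) {}
    const std::string* inferGetter() const override { return &name; }
};

struct Call : Node { // any non-trivial body: no inference
    Node* callee = nullptr;
};

struct UserFunction {
    Node* body = nullptr;
    // Non-null means calls to this function can be turned into direct
    // (inline-cacheable) field loads instead of interpreting the AST.
    const std::string* getterField() const {
        return body ? body->inferGetter() : nullptr;
    }
};
```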

...

Read the original on zef-lang.dev »

10 227 shares, 20 trendiness

Free Online Video Resizer & Editor

Powerful video tools that run entirely in your browser. No uploads to servers, no waiting - just private video processing. No install required.


...

Read the original on vidstudio.app »
