10 interesting stories served every morning and every evening.

1 1,549 shares, 64 trendiness

Panic! at the Tech Job Market

"I have the two qualities you require to see absolute truth: I am brilliant and unloved."

ready for another too-long article about personal failure while blaming the world for our faults? let's see where we end up with 11,500 words this time.

this post is sponsored by me trying to not get evicted. funding appreciated: https://github.com/sponsors/mattsta

how are you doing, fellow unemployeds? enjoying riding your bikes midday past the three-piece suits?

so, uh, what's going on?

Basically, all the "free money" went away when the gubbment mandated interest rates go from years of declining-or-near-zero percent to now over 5% (curiously, a 5% increase in the fed rate also caused all credit card rates to go from 9% to 30% over the same timeframe. what world).

Why would interest rates cause jobs to go away? Remember an interest rate is essentially the "price of money" — a higher interest rate means money itself is more expensive. Also with higher interest rates, organizations with millions and billions of cash sitting idle can park their money in safe government-backed interest accounts to grow their balances risk-free instead of taking on risk assets seeking outsized returns.
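
To make the "price of money" point concrete, here's a toy compounding sketch. All the numbers are hypothetical illustrations, not figures from the article:

```python
# Toy sketch (hypothetical numbers): why idle corporate cash stops chasing
# risk assets when the risk-free rate jumps from near zero to 5%.
def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Compound a lump sum once per year at the given rate."""
    return principal * (1 + annual_rate) ** years

idle_cash = 1_000_000_000  # $1B sitting on a balance sheet

# Near-zero era: parking cash earns almost nothing, so even a mediocre
# venture bet beats the ~1% total the safe option returns over 5 years.
low = future_value(idle_cash, 0.0025, 5)

# 5% era: the do-nothing option compounds to roughly +27% over 5 years,
# so a risky startup portfolio has to clear a much higher bar.
high = future_value(idle_cash, 0.05, 5)

print(f"risk-free at 0.25% for 5y: ${low:,.0f}")
print(f"risk-free at 5.00% for 5y: ${high:,.0f}")
```

The gap between those two outputs is, roughly, the hurdle rate every speculative investment suddenly has to beat.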

What counts as risk assets avoided during high interest rate periods? Well, funding companies with uncertain futures is a pretty risky asset. So, at times of high interest rates, the weaker companies collapse, strong companies use high interest rates as an excuse to "clean house" every 10 years, then a couple hundred thousand previously high compensation workers discover there are no jobs for anybody anymore over the next 2-4 years.

By the power of drawing two lines, we see correlation is causation and you can't argue otherwise:

interest rates go up, jobs go down. never a miscommunication. you can explain that.

Why do interest rate increases specifically destroy tech jobs while other sectors like minimum wage part time jobs or construction jobs have been increasing over rising interest rate periods?

Tech companies fall into one of four categories:

* nepo companies where your friends have unlimited money, so you get to live in a fantasy world of building unrealistic unmarketable dreams using family nepo money. You never have to really interface with reality to ever be proven wrong. See things like the VC funded grilled cheese startup or the VC funded baby food dispenser or the nextdoor neighbor VC funded medical fraud or the VC funded megalomania "i will become king of Earth" desk renting company or the VC funded stanford drugs-and-sex crypto club not to be confused with the VC funded berkeley apocalypse cult, etc.

* speculation companies where somebody has an idea, but no product yet and no customers, and also no idea if there's a market for the product, but they get money to try and build an organization to find out if the idea will work anyway.

* initial growth companies where you had an idea, it starts to become popular, so you trade some ownership in exchange for cash/capital funding for growing your ideas into generating as much revenue as possible. This is also where things like the world famous YC incubator and 500 Cats and Techmoan come into play. Initial growth companies are the next step up from a speculation company after products begin experiencing actual customer demand.

fun fact: before the modern "tech accelerator" era, each big city, not named san francisco or boston, had something like a "startup boss" where you'd potentially pay hundreds to thousands of dollars to present your "idea" to the local "angel funding mafia" and then, if you passed, they would give you $50,000 in exchange for taking 80% of your company. This status quo lasted for decades until YC proved the system was essentially fools funding fools. The other fun part: after the "startup mafia" in your city legally owned 80% of your company, they would let you work as founder for about 6-18 months then just fire you and replace you with their own friends. At the first 3 companies I worked at, the CEOs were failed lawyer friends of VCs, installed by the VCs to replace the original founders, due to VCs using exploitative legal trickery at every turn possible.

* stable era companies with a repeatable GTM capability with access to customers and recurring revenue cycles. Stable era companies are essentially what normal people think of as "a company" — a self-supporting corporate entity with growth and stability for all involved (usually).

Each category of company has its own benefits and drawbacks.

Related to interest rates, the less successful a company, the more it relies on VC funding, and during high interest rate periods VC funding tends to slow down or vanish completely. Higher interest rates also mean your customers have more restrictive spending conditions, so your customers will pull back, reduce, or leave too. Interest rate growth kills speculation companies and truncates or collapses initial growth companies into baseline sustenance mode until the economy opens up to take on more risk again.

Differences in company size, funding, revenue, and war chest balances all impact how much compensation lowly employee scum can receive.

For ease of brain math, we're going to roughly normalize fully loaded compensation to a per-day rate instead of per-year because the numbers feel more meaningful on a daily scale (and just remember to divide all compensation numbers in half to account for employment taxes).
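
That normalization is simple division; here's a sketch. The 260-workday count and both example packages are my assumptions for illustration, and the halving is the article's rough heuristic, not a real tax model:

```python
# Sketch of the per-day comp math described above. The workday count and
# both example packages are assumptions for illustration only.
WORKDAYS_PER_YEAR = 260  # ~52 weeks x 5 days

def per_day(yearly_fully_loaded_comp: float) -> float:
    """Normalize a yearly fully loaded comp number to a per-workday rate."""
    return yearly_fully_loaded_comp / WORKDAYS_PER_YEAR

def halved_for_taxes(daily_rate: float) -> float:
    """The article's 'divide in half for employment taxes' heuristic."""
    return daily_rate / 2

hyperscaler = per_day(2_600_000)  # hypothetical big-tech total package
startup = per_day(130_000)        # hypothetical seed-stage salary

print(f"hyperscaler: ${hyperscaler:,.0f}/day -> ${halved_for_taxes(hyperscaler):,.0f}/day after halving")
print(f"startup:     ${startup:,.0f}/day -> ${halved_for_taxes(startup):,.0f}/day after halving")
```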

If you are a fancy imaginary nepo company, there's no limit to what you can pay employees because your entire company is a weird in-group fictional entity. You can be paid $100 per day or $100,000 per day depending on your personal connection to the power structures. There are no real conclusions to be drawn from imaginary companies paying their friends whatever they can.

Nepo companies are the most frustrating because they suck up all the media attention for being outsized celebrity driven fads (and they also set the tone for accelerating unsustainable fad-to-failure cycles).

These companies leave you thinking "i could have made much better products with so much less wasted funding. too bad I wasn't born rich/connected/bro-lyfe." These companies, due to their imbalance of meta-funding-vs-tiny-impact, usually explode in a huge exposé about fraud and corruption leaving the founders no choice but to either fail upward forever or do not pass go.

The smallest form of companies (pre-revenue, pre-growth, pre-customer, barely with an idea) are speculation companies doing what they can to get by every day.

Speculation companies are places to work when you're a crazy child and so ambitious for a juvenile. You won't see Tim Apple going to work for two 21 year old code bros living in Peoria trying to disrupt Google.

Working at pre-revenue low-idea companies is a real loser move if you have any personal marketable experience at all.

These companies will have a pay scale somewhere around 50% to 80% of market rate yet expect 200% to 500% more work than any other professional setting. If you're not living in a $600/month studio apartment, it's not worth your time (unless you have some insider information about the organization being a secret tax avoidance or money laundering acquisition scam. sometimes zero-product zero-revenue companies go from no idea to being acquired for a couple billion dollars in a year or two — this is a common pattern for "academic researchers" to form a fake company, vest their shares, then get acquired on basically a huge tax-advantaged hiring bonus at some big tech).

Initial growth companies have been the ruin of many a poor boy.

Initial growth companies are the worst combination of high-risk, low-reward effort-vs-compensation tradeoffs. Sadly, getting trapped in underperforming initial growth companies is something I never realized was a lifetime risk until way too late to save myself. Now I've got nothing to show of my life of work, while other people who just picked a better company to work at 20 years ago and never left have been growing their wealth by a couple million dollars per year every year for almost their entire career, all working as just some rando middle manager at multi-trillion-dollar companies.

Initial growth companies are often unprofitable and just huffing VC fumes to stay alive waiting until some combination of "hopes and dreams" activates then the company "takes off" and everything just works out.

While you're working at an underfunded, under-compensated, low-growth company just to get by, other people you know will be working at multi-trillion dollar companies making 5x to 50x your compensation for doing the same work or less.

The primary purpose of initial growth companies is only to "make the founders rich" and practically nothing else matters.

There are sub-categories of initial growth companies though:

* growth uptrend, which is potentially interesting if a company is actually working and will survive, because some initial growth companies do turn into real stable era companies (which is the gambler's lie under-performing downtrend companies really hype to all employees: JOIN US WHILE WE ARE SMALL! WE WILL GROW FROM -$12 million income per year to $300 trillion income per year in six months!!! CATCH THE WAVE!).

* growth downtrend, which is the worst for all involved because you will end up in a decaying cycle of low compensation combined with serial company layoffs because the company is just failing, but refuses to outright cease to exist (working at these is called "wasting your life").

* stable but zero-growth, which is interesting but not useful. There is no such thing as a zero-growth low-scale company. If there is no growth, you can't hire or have compensation increases. Likely, if there is no growth, the entire corporate plan will be adjusted into either "coast forever" mode where all employees get reduced to a skeleton crew and 90% of others are laid off so the founders can just collect passive income forever (or sell the company off for parts in a break-even VC acquihire maneuver). Meanwhile, other people you know working in real companies will continue having exponential compensation growth due to their free passive yearly stock allocation on liquid markets.

I guess one rule of thumb is it doesn't make sense to work for companies who aren't listed on the US stock market. The best time to join Apple was 25 years ago. The next best time is today? Who knows. What is it like to join a company where all the co-workers your same age have made $10+ million over the past 4 years while you are joining with nothing?

Stable era companies are long-lived organizations not subject to going bankrupt due to quarterly market trends or government economic policy changes.

* stable stable, which is consistently growing, consistently profitable, and paying employees $5k to $10k per day at current full comp market rates. These are largely flying under the news radar. These companies aren't Google or Apple, but rather some tractor company or heavy manufacturing company just churning out results for years without destabilizing the world. Stable stable companies do that thing where every quarter they "beat expectations" on their stock reports by a coincidental $0.01 just to prove they are always growing.

* stable unstable, stable-qua-ego, which is a combination of a popular company controlled by a popular celebrity CEO figure. These companies tend to be as manic/bipolar/depressed as their CEOs, floating between mega growth phases to mega collapse phases then back to mega growth phases in cycles of 6-18 months each. The growth/collapse phases usually don't impact your $10k to $20k per day compensation unless you get caught in a "Year of Efficiency" as euphemism for laying off 50,000 employees while the CEO continues to spend $100 million per year buying private islands.

* stable neutronium, which is when a company controls one or more sectors of the global economy and just can't be broken. They tend to have stable management, exponentially growing stock prices, and thus exponentially growing compensation for useful employees.

Under the modern tech landscape, stable "hyperscale ultra-growth" companies are paying experienced employees the equivalent of $10,000 to $50,000 per day if we include the value of their exponentially growing yearly stock grants. Meanwhile, back in the real world, other companies argue salaries shouldn't increase for 10 years because "who needs money anyway" or "developers are too expensive" so you are stuck with 10 years of no practical salary growth and no viable passive stock grants (if you can get a job at all).

Big companies are a gift and a curse because with great size comes great ability for outright economic capture manipulation. At a fundamental level, everybody kinda knows big companies aren't the best places for "good" things because the default mindset is "big company bad," so we continue ignoring that corporate exploitative profit capture is the inevitable way of the world.

Tech jobs are paradoxical because everybody agrees on three things:

* yet, every company follows the same hiring process and posts the same job requirements

I think the entire problem of modern tech hiring comes down to the midwit meme:

Let's ignore the IQ axis and just consider "capability points" or some metric for scaling experience and ability.

The key to midwit meme humor is always that the most "advanced" people often use simple solutions indistinguishable from people who don't know what they are doing. Average people are often in the "knows enough to be dangerous" category by over-thinking and over-working and over-processing everything out of lack of more complete experience to discover simpler and cleaner solutions. We find the midwit problem in job interviews all the time where interviewers think they are elite "special evaluators" needing to gatekeep the unwashed hordes of desperate candidates, but interviewers often can't reliably judge or measure people who have better answers than they expect.

According to all the interviews I've failed over the years (I don't think I've ever passed an actual "coding interview" anywhere?), the entire goal of tech hiring is just finding people in the 100 to 115 midwit block then outright rejecting everybody else as too much of an unknown risk.

If you fail the weird mandatory performance-on-demand interviews full of random tasks unrelated to the actual job role, interviewers immediately assume you are a 0 to 55, but they can't actually tell if you are in the 120 to 200 range instead — especially in the case of, well, what if the interviewer has less experience or less knowledge or less ability than the candidate, so the interviewer just can't detect high performing people?

Personally, I've had interviews where the hiring manager seemingly doesn't know how anything works but they are also in charge of the product architecture? You ask why their platform has a dozen broken features when you tried to use it (and it overcharged you by thousands of dollars a month for services not even provided), but you just get blank stares back because the 24 year old "lead senior engineering manager product architect" doesn't actually know how systems, platforms, architecture, networking, dns, ssh, monitoring, usability, observability, reliability, or capacity planning works? Then, of course, you get rejected under some false pretense of "not having enough experience" when you're trying to promote developing fixes to their seemingly decaying platform.

Modern tech hiring, due to industry-wide persistent fear mongering about not hiring "secretly incompetent people," has become a game divorced from meaningfully judging individual experience and impact. Most tech interviews are as relevant to job performance as if hiring a baker required interviewing them about how electron orbitals bind gluten together then rejecting bakers who don't immediately draw a valid orbital configuration.

I remember fully giving up on ever interviewing at Google after an interviewer just barked graph theory questions through a low quality speakerphone with their laptop right next to the microphone so all you could hear was loud typing and an angry man complaining you aren't solving their irrelevant questions fast enough. The entire industry just kinda accepts candidates should have negative and personally degrading interview experiences where candidates are undermined by some vague sense of social superiority from the interviewers. Sure, it would be great to have big tech $30,000 per day comp packages, but they long ago decided to prefer hiring the wrong people passing the right tests instead of actually evaluating people around experience and capability and ambition.

The weirdest part of watching the tech interview landscape change over the past 20 years is it keeps getting worse everywhere all in the same ways.

"big tech" created the concept of the "coding test" as the primary interview criterion. Why? Google was founded under the mindset of being a "Post-Grad-School Grad School" so they demand everybody pass a mini secondary GRE to walk amongst their austere big wrinkle brain ranks. Soon after Google became successful, the founders joked they wouldn't be able to pass a Google interview test anymore. Yeah, real funny guys, making a system so selective for anti-ability where useful people can't even get in the door anymore, great job everybody.

Now, every company from 3 person speculative startups to 50,000 person big tech firms all use the same hiring practices, which seems somewhere between somewhat inefficient and outright overbearing. In what world should I put in the same effort to join a zero-growth, unprofitable company going out of business in 3 years compared to joining a multi-trillion-dollar company paying 30x the compensation of a startup?

Much like how every company copied the google "big tech big interview over 6-12 weeks" cycle, companies have added an additional interview step copied from Amazon: the "behavioral interview" curse (also see: the "bar raiser curse" as well).

As far as I can tell, the "behavioral interview" is essentially the same as a Scientology intake session except, you know, for capitalism instead. You have to answer the same 8 questions at every interview around "so what would you do if you had a conflict at work?" where the interviewer treats you like a 5 year old learning about people for the first time instead of acknowledging you as a professional with 0.5, 1, 2, 3 decades of experience.

The current "behavioral interview" weirdness is somewhat of an offshoot from the original idea about hiring for "culture fit," but traditional "culture fit" evaluation was just personal interviewer vibes around how candidates acted during an interview. We've all seen candidates who are uncooperative or excessively negative with no recourse or just weird in an anti-socially draining way, so clearly pass on actively dangerous people, but attempting to codify "is a person a good person" into call-and-response questioning is a fundamentally broken concept.

The actual goal of any "behavioral interview" or "culture fit" estimation is simple, but nobody ever lays it out. The goal of culture checking is only: determine how a candidate handles the tradeoffs between progress vs. kindness.

A secondary goal of the "behavioral interview" is personality homogenization where companies want to enforce not hiring anybody "too different" from their current mean personality engram. Yet, the tech industry is historically full of weird divergent people doing great things for their own reasons (though, due to just basic population growth over the past 20 years, there are tens of millions more "normie computer people" now offsetting the smaller pool of original "weird computer people"). When you start enforcing personality bubbles outside of what somebody can do, you're just doing some sort of weird economic eugenics thing (make programming weird again!).

The trick with the "behavioral interview" is there are no true good answers. They want to watch you squirm. The answers depend on:

* in your hypothetical scenarios about "resolving disagreements," what is the power dynamic?

Am I asking the CEO why we have 50 sales people but only have 3 developers and I'd like more developers?

Am I criticizing an intern for blowing up the site for the 3rd week in a row when they didn't follow documented deployment guidelines like we've reprimanded them for twice before?

* how are your hypothetical scenarios about "tell me a time you made a mistake and apologized" relevant to anything? I'm not here to interview about my past failures, so we either make up fake "humble brag" failures or tell you about actual failures which will give you more ammunition to think less of us and tank the entire interview process?

It's the equivalent of bad bid-side negotiators starting with "tell me the lowest price you'll accept" which isn't how anything in the world works at all. Interviews are about showing your best side, not trying to micromanage a list of perceived historical faults to a point where you disqualify yourself via your own confessions.

* at what point does asking candidates to expose their raw past failures and arguments and disagreements become just too much information? You don't have a right to view the total perspective vortex of every decision in my life as employment criteria. If you want to guarantee team cohesion, build good teams, position managers so they have awareness of everybody's strengths and weaknesses, and have your managers be experts in conflict resolution.

* I think another goal of "behavioral interviews" is to show how much you enjoy compromise. There are compromise tradeoffs though. Finding middle ground between a broken solution and a working solution doesn't leave you with a working product. Compromise can happen around opinions, but not facts. I always get the feeling interviewers want you to talk about some time you were right then gave up being right for a worse solution just so the other person/department feels better. Are we here to create good products or not?

sorry, I think I just failed your behavioral interview again. Good thing I can just interview at another company. Oh, wait, every company asks the same questions and demands the same answers from the same book on "how to do interview good" and if you deviate from the expected answers in the book, no income 4 u. good luck on your unemployment journey until the employment meta changes again.

Looping back to a point: is there some condition when having every company continue to add more interview steps unrelated to job tasks and experience and capability and ambition and insight will just collapse the entire industry? Sorry, you're not qualified to be a professional software developer because you wore the wrong color shirt to the interview. You should know the color buleruplange is triggering to generation delta, so you clearly are not a culture fit for this job paying $120,000 per hour, and no, we will not be looking over your 25 years of professional experience at all.

Companies seem to forget they are also part of, you know, the economy and people need compensation to, you know, not die, right? If you aren't acting as an economic engine for helping the most people thrive, what is your purpose as a company?

You've seen them. I've seen them. I call them Everything Bagel job descriptions.

They go something like — As a Software Development Engineer (SDE) at Company, Inc, you will be required to:

* truly love the SDLC and agile story t-shirt poker points

* provide company-wide daily status updates on all your work from the previous day and describe what you plan to do in the next 6 hours

* optimize all new code and refactor all existing code for maximum performance

* support other employees who don't know how computers work

* monitor and optimize cost efficiency of all aws usage

* create and maintain and monitor all CI/CD pipelines (github, aws)

* guarantee infrastructure is always running (on-call 24/7 3 weeks per month)

* guarantee infrastructure and application logic is always logged and monitored and alerted and observable

* be responsible for security of all application dependencies (npm, pip, ubuntu, container images)

* be responsible for security of all infrastructure (SOC-2 demands it)

* manage yourself and manage your peers, but you also have an engineering manager and a project manager and the CEO is your skip-level manager and the CEO's brother is also your skip-level manager too

* code all the time and manage your own performance

* continuously document everything so we can replace you with outsourced contractors at any time

* monitor and maintain all 37 javascript SaaS plugins our website uses to track every user click and record mouse movement without the user's knowing consent (plus the marketing and product teams enjoy adding 3 new javascript plugins to the website every month, so you must add them immediately when somebody requests it through a ticket without doing any technical evaluation on the 3rd party scripts or checking fitness for purpose or even if we could replicate the behavior in-house with less than 4 hours effort)

* be grateful for this job and truly appreciate the opportunity to make $300/day in this role because we are all a family here at Company, Inc. (until the next round of snap overnight layoffs at least)

like, my dude, your single job requirements are actually 5 entire departments worth of work to be shared across a total of 20 people. yet, you see single-person job descriptions resembling this all over the place.

At some point, half the industry just gave up on the idea any technical person should specialize in anything. Just make it all up as you go along. It's just typing, how hard can it be? You can't just demand application developers also be part time amateur aws architects and expect good results. Experience in these roles is built over 5+ years at a time through focused work, but half the industry is now corrupted into devops means DEVS DO OPS means OPS REQUIRES NO EXPERIENCE means NOTHING REQUIRES EXPERIENCE so FIRE OPS HIRE DEVS and just 15x ALL JOB RESPONSIBILITIES (without any matching 15x increase in compensation, of course).

Such job descriptions also mean: your job is physically impossible. You will always feel drained and incompetent because you can't actually do everything every day. You will always be behind because each of those bullet points can be multiple days of work per week just on their own (plus, how are you supposed to be productive in 35 different areas requiring months to years of experience if you actually want to be good at each task?). So, from day 1, you will already be about 4 months behind on your expected job responsibilities and you'll never catch up. It turns into an endless game of managers and executives saying you are "underperforming" because you have 18 primary tasks, each primary task requires 4 to 20 hours of effort, and every manager wants their task done within 4 hours. You are set up to fail. What's the point?
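
The arithmetic in that paragraph is easy to check (the task count and hour ranges come from the text above; the 40-hour week is my assumption):

```python
# Back-of-envelope check of the impossible workload described above:
# 18 primary tasks, each requiring 4 to 20 hours of real effort, while
# management expects every task finished within 4 hours.
TASKS = 18
HOURS_LOW, HOURS_HIGH = 4, 20
HOURS_PER_WEEK = 40  # assumption: a standard work week

total_low = TASKS * HOURS_LOW    # best case: 72 hours of work queued
total_high = TASKS * HOURS_HIGH  # worst case: 360 hours of work queued

print(f"real effort: {total_low}-{total_high} hours")
print(f"in 40h weeks: {total_low / HOURS_PER_WEEK:.1f}-{total_high / HOURS_PER_WEEK:.1f}")
# Even the optimistic case is nearly two full weeks of queued work, while
# the stated expectation is that everything lands same-day.
```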

Maybe a point is some companies just shouldn't exist if they can't afford the fully staffed professional teams required to build and maintain their products? The worst secret in tech is amateur developers are happy to act like entry level workers across 20 arbitrary roles for years (in the absence of ever having enough time to focus on building up long-term experience or best practices). You can't get gud if you are always rushed from task to task without any chance of leveling up knowledge and capability through "deep work" as we would historically expect of professionals.

Here's some things I've seen in the "real world" over a couple jobs. Some minor details have been altered to protect the guilty.

These are just to highlight how often companies have completely broken internal practices and don't even know it. The solutions are fairly basic, but you can only see the solutions if you have actual experience knowing how everything works in the first place (and know where experience comes from? 5, 10, 15+ years of actually doing focused work and actually building systems from the ground-up over and over and over again — it takes completely re-building something 3-5 times over anywhere from 3 months to 5 years before you actually start to be good at a task).


Read the original on matt.sh »

2 1,160 shares, 47 trendiness



Read the original on naya.lol »

3 1,105 shares, 40 trendiness

SCALE documentation

SCALE is a GPGPU pro­gram­ming toolkit that al­lows CUDA ap­pli­ca­tions to be na­tively com­piled for AMD GPUs.

SCALE does not re­quire the CUDA pro­gram or its build sys­tem to be mod­i­fied.

Support for more GPU ven­dors and CUDA APIs is in de­vel­op­ment.

SCALE has sev­eral key in­no­va­tions com­pared to other cross-plat­form GPGPU so­lu­tions:

* SCALE accepts CUDA programs as-is. No need to port them to another language. This is true even if your program uses inline PTX asm.

* The SCALE compiler accepts the same command-line options and CUDA dialect as nvcc, serving as a drop-in replacement.

* "Impersonates" an installation of the NVIDIA CUDA Toolkit, so existing build tools and scripts like cmake just work.
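To make "drop-in replacement" concrete, a build that previously used NVIDIA's nvcc might be pointed at SCALE roughly like this. This is a sketch only: the /opt/scale install path and the file names are assumptions, not taken from SCALE's documentation.

```shell
# Hypothetical: compile an unmodified CUDA source with SCALE's
# nvcc-compatible compiler (the install path is an assumption).
/opt/scale/bin/nvcc -O2 vector_add.cu -o vector_add

# Or let an existing CMake build pick up SCALE's impersonated
# CUDA Toolkit instead of NVIDIA's, with no CMakeLists changes:
cmake -B build -DCMAKE_CUDA_COMPILER=/opt/scale/bin/nvcc .
cmake --build build
```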

We val­i­date SCALE by com­pil­ing open-source CUDA pro­jects and run­ning their tests. The fol­low­ing open-source pro­jects are cur­rently part of our nightly au­to­mated tests and pass fully:

The fol­low­ing GPU tar­gets are sup­ported, and are cov­ered by our nightly tests:

The fol­low­ing GPU tar­gets have un­der­gone ad-hoc man­ual test­ing and seem to work”:

We are work­ing on sup­port­ing the fol­low­ing GPUs:

Contact us if you want us to ex­pe­dite sup­port for a par­tic­u­lar AMD GPU ar­chi­tec­ture.

* An nvcc-compatible compiler capable of compiling nvcc-dialect CUDA for AMD GPUs, including PTX asm.

* Implementations of the CUDA runtime and driver APIs for AMD GPUs.

* Open-source wrapper libraries providing the "CUDA-X" APIs by delegating to the corresponding ROCm libraries. This is how libraries such as cuBLAS and cuSOLVER are handled.

Instead of pro­vid­ing a new way to write GPGPU soft­ware, SCALE al­lows pro­grams writ­ten us­ing the widely-pop­u­lar CUDA lan­guage to be di­rectly com­piled for AMD GPUs.

SCALE aims to be fully com­pat­i­ble with NVIDIA CUDA. We be­lieve that users should not have to main­tain mul­ti­ple code­bases or com­pro­mise on per­for­mance to sup­port mul­ti­ple GPU ven­dors.

SCALE's language is a superset of NVIDIA CUDA, offering some opt-in language extensions that can make writing GPU code easier and more efficient for users who wish to move away from nvcc.

SCALE is a work in progress. If there is a miss­ing API that is block­ing your at­tempt to use SCALE, please con­tact us so we can pri­ori­tise its de­vel­op­ment.

There are mul­ti­ple ways to get in touch with us:

* Send us an e-mail at


Read the original on docs.scale-lang.com »

4 1,010 shares, 40 trendiness

Jailbreaking RabbitOS: Uncovering Secret Logs, and GPL Violations

I as­sume by now that most peo­ple have heard of the Rabbit R1.

Critics unan­i­mously agree that it sucks, and some have ac­cused the com­pany of de­lib­er­ate de­cep­tion. Rabbit Inc. re­port­edly ac­cepts re­turns, but cus­tomers are so ea­ger to get rid of their R1s that even new-in-box units are sell­ing for well be­low RRP ($200) on sec­ondary mar­kets.

I spent 2 hours try­ing to get my r1 to do any­thing re­motely use­ful. […]

I’d sell mine, but hon­estly I’d feel bad for the per­son buy­ing it.

I’d give it away, but would feel bad for the per­son re­ceiv­ing it.

Well, here we are!

In this article I'll outline the boot process of the R1, and how (and why) I subverted it to create a "tethered jailbreak" that gives you a root shell on otherwise-stock firmware, all without unlocking the bootloader or making any persistent changes to internal storage.

I'll also talk about my initial findings from poking around inside the "RabbitOS" firmware.

After the headlines caught my attention, I started reverse-engineering a copy of the APK that I found floating around the internet (yes, "RabbitOS" is just an app running in a kiosk-like mode on Android 13 AOSP). There are no "local AI models" or anything like that, so once you understand the API it uses to talk to The Cloud™ you can replace the Rabbit R1 hardware with a small Python script. I reverse-engineered their API, and wrote up my findings (it's nothing very exciting, just JSON over a websocket).

By the way, you might’ve seen head­lines about ex­posed API keys. Those API keys were al­legedly leaked from server-side source code, and were never stored on-de­vice (I can at­test to the lat­ter).

A week or so ago I bought an R1 on eBay for £122 (which is still way more than it’s ob­jec­tively worth). So why did I buy this garbage, in full knowl­edge of its garbage-ness?

Well, in sub­se­quent app up­dates they started ob­fus­cat­ing their code, and I took it per­son­ally! I love a good game of cat and mouse (or tor­toise and hare?), and the game was on. What se­crets are they try­ing to hide from me?

They’re us­ing a com­mer­cial ob­fus­ca­tor, and to be hon­est it’s quite good, mak­ing my purely sta­tic-analy­sis ap­proach te­dious. So, I de­cided it was time to get an R1 in-hand, to poke and prod at. Yes, I’d al­ready fig­ured out the API, but I did­n’t want to get locked out in fu­ture up­dates. Not be­cause I es­pe­cially care about be­ing able to in­ter­ro­gate Rabbit’s mediocre APIs, but be­cause my pride is at stake.

I’d also never looked at the boot se­cu­rity of a mod­ern-ish Android de­vice be­fore, so it was an in­ter­est­ing learn­ing op­por­tu­nity on that front.

During my sta­tic analy­sis of their ob­fus­cated code, I noted logic to de­tect off-the-shelf analy­sis tools like Magisk and Frida (if de­tected, it’d refuse to run). So, I was prob­a­bly go­ing to have to de­velop my own tool­ing. Fun! Of course, I could try to work around their anti-analy­sis de­tec­tions, but that’s much less fun. The ob­fus­cated code also takes steps to ver­ify that it’s run­ning on an R1, as op­posed to any other an­droid de­vice, and I could spoof or patch those checks, but that gets bor­ing (and tends not to be a very fu­ture-proof ap­proach).

In other words I pre­fer to be a re­verse en­gi­neer, rather than an anti-anti-re­verse-en­gi­neer.

The de­vice uses a MediaTek MT6765 SoC, with 4GB of DRAM, and 128GB(!) of eMMC stor­age. The SoC is an in­ter­est­ing choice for a newly de­signed prod­uct re­leased in 2024, since it has known bootrom ex­ploits (since 2019!) The 128GB of stor­age is also a weird choice, since the de­vice does­n’t store much lo­cally. Maybe they in­tended to de­velop lo­cal ML mod­els and gave up. Or maybe it was just sur­plus stock be­ing sold at a dis­count.

R1 own­ers fairly quickly no­ticed that al­though the boot­loader is locked” by de­fault, you can use mtk­client to un­lock and then re­flash it with a custom ROM and/​or root it. It does­n’t even need to use the afore­men­tioned bootrom ex­ploit, be­cause the de­vice is per­mis­sively con­fig­ured. However, I’m not too in­ter­ested in run­ning a cus­tom Android sys­tem im­age, I’m here be­cause I want a closer look at the fac­tory-in­stalled firmware.

Note: Many are call­ing the re­flash­ing process a jailbreak”, and I’m not go­ing to ar­gue with them. Just be aware that if you see some­one talk about jail­break­ing an R1, they might be re­fer­ring to that.

Although the first boot stages are wide open, sub­se­quent stages im­ple­ment Android Verified Boot 2.0. I could un­lock the boot­loader and in­stall Magisk (a root tool which works by patch­ing the boot par­ti­tion), but this has sev­eral is­sues:

* It might break OTA delta up­dates (which, to Rabbit’s credit, are reg­u­lar).

* It might get de­tected by the cur­rent anti-analy­sis code.

* It might get de­tected by fu­ture up­dates that check, for ex­am­ple, ro.boot.ver­i­fied­boot­state (which is set by AVB de­pend­ing on how happy it is).

All three of these prob­lems are workaround-able, but it’d be so much eas­ier if we could just not cause them in the first place. I want to run as much of the vanilla” code as pos­si­ble, with min­i­mally in­va­sive patches to grant me lo­cal root priv­i­leges, so that I can in­spect the app at run­time. The fewer things I change, the fewer things there are for an­noy­ing anti-analy­sis logic to de­tect.

The so­lu­tion I came up with was to write a bootkit” of sorts. Before I tell you how that works, let me ex­plain the de­fault boot process in de­tail. Fair warn­ing, it’s about to get dense.

All the boot chain logic comes from MediaTek, the SoC ven­dor.

The boot process starts in the bootrom (aka brom), which is im­mutably etched into the CPU sil­i­con, and mapped at phys­i­cal ad­dress 0. The bootrom does very ba­sic hard­ware ini­tial­i­sa­tion, and then loads the next stage (the Preloader”) from the eMMC boot0 par­ti­tion, into SRAM. The Preloader is signed, and the bootrom ver­i­fies the sig­na­ture be­fore ex­e­cut­ing it. (Edit: Actually, on the R1 it might not ver­ify it at all. More re­search needed…)

The Preloader ini­tialises DRAM, and then loads 3 im­ages from eMMC GPT par­ti­tions into DRAM:

It ver­i­fies their sig­na­tures, and then jumps to LK.
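As a toy model of that chain of trust: each stage refuses to jump to the next unless the next image verifies. The Python sketch below stands in HMAC for the real RSA-signature scheme rooted in the SoC's efuses, so it illustrates the structure only, not the actual cryptography.

```python
import hmac
import hashlib

ROOT_KEY = b"efuse-root-of-trust"  # stand-in for the cert hash baked into efuses


def sign(image: bytes) -> bytes:
    return hmac.new(ROOT_KEY, image, hashlib.sha256).digest()


def load_stage(name: str, image: bytes, sig: bytes) -> bytes:
    # Each boot stage verifies the next image before jumping to it.
    if not hmac.compare_digest(sign(image), sig):
        raise RuntimeError(f"{name}: signature verification failed, refusing to boot")
    return image


preloader = b"preloader code"
lk = b"lk code"

# bootrom verifies Preloader, then Preloader verifies LK, and so on.
load_stage("bootrom->preloader", preloader, sign(preloader))
load_stage("preloader->lk", lk, sign(lk))

# A tampered image breaks the chain:
try:
    load_stage("preloader->lk", lk + b" patched", sign(lk))
except RuntimeError as e:
    print(e)
```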

Through some process I don’t yet fully un­der­stand, LK jumps to ATF, which sets it­self up and then jumps to GZ, which sets it­self up, be­fore re­turn­ing back to LK, which con­tin­ues the boot process. I haven’t in­ves­ti­gated ATF and GZ in much de­tail so I might be slightly wrong here.

LK is where the interesting stuff happens though. It implements the aforementioned Android Verified Boot, and as part of that, dm-verity, which "provides transparent integrity checking of block devices. dm-verity helps prevent persistent rootkits that can hold onto root privileges and compromise devices. This feature helps Android users be sure when booting a device it is in the same state as when it was last used."

LK loads and verifies the boot GPT partition (from eMMC userdata, not to be confused with eMMC boot0), which contains the Linux kernel and initramfs. If the bootloader is in the "locked" state, it will refuse to boot if verification fails. If the bootloader is "unlocked" it will still boot, but with a big scary warning saying that the device cannot be trusted, and it also sets various flags to inform the soon-to-be-booted kernel of this (aka "orange state"). If dm-verity checks fail, the device won't boot even if the bootloader is unlocked (it displays a warning and says "press the power button to continue", but that doesn't work. This may be a bug!)
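Under the hood, dm-verity is a hash tree over the block device: every read is checked against hashes whose root is covered by AVB's signature. Here is a minimal single-level Python sketch of the idea (real dm-verity uses 4 KiB blocks, multiple tree levels, and an on-disk superblock, none of which is modeled here):

```python
import hashlib

BLOCK = 4  # toy block size; real dm-verity uses 4096 bytes


def blocks(data: bytes) -> list[bytes]:
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]


def build_tree(data: bytes) -> list[bytes]:
    # One hash per data block; real dm-verity hashes these again,
    # level by level, up to a single signed root hash.
    return [hashlib.sha256(b).digest() for b in blocks(data)]


def read_block(data: bytes, i: int, expected_hashes: list[bytes]) -> bytes:
    # Verification happens transparently on every read.
    b = blocks(data)[i]
    if hashlib.sha256(b).digest() != expected_hashes[i]:
        raise IOError(f"block {i} failed verification")
    return b


image = b"system partition"
hashes = build_tree(image)  # trusted: covered by the signed root hash
print(read_block(image, 0, hashes))  # reads verify and succeed

# A single flipped byte in the block device fails on read:
try:
    read_block(b"system partitioN", 3, hashes)
except IOError as e:
    print(e)
```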

Assuming the req­ui­site checks have passed, LK fi­nally de­com­presses and boots the Linux ker­nel, which in turn ex­e­cutes /init from the initramfs, which in turn mounts the other par­ti­tions and does all the other Boot Stuff (which I don’t un­der­stand too well at pre­sent - for my pur­poses I only need to un­der­stand as far as /init).

By the way, it uses the A/B par­ti­tion­ing scheme (so when I said boot ear­lier, that’s re­ally ei­ther boot_a or boot_b de­pend­ing on which slot is cur­rently ac­tive).

Also by the way, boot­loader lock/​un­lock state is stored in the sec­cfg GPT par­ti­tion. The sec­cfg data is just a few flags, along with an en­crypted hash of that data. The hash is en­crypted us­ing the SoC’s hard­ware AES en­gine, act­ing as a sig­na­ture/​MAC of sorts. Relatedly, the last byte of the frp par­ti­tion gov­erns whether boot­loader un­lock­ing is per­mit­ted (e.g. via fast­boot flash­ing un­lock, which would up­date sec­cfg on suc­cess).
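The seccfg scheme described above (a few flags plus an encrypted hash acting as a MAC) can be sketched like this, with Python's hmac module standing in for the SoC's hardware AES engine:

```python
import hmac
import hashlib

HW_KEY = b"device-unique-hw-key"  # stand-in for the SoC's hardware AES engine


def seal_seccfg(unlocked: bool) -> bytes:
    # seccfg is just a few flags plus a MAC of those flags.
    flags = b"\x01" if unlocked else b"\x00"
    mac = hmac.new(HW_KEY, flags, hashlib.sha256).digest()
    return flags + mac


def read_seccfg(blob: bytes) -> bool:
    flags, mac = blob[:1], blob[1:]
    expected = hmac.new(HW_KEY, flags, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, mac):
        raise ValueError("seccfg MAC invalid: flags were tampered with")
    return flags == b"\x01"


blob = seal_seccfg(unlocked=False)
print(read_seccfg(blob))  # False: bootloader reads back as locked

# Flipping the flag without the hardware key fails verification:
try:
    read_seccfg(b"\x01" + blob[1:])
except ValueError as e:
    print(e)
```

This is why `fastboot flashing unlock` has to go through code that can drive the hardware AES engine: without the device key, a modified flags byte can't be given a matching MAC.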

Secure boot chains all have a root of trust. In this case, the root of trust is a cer­tifi­cate hash baked into the CPUs efuses, along with the bootrom code that ver­i­fies it. However, due to the afore­men­tioned kamakiri” bootrom ex­ploit, the first link of the chain is ir­rev­o­ca­bly bro­ken. If we can sub­vert the first stage, we can in prin­ci­ple sub­vert all sub­se­quent stages, no mat­ter how secure” they are in iso­la­tion. This hard­ware is fun­da­men­tally in­ca­pable of hid­ing se­crets from its users (I wish all hard­ware was like that, to be hon­est).

But, we don’t even need to use an ex­ploit here. Both the brom and Preloader boot stages fea­ture a USB boot­loader mode, which in the r1′s case will ac­cept un­signed DA (“Download Agent”) im­ages over USB, and al­low you to ex­e­cute them from mem­ory (from SRAM in the case of brom, and DRAM in the case of Preloader).

So, I wrote my own DA pay­load. It gets loaded into DRAM by Preloader and does the fol­low­ing things, in or­der:

It loads a cus­tom Android boot im­age over USB into DRAM (containing ker­nel and initramfs).

It in­stalls a hook in the very last part of Preloader, just be­fore it jumps to LK.

It jumps back into Preloader to con­tinue the reg­u­lar boot process.

Preloader loads and verifies the tee, gz and lk images from eMMC, as it normally would.

Just as Preloader is about to jump to LK, our hook lands, and we take this op­por­tu­nity to in­stall cus­tom hooks/​patches in LK.

LK con­tin­ues boot as nor­mal, load­ing and ver­i­fy­ing the orig­i­nal boot par­ti­tion from eMMC.

One of our afore­men­tioned LK hooks is to hook mem­cpy. When the boot im­age is get­ting copied from the AVB code over to the boot linux” part of the code (they seem to be sep­a­rate mod­ules), we sub­sti­tute in the cus­tom boot im­age that we ini­tially loaded over USB.

Another LK hook dis­plays a cus­tom mes­sage on the screen, just for style points.

Our cus­tom ker­nel/​initramfs starts boot­ing, while all in­tegrity ver­i­fi­ca­tion checks pass!

LK uses the MMU to provide memory protection, and although the mappings are all identity mappings (virtual address == physical address), it created some headaches for me. I skipped the details above, but I actually have to copy the boot image around multiple times, as different memory ranges are accessible and/or clobbered at different sub-stages of LK's boot process. There's almost certainly scope to simplify this logic, but hey, it works.

At each stage, my gen­eral ap­proach is to let the boot process pro­ceed un­mod­i­fied, let it ver­ify the data that needs to be ver­i­fied, and then sub­sti­tute in my patched data right at the last minute, be­tween ver­i­fi­ca­tion and use. A bit like this:
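Sketched as a toy Python model, the verify-then-substitute pattern looks like this (an illustration of the idea, not LK's actual code):

```python
def boot_flow(verified_image, substitute_image, verify, execute):
    # Normal flow: verify the image, then use what was verified.
    assert verify(verified_image)
    # The hook lands here, between verification and use: the buffer
    # is swapped for our own image after all checks have passed.
    image = substitute_image
    return execute(image)


booted = boot_flow(
    verified_image=b"stock boot.img",
    substitute_image=b"patched boot.img with root shell",
    verify=lambda img: img == b"stock boot.img",    # stand-in for AVB
    execute=lambda img: f"booting: {img.decode()}",
)
print(booted)  # booting: patched boot.img with root shell
```

Because the real verification runs on the real (untouched) data, every check reports success, and the substitution happens in the window between check and use.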

Just to re­it­er­ate, we don’t touch flash stor­age at any point dur­ing this process, the en­tire jailbreak” process stays in mem­ory only. This also means that once the de­vice is re­booted it’s back to a clean slate, which is of­ten use­ful when re­verse en­gi­neer­ing.

For the custom boot image, I used the flashable-android-rootkit project, which is essentially a stripped-down version of Magisk. It replaces the default /init binary in the initramfs with one that injects a maximally privileged user-space service (the "payload"), before continuing the boot process.

The tool used to do the ac­tual boot im­age patch­ing, magisk­boot, also comes from Magisk pro­ject. It’s in­tended to be ex­e­cuted on-de­vice, but that’s not vi­able in my case be­cause (until we’ve jail­bro­ken it at least once) there’s no way to run our own code on the R1. Fortunately the magisk­boot_build pro­ject ex­ists, al­low­ing magisk­boot to be com­piled and ex­e­cuted on reg­u­lar linux sys­tems.

For my pay­load, I wrote a quick-and-dirty TCP bind shell - not very stealthy” (i.e. po­ten­tially de­tectable by the Rabbit app), but I can al­ways im­prove this down the line.

Since I’m send­ing a cus­tom boot im­age, I could in the­ory patch the ker­nel, but I haven’t had a need for that yet.

I could also build an en­tire cus­tom ker­nel from source, but Rabbit Inc. has cho­sen to vi­o­late the GPL2 li­cense and not make the sources avail­able. Of par­tic­u­lar note are their dri­vers for hall-ef­fect scroll wheel sens­ing, and cam­era ro­ta­tion step­per mo­tor con­trol, which are closed-source and yet sta­t­i­cally linked into the GPL’d ker­nel im­age. Violations like this are hugely de­struc­tive to the free soft­ware ecosys­tem, from which com­pa­nies like Rabbit Inc. ben­e­fit.

I started writ­ing my own USB client soft­ware in Python, not be­cause there’s any­thing wrong with mtk­client (which al­ready im­ple­ments every­thing nec­es­sary) but be­cause I wanted to make sure I un­der­stood every­thing as much as pos­si­ble. Once I had it work­ing, I de­cided to port it to js/​Web­Se­r­ial, just for fun.

And now I have a web­page that can jail­break a phys­i­cally-con­nected Rabbit R1: https://​retr0.id/​stuff/​r1_­jail­break/

In the spirit of ter­ri­ble rab­bit-themed puns, I’m nam­ing the jail­break carroot”.

While boot­ing up, it looks like this:

And once it boots, we can log in and have a quick look around:

$ rlwrap nc 1337

# id

uid=0(root) gid=0(root) groups=0(root) context=u:r:rootkit:s0

# getprop ro.boot.verifiedbootstate


As you can see, we’re root, and the sys­tem thinks it’s been booted se­curely, with­out even need­ing to tam­per with sys­tem prop­erty val­ues.

Note, my TCP shell is so bare-bones that there’s no #” prompt by de­fault, I added it here for clar­ity.

The priv­i­leged rootkit” SELinux do­main is set up as part of flash­able-an­droid-rootkit.

In re­search­ing the R1′s boot chain I ben­e­fited from the work of many other re­searchers and de­vel­op­ers who came be­fore me, no­tably:

* bker­ler/​mtk­client - Code for ma­nip­u­lat­ing MediaTek de­vices through the brom/​pre­loader/​DA in­ter­faces, and more. Also in­cludes its own links to fur­ther learn­ing re­sources.

* cy­rozap/​me­di­atek-lte-base­band-re - Baseband-focused, but also in­cludes hard­ware/​boot notes, and links to fur­ther re­sources.

* 吴港南/preloader运行流程–基于MT6765 ("Preloader operation process - based on MT6765") - contains some helpful diagrams and MT6765-specific notes.

* ng-dst/​flash­able-an­droid-rootkit, LuigiVampa92/unlocked-bootloader-backdoor-demo, topjohnwu/​Magisk - These pro­jects and their as­so­ci­ated doc­u­men­ta­tion cover the later stages of the boot process, from /init on­wards.

* RabbitHoleEscapeR1/r1_escape - Tools/instructions for flash­ing custom ROMs” on the R1.

If you look at the iFixit tear­down pho­tos, you can see test pads la­belled TX and RX. These are of course UART test pads, which were in­valu­able dur­ing my re­search. The logic lev­els are 1v8, al­though they ap­pear to be 3v3 tol­er­ant (at least, 3v3 did not blow mine up). At all stages of the boot chain, the de­vice logs de­bug in­for­ma­tion over UART (at 115200 baud dur­ing brom, and 921600 baud there­after).

The Preloader has an an­noy­ing fea­ture that dis­ables UART log­ging un­less the vol­ume-up key is be­ing held. The R1 does­n’t have a vol­ume-up key, so I had to patch pre­loader to dis­able this check (and I can boot a patched Preloader us­ing the bootrom’s USB down­load mode).

I was also able to patch the Linux ker­nel’s com­man­d­line flags to emit ker­nel logs to UART, like so:

earlycon console=ttyS1,921600

These com­bined patches al­lowed me to gather logs for the whole boot process.

During de­vel­op­ment of my jail­break tool, I was able to emit UART logs from my own code, for printf de­bug­ging”.

P.S. I think I spy an unpopulated JTAG header, which I have not yet investigated further.

P.P.S. The test pad next to the reset button (accessible through the SIM slot, closest to the edge of the board) can be pulled to ground during reset to force the device to boot into brom's USB mode.

So, what were they try­ing to hide from us?

To be hon­est I haven’t found any­thing par­tic­u­larly in­ter­est­ing yet. The analy­sis has only just be­gun! A big rea­son why I’m shar­ing my jail­break is the hope that other peo­ple will join me in my analy­sis.

One thing I did no­tice is that they were log­ging every­thing to text files on in­ter­nal stor­age:

:/storage/emulated/0 # ls -al ./Android/data/tech.rabbit.r1launcher.r1/files/logs/
total 7140
drwxrws--- 2 u0_a66 ext_data_rw    4096 2024-07-07 00:52 .
drwxrws--- 3 u0_a66 ext_data_rw    4096 2024-07-04 22:11 ..
-rw-rw---- 1 u0_a66 ext_data_rw  671954 2024-07-05 01:37 2024-07-01.log
-rw-rw---- 1 u0_a66 ext_data_rw 1472020 2024-07-04 23:40 2024-07-04.log
-rw-rw---- 1 u0_a66 ext_data_rw  782800 2024-07-06 16:45 2024-07-05.log
-rw-rw---- 1 u0_a66 ext_data_rw 1747449 2024-07-07 00:52 2024-07-06.log
-rw-rw---- 1 u0_a66 ext_data_rw 2565224 2024-07-07 03:47 2024-07-07.log

At the time (July 7th), I noted this pub­licly in the Rabbitude com­mu­nity dis­cord. I just thought it was funny that they were choos­ing to fill up their 128GB of stor­age space with such ver­bose log­ging.

But as I and others looked closer, and thought about it more deeply, things became concerning. The logs turned out to include:

* Your pre­cise GPS lo­ca­tions (which are also sent to their servers).

* The IDs of nearby cell tow­ers (even with no SIM card in­serted, also sent to their servers).

* The user to­ken used by the de­vice to au­then­ti­cate with Rabbit’s back-end API.

* Base64-encoded MP3s of every­thing the Rabbit has ever spo­ken to you (and the text tran­script thereof).


Read the original on www.da.vidbuchanan.co.uk »

5 1,006 shares, 43 trendiness

We need visual programming. No, not like that.


Most visual programming environments fail to get any usage. Why? They try to replace code syntax and business logic, but developers never try to visualize that. Instead, developers visualize state transitions, memory layouts, or network requests. In my opinion, those working on visual programming would be more likely to succeed if they started with the aspects of software that developers already visualize.

Every few months somebody puts out a very nice visual programming language that looks something like this (this one looks particularly sleek; most of them are much less inviting). The algorithm above is replacing some pseudo-code that looks like:

def merge_sort(a):
    if length(a) == 2:
        if a[0] < a[1]:
            return a
        return [a[1], a[0]]
    [x1, x2] = split_in_half(a)
    sorted_x1 = merge_sort(x1)
    sorted_x2 = merge_sort(x2)
    return priority_merge(sorted_x1, sorted_x2)
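For reference, here is a runnable version of that pseudo-code. The helper functions and the widened base case (so odd-length lists terminate) are my additions, not part of the original snippet:

```python
def split_in_half(a):
    mid = len(a) // 2
    return a[:mid], a[mid:]


def priority_merge(xs, ys):
    # Merge two already-sorted lists into one sorted list.
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i])
            i += 1
        else:
            out.append(ys[j])
            j += 1
    return out + xs[i:] + ys[j:]


def merge_sort(a):
    # Base case widened to len <= 1 so odd-length splits terminate.
    if len(a) <= 1:
        return a
    x1, x2 = split_in_half(a)
    return priority_merge(merge_sort(x1), merge_sort(x2))


print(merge_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```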

Like the example above, the systems I am referring to are trying to replace the code syntax itself. But every time one of these visual programming systems comes out, we think "oh neat!" and never try them. I have never seen any of these visual programming systems even be mentioned while trying to solve problems. Why? Why do we keep circling back to visual programming if nobody ever uses it?

One reason is that we think other, more inexperienced, programmers might have an easier time with visual programming. If only code wasn't as scary! If only it was visual! Excel Formula is the most popular programming language by a few orders of magnitude, and it can look like this:

=INDEX(A1:A4,SMALL(IF(Active[A1:A4]=E$1,ROW(A1:A4)-1),ROW(1:1)),2)

I'll ignore this reason because many of these tools are explicitly meant for experienced developers. They assume that you will install them with npm install or deploy their code to AWS Lambdas.

Why does visual programming fail for developers? Developers say they want "visual programming," which makes you think "oh, let's replace if and for." But nobody ever made a flow chart to read for (i in 0..10) if even?(i) print(i). Developers familiar with code already like and understand textual representations for reading and writing business logic.

Let's observe what developers do, not what they say. Developers do spend the time to visualize aspects of their code, but rarely the logic itself. They visualize other aspects of their software that are important, implicit, and hard to understand. Here are some visualizations that I encounter often in serious contexts of use:

Various ways to visualize the codebase overall.
Diagrams that show how computers are connected in a network.
Diagrams that show how data is laid out in memory.

This is the visual programming developers are asking for. Developers need help with those problems, and they resort to visuals to tackle them.

If you are skeptical about needing these, let me ask you: do you know exactly how your data is laid out in memory? Bad memory layouts are one of the biggest contributors to poor performance. Yet it is very hard to "see" how a given piece of data is laid out and contrast it with the access patterns present in the codebase.

Alternatively, do you know all the external dependencies your code hits when responding to a given HTTP request? Are you sure? Didn't you notice that Bob just added a call to a rate limiter service in the middleware?
Don't worry, you'll learn about it in the next outage.

For both of these questions, the answer is usually "I think I know the answer?" with a creeping fear that maybe you missed something you couldn't see. Sadly, most of those visualizations are made ad-hoc, by somebody who goes out of their way to do it.

This is not to say that the industry has nothing to show for it. Some visualization techniques are integrated into developer environments and heavily used. But these are the exceptions, not the default. Isn't it great when you can pinpoint the performance problem in a flamegraph? We need that for everything else. I'll now cover some of those visualizations, to prime you to consider them for your current work, or even to integrate them into existing developer environments.

This great talk shows many ways to visualize different aspects of a codebase. There are a lot! Here are some that interest me:

Sourcetrail is an open-source tool to visualize codebases (no longer active), written by the speaker in the talk above. Here is a great overview of how the tool helps you navigate a codebase. Sourcetrail solves many of the common design problems of code visualization:

It shows the visualization next to the code. When you hover over the code, it highlights its visual representation. When you hover over the diagram, it highlights the code. When you click on a dependency, it takes you to the code responsible for that dependency (e.g. one function calling another, one module requiring another).

It is smart about hiding information. In codebases, there are often too many connections to visualize at any given time without overwhelming the user. Sourcetrail shows you what it thinks you are looking for first, and asks you to click / hover for more. The UI is designed to let you pull on threads that seem interesting, not to give a bird's-eye view of the codebase.
This is the opposite of the treemap (covered below), which is explicitly designed to give an overview.

But, as the demo shows, it suffers from some of the common problems of this type of visualization:

There are no obvious hooks for "when do I need this". When you are profiling, you think "I need a flamegraph". When do you need this visualization?

It is not in the same tools that I want to use. The demo shows the user switching between Sourcetrail and Sublime. That type of code visualization and navigation wants to live inside of the code editor.

In this set of videos, Jonathan Blow implements a "treemap" to inspect different aspects of a codebase. From watching the videos (I've never used his viewer), in the last version:

The size of each square represents the size of the file.
The color of the square represents a blend of complexity metrics in each file, like depth of nested ifs, depth of nested loops, number of global reads, etc.

With this type of visualization you can plot other (size, color) metrics over the codebase, like (code_size, code_quality), (code_size, heap_access / code_size), etc. Even if you visualize something as simple as just code_size with no color, that can be very useful when onboarding in massive codebases. A typical Big Tech monolith might look something like:

packages/












You probably glanced over it, didn't you? I glanced over it too when I joined a large company. When you git clone this repository and navigate around it, you don't really learn what is there, not even the basic picture. In the example above, it turns out that most of the code is in the racoon/ service (3M LOC) and in second_transitive_dep/ (1M LOC). Everything else has less than 300k LOC and is, by comparison, a rounding error. You can work on this codebase for years without learning these basic facts.

Rich Hickey's History of Clojure paper has some neat visualizations to understand how the Clojure codebase evolved over time. This is a burndown chart generated from the Hercules CLI:

The code that was written each year is represented as an area of a certain color (e.g. red for 2006).
As some of that code is removed / replaced the next year, its area diminishes.
By tracking the colors, we can see the staying power of the code written each year. For example, the code written in 2006 (red) was largely removed or replaced. But the code written in 2011 (green) has stayed almost untouched since then. This is true for most years!

If you ever use AWS, you'll see that its documentation is full of diagrams like this one:

I think the diagram is very clear. It shows you all the "services" involved and their connections. In this case, if you know what each of them does, it is obvious how they relate to each other. (If you don't, you have to go read about each one.)

Throughout my career, I made one of these topology diagrams for every single team I've belonged to. A few lessons from making them:

As new people joined, I started from the last diagram I had made (6 months old or so on average), which made it easier.
There was also some change from last time. I missed something important each time I made the diagram. As far as I can tell, it was the most important technical artifact I ever transmitted to new people on the team. Idea: if you use gRPC service definitions, can you generate these diagrams from them?
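To sketch that idea: given a map of which service calls which (the service names below are made up, and in practice the map could be scraped from gRPC service definitions or a service registry), emitting Graphviz DOT for a topology diagram takes only a few lines:

```python
def topology_dot(deps: dict[str, list[str]]) -> str:
    # deps maps each service to the services it calls.
    lines = ["digraph topology {"]
    for svc, callees in sorted(deps.items()):
        for callee in callees:
            lines.append(f'  "{svc}" -> "{callee}";')
    lines.append("}")
    return "\n".join(lines)


# Hypothetical team topology:
deps = {
    "api-gateway": ["checkout", "rate-limiter"],
    "checkout": ["payments-db", "rate-limiter"],
}
print(topology_dot(deps))  # pipe into `dot -Tsvg` to render
```

Regenerating the diagram from the source of truth on every deploy would also catch the "Bob added a rate limiter call" surprises before the outage does.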

This reddit thread captures someone trying to understand the memory layout of Rc pointers: "Hello, I would like to understand the memory layout while executing the following piece of code, taken from the standard library example:

use std::rc::Rc;

let vec_var = vec![1.0, 2.0, 3.0];
let foo = Rc::new(vec_var);
let a = Rc::clone(&foo);
let b = Rc::clone(&foo);

I have imagined the memory layout like the following drawing. Is that correct? Thank you!" To which a different user replies with this other diagram. Notice that the original code hasn't changed. The only information transmitted in the answer is the corrected diagram. That is because, to the person asking the question, the diagram is a better representation of their mental model. As such, getting a corrected diagram has an effect on their mental model, but looking at the code doesn't. "Hello and thanks a lot for your help, you've really thrown light on it."

This is why visual programming matters: it often matches what people are visualizing in their head (or failing to). Generating a good diagram lights up their head. Idea: Can you generate these diagrams directly from the struct type annotations?

Rust has another way in which "memory is laid out": its ownership model. No matter the shape or size of the data in memory, different references "own" other references, forming a tree. Ownership is better explained by this diagram from Programming Rust than by anything I could ever say. Idea: can you generate ownership trees given Rust source code?

These are pretty standard. The Idris documentation uses one well here, to show what they'll be talking about before they introduce a bunch of novel concepts on how to model state machines in the type system. IMO, this example succeeds at two levels:

If you have any familiarity with state transition diagrams, it takes a second to understand what is going on.
You are most likely not familiar with the code notation for the state machine, so it really helps to have an alternative representation for it.

Idea: Can you generate these diagrams directly from the Idris type annotations? But you don't need to stick to strict UML state machine diagrams.
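In the same spirit as those "generate the diagram from the code" ideas, here is a small sketch that emits a Mermaid state diagram from a plain transition table (the door machine is a made-up example):

```python
def fsm_mermaid(transitions: list[tuple[str, str, str]]) -> str:
    # transitions is a list of (from_state, event, to_state).
    lines = ["stateDiagram-v2"]
    for src, event, dst in transitions:
        lines.append(f"    {src} --> {dst}: {event}")
    return "\n".join(lines)


# A made-up door state machine:
door = [
    ("Closed", "open", "Open"),
    ("Open", "close", "Closed"),
    ("Closed", "lock", "Locked"),
    ("Locked", "unlock", "Closed"),
]
print(fsm_mermaid(door))  # paste into any Mermaid renderer
```

If the transition table were extracted from type annotations or an enum of events instead of written by hand, the diagram could never drift out of sync with the code.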
What are the states used for?Pay­ment Intents are the main ob­ject Stripe uses to rep­re­sent an on­go­ing pay­ment. A lot can hap­pen to a pay­ment, so it ends up hav­ing a pretty com­pli­cated state ma­chine. We made this state ma­chine di­a­gram for it in 2019 with Michelle Bu and Isabel Bensusan. At the time, it was one of the first diagrams” in the doc­u­men­ta­tion. The di­a­gram walks through the dif­fer­ent states a PaymentIntent can be in, with the cor­re­spond­ing UI for each:For a fun ex­am­ple of state ma­chines and how to for­mal­ize them, watch Leslie Lamport’s lec­ture on trans­ac­tion com­mit in TLA+.Client / server re­quest ar­chi­tec­tures can get pretty com­pli­cated. I’ve of­ten seen peo­ple make swim­lane di­a­grams to track them.Here is a good ex­am­ple in the Stripe doc­u­men­ta­tion. It shows all the re­quest / re­sponses that hap­pen when a cus­tomer checks out, saves their pay­ment method, and pays:If you haven’t seen one al­ready:each col­umn rep­re­sents who is mak­ing each re­quest (a com­puter or a per­son)each box is an ac­tion they can ma­keeach ar­rows is the re­quest / re­sponses be­tween them.They are great. You can see the or­der of the re­quests, the de­pen­den­cies be­tween them, who does what, and more. Importantly, when you are cod­ing and you see a code snip­pet by it­self like this one:const r = await stripe.con­firm­Pay­ment();

you can find it its cor­re­spond­ing re­quest and see the con­text in which it hap­pens even if the code sur­round­ing it does­n’t have it.Adri­enne Dreyfus did much of the work to make and stan­dard­ize these di­a­grams through­out the Stripe doc­u­men­ta­tion. Idea: Can you gen­er­ate one of these di­a­grams di­rectly from the end-to-end tests you wrote for your ser­vice? That ex­am­ple does­n’t show the pass­ing of time while the mes­sages are trav­el­ing. Note that the ar­rows run hor­i­zon­tal. But you can use the same di­a­gram to di­ag­nose race con­di­tions or other bugs that de­pend on out-of-or­der or tim­ing prob­lems.Aphyr of­ten uses their own ver­sion of swim­lane di­a­grams to show how dif­fer­ent processes view state in a dis­trib­uted sys­tem. For ex­am­ple, in the Jepsen VoltDB 6.3 analy­sis, they show how dif­fer­ent data­base nodes can send mes­sages to each other:In this ver­sion of the swim­lane the pas­sage of time be­tween re­quests is cru­cial to un­der­stand the prob­lems with the sys­tem.The same post also shows an in­ter­ac­tive di­a­gram that re­sem­bles a swim­lane to vi­su­al­ize the re­sults from the Jepsen tool:Each swimlane” is now hor­i­zon­tal, a num­bered row (10, 11, 18) rep­re­sent­ing a process that is ei­ther read­ing or writ­ing data.The blocks are the process’ op­er­a­tions and they time they take to com­pleteķ- The lines rep­re­sent log­i­cal re­la­tion­ships be­tween the data the processes see. The lines that break lin­eariz­abil­ity are marked Illegal and red.An­other very cool ex­am­ple is in the doc­u­men­ta­tion for Signal’s Double Rachet al­go­rithm. These di­a­grams track what Alice and Bob need at each step of the pro­to­col to en­crypt and de­crypt the next mes­sage:The pro­to­col is com­pli­cated enough for me to think that the di­a­grams are the source of truth of the pro­to­col. 
In other words, I’d ven­ture to say that if an im­ple­men­ta­tion of the Double Rachet al­go­rithm ever does some­thing that does­n’t match the di­a­grams, it is more likely it is the code that is wrong than vice-versa. It is in these do­mains where I think vi­sual pro­gram­ming should ac­tu­ally be the medium of pro­gram­ming but that is for a dif­fer­ent post.
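The Idris idea above, encoding a state machine in the type system so that invalid transitions fail to compile, can be sketched in Rust's typestate style. This is a hypothetical door protocol for illustration, not an example from the post:

```rust
// Typestate sketch: each state is its own type, and transitions
// consume the old state, so an invalid sequence fails to compile.
struct Closed;
struct Open;

impl Closed {
    // Opening consumes the Closed door and yields an Open one.
    fn open(self) -> Open {
        Open
    }
}

impl Open {
    // Closing consumes the Open door and yields a Closed one.
    fn close(self) -> Closed {
        Closed
    }
    fn walk_through(&self) -> &'static str {
        "walked through"
    }
}

fn main() {
    let door = Closed;      // start closed
    let door = door.open(); // Closed -> Open
    assert_eq!(door.walk_through(), "walked through");
    let _door = door.close(); // Open -> Closed
    // _door.walk_through(); // would not compile: Closed has no walk_through
}
```

A state transition diagram of this protocol and the code above carry the same information; the diagram is just the representation most people already have in their head.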


Read the original on blog.sbensu.com »

6 855 shares, 42 trendiness



WASHINGTON, D.C. — Today, the Federal Communications Commission (FCC) voted unanimously to dramatically limit the rates that prison telecoms charge for phone and video calls from prisons and jails. The new order more than halves the per-minute rate caps for all prison and jail phone calls across the country. It also establishes interim per-minute rate caps for video calls, marking the first time the FCC has set rules for prison communication beyond phone calls. Finally, it prohibits all fees, including deposit fees. Worth Rises estimates that the new rules will impact 83% of incarcerated people (about 1.4 million) and save impacted families at least $500 million annually.

Impact summary:

* Improves the well-being and reentry success of incarcerated people, who will have more access to their support systems

* Increases the financial stability of millions of Americans with incarcerated loved ones and strengthens their families

* Reins in and right-sizes the prison telecom industry that has preyed on incarcerated people and their loved ones for decades

* Reverses the mass surveillance project that correctional agencies and their industry partners have expanded rapidly in recent years

The regulations adopted today mark the implementation of the Martha Wright-Reed Just and Reasonable Communications Act, which established the FCC's authority to regulate in-state phone and video calls from correctional facilities, in addition to the out-of-state phone calls it had already regulated. The discussion during today's vote will result in only minor changes to the draft rules released on June 27; the final rules will be released in the coming days.

"For far too long, families around the country have struggled to afford to stay connected to their incarcerated loved ones. Too many have gone into debt just to keep in touch or been cut off entirely. Relief is finally on the horizon," said Bianca Tylek, Executive Director of Worth Rises. "We extend enormous gratitude to FCC Chairwoman Jessica Rosenworcel and the other commissioners for recognizing the toll that predatory communication costs take on incarcerated people, their families, communities, and the public. It's impossible to calculate the magnitude of the impact that this will have on the millions of families with incarcerated loved ones, but we know that it will save them more than $500 million annually."

The primary factors driving the FCC's lower rate caps are the exclusion of security and surveillance costs and the exclusion of commissions. For decades, the cost of an ever-expanding suite of invasive surveillance services has been passed on to incarcerated people and their loved ones. With today's new rules, prison telecoms will be barred from recovering the cost of the majority of such services from ratepayers. This comes after years of advocacy from Worth Rises on the issue.

Tylek continued, "We're thrilled that the FCC has agreed that incarcerated people and their families should not have to pay for their own harmful and invasive surveillance. Surveillance services are separate and distinct from communication services, with a separate and distinct consumer — correctional agencies — and now they'll be treated as such. Under the new rules, prisons and jails will have to pay for the surveillance services they wish to use. Should they choose not to, as the prison telecom industry fears, that's a business problem for it to solve, not the FCC. Beyond the incredible savings the new rules will create, we hope that they will also reverse the growing surveillance apparatus that the correctional agencies and the industry use to spy on our communities."

Below are the new rate caps, required to go into effect in early 2025. Immediately, the 24 state prison systems and roughly 90% of jails that currently charge more will be forced to reduce rates. However, the exclusion of security and surveillance costs and commissions from rates will force rate reductions across the board.

The new rules will impact the prison telecom industry deeply. Loss of revenue is expected to be in the hundreds of millions of dollars (though not equal to the savings to incarcerated people and their families, as the industry's business model shifts and correctional agencies purchase security and surveillance services directly). This comes as the two largest market players, Aventiv and ViaPath, each navigate financial crises. Aventiv recently effectively defaulted on its $1.3 billion debt after a year of failed refinancing efforts. ViaPath was reportedly closing in on a $1.5 billion refinancing deal until news of the regulations killed the deal.

The FCC's new rules also come as an increasing number of states move to make prison and jail communication free. In 2023, Massachusetts, Minnesota, and Colorado each passed legislation to make prison calls free. California made prison calls free in 2022, and Connecticut in 2021. Campaigns are currently underway in more than a dozen other states.

Former FCC Acting Chairwoman and member of the Board of Directors of the Benton Institute for Broadband & Society Mignon Clyburn said, "Today marks a major milestone on the long road to right a market dysfunction that has wronged incarcerated persons and their loved ones for decades. I have nothing but praise for Chairwoman Rosenworcel and her fellow Commissioners who moved this item forward, and the hard-working FCC staff for crafting forceful rules that are faithful to the goals of the Martha Wright-Reed Just and Reasonable Communications Act."

"Today, we cross a major milestone in the decades-long quest to prevent gouging of families when they pay to communicate with their incarcerated loved ones. With the Federal Communications Commission vote today, prices are dropping between 8 and 14 cents per minute from existing voice calling caps, and video communication will benefit from rate protections for the first time. Lower cost communication means more memories and tighter bonds; it means more opportunities to say 'I love you' and 'I miss you' at the moments they can make a huge difference in a child's life. It brings people together instead of breaking them apart under the weight of impossible financial burdens," said Cheryl A. Leanza, policy advisor to the United Church of Christ Media Justice Ministry. "The journey to this day has been decades in the making, from Mrs. Martha Wright-Reed, who was willing to put her name on a lawsuit more than 20 years ago, to bi-partisan champions in Congress, hard-working agency leaders and staff, fellow advocates, and most important: the countless family members and incarcerated people who took the time to tell their stories, write postcards, contact their legislators, or attend a Federal Communications Commission listening session. It all really does make a difference."

"Today, 18 months from the passage of the Martha Wright-Reed Just and Reasonable Communications Act of 2022, the Federal Communications Commission took action to end predatory prison phone rates. After a decade of advocacy, incarcerated individuals and their loved ones will be able to communicate with each other without the barrier of burdensome fees. The absence of these predatory rates is life-changing, potentially lowering rates of depression, isolation, and loneliness incarcerated individuals experience while in prison and helping them succeed when reentering their communities," said Koustubh "K.J." Bagchi, Vice President of the Center for Civil Rights and Technology at The Leadership Conference on Civil and Human Rights.

"Incarcerated people and their loved ones have paid an unconscionably high price for the greed of the companies who provide telecom services to prisons, jails, and detention centers. Thankfully, that's about to change," said Heather Franklin, Internet Campaign Director at Free Press. "The ability for incarcerated people to maintain regular communication with loved ones, legal counsel, and clergy is a human-rights issue. Incarcerated people should be able to connect and communicate without being subject to extortionate rates. We're grateful for the work of FCC Chairwoman Jessica Rosenworcel, the other commissioners, agency staff, and decades of work by advocates across the country who've made today's strides possible."

"The FCC's action today will help to ensure that families no longer have to pay inflated, predatory rates to speak to their incarcerated loved ones," said Ariel Nelson, staff attorney at the National Consumer Law Center, who leads its Criminal Justice Debt & Reintegration Project. "We are grateful the Commission has also put a stop to two enduring areas of consumer abuse, outrageous fees and site commissions, bringing meaningful financial relief and connection to families across the country."

"With today's order, the FCC affirmed what our movement has been saying for decades: The families of incarcerated people should not be shouldering the costs of their loved ones' incarceration," said Peter Wagner, Executive Director of the Prison Policy Initiative. "Capping the costs of all phone and video calls, as the FCC has done, will bring relief to millions of families, and other provisions in its order will prevent the exploitation of consumers through fees and deceptive pricing. We are glad to see the FCC take these steps toward true consumer protection for incarcerated people."

"For years, incarcerated people unable to afford exorbitant call rates have been denied the ability to easily communicate with their loved ones, their communities, and even their attorneys. These excessive communications costs hurt incarcerated people as well as their families and communities. By requiring just and reasonable rates, the FCC is working to keep families together and to ease the return to society of incarcerated people. We commend Chairwoman Rosenworcel for her leadership in keeping incarcerated people connected and applaud the FCC for requiring reasonable rates for these vital communications," said Al Kramer, senior fellow at Public Knowledge.

"We applaud the leadership of Chairwoman Rosenworcel and the Federal Communications Commission for taking this important and monumental action. The FCC's order serves as a transformative win for incarcerated people and their loved ones, who for far too long have had to grapple with outrageous rates," said Rebekah P. Goodheart, Jenner & Block LLP, pro bono counsel to the Wright Petitioners. "Thanks to the tireless advocacy of various groups and civil rights organizations, much-needed relief will be provided."

"It's exciting to see that the Federal Communications Commission is taking steps to enact the Martha Wright-Reed legislation to lower the cost of prison phone calls," said Brandon Tucker, Senior Policy Director at Color Of Change. "This is a major step in prison justice. Phone call costs have been predatory, and it has never made sense for families of incarcerated people to pay high amounts of money to stay in contact with their loved ones. The system was rooted in corporate greed and not rehabilitation. It is unacceptable that families have gone into debt accepting phone calls from jails and prisons. Thank you to Chairwoman Jessica Rosenworcel and the other commissioners for rectifying this misstep in our judicial system."

Worth Rises is a non-profit advocacy organization dedicated to dismantling the prison industry and ending the exploitation of those it targets. Follow @WorthRises on Twitter, Instagram, and Facebook.



Read the original on worthrises.org »

7 838 shares, 34 trendiness


Say hello to tinyPod. Everything you need, in the palm of your hand.

Your phone away from phone. What if sometimes you could just… leave your phone at home? With all the essentials to stay connected, tinyPod makes that actually possible. Transform the most powerful mini computer in the world into something new: a form factor made to function.

What goes around, comes around! Rediscover the delight of tactile scrolling with tinyPod's physical scroll wheel. And yes, it actually scrolls.

Start charging with a snap using your same Apple Watch magnetic charging cable. With wrist detection turned off, get even more mileage out of your battery. Are you getting it? Tap to pay or name drop your contact details, just like you would with your phone.

Your AirPods' new best friend. The ultimate music player and then some. With 32GB, download music, podcasts & audiobooks for offline listening delight.

Want to go extra minimal? Try tinyPod lite. Go lite if you don't want the scroll wheel, with a durable yet flexible case for any Apple Watch.

Compatible with your Series 9, 8, 7, SE, 6, 5, or 4. Don't have Apple Watch? Get one for a steal >


Read the original on thetinypod.com »

8 781 shares, 30 trendiness

NVIDIA Transitions Fully Towards Open-Source GPU Kernel Modules

With the R515 driver, NVIDIA released a set of Linux GPU kernel modules in May 2022 as open source with dual GPL and MIT licensing. The initial release targeted datacenter compute GPUs, with GeForce and Workstation GPUs in an alpha state.

At the time, we announced that more robust and fully-featured GeForce and Workstation Linux support would follow in subsequent releases and the NVIDIA Open Kernel Modules would eventually supplant the closed-source driver.

NVIDIA GPUs share a common driver architecture and capability set. The same driver for your desktop or laptop runs the world's most advanced AI workloads in the cloud. It's been incredibly important to us that we get it just right.

Two years on, we've achieved equivalent or better application performance with our open-source GPU kernel modules and added substantial new capabilities:

We're now at a point where transitioning fully to the open-source GPU kernel modules is the right move, and we're making that change in the upcoming R560 driver release.

Not every GPU is compatible with the open-source GPU kernel modules.

For cutting-edge platforms such as NVIDIA Grace Hopper or NVIDIA Blackwell, you must use the open-source GPU kernel modules. The proprietary drivers are unsupported on these platforms.

For newer GPUs from the Turing, Ampere, Ada Lovelace, or Hopper architectures, NVIDIA recommends switching to the open-source GPU kernel modules.

For older GPUs from the Maxwell, Pascal, or Volta architectures, the open-source GPU kernel modules are not compatible with your platform. Continue to use the NVIDIA proprietary driver.

For mixed deployments with older and newer GPUs in the same system, continue to use the proprietary driver.
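The guidance above reduces to a small decision table per GPU architecture; a sketch (architecture spellings as listed above, with mixed deployments handled separately as the post describes):

```rust
// Sketch of the driver-selection rules described above, one GPU at a time.
fn recommended_driver(arch: &str) -> &'static str {
    match arch {
        // Open kernel modules are required; proprietary is unsupported.
        "Grace Hopper" | "Blackwell" => "open (required)",
        // Open kernel modules are recommended.
        "Turing" | "Ampere" | "Ada Lovelace" | "Hopper" => "open (recommended)",
        // Open modules are not compatible; stay on the proprietary driver.
        "Maxwell" | "Pascal" | "Volta" => "proprietary",
        _ => "unknown: run the detection helper script",
    }
}

fn main() {
    assert_eq!(recommended_driver("Blackwell"), "open (required)");
    assert_eq!(recommended_driver("Ampere"), "open (recommended)");
    assert_eq!(recommended_driver("Pascal"), "proprietary");
}
```

For systems mixing rows of this table, the post's rule applies: continue to use the proprietary driver.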

If you are not sure, NVIDIA provides a new detection helper script to help guide you on which driver to pick. For more information, see the Using the installation helper script section later in this post.

In general, the default version of the driver installed by all installation methods is switching from the proprietary driver to the open-source driver. There are a few specific scenarios that deserve special attention:

When you are installing CUDA Toolkit using a package manager (not the .run file), installation metapackages exist and are commonly used. By installing a top-level cuda package, you install a combination of CUDA Toolkit and the associated driver release. For example, by installing cuda during the CUDA 12.5 release time frame, you get the proprietary NVIDIA driver 555 along with CUDA Toolkit 12.5.

Previously, using the open-source GPU kernel modules meant that you could not use the top-level metapackage. You would have had to install the distro-specific NVIDIA driver open package along with the cuda-toolkit-X-Y package of your choice.

Beginning with the CUDA 12.6 release, the flow effectively switches places (Figure 2).

If you install CUDA or the NVIDIA drivers using the .run file, the installer queries your hardware and automatically installs the best-fit driver for your system. UI toggles are also available to select between the proprietary driver and the open-source driver, as you choose.

If you're installing through the CUDA .run file and using the ncurses user interface, you now see a menu similar to the following:

If you're installing through the driver .run file, you see a similar choice presented (Figure 3).

You can also pass overrides using the command line to install without the user interface, or if you are using automation tools such as Ansible.

As mentioned earlier, if you're unsure which driver to pick for the GPUs in your system, NVIDIA created a helper script to guide you through the selection process.

To use it, first install the nvidia-driver-assistant package with your package manager, then run the script:

For a consistent experience, NVIDIA recommends that you use package managers to install CUDA Toolkit and the drivers. However, the specific details of which package management systems are used by different distributions, or how packages are structured, can vary depending on your particular distribution.

This section outlines the specific details, caveats, or migration steps needed for various platforms.

To upgrade using the cuda metapackage on Ubuntu 20.04, first switch to open kernel modules:

To upgrade using the cuda metapackage on dnf-based distros, module streams must be disabled:

Run one of the following commands:

For simplification, we've condensed the package manager recommendations in table format. All releases beyond driver version 560 and CUDA Toolkit 12.6 will use these packaging conventions.

For more information, see NVIDIA Datacenter Drivers.

Windows Subsystem for Linux (WSL) uses the NVIDIA kernel driver from the host Windows operating system. You shouldn't install any driver into this platform specifically. If you are using WSL, no change or action is required.

The installation of CUDA Toolkit remains unchanged through package managers. Run the following command:

For more information about how to install NVIDIA drivers or the CUDA Toolkit, including how to ensure that you install the proprietary drivers if you're unable to migrate to the open-source GPU kernel modules at this time, see Driver Installation in the CUDA Installation Guide.


Read the original on developer.nvidia.com »

9 738 shares, 23 trendiness

Global IT chaos triggered by software glitch eases slightly

In what will go down as the most spec­tac­u­lar IT fail­ure the world has ever seen, a botched soft­ware up­date from cy­ber­se­cu­rity firm CrowdStrike Holdings crashed count­less Microsoft Windows com­puter sys­tems glob­ally.

Microsoft and CrowdStrike rolled out fixes and sys­tems were be­ing re­stored, though it’s ex­pected to take time to com­pletely re­cover from the dis­rup­tions.


Read the original on www.afr.com »

10 698 shares, 27 trendiness

veggiedefender/open-and-shut: Type in Morse code by repeatedly slamming your laptop shut

Type in Morse code by repeatedly slamming your laptop shut

* Use a battle-tested encoding trusted by pilots, submariners, and amateur radio nerds

* Closing your laptop now types the letter E instead of locking
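The core trick is classic Morse timing: a short lid closure is a dot, a long one a dash, and a pause ends the letter, which is why a single quick slam types E. A minimal sketch of that decoding step (the 250 ms threshold and the truncated lookup table are assumptions for illustration, not the project's actual values):

```rust
use std::collections::HashMap;

// Sketch: turn a sequence of lid-closed durations (in ms) into Morse
// symbols and decode them into a letter. The 250 ms dot/dash threshold
// is hypothetical, and the table covers only a few letters.
fn decode(durations_ms: &[u64]) -> Option<char> {
    let symbols: String = durations_ms
        .iter()
        .map(|&ms| if ms < 250 { '.' } else { '-' })
        .collect();
    let table: HashMap<&str, char> =
        [(".", 'E'), ("-", 'T'), (".-", 'A'), ("...", 'S'), ("---", 'O')]
            .into_iter()
            .collect();
    table.get(symbols.as_str()).copied()
}

fn main() {
    // One quick slam is a single dot: the letter E, as the README says.
    assert_eq!(decode(&[100]), Some('E'));
    assert_eq!(decode(&[100, 100, 100]), Some('S'));
}
```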

Note: For wayland support, check out the wayland branch.

Clone this repository and copy files into /etc/acpi

Acpid needs to know how and when to run our scripts. Create a file /etc/acpi/events/lm_lid with the following contents:
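An acpid event rule for the lid switch generally looks like the following sketch; the exact event pattern and the handler path are assumptions here, not necessarily the repo's actual file:

```
# /etc/acpi/events/lm_lid (sketch; handler path is an assumption)
event=button/lid.*
action=/etc/acpi/lid.sh %e
```

acpid matches the `event=` regex against incoming ACPI events and runs the `action=` command, passing the event text in place of `%e`.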

Temporarily: Run systemd-inhibit --what=handle-lid-switch cat and press Ctrl+C when you're done.

Permanently: Add HandleLidSwitch=ignore to /etc/systemd/logind.conf and either run sudo systemctl restart systemd-logind.service (this will log you out), or reboot your computer.
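In logind.conf the setting lives under the [Login] section:

```
# /etc/systemd/logind.conf
[Login]
HandleLidSwitch=ignore
```

With this in place, systemd-logind stops suspending or locking on lid close, leaving the acpid handler free to treat closures as Morse input.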


Read the original on github.com »
