10 interesting stories served every morning and every evening.

Introducing Claude for Small Business

www.anthropic.com

We’re launch­ing Claude for Small Business—a pack­age of con­nec­tors and ready-to-run work­flows that put Claude in­side the tools small busi­nesses de­pend on—to help small busi­ness own­ers take full ad­van­tage of AI and cross off items on the to-do list.

Small busi­nesses ac­count for 44% of U.S. GDP and em­ploy nearly half the pri­vate-sec­tor work­force, but their adop­tion of AI has lagged be­hind larger en­ter­prises. Tools and train­ing are rarely tai­lored to the ways small busi­nesses op­er­ate, and as a re­sult their use of­ten stops at the chat win­dow. As part of our pub­lic ben­e­fit mis­sion, we are com­mit­ted to help­ing busi­ness own­ers har­ness AI more fully and ef­fec­tively for their most im­por­tant work.

Claude for Small Business is a toggle install that puts Claude to work inside the tools small business owners already use: Intuit QuickBooks, PayPal, HubSpot, Canva, Docusign, Google Workspace, and Microsoft 365. From these tools, it can plan payroll, close the month, run a sales campaign, chase invoices, and more.

“Small businesses make up nearly half the American economy, but they’ve never had the resources of bigger companies. AI is the first technology that can finally close that gap, which is why we’re launching Claude for Small Business, alongside training and partnerships to make sure AI shows up for the entrepreneurs and communities who need it most. Claude for Small Business runs inside the tools owners already rely on, like QuickBooks, PayPal, and HubSpot, and takes on the work that piles up after hours, like planning payroll, chasing invoices, or kicking off a marketing project. People run the business, and Claude helps take the late-night work off their plates.”

—Daniela Amodei, Co-founder and President of Anthropic

How it works

Toggle on Claude for Small Business in­side Claude Cowork, con­nect the tools you al­ready use, and pick the job. Claude does the work; you ap­prove be­fore any­thing sends, posts, or pays.

It ships with 15 ready-to-run agen­tic work­flows across fi­nance, op­er­a­tions, sales, mar­ket­ing, HR, and cus­tomer ser­vice. It also in­cludes 15 skills built on the re­peat­able tasks own­ers told us slow them down most.

These in­clude:

Planning pay­roll with con­fi­dence. Settle your QuickBooks cash po­si­tion against in­com­ing PayPal set­tle­ments, build a 30-day fore­cast, rank what’s over­due, and queue the re­minders for you to ap­prove and send.

Closing the month with fewer er­rors. Reconcile your books against set­tle­ments, flag what does­n’t match, write a plain-Eng­lish P&L, and ex­port a close packet you can for­ward straight to your ac­coun­tant through Intuit QuickBooks.

Getting a pulse on your busi­ness. Surface your most im­por­tant busi­ness in­sights on a sched­ule, all on one page: view your cash po­si­tion through Intuit QuickBooks, sales trend, pipeline move­ment, this week’s com­mit­ments, and more.

Running your next cam­paign. Find the slow stretch in your rev­enue, an­a­lyze your HubSpot cam­paign per­for­mance, draft the promo strat­egy, and gen­er­ate the as­sets in Canva to pre­pare your next send.

There’s also an in­voice chaser, mar­gin an­a­lyzer, month-end prep­per, tax-sea­son or­ga­nizer, con­tract re­viewer, lead triager, con­tent strate­gist, and more.

Not only could it prob­lem-solve for me, it also showed me prob­lems I did­n’t know I had.

What we used to think were the con­straints are just not con­straints any­more. It’s em­pow­er­ing. Hours of look­ing at stuff that does­n’t mat­ter are gone. I want an en­tire or­ga­ni­za­tion where every­body is us­ing these tools daily.

It’s free­ing up things that used to be a lot of very te­dious cler­i­cal work for more value-add tasks.

Connect to your stack

Running through Claude Cowork, each con­nected tool han­dles a spe­cific job:

PayPal pow­ers set­tle­ments, in­voic­ing, dis­putes, and re­funds in­side Claude.

Intuit QuickBooks handles payroll planning, the monthly close, and cash-flow forecasting, along with tools to help businesses prepare for tax season and the reconciliation work that touches every other system.

HubSpot runs lead triage, cus­tomer pulse, and cam­paign at­tri­bu­tion.

Canva gen­er­ates con­tent for every chan­nel, with the abil­ity to col­lab­o­rate and edit with your team, pub­lish as­sets, and track per­for­mance.

Docusign sends con­tracts out for sig­na­ture, tracks sta­tus, and files the ex­e­cuted copy back where it be­longs.

The full list of skills, au­toma­tions, and con­nec­tors is avail­able on the so­lu­tions page.

Small and mid-mar­ket busi­nesses fuel our economies, and for decades, QuickBooks has been proud to be their trusted fi­nan­cial part­ner. By in­te­grat­ing the agen­tic AI ca­pa­bil­i­ties of our QuickBooks plat­form into Claude for Small Business, we’re pro­vid­ing small busi­nesses with AI-powered au­toma­tions and ex­pe­ri­ences that al­low them to re­move the com­plex­i­ties of man­ag­ing their fi­nances, ac­cel­er­ate pay­roll work­flows, and gen­er­ate data-backed in­sights that help them grow and scale with speed and con­fi­dence.

At HubSpot, our mis­sion is to help scal­ing com­pa­nies grow with AI. We part­nered with Anthropic to build the first CRM con­nec­tor for Claude so go-to-mar­ket teams can ac­cess their HubSpot con­text wher­ever they work. For small busi­nesses, that means get­ting tai­lored an­swers, sum­maries, and vi­su­al­iza­tions di­rectly from their cus­tomer plat­form so they can seg­ment smarter, run bet­ter cam­paigns, and drive more leads.

Small busi­nesses need AI that moves at the speed they do. With Canva pow­er­ing con­tent cre­ation in Claude for Small Business, a busi­ness owner can go from idea to pub­lished, on-brand de­sign in one flow, while AI stream­lines the work in be­tween. It’s part of our vi­sion to make com­plex AI work­flows sim­ple, so we can help peo­ple achieve their goals through de­sign.

Built for trust

In a sur­vey we ran with small busi­ness own­ers, half named data se­cu­rity as their sin­gle biggest hes­i­ta­tion about AI.

With Claude for Small Business:

You stay in the loop. Every task and work­flow you run within Claude is ini­ti­ated by you. You ap­prove the plan first or, when you’re ready, let it run end-to-end.

Your ex­ist­ing per­mis­sions hold. If an em­ployee can’t see some­thing in QuickBooks or Drive to­day, they can’t see it through Claude.

We don’t train on your data by de­fault on our Team and Enterprise Plans.

Full de­tails are in the Trust Center.

AI Fluency for Small Business

Tools aren’t enough on their own. Owners and their teams need to know when and how to use them, and most haven’t had the op­por­tu­nity to learn.

That’s why we part­nered with PayPal on AI Fluency for Small Business, a free on­line course on us­ing AI to run a small busi­ness. It’s taught by own­ers who’ve built it into their own op­er­a­tions—Prospect Butcher Co. in Brooklyn, MAKS TIPM Rebuilders in California, and oth­ers—with step-by-step guid­ance on how to use AI in your busi­ness safely, re­spon­si­bly, and eth­i­cally. We’ll cover top­ics like know­ing which tasks in your busi­ness are right for AI and how you can get started.

“PayPal is proud to partner with Anthropic to help small and medium-sized businesses harness the full potential of the AI-led economy. Together, we are equipping these business owners and entrepreneurs with the tools, expertise, and trusted infrastructure they need to compete and thrive in a rapidly evolving digital economy and creating new opportunities for them to innovate, grow and better serve their customers.” —Amy Bonitatibus, Chief Corporate Affairs Officer at PayPal

The course is avail­able on-de­mand start­ing to­day.

The Claude SMB Tour

Starting May 14 in Chicago, we’re tak­ing Claude for Small Business on the road. The tour is a free, half-day live AI flu­ency train­ing and hands-on work­shop for 100 lo­cal small busi­ness lead­ers per stop. Anthropic and part­ner Tenex.co are host­ing the tour, with lo­cal part­ners at each stop. Attendees get a one-month Claude Max sub­scrip­tion to start in­te­grat­ing AI into their day-to-day work­flows.

Spring stops in­clude: Chicago, Tulsa, Dallas, Hamilton Township, Baton Rouge, Birmingham, Salt Lake City, Baltimore, San Jose, and Indianapolis.

Thank you to the Greater Cleveland Partnership and the National Talent Collaborative for pi­lot­ing the con­cept with us in March. More cities will be added in the fall.

Partnering with small busi­ness–fo­cused non­prof­its

As a pub­lic ben­e­fit cor­po­ra­tion, part of Anthropic’s mis­sion is to make sure the gains from AI reach all peo­ple and com­mu­ni­ties, es­pe­cially those who have his­tor­i­cally been last in line for new tech­nol­ogy. Small busi­ness own­ers—and the lo­cal in­sti­tu­tions that fund and ad­vise them—are ex­actly that au­di­ence. So along­side Claude for Small Business, we’re in­vest­ing in part­ner­ships that put Claude di­rectly in the hands of small busi­ness own­ers and the or­ga­ni­za­tions that help them grow.

We be­lieve AI can mean­ing­fully ex­pand what’s pos­si­ble for the small­est busi­nesses, in­clud­ing solo en­tre­pre­neurs. Together with Workday and the Local Initiatives Support Corporation (LISC), we’re sup­port­ing the Workday Foundation Solopreneurship Accelerator Program, which in 2026 will equip an ini­tial co­hort of 15 as­pir­ing solo­pre­neurs with seed fund­ing from the Workday Foundation, Claude cred­its from Anthropic, and an AI-first en­tre­pre­neur­ship cur­ricu­lum de­vel­oped by LISC.

Small busi­nesses also de­pend on an en­abling en­vi­ron­ment, in­clud­ing ac­cess to cap­i­tal. That’s why we’re part­ner­ing with three Community Development Financial Institutions (CDFIs) that are de­ploy­ing AI in their own op­er­a­tions and ser­vices: Accion Opportunity Fund, Community Reinvestment Fund USA, and Pacific Community Ventures. With Claude cred­its and hands-on tech­ni­cal sup­port from our team, these CDFIs are build­ing tools that help more small busi­nesses get funded. Pacific Community Ventures, for ex­am­ple, is us­ing Claude to power its Radiant Data Hub—a shared re­source for a net­work of CDFIs—to col­lect and syn­the­size voice-based feed­back from its small busi­ness clients and their work­ers to im­prove prod­ucts and ser­vices.

Getting started

To learn more about Claude for Small Business and ac­cess the AI Fluency for Small Business course, get started here.

Related con­tent

Anthropic forms $200 mil­lion part­ner­ship with the Gates Foundation

Read more

Higher us­age lim­its for Claude and a com­pute deal with SpaceX

We’ve raised Claude’s usage limits and agreed to a new compute partnership with SpaceX that will substantially increase our capacity in the near term.

Read more

Agents for fi­nan­cial ser­vices

We’re re­leas­ing ten new Cowork and Claude Code plu­g­ins, in­te­gra­tions with the Microsoft 365 suite, new con­nec­tors, and an MCP app for fi­nan­cial ser­vices and in­sur­ance or­ga­ni­za­tions.

Read more

Video transcript: A message from President Kornbluth about funding and the talent pipeline | MIT Office of the President | MIT

president.mit.edu

Hello, every­one.

It’s been a while since I’ve spo­ken with you all.

But the Institute is fac­ing on­go­ing chal­lenges in two re­lated ar­eas: fund­ing, and our tal­ent pipeline.

So I thought you’d ap­pre­ci­ate hear­ing the facts.

First, fund­ing.

For more than a year, we’ve all worked on re­spond­ing to ex­tra­or­di­nary new and sus­tained pres­sures on our bud­get (due largely to the heavy new 8% tax on our en­dow­ment re­turns, a bur­den for MIT and only a few other peer schools).

Across the Institute — cen­trally and in lo­cal units — it was clear to every­one that change was im­per­a­tive, and you all took the chal­lenge head on. That re­quired se­ri­ous ef­fort and sac­ri­fice. Some units are still work­ing through the process, and I know the cuts have been painful.

But please know that your ef­forts have been in­cred­i­bly valu­able — and I ap­pre­ci­ate every­thing you’ve done to get us here.

Through all of this, of course, we’ve kept our eyes on Washington. In February, we heard wel­come news re­gard­ing Congressional ap­pro­pri­a­tions: Funding for many re­search agen­cies had been at least par­tially re­stored.

The news seemed en­cour­ag­ing enough that I started hear­ing peo­ple ask if maybe these new de­vel­op­ments in DC meant that we could step back from some of the bud­get cuts at MIT or at least feel con­fi­dent that we’re past the storm.

I re­ally wish we could! But un­for­tu­nately, the an­swer is no — for a set of rea­sons.

First, al­though Congress re­stored sub­stan­tial agency fund­ing, we can see in the num­bers that fed­eral fund­ing is not ac­tu­ally flow­ing to MIT the way it typ­i­cally has. Relatedly, some fed­eral agen­cies are dis­cussing the pos­si­bil­ity of fac­tor­ing in ge­og­ra­phy when they al­lo­cate their funds, rather than bas­ing de­ci­sions on sci­en­tific merit alone.

Compared to this time last year, MIT has ex­pe­ri­enced a de­cline in cam­pus re­search ac­tiv­ity funded by fed­eral awards of more than 20%. Still more con­cern­ing is that our num­ber of new fed­eral re­search awards is also down more than 20%.

While we’ve seen en­cour­ag­ing growth in re­search fund­ing from other spon­sors, it’s not nearly enough to off­set the fed­eral de­cline.

So here’s the big pic­ture: Counting fed­eral and non-fed­eral sources to­gether, our cam­pus spon­sored-re­search ac­tiv­ity is now 10% smaller than it was a year ago. That is a strik­ing loss for one of the most in­flu­en­tial and pro­duc­tive re­search com­mu­ni­ties in the world.

Now, the sec­ond chal­lenge: Talent.

I’ve said many times that MIT is in the tal­ent busi­ness. Which means we’re very alert to changes in our tal­ent pipeline.

We’ve al­ready seen clear signs that pol­icy changes af­fect­ing in­ter­na­tional stu­dents and schol­ars are dis­cour­ag­ing ex­tremely tal­ented in­di­vid­u­als from ap­ply­ing to join our com­mu­nity.

Right now, we’re com­ing to the end of ad­mis­sions sea­son.

For de­part­ments across the Institute, the fund­ing un­cer­tainty I talked about has made them cau­tious about ad­mit­ting new grad­u­ate stu­dents.

That cau­tion is com­pletely un­der­stand­able: If fed­eral grants con­tinue to de­crease, PIs just won’t have the funds to sup­port ad­di­tional stu­dents!

But the cu­mu­la­tive im­pact di­rectly af­fects our mis­sion of re­search and ed­u­ca­tion: Our grad­u­ate stu­dent en­roll­ment de­creased this year…and we ex­pect that to con­tinue next year.

Outside of Sloan and the EECS MEng program, which are still in the midst of admissions, our departments’ new enrollments for next year are down close to 20% compared with 2024.

That means that, in to­tal, out­side of Sloan, we could have about 500 fewer grad­u­ate stu­dents. Which means we’ll have many fewer stu­dents ad­vanc­ing the work of MIT, and un­der­grad­u­ates will have fewer grad stu­dents as men­tors in their re­search.

But to me, far and away the worst im­pact is that hun­dreds of ex­cep­tion­ally tal­ented young peo­ple will not have the ben­e­fit of an MIT ed­u­ca­tion — and we won’t have the ben­e­fit of their cre­ative bril­liance.

As I’m sure you all understand, responding to these new pressures is not just a matter of belt-tightening, and it’s not just trimming around the edges.

Last week, I spoke with sev­eral se­nior fac­ulty mem­bers, in very dif­fer­ent fields, all with long records of win­ning sig­nif­i­cant grants. All of them are now hav­ing to cut grad­u­ate stu­dents, post­docs and par­tic­u­lar av­enues of re­search.

At the Institute level, we are work­ing on plans to help sup­port groups whose op­er­a­tions are se­ri­ously im­pacted by cur­rent fed­eral fund­ing lapses. But that will not be a long-term so­lu­tion.

The fact is that we’re look­ing at a real drop in re­search be­ing done by the peo­ple of MIT. It’s a loss of mo­men­tum for fac­ulty and stu­dents.

And frankly, it’s a loss for the na­tion: When you shrink the pipeline of ba­sic dis­cov­ery re­search, you choke off the flow of fu­ture so­lu­tions, in­no­va­tions and cures — and you shrink the sup­ply of fu­ture sci­en­tists.

I know that hearing these facts all together feels pretty chilly and overcast.

But I also know that, over its long his­tory, MIT has con­fronted and pushed through many se­ri­ous storms be­fore.

And I take heart in what I see here on cam­pus every day: The same MIT in­ten­sity. The same en­thu­si­asm. The same cre­ativ­ity and drive.

And the peo­ple of MIT are ap­ply­ing that same en­ergy in many dif­fer­ent ways to meet these new chal­lenges to our mis­sion.

•    Our fac­ulty are ris­ing to the mo­ment with ex­cit­ing ideas to meet emerg­ing fed­eral op­por­tu­ni­ties. For the Department of Energy’s new Genesis Mission — in a her­culean ef­fort by our fac­ulty and ad­min­is­tra­tive staff — MIT PIs re­cently sub­mit­ted 176 grant pro­pos­als. These pro­pos­als of­fer a snap­shot of first-class MIT sci­ence and en­gi­neer­ing, in ser­vice to the na­tion.

•    We’re ag­gres­sively pur­su­ing new sources of fund­ing, es­pe­cially from in­dus­try, and we’re build­ing on deep re­la­tion­ships, like the MIT-IBM Computing Research Lab, which we re­cently launched to shape the fu­ture of AI and quan­tum com­put­ing.

•    We’re exploring new ways to generate income through educational offerings (like master’s-only programs) that match our mission.

•    With a new leader for our Resource Development team, we’re tak­ing a fresh look at how to at­tract more sup­port through phil­an­thropy.

•    And our alumni and friends are step­ping up, both with do­na­tions and as cham­pi­ons for the value of MIT.

We need to ad­vo­cate for our­selves, and for America’s re­search uni­ver­si­ties, in all these ways and more.

Our Washington Office is work­ing en­er­get­i­cally, on both sides of the aisle, to raise aware­ness about the dam­age the en­dow­ment tax is do­ing to MIT and to a hand­ful of our peer schools.

We’re pur­su­ing new ways to en­gage pol­i­cy­mak­ers and the pub­lic around the trans­for­ma­tive im­pact of cu­rios­ity-dri­ven sci­ence.

And I’m meeting frequently with leaders in Congress and the Administration, to make the case for MIT’s value to the nation.

I can do this with con­fi­dence be­cause I know you’re all here, work­ing to re­al­ize our mis­sion.

Thank you for that — and for all you’ve done, and will do, to help the Institute nav­i­gate this dif­fi­cult time.

Princeton faculty mandate proctoring for in-person exams, upending 133 years of precedent

www.dailyprincetonian.com

All in-per­son ex­am­i­na­tions at Princeton will be proc­tored start­ing July 1, rep­re­sent­ing the most sig­nif­i­cant change to the honor sys­tem since it was es­tab­lished in 1893. The fac­ulty passed a pro­posal re­quir­ing in­struc­tor su­per­vi­sion at Monday’s fac­ulty meet­ing, with one op­pos­ing vote.

The his­toric vote was the cul­mi­na­tion of months of de­lib­er­a­tion within the ad­min­is­tra­tion and stu­dent gov­ern­ing bod­ies about how to ad­dress in­creas­ing con­cerns over aca­d­e­mic in­tegrity vi­o­la­tions, in­clud­ing the pro­lif­er­a­tion of AI us­age. The pro­posal cleared a full fac­ulty vote as the fi­nal of three re­quired rounds of ap­proval, hav­ing al­ready been passed unan­i­mously by the Committee on Examinations and Standing and the Faculty Advisory Committee on Policy.

According to the policy proposal, previously sent by Dean of the College Michael Gordin to the Faculty Advisory Committee and included in Monday’s meeting notes, instructors will remain present in exam rooms “as a witness to what happens,” but are instructed not to interfere with students. If a suspected Honor Code violation occurs, proctors will document their observations and submit a report to the student-run Honor Committee, where they may later testify under the same standards used for other witnesses.

The pro­posal notes that ad­di­tional de­tails, in­clud­ing proc­tor-to-stu­dent ra­tios and guide­lines re­gard­ing mon­i­tor­ing prac­tices, will be fi­nal­ized in con­sul­ta­tion with fac­ulty and stu­dent rep­re­sen­ta­tives be­fore the pol­icy takes ef­fect.

Princeton’s honor sys­tem dates back to 1893, when the fac­ulty first in­sti­tuted the Honor Code fol­low­ing a stu­dent pe­ti­tion to elim­i­nate proc­tor­ing dur­ing ex­am­i­na­tions. Since then, the honor sys­tem has re­lied on in­di­vid­ual ac­count­abil­ity, with stu­dents pledg­ing both to re­frain from aca­d­e­mic dis­hon­esty and to re­port those they wit­ness in vi­o­la­tion.

Following the Honor Code’s orig­i­nal im­ple­men­ta­tion, proc­tor­ing was ex­plic­itly banned in the Rules and Procedures of the Faculty and the Rights, Rules, Responsibilities of the University, which re­mained in ef­fect for 133 years up un­til Monday’s vote.

The policy proposal cites AI and personal electronic devices as major catalysts behind the policy shift. “The ease of access of these [AI] tools on a small personal device have also changed the external appearance of misconduct during an examination,” it reads, making cheating “much harder for other students to observe (and hence to report).”

The proposal also points to a growing reluctance among students to report peers directly. It claims that anonymous reporting of allegations has increased in recent years, fueled by “fears of doxxing or shaming among their peer groups” online.

In The Daily Princetonian’s 2025 Senior Survey of over 500 se­niors, 29.9 per­cent of re­spon­dents re­ported that they had cheated on an as­sign­ment or exam dur­ing their time at Princeton. 44.6 per­cent of se­nior re­spon­dents re­ported knowl­edge of Honor Code vi­o­la­tions that they chose not to re­port. Only 0.4 per­cent of se­niors re­sponded say­ing that they had re­ported a peer for an Honor Code vi­o­la­tion.

An Undergraduate Student Government survey of students cited in the proposal reportedly found that a majority would “favor proctoring or are indifferent to any change,” though a sizeable minority opposes it “on the grounds that students should behave honorably, and that faculty and students should trust each other given the 1893 Honor Code compact.”

Similarly, students and faculty previously interviewed by the ‘Prince’ expressed divided views on the policy’s implementation. Some cited the inadequacy of the current student reporting model, while others said the introduction of proctors could erode the trust that defines Princeton’s academic culture.

The his­toric change comes in the wake of a November pol­icy change man­dat­ing proc­tor­ing for all in­di­vid­ual and small-group ex­ams, in­clud­ing make-up ex­ams, ex­ams taken by stu­dent-ath­letes while trav­el­ing, and ex­ams taken with dis­abil­ity ac­com­mo­da­tions.

In a March guest Opinion column in the ‘Prince,’ Honor Committee Chair Emerita Nadia Makuc ’26 wrote that the Honor Committee, which adjudicates suspected violations of the Honor Code during in-person examinations, had long discussed introducing proctors as an additional witness and reporter in exam rooms, and that the time had come to take that step.

“The Honor Committee has experienced new strains, including an uptick in cases in the last year and challenges such as generative AI, and student sentiment has recognized that its procedures need to better reflect the current challenges to academic integrity,” Makuc wrote.

Honor Committee hear­ings are con­fi­den­tial, stu­dent-led pro­ceed­ings ad­dress­ing po­ten­tial vi­o­la­tions of the Honor Code. Accused stu­dents can pre­sent de­fenses, call wit­nesses, and be as­sisted by a Peer Representative. If stu­dents are found re­spon­si­ble for Honor Code vi­o­la­tions, the max­i­mum penalty that can be as­signed is ex­pul­sion.

William Aepli ’26, former co-chair of the Peer Representatives, which advises students accused of academic integrity violations, previously told the ‘Prince’ that his organization would likely see changes in the type of evidence presented in Honor Committee hearings.

The Honor Committee Constitution and the Honor Code itself will not need to be changed following the institution of proctoring. Gordin previously confirmed to the ‘Prince’ that just the Rules and Procedures of the Faculty and Rights, Rules, and Responsibilities will need to be updated.

The sec­tion of the Rules and Procedures of the Faculty that pre­vi­ously banned proc­tor­ing will re­place those lines with lan­guage man­dat­ing in­struc­tor su­per­vi­sion dur­ing in-per­son ex­am­i­na­tions, ac­cord­ing to the pro­posal. A one-sen­tence re­vi­sion to Rights, Rules, and Responsibilities will be made be­fore the start of the new aca­d­e­mic year.

The proposal states that Gordin met with and received endorsements on the policy from “current and former student chairs of the Honor Committee; colleagues from the Office of the Dean of Undergraduate Students and the McGraw Center for Teaching and Learning; the Faculty-Student Committee on Discipline; and the Academics Chair of the Undergraduate Student Government.”

“Undergraduates and faculty are realistic in understanding that having an instructor supervising examinations will not eradicate cheating,” the proposal notes. “However, they believe that there will be a significant deterrent effect, and that having an additional witness in the room will reduce pressure on students to notice and report concerns while they are themselves completing exams.”

Multiple faculty members declined to comment on the new policy following the meeting. Professor of English and Theater Jill Dolan, who served as dean of the college from 2015 to 2024, briefly discussed the change in an interview with the ‘Prince.’

“I think it’s a shame, but it’s necessary,” Dolan said. “But I also do understand why it passed. I think we need some different practices in this day and age, but it does mark a moment.”

Devon Williams is a News contributor for the ‘Prince’ from Menlo Park, Calif. She can be reached at dw9268[at]princeton.edu.

Luke Grippo con­tributed re­port­ing.

Scorched Earth 2000 HTML Port

www.scorch2000.com

MacBook Neo Processor Benchmarks: A18 Pro CPU vs M1 and M4

www.jdhodges.com

Updated: May 8th, 2026 with pric­ing and avail­abil­ity up­date

Preface: I’m not re­ally a Mac guy. But I have deep re­spect for what Apple has done with their sil­i­con, and I’ve been fol­low­ing their CPU jour­ney since the Motorola 68k days through PowerPC, the Intel tran­si­tion, and now their in-house Apple Silicon. What they’ve ac­com­plished in the last five years is gen­uinely re­mark­able. Apple is one of the few orig­i­nal tech com­pa­nies that has sur­vived and thrived over the decades while still stay­ing in the con­sumer tech space.

As a kid I used both Apple and Compaq computers in the text-based OS days. Over the years I’ve purchased Apple systems periodically, and their recent entrants are extremely capable. While in the modern era I’ve never fully made the switch away from Windoze and Linux, I do give Apple props for doing what they do. 💪

The fol­low­ing (attempted) analy­sis hits close to home for me. AnandTech was one of my go-to sites when I built my first PC back in the day: a Tyan moth­er­board, Pentium II 233 MHz, SCSI hard drive, with Anand’s ar­ti­cles as the guide (I was a sopho­more in HS I be­lieve, and Anand was young as well). That was a blast, and Anand’s deep-dive hard­ware cov­er­age was a huge part of what made the hobby so re­ward­ing. (Anand even­tu­ally joined Apple, which tells you some­thing about the cal­iber of tal­ent they at­tract.) AnandTech had some of the best Apple sil­i­con analy­sis ever pub­lished, and this ar­ti­cle is writ­ten in that spirit: real data, real math, no hand-wav­ing. At least as much as can be for a prod­uct that has­n’t been re­leased (thankfully the CPU is a pretty known en­tity and we know how Apple puts these sorts of prod­ucts to­gether).

With all that said, here’s a look at how Apple, and re­ally only Apple, can de­liver this kind of ver­ti­cally and hor­i­zon­tally in­te­grated prod­uct at a $599 price point while main­tain­ing a com­par­a­tively high-qual­ity build. They de­signed the chip, they con­trol the OS, they ne­go­ti­ate di­rectly with TSMC, and they amor­tize sil­i­con costs across 230 mil­lion iPhones a year. Nobody else has that sup­ply chain.

Yes, 8GB of RAM is a real lim­i­ta­tion. But give it a year and the next ver­sion will al­most cer­tainly ship with 12GB and a mod­est CPU bump. Apple will main­tain their mar­gins, the world will con­tinue on, and early adopters will have got­ten a sur­pris­ingly ca­pa­ble ma­chine in the mean­time. There’s also a sil­ver lin­ing to the tight mem­ory en­ve­lope: Apple has to keep ma­cOS run­ning well within 8GB, which is ac­tu­ally a nice forc­ing func­tion against bloat and in­ef­fi­ciency. We could all use a lit­tle more of that.

Now, let’s get into the num­bers. ⬇️

Technical Analysis

What proces­sor is in the MacBook Neo?

The MacBook Neo runs Apple’s A18 Pro, the same chip used in the iPhone 16 Pro. Six CPU cores (2 performance + 4 efficiency), a 5-core GPU, a 16-core Neural Engine, fabricated on TSMC’s second-generation 3nm process (N3E).

Geekbench 6 (cold, 3-run av­er­age): 3,569 sin­gle-core, 8,879 multi-core

Single-core vs Apple Silicon: lands be­tween the M3 and M4

Sustained-load be­hav­ior: full burst for about 60 sec­onds, then ther­mal throt­tling drops CPU uti­liza­tion 64% in 15 sec­onds (fanless chas­sis)

Price floor en­abled: $599 base, 8GB RAM, 256GB SSD

Below: the full bench­mark data across three ther­mal states, how the A18 Pro com­pares ar­chi­tec­turally to the M-series, and the wafer eco­nom­ics that make $599 pos­si­ble.

On March 4, 2026, Apple un­veiled the MacBook Neo, its most af­ford­able Mac lap­top ever at $599. The head­line spec that has the in­ter­net ar­gu­ing: in­stead of an M-series chip, the Neo runs the A18 Pro, the same proces­sor from the iPhone 16 Pro.

“An iPhone chip in a Mac” sounds like a downgrade. The benchmarks tell a very different story.

The A18 Pro’s sin­gle-core per­for­mance lands be­tween the M3 and M4, de­mol­ishes Intel and Qualcomm com­peti­tors at this price tier by 38 – 43%, and does it all in a fan­less alu­minum chas­sis with 16 hours of claimed bat­tery life. The chip is not the con­straint. The 8GB of RAM, with no up­grade path, is.

This ar­ti­cle cov­ers every­thing: ac­tual bench­mark data, how the A18 Pro com­pares to M-series chips ar­chi­tec­turally, the wafer eco­nom­ics that make $599 pos­si­ble, and why the global RAM short­age makes Apple’s tim­ing look less like luck and more like strat­egy.

What You’re Getting for $599

The MacBook Neo is a 13-inch aluminum notebook built around the A18 Pro, fabricated on TSMC’s second-generation 3nm process (N3E). Here are the specs that matter:

To hit $599, Apple cut: MagSafe, Thunderbolt, back­lit key­board, hap­tic track­pad, P3 wide color, True Tone, Wi-Fi 7, and the 12MP we­b­cam (replaced with 1080p). Touch ID is only on the $699 model. One of the two USB-C ports runs at USB 2.0 speeds, which is gen­uinely bad.

Hands-On: Three Thermal States

Every bench­mark num­ber you see in re­views is a snap­shot of one mo­ment. One am­bi­ent tem­per­a­ture, one back­ground load, one ther­mal state. That is not how lap­tops work in real life.

So I ran Geekbench 6 on my own MacBook Neo un­der three dif­fer­ent con­di­tions, mea­sur­ing what ac­tu­ally hap­pens when you push a fan­less 6-core chip past its com­fort zone. The re­sults were dra­matic.

Test Setup

The ma­chine: MacBook Neo (Mac17,5), Apple A18 Pro, 8GB uni­fied mem­ory, 256GB SSD, ma­cOS Tahoe 26.3.2. All tests run on the same unit within the same 12-hour win­dow.

Three con­di­tions, tested in this or­der:

Cold start (fan-assisted): Machine rested overnight, then placed on a USB desk fan to keep the chas­sis at am­bi­ent tem­per­a­ture. Claude Code and screen shar­ing dis­abled. Three con­sec­u­tive runs with 2-minute cooldowns be­tween each.

Dev work­load (Claude Code ac­tive): Cold start, but with Claude Code (Opus 4.6, 1M con­text) run­ning in the back­ground. This rep­re­sents a real de­vel­oper work­flow: an AI cod­ing as­sis­tant con­sum­ing mem­ory and oc­ca­sional CPU while you try to get work done.

Post ther­mal soak: After a 5-minute all-core stress test that drove CPU uti­liza­tion to 570% and trig­gered ag­gres­sive ther­mal throt­tling. This is the worst case: what your Neo de­liv­ers af­ter sus­tained heavy lift­ing.

Geekbench 6 Across Three States

Read those num­bers again. The same chip that posts 3,569 sin­gle-core when cold de­liv­ers 476 af­ter five min­utes of sus­tained load. That is an 87% re­duc­tion in sin­gle-core per­for­mance on the same hard­ware, run­ning the same bench­mark, sep­a­rated by noth­ing but heat.

The cold start num­bers (3-run av­er­age: SC 3,569, MC 8,879) match pub­lished A18 Pro scores al­most ex­actly. The vari­ance across three pris­tine cold runs was just 7 points on sin­gle-core, con­firm­ing the test method­ol­ogy is sound. (Run 1, Run 2, Run 3)

One de­tail worth not­ing: multi-core scores un­der dev work­load (1,305) and ther­mal soak (1,340) are es­sen­tially iden­ti­cal. Once the Neo hits its ther­mal or mem­ory ceil­ing, multi-core per­for­mance con­verges re­gard­less of the cause. The chip has one sus­tained per­for­mance floor, and both con­di­tions find it.

The 60-Second Thermal Cliff

To un­der­stand why the post-soak score is so low, I ran a 5-minute all-core stress test and logged CPU uti­liza­tion every 15 sec­onds.
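For readers who want to reproduce that kind of log, here is a minimal Python sketch of the idea (my own assumption about tooling, not the script used for this article) built on the psutil library; summing the per-core figures is what produces aggregates like 570% on a six-core chip.

```python
# Minimal CPU-utilization logger: samples every 15 s for 5 minutes.
# Requires `pip install psutil`. Run it alongside your stress workload.
import time
import psutil

INTERVAL_S = 15        # sampling interval, matching the log described above
DURATION_S = 5 * 60    # 5-minute stress window

start = time.monotonic()
while time.monotonic() - start < DURATION_S:
    # percpu=True returns one 0-100% reading per logical core; their sum
    # is the top-style aggregate quoted above (6 cores -> 600% max).
    per_core = psutil.cpu_percent(interval=INTERVAL_S, percpu=True)
    elapsed = int(time.monotonic() - start)
    print(f"T+{elapsed:3d}s  total={sum(per_core):5.1f}%  per-core={per_core}")
```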

For the first 60 sec­onds, the A18 Pro runs at full tilt: all six cores near 100%, CPU uti­liza­tion around 570%. Then the ther­mal wall hits. Between T+60 and T+75, uti­liza­tion crashes from 570% to 207%, a 64% drop in 15 sec­onds. For the re­main­ing four min­utes, the chip bounces be­tween 188% and 360%, never re­cov­er­ing its burst per­for­mance.

There is one in­ter­est­ing spike at T+240 (448%) where the SoC briefly at­tempts to boost be­fore throt­tling right back down. The cool­ing sys­tem sim­ply can­not dis­si­pate the heat fast enough to sus­tain high clocks.

This matches what Technetbook found in­de­pen­dently: the A18 Pro hits its 105°C ther­mal limit and drops from 3.3 GHz to ap­prox­i­mately 2.3 GHz. Modders have con­firmed the cool­ing is the con­straint: TweakTown mea­sured an 18% Geekbench im­prove­ment with liq­uid cool­ing, and Hackaday doc­u­mented dou­bled gam­ing frame rates with a wa­ter cool­ing mod.

Here is what the out­side of the ma­chine tells you dur­ing all this: I mea­sured 97.6°F (36.4°C) on the hottest spot of the case sur­face with an in­frared ther­mome­ter dur­ing sus­tained load. That is barely above body tem­per­a­ture. The chip is in­ter­nally at 105°C and shed­ding 87% of its per­for­mance while the chas­sis feels per­fectly com­fort­able in your lap. Apple made a de­lib­er­ate de­sign choice: com­fort over sus­tained power.

What This Means in Real Use

The MacBook Neo is a sprinter, not a marathon run­ner. For tasks that com­plete within 60 sec­onds (compiling a small pro­ject, pro­cess­ing a batch of pho­tos, ren­der­ing a short video clip), you get desk­top-class sin­gle-core per­for­mance that beats Ryzen 9 chips. For tasks that sus­tain heavy load be­yond a minute (long video en­codes, large builds, train­ing loops), you get dra­mat­i­cally less.

This is not a flaw. It is a de­sign choice in­her­ent to every fan­less lap­top, and the Neo makes that trade­off at $599. The ques­tion is whether your work­load fits in­side the burst win­dow. For the vast ma­jor­ity of users (web brows­ing, of­fice work, light de­vel­op­ment, me­dia con­sump­tion), every in­ter­ac­tion is a burst: a page load, a doc­u­ment save, an app launch. Those users will never see the ther­mal wall.

For the full bench­mark com­par­i­son in­clud­ing third-party data from every ma­jor com­peti­tor, see the bench­mark ta­bles be­low. For the hands-on re­view, in­clud­ing why this ma­chine re­minds me of a leg­endary com­puter from 25 years ago, see our MacBook Neo Review.

CPU Benchmarks: The Data

Actual Geekbench 6 re­sults for the MacBook Neo were pub­lished by MacRumors on March 5, 2026. The Neo scored 3,461 sin­gle-core, 8,668 multi-core, and 31,286 Metal (GPU).

Here’s how that stacks up against both Apple’s own lineup and the $600-class Windows/ARM com­pe­ti­tion:

The sin­gle-core story is re­mark­able. The A18 Pro at 3,461 is 47% faster than the M1 (2,346), out­per­forms the M2 and M3, and lands within 6 – 7% of the M4 (3,696). Against the com­pe­ti­tion avail­able at $600, it beats Intel’s Lunar Lake Ultra 5 226V by 38% and the Snapdragon X Plus by 43%. Only the un­re­leased Snapdragon X2 Plus (3,311) gets close, and it’s not ship­ping in sub-$700 lap­tops yet.
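As a quick sanity check, those deltas can be recomputed directly from the Geekbench 6 single-core scores quoted in this article (competitor scores not printed here are omitted rather than guessed); a small Python sketch:

```python
# Recompute the single-core deltas from the scores quoted in the text.
scores = {
    "A18 Pro (MacBook Neo)": 3461,
    "M1": 2346,
    "M4": 3696,
    "Snapdragon X2 Plus": 3311,
}

neo = scores["A18 Pro (MacBook Neo)"]
for name, score in scores.items():
    if name.startswith("A18"):
        continue
    delta = (neo - score) / score * 100
    print(f"vs {name}: {delta:+.1f}%")

# Expected output (rounded):
#   vs M1: +47.5%                 -> the "47% faster than the M1" claim
#   vs M4: -6.4%                  -> "within 6-7% of the M4"
#   vs Snapdragon X2 Plus: +4.5%  -> "gets close"
```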

For the tasks this machine is built for (web browsing, documents, streaming, light photo editing), individual core performance is a huge factor. With two performance cores the Neo feels surprisingly snappy, and the remaining efficiency cores are there if needed.

[The following commentary has been updated after using a Neo for a few weeks and finding that it doesn’t struggle with multi-core tasks as much as I expected, as evidenced here when stress testing. -May 2026, -JD]

Let’s dig into multi-core, because compared to competitors, the Apple multi-core situation is a different story than single-core. With 6 cores (2 performance + 4 efficiency) versus 8–10 (or more) on competitors, the Neo’s 8,668 score is essentially M1-class. It actually trails Intel’s 8-core Ultra 5 226V (9,702) and the Snapdragon X Plus (11,345) in multi-threaded workloads. The M4 Air at 14,730 is 70% higher.

If you’re compiling a lot of code, running parallel builds, or doing a LOT of sustained multi-threaded work, then this matters. For the Neo’s target audience it likely won’t matter much, because bursty workloads feel great, and even with plenty of multi-core and multitasking work it still manages to power through those moments. (Granted, extreme encoding or long-running tasks are not going to be as fast as on competitors with more cores, but I truly do not think the target audience is going to spend a large portion of their time on those tasks… if you need more than 6 cores, definitely get something else! For me, remembering when the Intel Q6600 desktop CPU seemed SO fast and quad core was amazing to have, I am pretty ecstatic that the Neo gets 6 cores in a lightweight portable package for ~$599.)

GPU performance at 31,286 (Metal) actually trails the M1 Air (33,148) slightly, despite the newer architecture. Five GPU cores versus the M1’s seven or eight means fewer parallel shader units. The M4 Air’s 54,630 is 75% higher. GPU-intensive work (video editing, 3D, gaming) is clearly not the Neo’s territory.

Architecture: Is the A18 Pro Really Just a “Phone Chip”?

The internet discourse has centered on whether an “iPhone chip” belongs in a Mac. This framing is architecturally misleading.

The A18 Pro and M4 share the same DNA at the core level. Both are built on the ARMv9.2-A instruction set, both use Apple’s custom Everest performance cores and Sawtooth efficiency cores, and both are fabricated on TSMC’s N3E 3nm process. When you normalize Geekbench single-core scores by clock speed, you get approximately 857 points per GHz for both chips. The IPC (instructions per clock) is essentially identical.
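A back-of-the-envelope version of that normalization, with the caveat that the peak clocks below are my assumption from commonly reported figures rather than numbers from this article:

```python
# IPC proxy: Geekbench 6 single-core points per GHz of peak clock.
# Clock speeds are assumed from commonly reported figures, not measured here.
chips = {
    #  name     (GB6 single-core, assumed peak clock in GHz)
    "A18 Pro": (3461, 4.05),
    "M4":      (3696, 4.40),
}

for name, (score, clock_ghz) in chips.items():
    print(f"{name}: {score / clock_ghz:.0f} points/GHz")

# Both land in the mid-800s points/GHz, supporting the article's point
# that per-clock performance (IPC) is essentially the same core design.
```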

They also share the same GPU shader core architecture (with hardware ray tracing and mesh shading) and the same 16-core Neural Engine rated at 35 TOPS. If Apple had branded the A18 Pro as “M4 Lite,” nobody would have blinked.

Where they gen­uinely dif­fer is at the sys­tem level, and these dif­fer­ences mat­ter:

* Apple does not of­fi­cially pub­lish full cache hi­er­ar­chies. SLC fig­ures from cpu-mon­key.com and nanore­view.net.

The memory bandwidth gap (60 vs 120 GB/s, a full 2x) is the most consequential difference. It limits any workload that’s memory-bound: large matrix operations, high-bitrate video encoding, GPU-intensive rendering. The A18 Pro partially compensates with a larger System Level Cache (24 MB vs the M4’s reported 16 MB), which reduces how often it needs to hit main memory.

The ther­mal dif­fer­ence also mat­ters. The A18 Pro was de­signed for an iPhone, where sus­tained power is roughly 4W. The Neo’s larger fan­less chas­sis al­lows some­what more ther­mal head­room, but un­der pro­longed multi-core loads it will throt­tle sooner than an M4 in a MacBook Air with its ded­i­cated heatsink.

Bottom line: “Baby M4” is a useful shorthand. The shared core design means everyday responsiveness is M4-class. The system-level differences (bandwidth, thermals, I/O) mean sustained workloads are not.

Silicon Economics: How Apple Hits $599

The Neo’s price be­comes less sur­pris­ing when you un­der­stand the chip eco­nom­ics. The A18 Pro’s die mea­sures ap­prox­i­mately 105 mm² on TSMC N3E, con­firmed by TechInsights die pho­tog­ra­phy. That’s small. And small means cheap.

At 105 mm², the A18 Pro is 25% smaller than the M4 (~140 mm²) and 76% smaller than the M4 Max (~440 mm²). Smaller dies yield dra­mat­i­cally more chips per wafer and have higher yield rates be­cause there’s less sil­i­con for de­fects to land on.

Here’s the math. A stan­dard 300mm TSMC wafer pro­duces ap­prox­i­mately 586 gross dies at 105 mm² (per Arete Research). After 16 months of pro­duc­tion ma­tu­rity on N3E, yields are es­ti­mated at 85 – 90%, giv­ing 498 – 527 good dies per wafer. At Apple’s es­ti­mated wafer cost of $18,000-$20,000 (per Ben Bajarin/Creative Strategies and Morgan Stanley), that works out to $34 – 40 per die be­fore pack­ag­ing and test. Fully loaded: roughly $38 – 47 per SoC.
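That gross-die figure is consistent with the textbook die-per-wafer approximation; here is a sketch of the whole calculation in Python (the simple formula ignores scribe lanes and edge exclusion, so it lands a bit above the 586 gross dies cited from Arete Research):

```python
import math

# Textbook die-per-wafer approximation for a circular wafer.
def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    r = wafer_diameter_mm / 2
    usable = math.pi * r**2 / die_area_mm2                  # area term
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(usable - edge_loss)

gross = dies_per_wafer(300, 105)       # ~608; the article cites ~586 gross
for yield_rate in (0.85, 0.90):
    good = int(586 * yield_rate)       # use the article's gross-die figure
    for wafer_cost in (18_000, 20_000):
        print(f"yield {yield_rate:.0%}, wafer ${wafer_cost:,}: "
              f"{good} good dies, ${wafer_cost / good:.0f}/die")

# Prints 498-527 good dies and $34-$40 per die, matching the ranges above.
```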

Compare that to an M4 at ~140 mm² yield­ing ap­prox­i­mately 430 gross dies, or an M4 Max at ~440 mm² yield­ing maybe 130. The A18 Pro costs Apple roughly one-third what an M4 costs and one-quar­ter what an M4 Max costs in raw sil­i­con.

The real kicker: Apple ships ap­prox­i­mately 230 mil­lion iPhones an­nu­ally. The A18 Pro has been in vol­ume pro­duc­tion since September 2024. All mask costs ($10 – 20M for a 3nm EUV tape­out) and de­sign en­gi­neer­ing were amor­tized across hun­dreds of mil­lions of units be­fore a sin­gle Neo shipped. The mar­ginal cost to Apple of rout­ing A18 Pro dies into the Neo is wafer cost plus pack­ag­ing. Zero in­cre­men­tal R&D.

The Neo may also ab­sorb binned A18 Pro dies that failed their sixth GPU core dur­ing iPhone pro­duc­tion. The Neo ships with 5 GPU cores; binned dies are per­fectly suit­able. This is stan­dard in­dus­try prac­tice (AMD and Nvidia do the same) and fur­ther im­proves Apple’s ef­fec­tive yield.

Adding up the full es­ti­mated BOM (SoC, mem­ory, stor­age, dis­play, chas­sis, bat­tery, key­board, wire­less), the to­tal lands at roughly $200 – 290. At $599 re­tail, that im­plies ap­prox­i­mately 50 – 58% gross mar­gin be­fore R&D, mar­ket­ing, and dis­tri­b­u­tion. This is con­sis­tent with Apple’s com­pany-wide gross mar­gin of 47% on $436 bil­lion in rev­enue. The Neo is not a loss leader. It’s a prof­itable prod­uct.

The 2026 RAM Shortage: Why 8GB Is Strategic, Not Just Cheap

The most com­mon crit­i­cism of the MacBook Neo is the 8GB RAM ceil­ing with no up­grade path. Every Windows and Qualcomm com­peti­tor at this price ships with 16GB. Apple’s choice looks stingy un­til you un­der­stand the 2026 DRAM mar­ket.

The 2026 DRAM short­age is not a typ­i­cal sup­ply/​de­mand cy­cle. It is a struc­tural re­al­lo­ca­tion of global mem­ory fab­ri­ca­tion ca­pac­ity to­ward AI in­fra­struc­ture. The mech­a­nism works like this:

High Bandwidth Memory (HBM), the DRAM used in AI ac­cel­er­a­tors like Nvidia’s H100/B200 GPUs, con­sumes ap­prox­i­mately 3x the wafer area per gi­ga­byte com­pared to stan­dard DDR5 or LPDDR5x. This has been con­firmed by Micron ex­ec­u­tives and in­de­pen­dently by EE Times. HBM stacks re­quire larger dies op­ti­mized for Through-Silicon Via in­ter­con­nects, and yields for 12-high stack­ing run only 50 – 60%.

Samsung, SK Hynix, and Micron control 93% of global DRAM production. All three have aggressively reallocated capacity: up to 40% of advanced wafer output now goes to HBM. Micron exited the consumer memory market entirely in December 2025. As IDC put it: “Every wafer allocated to an HBM stack for an Nvidia GPU is a wafer denied to the LPDDR5X module of a mid-range smartphone or the SSD of a consumer laptop.”

The pricing tells the story. DDR5 32GB kits that cost $120 in Q3 2025 hit $350 by Q1 2026. Memory’s share of a PC’s bill of materials rose from 16% to 23% (Gartner). TrendForce projects a 90–95% quarter-over-quarter jump in PC DRAM contract prices for Q1 2026. Data centers will consume 70% of all memory chips manufactured in 2026.

The downstream effects are severe. Gartner projects global PC shipments will fall 10.4% in 2026, with average prices rising 17%. Lenovo, Dell, HP, Acer, and ASUS have confirmed 15–20% price hikes. Gartner’s projection: “The sub-$500 entry-level PC segment will disappear by 2028.”

This is the con­text that makes Apple’s 8GB choice strate­gi­cally in­ter­est­ing, not just cheap:

Cost sav­ings are real but not the whole story. At short­age pric­ing, 8GB of LPDDR5x costs Apple roughly $25 – 35. Doubling to 16GB would add $25 – 35 per unit. On a $599 prod­uct that’s sig­nif­i­cant, but Apple’s 47% gross mar­gin on $436B rev­enue means an ex­tra $30 is ab­sorbable. This is­n’t purely forced by eco­nom­ics.

The mem­ory con­troller is a gen­uine con­straint. The A18 Pro was de­signed for the iPhone 16 Pro, which has al­ways shipped with 8GB. The LPDDR5x con­troller is con­fig­ured for that pack­age. Upgrading to 16GB would re­quire dif­fer­ent mem­ory pack­ag­ing and PCB rout­ing. Apple could have de­signed around this, but chose not to for a first-gen­er­a­tion bud­get prod­uct.

The short­age cre­ates a pric­ing um­brella. As com­peti­tor lap­top prices rise 15 – 20%, Apple’s fixed $599 be­comes more com­pet­i­tive every month with­out Apple do­ing any­thing. A $600 Windows lap­top that shipped with 16GB in mid-2025 now costs $700 – 750 for the same specs. Apple’s de­ci­sion to halve the RAM halves its ex­po­sure to the short­age while com­peti­tors eat the full price in­crease.

The ecosys­tem math works. A Neo buyer who sub­scribes to iCloud+ and Apple One gen­er­ates $240 – 480 in ser­vices rev­enue over a 2-year de­vice life­cy­cle. At those num­bers, the Neo’s hard­ware mar­gin mat­ters less than con­vert­ing a Chromebook user into the Apple ecosys­tem.

Who Should (and Shouldn’t) Buy This

The MacBook Neo ex­cels at: web brows­ing, email, doc­u­ment edit­ing, stream­ing, mes­sag­ing, light photo work, and run­ning Apple Intelligence on-de­vice. Single-core per­for­mance faster than any Mac un­til the M3 gen­er­a­tion means every­day tasks will feel snappy.

It is not for: de­vel­op­ment work, con­tent cre­ation, video edit­ing, vir­tual ma­chines, heavy mul­ti­task­ing, or any­thing that reg­u­larly ex­ceeds ~1.5 – 2GB of avail­able ap­pli­ca­tion mem­ory (after ma­cOS over­head). The I/O is also a gen­uine lim­i­ta­tion: one USB 2.0 port is func­tion­ally use­less for data trans­fer, no Thunderbolt means no fast ex­ter­nal stor­age, and charg­ing oc­cu­pies your only USB 3 port.

The $500 gap to the MacBook Air ($1,099) is enor­mous, but so is what it buys: 2x RAM, 2x multi-core per­for­mance, Thunderbolt, MagSafe, back­lit key­board, P3 dis­play, Wi-Fi 7, and a 12MP cam­era. If you can af­ford the Air, buy the Air. The Neo ex­ists for peo­ple where $1,099 is sim­ply not an op­tion.

The Bottom Line

The MacBook Neo is a gen­uinely im­pres­sive piece of en­gi­neer­ing at an un­prece­dented Apple price point, strate­gi­cally timed to ex­ploit a mar­ket in cri­sis. The A18 Pro is not a com­pro­mise chip. It is the same core ar­chi­tec­ture as the M4, run­ning at M3-to-M4 class per­for­mance for sin­gle-threaded work. Apple reused ma­ture iPhone sil­i­con at mas­sive scale, elim­i­nated in­cre­men­tal R&D cost, and shipped a prod­uct with healthy mar­gins at $599.

The defin­ing con­straint is the 8GB mem­ory ceil­ing, not the proces­sor. That ceil­ing is a prod­uct of en­gi­neer­ing con­straint (the A18 Pro’s mem­ory con­troller), mar­ket eco­nom­ics (DRAM short­age pric­ing), and strate­gic cal­cu­la­tion (ecosystem con­ver­sion at max­i­mum vol­ume). It will age poorly, it is not up­grade­able, and Apple knows this. The sec­ond-gen­er­a­tion Neo with 12GB or 16GB is al­ready the ob­vi­ous prod­uct.

But right now, in March 2026, with sub-$500 PCs dis­ap­pear­ing and av­er­age lap­top prices climb­ing 17%, a $599 MacBook with M3-class sin­gle-core per­for­mance, an alu­minum chas­sis, and 16 hours of bat­tery life is the most in­ter­est­ing thing Apple has shipped in years. Not be­cause it’s the best Mac. Because it’s the most strate­gi­cally sig­nif­i­cant one.

More MacBook Neo Coverage

MacBook Neo Review: The $599 Mac That Benchmarked Itself: Full hands-on re­view with ther­mal test­ing, tear­down pho­tos, and the BeBox nos­tal­gia trip.

Can MacBook Neo Run Claude Code?: First-party re­source us­age data, the 80% Geekbench drop, and what throt­tled per­for­mance ac­tu­ally feels like in prac­tice.

MacBook Neo vs. Best Laptops 2026: How the Neo stacks up against Windows al­ter­na­tives in every price tier.

If you en­joyed this deep dive, you might also like a blast from the past: the BeBox, a 1996 dual-Pow­erPC tower from Be Inc. It has some in­ter­est­ing par­al­lels to the Neo, from the com­pany which Apple con­sid­ered buy­ing be­fore choos­ing NeXT! (but it has WAY more ports lol)

Where I’d check first: pric­ing and avail­abil­ity as of May 2026

Shopping for one of these? The Neo is pretty wild for a $599 Mac. Apple has been backordered on Neos lately; I ordered one for my daughter and it took about 3 weeks. However, Amazon has the Apple MacBook Neo at ~$589, with delivery in a couple of days with Prime. The MacBook Air M5 16GB on Amazon is a pretty epic machine too, at ~$949 on sale versus $1,099 at Apple, which makes the step up worth a look IMHO. Lastly, if you want to see what Windows laptops offer, there are some decent options in the $600–$700 price range.

Sources: Apple Newsroom, MacRumors (March 5, 2026 bench­marks), Geekbench 6 Browser, TechInsights die analy­sis, Ben Bajarin/Creative Strategies (wafer cost analy­sis), Tom’s Hardware, IDC Global Memory Shortage Crisis Report (Feb 2026), Gartner PC Market Forecast, EE Times (“The Great Memory Stockpile,” Jan 2026), IEEE Spectrum, Network World, TrendForce, Counterpoint Research, AppleInsider, Six Colors. Estimated fig­ures are noted as such and rely on pub­lic in­dus­try analy­sis and stan­dard semi­con­duc­tor cost mod­el­ing.

If you’re shop­ping for a lap­top to run Claude Code, the Neo made the cut on my full best lap­top for Claude Code roundup (as the ul­tra bud­get choice).

Bitcoin trader recovers $400,000 using Claude AI after getting 'stoned' and losing wallet password 11 years ago — bot tried 3.5 trillion passwords before decrypting an old wallet backup

www.tomshardware.com

A Bitcoin holder who changed their wallet password while ‘stoned’ and then forgot it was finally able to recover their wallet with the help of Claude. According to X user cprkrn, they’d been trying to recover their wallet for more than 11 years. Still, they didn’t give up, because that wallet contained 5 BTC; this may not sound like much, but it has a value of almost $400,000.

After find­ing a mnemonic that ac­tu­ally turned out to be their old pass­word a few weeks ago, the user dumped their en­tire col­lege com­puter files in Claude in a last-gasp ef­fort. The bot un­cov­ered an old backup wal­let file that it suc­cess­fully de­crypted, while also un­cov­er­ing a bug in the pass­word con­fig­u­ra­tion that was pre­vent­ing re­cov­ery up to that point.

HOLY FUCKING SHIT OMG CLAUDE JUST CRACKED THIS SHIT, THANK YOU @AnthropicAI THANK YOU @DarioAmodei NAMING MY KID AFTER YOU 😍 https://t.co/gObNirRDpS https://t.co/ByTdIM4d20 pic.twitter.com/xB5LUJb6Pe (May 13, 2026)

Cryptocurrency wal­lets dur­ing their early years were com­pletely dif­fer­ent beasts. Mnemonic seed phrases back then gen­er­ated the HD key tree, but wal­lets of­ten mixed them with non-HD and im­ported keys. Those can­not be re­cov­ered by the seed phrase and are stored in a wal­let file that re­quires a pass­word. This is what hap­pened to cprkrn — they changed the pass­word to the wal­let file that con­tained some spe­cific keys while they were stoned and then com­pletely for­got what pass­word they used. This meant that the Bitcoins tied to those keys were com­pletely in­ac­ces­si­ble, and they’ve been try­ing to find their way back in since then.

It seems that the user already had some candidate passwords and multiple wallets stored on their PC. They’d been trying to brute-force their way into the locked file with btcrecover, an open-source Bitcoin wallet recovery tool, but with no success. Their luck changed for the better when they found an old mnemonic seed phrase written in an old college notebook. The HD addresses recovered from the seed phrase matched those of a specific file on their computer, confirming that it was the wallet that held the 5 BTC, but it remained encrypted.

Out of frustration, cprkrn then dumped their whole college computer into Claude. That was when the AI discovered an older backup file of the wallet, from December 2019, hidden in cprkrn’s data. Claude also discovered an issue where the shared key and the passwords btcrecover was trying weren’t being combined properly. With the bug ironed out and an older wallet predating the password change, Claude successfully ran btcrecover and decrypted the private keys, allowing cprkrn to transfer the five “lost” BTC to their current wallet.

This is a happy ending for one user who forgot their wallet password, and a massive windfall thanks to Bitcoin’s huge increase in value over the past few years. And while Anthropic’s Claude did not magically guess the right set of characters to unlock the file that held the private keys, it fixed one critical issue that cprkrn had missed, allowing them to finally regain their crypto. Before LLMs became popular, researchers spent at least half a year cracking open a Bitcoin wallet with a forgotten 20-character password. It was well worth the effort, though, as it contained an estimated $1.6 million in BTC back in 2024. Unfortunately, we cannot say the same for the poor guy who lost $780 million in Bitcoin after a 2025 court ruling prevented him from rummaging through the local dump for his discarded laptop holding 8,000 BTC.

Rewrite Bun in Rust by Jarred-Sumner · Pull Request #30412 · oven-sh/bun

github.com

Blog post with de­tails com­ing soon.

It passes Bun’s pre-existing test suite on all platforms (and fixes several memory leaks and flaky tests), the binary size shrinks by 3–8 MB, the benchmarks range from neutral to faster, and, most importantly, we now have compiler-assisted tools for catching and preventing memory bugs, which have cost the team an enormous amount of development and debugging time over the years.

The codebase is otherwise largely the same: the same architecture, the same data structures. Bun still uses few third-party libraries. No async Rust.

To try this, run:

bun upgrade --canary

Please do file is­sues if you run into any. If this thread gets crazy I will lock it.

Note:

Still some optimization work to do before this lands in a non-canary version.

Still some cleanup work to do (which will come in a se­ries of fol­low-up PRs)

Our Path Forward

blogs.cisco.com

This af­ter­noon, we shared the fol­low­ing email with Cisco em­ploy­ees.

Team,

Today we an­nounced our Q3 FY26 earn­ings with record rev­enue of $15.8 bil­lion, up 12 per­cent year over year, and dou­ble-digit top and bot­tom-line growth. The ELT and I could not be prouder of the growth you have all de­liv­ered for Cisco.

These re­sults are even more im­pres­sive given the com­plex en­vi­ron­ment we’re op­er­at­ing in — a rapidly chang­ing mar­ket, with in­ten­si­fy­ing com­pe­ti­tion, and a global short­age of com­po­nents crit­i­cal to sup­port our port­fo­lio and the AI build­out from our cus­tomers.

The com­pa­nies that will win in the AI era will be those with fo­cus, ur­gency, and the dis­ci­pline to con­tin­u­ously shift in­vest­ment to­ward the ar­eas where de­mand and long-term value cre­ation are strongest. I’m con­fi­dent Cisco will be one of those win­ners. This means mak­ing hard de­ci­sions — about where we in­vest, how we’re or­ga­nized, and how our cost struc­ture re­flects the op­por­tu­nity in front of us.

With this, we are mak­ing changes to­day that will re­sult in the re­duc­tion of our over­all work­force in Q4 by fewer than 4,000 jobs, rep­re­sent­ing less than 5 per­cent of our to­tal em­ployee base. Most no­ti­fi­ca­tions will be­gin on May 14 and con­tinue glob­ally in align­ment with ap­plic­a­ble lo­cal laws and reg­u­la­tions. For em­ploy­ees whose roles are im­pacted, lead­ers will share de­tails di­rectly — in­clud­ing tim­ing, avail­able re­sources, sup­port, and ben­e­fits in each coun­try. This will in­clude pro-rated pay­ment of FY26 bonuses to im­pacted em­ploy­ees. We will pro­vide sup­port in find­ing new op­por­tu­ni­ties, whether in­ter­nal or ex­ter­nal, through Cisco’s place­ment ser­vices — a pro­gram that has seen 75 per­cent of par­tic­i­pants dis­cover their next role. We are also com­mit­ted to con­tin­ued per­son­al­ized learn­ing and will pro­vide one year of ac­cess to all Cisco U courses and cer­ti­fi­ca­tions, cov­er­ing AI, Security, Networking, and more.

While we are re­duc­ing roles in some ar­eas, we are mak­ing clear, strate­gic in­vest­ments — par­tic­u­larly in sil­i­con, op­tics, se­cu­rity, and in our em­ploy­ees’ use of AI across the com­pany. These in­vest­ments are build­ing from a po­si­tion of strength — and fo­cus­ing on the tech­nolo­gies and busi­nesses that will ac­cel­er­ate our growth, de­liver un­matched in­no­va­tion to cus­tomers and part­ners, and de­fine our fu­ture.

To those leav­ing Cisco, thank you for your con­tri­bu­tion, your ded­i­ca­tion, and the mark you have made on this com­pany. We are deeply grate­ful and are com­mit­ted to han­dling this tran­si­tion with the care, clar­ity, and re­spect that de­fines our cul­ture.

For those who will con­tinue here, we will dis­cuss these changes and an­swer ques­tions at the Cisco Beat on May 21 at 8 a.m. PT.

We have im­por­tant, im­pact­ful, and con­se­quen­tial work ahead. Your fo­cus, re­silience, and lead­er­ship are vi­tal to our growth and rel­e­vance in FY27 and be­yond.

Chuck and the Executive Leadership Team

RTX 5090 + M4 MacBook Air: Can it Game?

scottjg.com

What if you could strap a full desk­top GPU to your MacBook Air? Turns out, you can.

Just a quick FTC re­quired note: When you buy through my links, I may earn a com­mis­sion.

Never tell me the odds

As much as I hate to ad­mit it, step one in most of my pro­jects now is to ask AI about it. Maybe it’ll tell me some­thing I don’t know.

Fortunately, bor­der­line-im­prac­ti­cal is kind of my thing.

What’s a Thunderbolt eGPU?

Ok, so the plan is to plug a big PC gaming GPU, an NVIDIA RTX 5090, into my M4 MacBook Air. To do that, we put the GPU in a Thunderbolt dock, which adapts PCIe to Thunderbolt, and plug the dock into a USB-C port.

Thunderbolt tun­nels PCIe over a USB-C ca­ble, so from the com­put­er’s per­spec­tive a Thunderbolt de­vice re­ally is a PCIe de­vice, not a USB one. You get 4 PCIe lanes at up to 40Gbps on Thunderbolt 4, with a small per­for­mance penalty for the tun­nel­ing. USB4 in­cludes the same PCIe tun­nel­ing as an op­tional fea­ture, so some non-Thun­der­bolt USB4 ports can do this too. You can use this to plug a GPU into a lap­top with a com­pat­i­ble port.

[Photo: Thunderbolt from the laptop plugs into the GPU dock; the GPU plugs into the monitor via DisplayPort. Shortly after this was taken, I broke this dock.]

From the com­put­er’s per­spec­tive, the de­vice looks more or less like a slightly slower PCIe de­vice, so you can usu­ally use the same dri­vers you’d nor­mally use for those de­vices. eG­PUs work pretty much out of the box on Linux and Windows. It’s even pos­si­ble to use one on a Raspberry Pi (albeit with Oculink, not Thunderbolt).

The first hur­dle is that ma­cOS does not ship with dri­vers for NVIDIA or AMD GPUs on Apple Silicon.

What about tiny­grad?

tiny­grad re­cently re­leased their own ma­cOS eGPU dri­vers. It’s a whole new AI stack with its own open source dri­ver pipeline for NVIDIA and AMD hard­ware.

Sadly, if your main ob­jec­tive is to run AI in­fer­ence or play games, tiny­grad prob­a­bly is­n’t the so­lu­tion you’re look­ing for. This video by YouTuber Alex Ziskind shows that us­ing an eGPU via tiny­grad for in­fer­ence is about 10 times slower than run­ning na­tive Metal in­fer­ence di­rectly on an M4 Pro with­out an eGPU. You can only use the tiny­grad eGPU dri­ver with the tiny­grad stack, not for any­thing else. It also has very lim­ited sup­port for dif­fer­ent AI mod­els.

Getting NVIDIA PTX code run­ning on the GPU is one thing. Writing a full gen­eral-pur­pose dis­play dri­ver that works with ar­bi­trary soft­ware is a sig­nif­i­cantly harder prob­lem. So for now, what can you ac­tu­ally do with an eGPU and a Mac?

The ex­ist­ing Linux dri­ver

Linux can run on Apple Silicon Macs now. Regrettably, at this time, the Linux ker­nel does not sup­port Thunderbolt on Apple Silicon (only in­ter­nal de­vices and USB3). But…

You can run Linux in a 64-bit ARM VM on a ma­cOS host. ma­cOS sup­ports Thunderbolt de­vices. Linux sup­ports NVIDIA GPUs. Let’s put the pieces to­gether and pass through the GPU into the Linux VM.

At a high level, we’re just go­ing to put the GPU in the Linux VM. The VM is the same ar­chi­tec­ture as the Mac host (arm64), so per­for­mance should be com­pa­ra­ble. Of course, the devil is in the de­tails.

There is no dri­ver for NVIDIA cards on ARM64 Windows. That’s why we use Linux.

For a quick video demo of the re­sult, take a look:

In the rest of the post, I’ll go through the long and wind­ing road of get­ting this to ac­tu­ally work. If you just want to see screen­shots and bench­marks, you can prob­a­bly skip to the bench­mark sec­tion.

Engineering PCI Passthrough on ma­cOS

PCI de­vice ba­sics

Let’s look at two things we need work­ing for the VM to talk to the PCI de­vice:

PCI BAR (Base Address Registers) - Each PCI de­vice com­mu­ni­cates through chunks of mem­ory that the com­puter can read and write to. There’s ba­si­cally a re­served re­gion of mem­ory on your com­puter for each de­vice. Those mem­ory re­gions have to be mir­rored into the VM for PCI passthrough to work.

DMA (Direct Memory Access) - This is how the de­vice can read and write in­for­ma­tion di­rectly in/​out of your com­put­er’s mem­ory. Instead of hav­ing the CPU burn cy­cles copy­ing data from the de­vice, the de­vice can copy the mem­ory au­to­mat­i­cally. For a GPU, it might be used to copy tex­tures di­rectly from the com­put­er’s mem­ory into its own video mem­ory.

Mapping PCI BARs

When QEMU starts a VM, it sets up the guest’s mem­ory lay­out. For nor­mal RAM, this boils down to a call to hvf_set_­phys_mem() in QEMU, which uses the Hypervisor.framework method:

hv_vm_map(mem, guest_physical_address, size, HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);

Next, we con­nect to the host PCIDriverKit dri­ver and ask to map the mem­ory from the PCI de­vice into our process. (I’m leav­ing the dri­ver-side code out for now, but it’s very sim­i­lar boil­er­plate.)

// map BAR0 into the current process and set `addr` to the location
// where it was mapped
mach_vm_address_t addr = 0;
mach_vm_size_t size = 0;
IOConnectMapMemory64(driverConnection, 0, mach_task_self(), &addr, &size, kIOMapAnywhere);

Ok, so then we have addr, which now points to the BAR0 mem­ory that we can ac­cess di­rectly in our process. At this point you can just read and write stuff to it, like any other piece of mem­ory.

volatile uint32_t *bar0 = (volatile uint32_t *)addr;
printf("BAR0[0] = %x\n", bar0[0]);
// this would output: BAR0[0] = 0x1b2000a1
// which is a device-specific constant that describes my RTX 5090
//
// BAR0[0] is the BOOT_0 register. The fields break down as:
//   arch      = 0x1b → GB200 GPU family
//   impl      = 0x2  → GB202 die (RTX 5090)
//   major_rev = 0xa  → stepping A
//   minor_rev = 0x1  → revision 1 (together: stepping A1)

Now we just make sure QEMU calls hvf_set_­phys_mem() for our de­vice mem­ory, and we can map that into the guest. When guest code touches that map­ping, it talks di­rectly to the GPU with min­i­mal host over­head. This is the best case for per­for­mance. At least, in the­ory.

In prac­tice, as soon as the VM touched the PCI BAR mem­ory, the host ker­nel crashed.

If you’ve never experienced this before, it’s disorienting. Your entire computer will hang, and because the trackpad feedback is controlled by software, suddenly the trackpad will no longer click. The dogs and cats in your neighborhood start howling. Pictures fall off the walls of your house. Eventually your computer will reboot, and you will be presented with the kernel panic report dialog.

Ok, so we can’t map de­vice mem­ory di­rectly, but we have other tricks up our sleeve. We can trap every ac­cess to the mem­ory, exit the guest back into QEMU, and have QEMU for­ward each read or write to the de­vice. That keeps be­hav­ior cor­rect, but it’s bru­tally slow. In many work­loads the pain is else­where. Most of the per­for­mance-sen­si­tive work is DMA, but some paths still care how fast you can push com­mands through the BAR.

I started prepar­ing a bug re­port for Apple and wrote a small re­pro­duc­tion (well, AI-assisted) to demon­strate the is­sue:
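The full reproduction isn’t embedded here, but a minimal sketch of the idea looks like this. This is my reconstruction, not the original code: it assumes the hypervisor entitlement is granted, and it substitutes an anonymous page for the real BAR pointer from IOConnectMapMemory64() so it stays self-contained.

// Minimal repro sketch (not the original code): create a VM, map a page
// into the guest with the same flags QEMU uses, and run a vCPU that
// loads from it. In the real repro, bar_addr came from
// IOConnectMapMemory64(); here an anonymous page stands in.
#include <Hypervisor/Hypervisor.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    hv_vm_create(NULL);

    const uint64_t CODE_IPA = 0x80000000, BAR_IPA = 0x90000000;

    // Guest code page: ldr w1, [x0]; hvc #0 (the hvc traps back to the host).
    void *code = mmap(NULL, 0x4000, PROT_READ | PROT_WRITE,
                      MAP_ANON | MAP_PRIVATE, -1, 0);
    uint32_t insns[] = { 0xb9400001, 0xd4000002 };
    memcpy(code, insns, sizeof insns);
    hv_vm_map(code, CODE_IPA, 0x4000, HV_MEMORY_READ | HV_MEMORY_EXEC);

    // Stand-in for the device BAR; swap in the IOConnectMapMemory64()
    // pointer to reproduce against real device memory.
    void *bar_addr = mmap(NULL, 0x4000, PROT_READ | PROT_WRITE,
                          MAP_ANON | MAP_PRIVATE, -1, 0);
    ((uint32_t *)bar_addr)[0] = 0x1b2000a1;
    hv_vm_map(bar_addr, BAR_IPA, 0x4000,
              HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);

    hv_vcpu_t vcpu;
    hv_vcpu_exit_t *exit_info;
    hv_vcpu_create(&vcpu, &exit_info, NULL);
    hv_vcpu_set_reg(vcpu, HV_REG_PC, CODE_IPA);
    hv_vcpu_set_reg(vcpu, HV_REG_X0, BAR_IPA);
    hv_vcpu_run(vcpu); // returns at the hvc

    uint64_t x1 = 0;
    hv_vcpu_get_reg(vcpu, HV_REG_X1, &x1);
    printf("guest read: 0x%llx\n", (unsigned long long)x1);
    return 0;
}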

In ~100 lines of C, you can spin up a VM, map the de­vice BAR into the guest, and run code that touches it. I’m still not sure whether that was more frus­trat­ing or en­cour­ag­ing, but that ver­sion ran with­out crash­ing, while QEMU was still pan­ick­ing the host. I was stumped for a while. Was it the guest page ta­bles? Was the BAR col­lid­ing with guest RAM in some sub­tle way? Why were the dogs and cats still howl­ing?

Eventually, in my des­per­a­tion, I asked an AI cod­ing as­sis­tant to com­pare my sam­ple and QEMU. It im­me­di­ately flagged that my map­ping used HV_MEMORY_READ | HV_MEMORY_WRITE while QEMU used HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC. Alas, bested again by AI. Not even silly blog pro­jects are safe any­more (mostly kid­ding).

The workaround in QEMU was a small change:
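The actual diff isn’t reproduced here, but the gist of it, as a sketch, is to stop requesting execute permission when the region being mapped is a device BAR rather than guest RAM (is_device_bar is an illustrative predicate, not a real QEMU field):

// Sketch of the workaround, not the actual QEMU patch: drop
// HV_MEMORY_EXEC when mapping device BARs into the guest.
hv_memory_flags_t flags = HV_MEMORY_READ | HV_MEMORY_WRITE;
if (!is_device_bar) {
    flags |= HV_MEMORY_EXEC; // guest RAM still needs to be executable
}
hv_vm_map(mem, guest_physical_address, size, flags);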

It works, but it’s not per­fect. ARM has sev­eral fla­vors of de­vice mem­ory (the Device-nGnRnE/nGnRE/nGRE/GRE fam­ily), with dif­fer­ent rules for whether writes can be gath­ered, re­ordered, or ac­knowl­edged early. It’s roughly anal­o­gous to x86 write-com­bin­ing on the most per­mis­sive end.

On real hard­ware, the prefetch­able BARs on my GPU are sup­posed to al­low gath­er­ing, which makes them sev­eral times faster for bulk writes than BAR0. But hv_vm_map() has no flags to con­fig­ure this, so every de­vice map­ping ends up as the strictest nGn­RnE. There’s noth­ing we can do about it, and it’s still ~30x faster than trap­ping every ac­cess, but it makes writ­ing the BAR ~10x slower than it would be nor­mally.

DMA

This was by far the sketchi­est part of the pro­ject. To start, let’s go over how this works on a PC run­ning Linux with VM PCI-passthrough, and then we’ll com­pare to our chal­lenge on ma­cOS.

When there’s just a computer talking to a device (no VM involved), they can talk directly. The PC tells the device “hey, I got that DMA buffer ready at this memory address,” and the device accesses that memory directly (aka DMA). Easy.

When a VM is involved, it’s more complicated. Guest physical addresses don’t correspond to host physical addresses; the VM’s RAM is just some chunk of host memory allocated wherever it was available. So if the guest tells the device “DMA into 0x00000000,” the device will happily scribble over whatever actually lives there on the host. The simplest fix is two things:

Pin all guest mem­ory so it can’t be paged out while the de­vice might touch it.

Put a hard­ware unit called the IOMMU be­tween the de­vice and host mem­ory. The hy­per­vi­sor pro­grams it with the guest → host trans­la­tions, and every DMA re­quest from the de­vice gets remapped on the fly.

[Diagram: a device DMA request targeting guest-physical 0x00000000 passes through the IOMMU, whose translation table maps the guest range 0x00000000–0x80000000 onto host-physical 0x20000000–0xA0000000; the access is translated to host address 0x20000000 and lands in host physical memory.]

This is a blunt so­lu­tion. The guest does­n’t have to do any­thing spe­cial, but the host has to keep all guest RAM pinned. There are more ad­vanced ap­proaches (like a vir­tual IOMMU), but they’re out­side the scope of this post.

DMA on Apple Silicon

On Apple Silicon, there’s a hard­ware unit called DART that’s more or less equiv­a­lent to an IOMMU. It’s not spe­cific to VMs; it also acts as a se­cu­rity bound­ary, pre­vent­ing de­vices from ac­cess­ing ar­bi­trary host mem­ory. Ideally we’d just use DART the same way Linux uses the IOMMU in the sim­ple case above.

Unfortunately, DART (at least via PCIDriverKit for Thunderbolt de­vices) has some hard con­straints:

~1.5GB mapping limit. A VM with 1.5GB of RAM can technically boot, but CUDA runs out of memory, and any modern game needs 8–16GB.

~64k map­ping cap. With many small DMA buffers the map­ping table fills up.

No ad­dress or align­ment con­trol. PCIDriverKit as­signs mapped ad­dresses for you. You can’t pick them, or spec­ify align­ment con­straints. This rules out a vir­tual IOMMU, which re­quires the guest to choose its own DMA ad­dresses.

The 1.5GB ceil­ing was the biggest ini­tial blocker. I tried a few workarounds: pre-map­ping ranges where I guessed DMAs might land (obviously did­n’t work), and us­ing a re­stricted-dma-pool de­vice tree at­tribute to force all DMA through a pre-al­lo­cated re­gion. The re­stricted pool ap­proach ac­tu­ally works for sim­pler de­vices, but GPU dri­vers are too weird to fit into that model. (If you’re cu­ri­ous about the specifics, there’s a qemu-de­vel thread where I dis­cuss it.)

ap­ple-dma-pci

I ended up de­sign­ing a new vir­tual PCI de­vice in QEMU called ap­ple-dma-pci. It gets in­serted into the VM along­side the passed-through GPU, and a com­pan­ion ker­nel dri­ver in the guest in­ter­cepts the NVIDIA dri­ver’s DMA map­ping calls. The so­lu­tion is, frankly, a very up­set­ting hack, but it works.

Because mappings are created on demand per DMA request and torn down when the buffer is freed, we reduce the amount of mapped memory needed at any given time: only the working set of live DMA buffers has to fit in the 1.5GB limit, as opposed to the entirety of guest memory.

The guest dri­ver is loaded early (via an /etc/modules-load.d/ con­fig), so it can find the GPU at probe time and swap in cus­tom DMA ops be­fore the NVIDIA dri­ver touches it:

static struct dma_map_ops apple_dma_ops = {
    .map_page   = apple_dma_map_page,
    .unmap_page = apple_dma_unmap_page,
    .map_sg     = apple_dma_map_sg,
    .unmap_sg   = apple_dma_unmap_sg,
    .alloc      = apple_dma_alloc,
    .free       = apple_dma_free,
};

static int apple_dma_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    struct pci_dev *gpu = pci_get_device(PCI_VENDOR_NVIDIA, PCI_ANY_ID, NULL);
    if (!gpu)
        return -ENODEV;

    set_dma_ops(&gpu->dev, &apple_dma_ops);
    pci_dev_put(gpu);
    return 0;
}
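For reference, the early-load part is just a module name in a file under /etc/modules-load.d/; a minimal sketch, assuming the guest module is called apple_dma_pci (the post doesn’t give the actual module or file name):

# /etc/modules-load.d/apple-dma-pci.conf (hypothetical file name)
apple_dma_pci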

Each of the custom ops is a thin wrapper. It marshals its arguments into a small request, writes it into the apple-dma-pci device’s virtual BAR, kicks a doorbell register, and waits for a reply. On the host side, QEMU picks up the request and hands it off to the PCIDriverKit driver, which performs the actual DART mapping; the resulting DMA address gets written back to guest memory. The NVIDIA driver shouldn’t know the difference.
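The wrapper bodies aren’t shown in the post, but a hedged sketch of what one might look like follows. The request layout, the offsets REQ_OFFSET and DOORBELL_OFFSET, and bar_base are all made-up names for illustration, not the real driver’s:

/* Hypothetical sketch of one wrapper op; the real request layout, BAR
 * offsets, and completion protocol are not given in the post. */
#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/stddef.h>

#define REQ_OFFSET      0x000 /* made-up BAR layout */
#define DOORBELL_OFFSET 0x100

struct apple_dma_req {
    u64 guest_phys; /* guest-physical address of the buffer to map */
    u64 size;
    u64 dma_addr;   /* filled in by the host with the DART-mapped address */
    u32 done;       /* completion flag written back by the host */
};

static void __iomem *bar_base; /* ioremap of the apple-dma-pci BAR */

static dma_addr_t apple_dma_map_page(struct device *dev, struct page *page,
                                     unsigned long offset, size_t size,
                                     enum dma_data_direction dir,
                                     unsigned long attrs)
{
    struct apple_dma_req req = {
        .guest_phys = page_to_phys(page) + offset,
        .size = size,
    };

    /* marshal the request into the virtual BAR... */
    memcpy_toio(bar_base + REQ_OFFSET, &req, sizeof(req));
    /* ...kick the doorbell; this MMIO write is what exits into QEMU... */
    writel(1, bar_base + DOORBELL_OFFSET);
    /* ...and spin until the host writes the reply back */
    while (!readl(bar_base + REQ_OFFSET +
                  offsetof(struct apple_dma_req, done)))
        cpu_relax();

    memcpy_fromio(&req, bar_base + REQ_OFFSET, sizeof(req));
    return req.dma_addr; /* the address the NVIDIA driver hands to the GPU */
}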

[Diagram: in the Linux guest, the NVIDIA driver calls dma_map_page(), which lands in the apple_dma_ops handler and becomes a write to the apple-dma-pci virtual PCI BAR, triggering a VM exit. On the macOS host, QEMU forwards the request via IOConnectCallMethod() to the PCIDriverKit driver, which programs the DART hardware through an IODMACommand; the mapped address is then returned back up the stack.]

NVIDIA align­ment quirk

It did­n’t im­me­di­ately work well, though. While the dri­ver ini­tially loaded and ini­tial­ized the card, I was greeted with this fun ker­nel log mes­sage as soon as I at­tempted to run a CUDA work­load:

[ 456.194883] NVRM: nvAssertOkFailedNoLog: Assertion failed: The offset passed is not valid [NV_ERR_INVALID_OFFSET] (0x00000037) returned from pRmApi->Alloc(pRmApi, device->session->handle, isSystemMemory ? device->handle : device->subhandle, &physHandle, isSystemMemory ? NV01_MEMORY_SYSTEM : NV01_MEMORY_LOCAL_USER, &memAllocParams, sizeof(memAllocParams)) @ nv_gpu_ops.c:4972
[ 456.371282] NVRM: GPU0 nvAssertFailedNoLog: Assertion failed: 0 == (physAddr & (RM_PAGE_SIZE_HUGE - 1)) @ mem_mgr_gm107.c:1312
[ 456.372020] NVRM: nvAssertOkFailedNoLog: Assertion failed: The offset passed is not valid [NV_ERR_INVALID_OFFSET] (0x00000037) returned from pRmApi->Alloc(pRmApi, device->session->handle, isSystemMemory ? device->handle : device->subhandle, &physHandle, isSystemMemory ? NV01_MEMORY_SYSTEM : NV01_MEMORY_LOCAL_USER, &memAllocParams, sizeof(memAllocParams)) @ nv_gpu_ops.c:4972

If you re­call the ear­lier DMA sec­tion, we noted that we can’t con­trol the align­ment of DMA-mapped buffers. Bummer. At this point, I dug into the dri­ver to try to see if there was some­thing sim­ple we could patch.

Here’s the rel­e­vant seg­ment:

if (type == UVM_RM_MEM_TYPE_SYS) {
    if (size >= UVM_PAGE_SIZE_2M)
        alloc_info.pageSize = UVM_PAGE_SIZE_2M;
    else if (size >= UVM_PAGE_SIZE_64K)
        alloc_info.pageSize = UVM_PAGE_SIZE_64K;

    status = uvm_rm_locked_call(nvUvmInterfaceMemoryAllocSys(gpu->rm_address_space, size, &gpu_va, &alloc_info));

    // TODO: Bug 5042223
    if (status == NV_ERR_NO_MEMORY && size >= UVM_PAGE_SIZE_64K) {
        UVM_ERR_PRINT("nvUvmInterfaceMemoryAllocSys alloc failed with big page size, retry with default page size\n");
        alloc_info.pageSize = UVM_PAGE_SIZE_DEFAULT;
        status = uvm_rm_locked_call(nvUvmInterfaceMemoryAllocSys(gpu->rm_address_space, size, &gpu_va, &alloc_info));
    }
}

By adding more debug logging to the module, I could see it was a 16MB allocation of type UVM_RM_MEM_TYPE_SYS, so it uses the largest (2MB) page size. Ironically, there is already a workaround here for when the allocation fails: it just retries with a smaller page size. It simply doesn’t account for the different error code returned for misalignment (NV_ERR_INVALID_OFFSET).
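One plausible fix, sketched here as an assumption rather than a confirmed patch, is to retry on the alignment error the same way the code already retries on NV_ERR_NO_MEMORY:

// Hedged sketch, not a confirmed patch: treat the alignment error like
// the out-of-memory case and fall back to the default page size.
if ((status == NV_ERR_NO_MEMORY || status == NV_ERR_INVALID_OFFSET) &&
    size >= UVM_PAGE_SIZE_64K) {
    alloc_info.pageSize = UVM_PAGE_SIZE_DEFAULT;
    status = uvm_rm_locked_call(nvUvmInterfaceMemoryAllocSys(gpu->rm_address_space, size, &gpu_va, &alloc_info));
}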

Microsoft BitLocker-protected drives can now be opened with just some files on a USB stick — YellowKey zero-day exploit demonstrates an apparent backdoor

www.tomshardware.com

There’s nothing more dangerous than a bored engineer with a screwdriver, and hell hath no fury like a security researcher scorned. Last month, security researcher Chaotic Eclipse (aka Nightmare-Eclipse) published two zero-day exploits, BlueHammer and RedSun, that made Windows Defender offer up system administrator privileges. They did this after their disclosure reports were allegedly dismissed by Microsoft’s security team, resulting in a vendetta of sorts. Eclipse has now done it again, posting two new zero-day exploits: the first, an extremely serious BitLocker exploit named YellowKey, grants full access to a locked drive. The second, GreenPlasma, doesn’t have a complete proof-of-concept (PoC), but it allegedly performs a local privilege escalation and gains system-level access. Given Eclipse’s track record, it’s a fair bet that it works as advertised.

YellowKey can be triggered simply by copying some files to a USB stick and rebooting into the Windows Recovery Environment. We tested this ourselves, and sure enough, not only does it work, it bears all the hallmarks of a backdoor, down to the exploit’s files disappearing from the USB stick after it’s used once.

To say that this is dangerous is an understatement. Not only is it an immediate concern, since BitLocker can no longer be trusted for encrypting drives, but the way the exploit executes and then makes its files disappear also raises very uncomfortable corporate and/or political questions. YellowKey also reportedly works in Windows Server 2022 and 2025, but not in Windows 10.

BitLocker pro­tects mil­lions of ma­chines world­wide across home, en­ter­prises, and gov­ern­ments, es­pe­cially as it’s en­abled by de­fault in Windows 11. As far as we can tell, a drive can’t be taken from ma­chine Alice and opened in ma­chine Bob be­cause the en­cryp­tion keys are in Alice’s TPM, but it’s not hard to just up and steal a lap­top, mini-PC, or even desk­top.

Eclipse notes that using a full TPM-and-PIN setup doesn’t help, as they apparently have a variant for that scenario that they haven’t published a PoC for. They also state that the vulnerability is well hidden and that they could have made some insane cash selling it, but “no amount of money will stand between me and my determination against Microsoft.”

As for GreenPlasma, it supposedly gets an attacker full system-level access (even higher than administrator) by manipulating the CTFMon process into placing a crafted memory section object (a slice of memory that can be shared between processes or mapped to a file) in any Windows Object Manager section the SYSTEM user has write access to, bypassing regular access controls.

From there, the exploit code can access regions of memory it’s not meant to and leverage that for any number of shenanigans, the most obvious being full system access. This is bad enough on a desktop system, where any program can gain full access, but it’s particularly bad in server environments, where any regular user could take control of the server and, by extension, everyone else’s data.

Meanwhile, as of this writ­ing, there is no of­fi­cial re­sponse from the com­pany about YellowKey or GreenPlasma. BlueHammer has al­ready been patched, and Chaotic claims that Microsoft silently patched RedSun, but there’s no of­fi­cial word on that ei­ther.
