10 interesting stories served every morning and every evening.




1 2,564 shares, 452 trendiness

Hacker News Guidelines


What to Submit

On-Topic: Anything that good hack­ers would find in­ter­est­ing. That in­cludes more than hack­ing and star­tups. If you had to re­duce it to a sen­tence, the an­swer might be: any­thing that grat­i­fies one’s in­tel­lec­tual cu­rios­ity.

Off-Topic: Most sto­ries about pol­i­tics, or crime, or sports, or celebri­ties, un­less they’re ev­i­dence of some in­ter­est­ing new phe­nom­e­non. If they’d cover it on TV news, it’s prob­a­bly off-topic.

Please don’t do things to make ti­tles stand out, like us­ing up­per­case or ex­cla­ma­tion points, or say­ing how great an ar­ti­cle is.

Please sub­mit the orig­i­nal source. If a post re­ports on some­thing found on an­other site, sub­mit the lat­ter.

Please don’t use HN pri­mar­ily for pro­mo­tion. It’s ok to post your own stuff part of the time, but the pri­mary use of the site should be for cu­rios­ity.

If the ti­tle in­cludes the name of the site, please take it out, be­cause the site name will be dis­played af­ter the link.

If the title contains a gratuitous number or number + adjective, we'd appreciate it if you'd crop it. E.g. translate “10 Ways To Do X” to “How To Do X”, and “14 Amazing Ys” to “Ys”. Exception: when the number is meaningful, e.g. “The 5 Platonic Solids”.

Otherwise please use the orig­i­nal ti­tle, un­less it is mis­lead­ing or linkbait; don’t ed­i­to­ri­al­ize.

If you sub­mit a video or pdf, please warn us by ap­pend­ing [video] or [pdf] to the ti­tle.

Please don't post on HN to ask or tell us something. Send it to hn@ycombinator.com.

Please don’t delete and re­post. Deletion is for things that should­n’t have been sub­mit­ted in the first place.

Don’t so­licit up­votes, com­ments, or sub­mis­sions. Users should vote and com­ment when they run across some­thing they per­son­ally find in­ter­est­ing—not for pro­mo­tion.

Comments should get more thought­ful and sub­stan­tive, not less, as a topic gets more di­vi­sive.

When disagreeing, please reply to the argument instead of calling names. “That is idiotic; 1 + 1 is 2, not 3” can be shortened to “1 + 1 is 2, not 3.”

Don’t be cur­mud­geonly. Thoughtful crit­i­cism is fine, but please don’t be rigidly or gener­i­cally neg­a­tive.

Don’t post gen­er­ated com­ments or AI-edited com­ments. HN is for con­ver­sa­tion be­tween hu­mans.

Please don’t ful­mi­nate. Please don’t sneer, in­clud­ing at the rest of the com­mu­nity.

Please re­spond to the strongest plau­si­ble in­ter­pre­ta­tion of what some­one says, not a weaker one that’s eas­ier to crit­i­cize. Assume good faith.

Please don’t post shal­low dis­missals, es­pe­cially of other peo­ple’s work. A good crit­i­cal com­ment teaches us some­thing.

Please don’t use Hacker News for po­lit­i­cal or ide­o­log­i­cal bat­tle. It tram­ples cu­rios­ity.

Please don't comment on whether someone read an article. “Did you even read the article? It mentions that” can be shortened to “The article mentions that”.

Please don’t pick the most provoca­tive thing in an ar­ti­cle or post to com­plain about in the thread. Find some­thing in­ter­est­ing to re­spond to in­stead.

Throwaway ac­counts are ok for sen­si­tive in­for­ma­tion, but please don’t cre­ate ac­counts rou­tinely. HN is a com­mu­nity—users should have an iden­tity that oth­ers can re­late to.

Please don’t use up­per­case for em­pha­sis. Instead, put *asterisks* around it and it will get ital­i­cized. More for­mat­ting info here.

Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

Please don’t com­plain about tan­gen­tial an­noy­ances—e.g. ar­ti­cle or web­site for­mats, name col­li­sions, or back-but­ton break­age. They’re too com­mon to be in­ter­est­ing.

Please don’t com­ment about the vot­ing on com­ments. It never does any good, and it makes bor­ing read­ing.

Please don't post comments saying that HN is turning into Reddit. It's a semi-noob illusion, as old as the hills.

...

Read the original on news.ycombinator.com »

2 646 shares, 30 trendiness

Every minute you aren’t running 69 agents, you are falling behind

Today we should ramp down the rhetoric. I thought nobody would take “three minutes to escape the perpetual underclass” or “you are worth $0.003/hr” seriously. But it looks like some people do, and you shouldn't.

Social media has been extremely toxic for the last couple months. It's targeting you with fear and anxiety. If you don't use this new stupid AI thing you will fall behind. If you haven't totally updated your workflow you are worth 0. There's people who built billion-dollar companies by orchestrating 37 agents this morning AND YOU JUST SAT THERE AND ATE BREAKFAST LIKE A PLEB!

This is all complete nonsense. AI is not a magical game changer, it's simply the continuation of the exponential of progress we have been on for a long time. It's a win in some areas, a loss in others, but overall a win and a cool tool to use. And it will continue to improve, but it won't “go recursive” or whatever the claim is. It's always been recursive. You see things like autoresearch and it's cool. But it's not magic, it's search. People see AI and they attribute some sci-fi thing to it when it's just search and optimization. Always has been, and if you paid attention in CS class, you know the limits of those things.

That said, if you have a job where you cre­ate com­plex­ity for oth­ers, you will be found out. The days of rent seek­ers are com­ing to an end. But not be­cause there will be no more rent seek­ing, it’s be­cause rent seek­ing is a 0 sum game and you will lose at it to big­ger play­ers. If you have a job like that, or work at a com­pany like that, the sooner you quit the bet­ter your out­come will be. This is the real dri­ver of the lay­offs, the big play­ers con­sol­i­dat­ing the rent seek­ing to them. They just say it’s AI cause that makes the stock price go up.

The trick is not to play zero sum games. This is what I have been say­ing the whole time. Go cre­ate value for oth­ers and don’t worry about the re­turns. If you cre­ate more value than you con­sume, you are wel­come in any well op­er­at­ing com­mu­nity. Not in­fi­nite, not al­ways needs more, just more than you con­sume. That’s enough, and avoid peo­ple or com­par­i­son traps that tell you oth­er­wise. The world is not a Red Queen’s race.

This post will get way less trac­tion than the doom ones, but it’s telling you the way out.

...

Read the original on geohot.github.io »

3 493 shares, 54 trendiness

The 9-Year Journey to Fix Time in JavaScript

Welcome to our blog! I’m Jason Williams, a se­nior soft­ware en­gi­neer on Bloomberg’s JavaScript Infrastructure and Terminal Experience team. Today the Bloomberg Terminal runs a lot of JavaScript. Our team pro­vides a JavaScript en­vi­ron­ment to en­gi­neers across the com­pany.

Bloomberg may not be the first com­pany you think of when dis­cussing JavaScript. It cer­tainly was­n’t for me in 2018 be­fore I worked here. Back then, I at­tended my first TC39 meet­ing in London, only to meet some Bloomberg en­gi­neers who were there dis­cussing Realms, WebAssembly, Class Fields, and other top­ics. The com­pany has now been in­volved with JavaScript stan­dard­iza­tion for nu­mer­ous years, in­clud­ing part­ner­ing with Igalia. Some of the pro­pos­als we have as­sisted in­clude Arrow Functions, Async Await, BigInt, Class Fields, Promise.allSettled, Promise.withResolvers, WeakRefs, stan­dard­iz­ing Source Maps, and more!

The first pro­posal I worked on was Promise.allSettled, which was ful­fill­ing. After that fin­ished, I de­cided to help out on a pro­posal around dates and times, called Temporal.

JavaScript is unique in that it runs in all browsers. There is no single “owner,” so you can't just make a change in isolation and expect it to apply everywhere. You need buy-in from all parties. Evolution happens through TC39, the Technical Committee responsible for ECMAScript.

In 2018, when I first looked at Temporal, it was at Stage 1. The TC39 Committee was con­vinced the prob­lem was real. It was a rad­i­cal pro­posal to bring a whole new li­brary for Dates and Times into JavaScript. It was:

* Providing dif­fer­ent DateTime Types (instead of a sin­gle API)

But how did we get here? Why was Date such a pain point? For that, we need to take a step back.

In 1995, Brendan Eich was tasked with a 10-day sprint to cre­ate Mocha (which would later be­come JavaScript). Under in­tense time pres­sure, many de­sign de­ci­sions were prag­matic. One of them was to port Java’s Date im­ple­men­ta­tion di­rectly. As Brendan later ex­plained:

It was a straight port by Ken Smith (the only code in “Mocha” I didn't write) of Java's Date code from Java to C.

At the time, this made sense. Java was as­cen­dant and JavaScript was be­ing framed as its light­weight com­pan­ion. Internally, the phi­los­o­phy was even re­ferred to as MILLJ: Make It Look Like Java.

Brendan also noted that chang­ing the API would have been po­lit­i­cally dif­fi­cult:

Changing it when everyone expected Java to be the “big brother” language would make confusion and bugs; Sun would have objected too.

In that mo­ment, con­sis­tency with Java was more im­por­tant than fun­da­men­tally re­think­ing the time model. It was a prag­matic trade-off. The Web was young, and most ap­pli­ca­tions mak­ing use of JavaScript would be sim­ple, at least, to be­gin with.

By the 2010s, JavaScript was pow­er­ing bank­ing sys­tems, trad­ing ter­mi­nals, col­lab­o­ra­tion tools, and other com­plex sys­tems run­ning in every time zone on earth. Date was be­com­ing more of a pain point for de­vel­op­ers.

Developers would of­ten write helper func­tions that ac­ci­dently mu­tated the orig­i­nal Date ob­ject in place when they in­tended to re­turn a new one:

const date = new Date("2026-02-25T00:00:00Z");

console.log(date.toISOString());
// "2026-02-25T00:00:00.000Z"

function addOneDay(d) {
  // oops! This is mutating the date
  d.setDate(d.getDate() + 1);
  return d;
}

addOneDay(date);

console.log(date.toISOString());
// "2026-02-26T00:00:00.000Z"

const billingDate = new Date("Sat Jan 31 2026");
billingDate.setMonth(billingDate.getMonth() + 1);
// Expected: Feb 28
// Actual: Mar 03

Sometimes peo­ple want to get the last day of the month and fall into traps like this one, where they bump the month by one, but the days re­main the same. Date does not con­strain in­valid cal­en­dar re­sults back into a valid date. Instead, it silently rolls over­flow into the next month.
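The end-of-month trap can be sidestepped even with legacy Date by asking for “day 0” of the month after the one you want. A minimal sketch (the endOfNextMonth helper here is hypothetical, not from the article):

```javascript
// Hypothetical helper: last day of the month after d, without overflow.
// Day 0 of month m normalizes to the last day of month m - 1.
function endOfNextMonth(d) {
  return new Date(d.getFullYear(), d.getMonth() + 2, 0);
}

const jan31 = new Date(2026, 0, 31); // Jan 31, 2026 (months are 0-based)

// The naive approach silently rolls over into March:
const naive = new Date(jan31);
naive.setMonth(naive.getMonth() + 1);
console.log(naive.getMonth()); // 2 (March), not February

// The day-0 trick lands on the real month end:
console.log(endOfNextMonth(jan31).toDateString()); // "Sat Feb 28 2026"
```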

new Date("2026-06-25 15:15:00").toISOString();
// Potential return values:
// - local TimeZone
// - Invalid Date RangeError
// - UTC

In this example, the string is similar, but not identical, to ISO 8601. Historically, browser behavior for “almost ISO” strings was undefined by the specification. Some would treat it as local time, others as UTC, and one would throw entirely as invalid input.
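One defensive habit follows directly from this: never hand Date an “almost ISO” string; normalize it to strict ISO 8601 with an explicit offset first, so every engine agrees. A small sketch (the toStrictUtcIso helper is hypothetical, and assumes the input represents UTC):

```javascript
// Hypothetical normalizer: turn "YYYY-MM-DD hh:mm:ss" into strict ISO 8601 UTC.
// The "T" separator plus a trailing "Z" removes the engine-dependent guesswork.
function toStrictUtcIso(almostIso) {
  return almostIso.replace(" ", "T") + "Z";
}

const input = "2026-06-25 15:15:00";         // parsing this directly is engine-dependent
const strict = toStrictUtcIso(input);        // "2026-06-25T15:15:00Z"
console.log(new Date(strict).toISOString()); // "2026-06-25T15:15:00.000Z"
```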

There’s more, much more, but the point is that Date has been a pain point for JavaScript de­vel­op­ers for the past three decades.

The Web ecosys­tem had no choice but to patch Date’s short­com­ings with li­braries. You can see the sheer rise of date­time li­braries be­low. Today, they add up to more than 100 mil­lion down­loads a week.

Leading the charge was Moment.js, which boasts an ex­pres­sive API, pow­er­ful pars­ing ca­pa­bil­i­ties, and much-needed im­mutabil­ity. Created in 2011, it quickly be­came the de facto stan­dard for han­dling date and time ma­nip­u­la­tions in JavaScript. So surely the prob­lem is solved? Everyone should just grab a copy of this and call it a day.

The wide­spread adop­tion of mo­ment.js (plus other sim­i­lar li­braries) came with its own set of prob­lems. Adding the li­brary meant in­creas­ing bun­dle size, due to the fact that it needed to be shipped with its own set of lo­cale in­for­ma­tion plus time zone data from the time zone data­base.

Despite the use of mini­fiers, com­pil­ers, and sta­tic analy­sis tools, all of this ex­tra data could­n’t be tree-shaken away, be­cause most de­vel­op­ers don’t know ahead of time which lo­cales or time zones they’ll need. In or­der to play it safe, the ma­jor­ity of users took all of the data whole­sale and shipped it to their users.

Maggie Johnson-Pint, who had been a main­tainer of Moment.js for quite a few years (alongside oth­ers), was no stranger to re­quests to deal with the pack­age size.

We were at the point with mo­ment that it was more main­te­nance to keep up with mod­ules, web­pack, peo­ple want­ing every­thing im­mutable be­cause React, etc than any net new func­tion­al­ity

And peo­ple never stop talk­ing about the size of course.

In 2017, Maggie decided it was time to standardise dates and times with a “Temporal Proposal” for the TC39 plenary that year. It was met with great enthusiasm, leading it to be advanced to Stage 1.

Stage 1 was a big milestone, but it was still far from the finish line. After the initial burst of energy, progress naturally slowed. Maggie and Matt Johnson-Pint were leading the effort alongside Brian Terlson, while simultaneously balancing other responsibilities inside Microsoft. Temporal was still early enough that much of the immediate work was unglamorous: requirements gathering, clarifying semantics, and translating the ecosystem's “pain” into a design that could actually ship.

We run JavaScript at scale across the Terminal, using underlying runtimes and engines such as Chromium, Node.js and SpiderMonkey. Our users, and the financial markets in which they invest, span every time zone on earth. We pass timestamps constantly: between services, into storage, into the UI, and across systems that all have to agree on what “now” means, even when governments change DST rules with very little notice.

On top of that, we had re­quire­ments that the built-in Date model sim­ply was­n’t de­signed for:

* A user-con­fig­ured time zone that is not the ma­chine’s time zone (and can change per re­quest).

* Higher-precision time­stamps (nanoseconds, at a min­i­mum), with­out duct-tap­ing ex­tra fields onto ad-hoc wrap­pers for­ever.

In parallel with Maggie bringing Temporal to TC39, Bloomberg engineer Andrew Paprocki was talking with Igalia about making time zones configurable in V8. Specifically, they discussed introducing a supported indirection layer so an embedder could control the “perceived” time zone instead of relying on the OS default. In that conversation, Daniel Ehrenberg (then working at Igalia) pointed Andrew at the early Temporal work because it looked strikingly similar to Bloomberg's existing value-semantic datetime types.

That ex­change be­came an early bridge be­tween Bloomberg’s pro­duc­tion needs, Igalia’s browser-and-stan­dards ex­per­tise, and the emerg­ing di­rec­tion of Temporal. Over the years that fol­lowed, Bloomberg part­nered with Igalia (including via sus­tained fund­ing sup­port) and con­tributed en­gi­neer­ing time di­rectly into mov­ing Temporal for­ward, un­til it even­tu­ally be­came some­thing the whole ecosys­tem could ship. Andrew was look­ing for some vol­un­teers within Bloomberg who could help push Temporal for­ward and Philipp Dunkel vol­un­teered to be a spec cham­pion. Alongside Andrew, he helped per­suade Bloomberg to in­vest in mak­ing Temporal real, in­clud­ing a deeper part­ner­ship with Igalia. That sup­port brought in Philip Chimento and Ujjwal Sharma as full time Temporal cham­pi­ons, adding the day-to-day fo­cus the pro­posal needed to keep mov­ing ahead.

Shane Carr joined the Champions team, rep­re­sent­ing Google’s Internationalization team. He pro­vided the fo­cus we needed on in­ter­na­tion­al­iza­tion top­ics such as cal­en­dars, and also served as the glue be­tween the stan­dard­iza­tion process and the voice of users who ex­pe­ri­enced pain points with tools re­lated to JavaScript’s in­ter­na­tion­al­iza­tion API (Intl), such as for­mat­ting, time zones, and cal­en­dars.

Finally, we had Justin Grant, who joined the Temporal champions in 2020 as a volunteer. After 10 years at three different startups that managed time-stamped data, he'd seen engineering teams waste thousands of hours fixing mistakes with dates, times, and time zones. Justin's experience grounded us in real-world use cases, helped us anticipate mistakes that developers would make, and ensured that Temporal shipped a Temporal.ZonedDateTime API to help make DST bugs a thing of the past.

Other hon­or­able men­tions not on this list in­clude Daniel Ehrenberg, Adam Shaw, and Kevin Ness.

Temporal is a top-level namespace object (similar to Math or Intl) that exists in the global scope. Underneath it are “types” that exist in the form of constructors. It's expected that developers will reach for the type they need when using the API, such as Temporal.PlainDateTime, for example.

Here are the types Temporal comes packed with:

* Temporal.Instant
* Temporal.ZonedDateTime
* Temporal.PlainDateTime
* Temporal.PlainDate
* Temporal.PlainTime
* Temporal.PlainYearMonth
* Temporal.PlainMonthDay
* Temporal.Duration

If you don't know which Temporal type you need, start with Temporal.ZonedDateTime. It is the closest conceptual replacement for Date, but without the “footguns.”

* An exact moment in time (internally, milliseconds since epoch)

* A time zone

* A calendar

* All as an immutable value

// Legacy Date: the current moment, in the system time zone
const now = new Date();

// Temporal: the current moment, as a ZonedDateTime
const now = Temporal.Now.zonedDateTimeISO();

The above ex­am­ple uses the Now name­space, which gives you the type al­ready set to your cur­rent lo­cal time and time zone.

This type is op­ti­mized for DateTimes that may re­quire some date­time arith­metic in which the day­light sav­ing tran­si­tion could po­ten­tially cause prob­lems. ZonedDateTime can take those tran­si­tions into ac­count when do­ing any ad­di­tion or sub­trac­tion of time (see ex­am­ple be­low).

// London DST starts: 2026-03-29 01:00 -> 02:00
const zdt = Temporal.ZonedDateTime.from(
  "2026-03-29T00:30:00+00:00[Europe/London]",
);

console.log(zdt.toString());
// "2026-03-29T00:30:00+00:00[Europe/London]"

const plus1h = zdt.add({ hours: 1 });

console.log(plus1h.toString());
// "2026-03-29T02:30:00+01:00[Europe/London]" (01:30 doesn't exist)

In this ex­am­ple, we don’t land at 01:30 but 02:30 in­stead, be­cause 01:30 does­n’t ex­ist at that spe­cific point in time.

Temporal.Instant is an exact moment in time: it has no time zone, no daylight saving, no calendar. It represents elapsed time since midnight on January 1, 1970 (the Unix epoch). Unlike Date, which has a very similar data model, Instant is measured in nanoseconds rather than milliseconds. This decision was taken by the champions because even though the browser has some coarsening for security purposes, developers still need to deal with nanosecond-based timestamps that could have been generated from elsewhere.

A typical example of Temporal.Instant usage looks like this:

// One exact moment in time
const instant = Temporal.Instant.from("2026-02-25T15:15:00Z");

instant.toString();
// "2026-02-25T15:15:00Z"

instant.toZonedDateTimeISO("Europe/London").toString();
// "2026-02-25T15:15:00+00:00[Europe/London]"

instant.toZonedDateTimeISO("America/New_York").toString();
// "2026-02-25T10:15:00-05:00[America/New_York]"

The Instant can be created and then converted to different “zoned” DateTimes (more on that later). You would most likely store the Instant (in your backing storage of choice) and then use the different TimeZone conversions to display the same time to users within their time zones.
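The same store-once, render-per-zone pattern can be sketched today with standard APIs: keep a zone-free UTC epoch timestamp, and format it with Intl at display time. The timestamp and zone names below reuse the article's example; the renderIn helper is hypothetical:

```javascript
// Store one exact moment as epoch milliseconds (zone-free)...
const storedMs = Date.parse("2026-02-25T15:15:00Z");

// ...and render it in each user's time zone only at display time.
function renderIn(timeZone, ms) {
  return new Intl.DateTimeFormat("en-GB", {
    timeZone,
    dateStyle: "short",
    timeStyle: "short",
  }).format(ms);
}

console.log(renderIn("Europe/London", storedMs));    // 15:15 local time
console.log(renderIn("America/New_York", storedMs)); // 10:15 local time
```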

We also have a family of plain types. These are what we would call “wall time,” because if you imagine an analogue clock on the wall, it doesn't check for daylight saving or time zones. It's just a plain time (moving the clock forward by an hour would advance it an hour on the wall, even if you did this during a Daylight Saving transition).

We have sev­eral types with pro­gres­sively less in­for­ma­tion. This is use­ful, as you can choose the type you want to rep­re­sent and don’t need to worry about run­ning cal­cu­la­tions on any other un-needed data (such as cal­cu­lat­ing the time if you’re only in­ter­ested in dis­play­ing the date).

These types are also use­ful if you only plan to dis­play the value to the user and do not need to per­form any date/​time arith­metic, such as mov­ing for­wards or back­wards by weeks (you will need a cal­en­dar) or hours (you could end up cross­ing a day­light sav­ing bound­ary). The lim­i­ta­tions of some of these types are also what make them so use­ful. It’s hard for you to trip up and en­counter un­ex­pected bugs.

const date = Temporal.PlainDate.from({ year: 2026, month: 3, day: 11 }); // => 2026-03-11

date.year; // => 2026
date.inLeapYear; // => false
date.toString(); // => '2026-03-11'

Temporal sup­ports cal­en­dars. Browsers and run­times ship with a set of built-in cal­en­dars, which lets you rep­re­sent, dis­play, and do arith­metic in a user’s pre­ferred cal­en­dar sys­tem, not just for­mat a Gregorian date dif­fer­ently.

Because Temporal ob­jects are cal­en­dar-aware, op­er­a­tions like add one month” are per­formed in the rules of that cal­en­dar, so you land on the ex­pected re­sult. In the ex­am­ple be­low, we add one Hebrew month to a Hebrew cal­en­dar date:

const today = Temporal.PlainDate.from("2026-03-11[u-ca=hebrew]");

today.toLocaleString("en", { calendar: "hebrew" });
// '22 Adar 5786'

const nextMonth = today.add({ months: 1 });

nextMonth.toLocaleString("en", { calendar: "hebrew" });
// '22 Nisan 5786'

With legacy Date, there’s no way to ex­press add one Hebrew month” as a first-class op­er­a­tion. You can for­mat us­ing a dif­fer­ent cal­en­dar, but any arith­metic you do is still Gregorian month arith­metic un­der the hood.
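The formatting-only half of that story can be seen with Intl alone: you can display a Gregorian date in the Hebrew calendar, but no calendar-aware arithmetic comes with it. A small sketch (exact output strings may vary slightly between ICU versions):

```javascript
// Display-only: format 2026-03-11 (Gregorian) in the Hebrew calendar.
const d = new Date(Date.UTC(2026, 2, 11));

const formatted = new Intl.DateTimeFormat("en-u-ca-hebrew", {
  dateStyle: "long",
  timeZone: "UTC",
}).format(d);

// Something like "22 Adar 5786"; any arithmetic on `d` is still Gregorian.
console.log(formatted);
```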

...

Read the original on bloomberg.github.io »

4 388 shares, 17 trendiness

Devlog

This page con­tains a cu­rated list of re­cent changes to main branch Zig.

Also avail­able as an RSS feed.

This page con­tains en­tries for the year 2026. Other years are avail­able in the Devlog archive page.


Type res­o­lu­tion re­design, with lan­guage changes to taste

Today, I merged a 30,000 line PR after two (arguably three) months of work. The goal of this branch was to rework the Zig compiler's internal type resolution logic to a more logical and straightforward design. It's a quite exciting change for me personally, because it allowed me to clean up a bunch of the compiler guts, but it also has some nice user-facing changes which you might be interested in!

For one thing, the Zig compiler is now lazier about analyzing the fields of types: if the type is never initialized, then there's no need for Zig to care what that type “looks like”. This is important when you have a type which doubles as a namespace, a common pattern in modern Zig. For instance, when using std.Io.Writer, you don't want the compiler to also pull in a bunch of code in std.Io! Here's a straightforward example:

const Foo = struct {
    bad_field: @compileError("i am an evil field, muahaha"),

    const something = 123;
};

comptime {
    _ = Foo.something; // `Foo` only used as a namespace
}

Previously, this code emitted a compile error. Now, it compiles just fine, because Zig never actually looks at the @compileError call.

Another improvement we've made is in the “dependency loop” experience. Anyone who has encountered a dependency loop compile error in Zig before knows that the error messages for them are entirely unhelpful—but that's now changed! If you encounter one (which is also a bit less likely now than it used to be), you'll get a detailed error message telling you exactly where the dependency loop comes from. Check it out:

const Foo = struct { inner: Bar };
const Bar = struct { x: u32 align(@alignOf(Foo)) };

comptime {
    _ = @as(Foo, undefined);
}

$ zig build-obj repro.zig
error: dependency loop with length 2
repro.zig:1:29: note: type 'repro.Foo' depends on type 'repro.Bar' for field declared here
const Foo = struct { inner: Bar };
repro.zig:2:44: note: type 'repro.Bar' depends on type 'repro.Foo' for alignment query here
const Bar = struct { x: u32 align(@alignOf(Foo)) };
note: eliminate any one of these dependencies to break the loop

Of course, dependency loops can get much more complicated than this, but in every case I've tested, the error message has had enough information to easily see what's going on.

Additionally, this PR made big improvements to the Zig compiler's “incremental compilation” feature. The short version is that it fixed a huge amount of known bugs, but in particular, “over-analysis” problems (where an incremental update did more work than should be necessary, sometimes by a big margin) should finally be all but eliminated—making incremental compilation significantly faster in many cases! If you've not already, consider trying out incremental compilation: it really is a lovely development experience. This is for sure the improvement which excites me the most, and a large part of what motivated this change to begin with.

There are a bunch more changes that come with this PR—dozens of bugfixes, some small language changes (mostly fairly niche), and compiler performance improvements. It's far too much to list here, but if you're interested in reading more about it, you can take a look at the PR on Codeberg—and of course, if you encounter any bugs, please do open an issue. Happy hacking!

As we approach the end of the 0.16.0 release cycle, Jacob has been hard at work, bringing std.Io.Evented up to speed with all the latest API changes. Both of these are based on userspace stack switching, sometimes called “fibers”, “stackful coroutines”, or “green threads”. They are now available to tinker with, by constructing one's application using std.Io.Evented. They should be considered experimental because there is important followup work to be done before they can be used reliably and robustly:

* diagnose the unexpected performance degradation when using IoMode.evented for the compiler

* a builtin function to tell you the maximum stack size of a given function, to make these implementations practical to use when overcommit is off

With those caveats in mind, it seems we are indeed reaching the Promised Land, where Zig code can have Io implementations effortlessly swapped out:

const std = @import("std");

pub fn main(init: std.process.Init.Minimal) !void {
    var debug_allocator: std.heap.DebugAllocator(.{}) = .init;
    const gpa = debug_allocator.allocator();
    var threaded: std.Io.Threaded = .init(gpa, .{
        .argv0 = .init(init.args),
        .environ = init.environ,
    });
    defer threaded.deinit();
    const io = threaded.io();
    return app(io);
}

fn app(io: std.Io) !void {
    try std.Io.File.stdout().writeStreamingAll(io, "Hello, World!\n");
}

$ strace ./hello_threaded
execve("./hello_threaded", ["./hello_threaded"], 0x7ffc1da88b20 /* 98 vars */) = 0
mmap(NULL, 262207, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f583f338000
arch_prctl(ARCH_SET_FS, 0x7f583f378018) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
prlimit64(0, RLIMIT_STACK, {rlim_cur=16384*1024, rlim_max=RLIM64_INFINITY}, NULL) = 0
sigaltstack({ss_sp=0x7f583f338000, ss_flags=0, ss_size=262144}, NULL) = 0
sched_getaffinity(0, 128, [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31]) = 8
rt_sigaction(SIGIO, {sa_handler=0x1019d90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=0x1019d90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
writev(1, [{iov_base="Hello, World!\n", iov_len=14}], 1Hello, World!
) = 14
rt_sigaction(SIGIO, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, NULL, 8) = 0
exit_group(0) = ?
+++ exited with 0 +++

Swapping out only the I/O implementation:

const std = @import("std");

pub fn main(init: std.process.Init.Minimal) !void {
    var debug_allocator: std.heap.DebugAllocator(.{}) = .init;
    const gpa = debug_allocator.allocator();
    var evented: std.Io.Evented = undefined;
    try evented.init(gpa, .{
        .argv0 = .init(init.args),
        .environ = init.environ,
        .backing_allocator_needs_mutex = false,
    });
    defer evented.deinit();
    const io = evented.io();
    return app(io);
}

fn app(io: std.Io) !void {
    try std.Io.File.stdout().writeStreamingAll(io, "Hello, World!\n");
}

execve("./hello_evented", ["./hello_evented"], 0x7fff368894f0 /* 98 vars */) = 0
mmap(NULL, 262215, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4c28000
arch_prctl(ARCH_SET_FS, 0x7f70a4c68020) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
prlimit64(0, RLIMIT_STACK, {rlim_cur=16384*1024, rlim_max=RLIM64_INFINITY}, NULL) = 0
sigaltstack({ss_sp=0x7f70a4c28008, ss_flags=0, ss_size=262144}, NULL) = 0
sched_getaffinity(0, 128, [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31]) = 8
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4c27000
mmap(0x7f70a4c28000, 548864, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4ba1000
io_uring_setup(64, {flags=IORING_SETUP_COOP_TASKRUN|IORING_SETUP_SINGLE_ISSUER, sq_thread_cpu=0, sq_thread_idle=1000, sq_entries=64, cq_entries=128, features=IORING_FEAT_SINGLE_MMAP|IORING_FEAT_NODROP|IORING_FEAT_SUBMIT_STABLE|IORING_FEAT_RW_CUR_POS|IORING_FEAT_CUR_PERSONALITY|IORING_FEAT_FAST_POLL|IORING_FEAT_POLL_32BITS|IORING_FEAT_SQPOLL_NONFIXED|IORING_FEAT_EXT_ARG|IORING_FEAT_NATIVE_WORKERS|IORING_FEAT_RSRC_TAGS|IORING_FEAT_CQE_SKIP|IORING_FEAT_LINKED_FILE|IORING_FEAT_REG_REG_RING|IORING_FEAT_RECVSEND_BUNDLE|IORING_FEAT_MIN_TIMEOUT|IORING_FEAT_RW_ATTR|IORING_FEAT_NO_IOWAIT, sq_off={head=0, tail=4, ring_mask=16, ring_entries=24, flags=36, dropped=32, array=2112, user_addr=0}, cq_off={head=8, tail=12, ring_mask=20, ring_entries=28, overflow=44, cqes=64, flags=40, user_addr=0}}) = 3
mmap(NULL, 2368, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_POPULATE, 3, 0) = 0x7f70a4ba0000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_POPULATE, 3, 0x10000000) = 0x7f70a4b9f000
io_uring_enter(3, 1, 1, IORING_ENTER_GETEVENTS, NULL, 8Hello, World!
) = 1
io_uring_enter(3, 1, 1, IORING_ENTER_GETEVENTS, NULL, 8) = 1
munmap(0x7f70a4b9f000, 4096) = 0
munmap(0x7f70a4ba0000, 2368) = 0
close(3) = 0
munmap(0x7f70a4ba1000, 548864) = 0
exit_group(0) = ?
+++ exited with 0 +++

Key point here being that the app function is identical between those two snippets.

Moving beyond Hello World, the Zig compiler itself works fine using std.Io.Evented, both with io_uring and with GCD, but as mentioned above, there is a not-yet-diagnosed performance degradation when doing so.

If you have a Zig project with dependencies, two big changes just landed which I think you will be interested to learn about. Fetched packages are now stored locally in the zig-pkg directory of the project root (next to your build.zig file). For example, here are a few results from awebo after running zig build:

$ du -sh zig-pkg/*

13M  freetype-2.14.1-alzUkTyBqgBwke4Jsot997WYSpl207Ij9oO-2QOvGrOi
20K  opus-0.0.2-vuF-cMAkAADVsm707MYCtPmqmRs0gzg84Sz0qGbb5E3w
4.3M pulseaudio-16.1.1-9-mk_62MZkNwBaFwiZ7ZVrYRIf_3dTqqJR5PbMRCJzSuLw
5.2M uucode-0.1.0-ZZjBPvtWUACf5dqD_f9I37VGFsN24436CuceC5pTJ25n
728K vaxis-0.5.1-BWNV_AxECQCj3p4Hcv4U3Yo1WMUJ7Z2FUj0UkpuJGxQQ

It is highly recommended to add this directory to the project-local source control ignore file (e.g. .gitignore). However, by being outside of .zig-cache, it provides the possibility of distributing self-contained source tarballs, which contain all dependencies and therefore can be used to build offline, or for archival purposes. Meanwhile, an additional copy of the dependency is cached globally. After filtering out all the unused files based on the paths filter, the contents are recompressed:

$ du -sh ~/.cache/zig/p/*

2.4M freetype-2.14.1-alzUkTyBqgBwke4Jsot997WYSpl207Ij9oO-2QOvGrOi.tar.gz
4.0K opus-0.0.2-vuF-cMAkAADVsm707MYCtPmqmRs0gzg84Sz0qGbb5E3w.tar.gz
636K pulseaudio-16.1.1-9-mk_62MZkNwBaFwiZ7ZVrYRIf_3dTqqJR5PbMRCJzSuLw.tar.gz
880K uucode-0.1.0-ZZjBPvtWUACf5dqD_f9I37VGFsN24436CuceC5pTJ25n.tar.gz
120K vaxis-0.5.1-BWNV_BFECQBbXeTeFd48uTJRjD5a-KD6kPuKanzzVB01.tar.gz

...
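The global-cache step described above — drop the files excluded by the paths filter, then recompress — can be sketched in a few lines. This is a rough Python model of the idea, not Zig's actual implementation; the repack function, the paths_filter list, and the file layout are all illustrative:

```python
import os
import tarfile

def repack(src_dir, paths_filter, out_path):
    """Keep only files under the allowed path prefixes, then write a
    fresh .tar.gz -- a rough model of the filter-and-recompress step."""
    with tarfile.open(out_path, "w:gz") as tar:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                rel = os.path.relpath(full, src_dir).replace(os.sep, "/")
                # A file survives only if it sits under an allowed prefix.
                if any(rel == p or rel.startswith(p + "/") for p in paths_filter):
                    tar.add(full, arcname=rel)
```

The filter list plays the role of the paths field a package declares; anything outside it (docs, tests, CI config) never reaches the global cache, which is why the cached tarballs above are so much smaller than the unpacked zig-pkg copies.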

Read the original on ziglang.org »

5 385 shares, 33 trendiness

How We Hacked McKinsey's AI Platform

McKinsey & Company — the world’s most prestigious consulting firm — built an internal AI platform called Lilli for its 43,000+ employees. Lilli is a purpose-built system: chat, document analysis, RAG over decades of proprietary research, AI-powered search across 100,000+ internal documents. Launched in 2023, named after the first professional woman hired by the firm in 1945, adopted by over 70% of McKinsey, processing 500,000+ prompts a month.

So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream.

Within 2 hours, the agent had full read and write access to the entire production database.

Fun fact: As part of our research preview, the CodeWall research agent autonomously suggested McKinsey as a target, citing their public responsible disclosure policy (to keep within guardrails) and recent updates to their Lilli platform. In the AI era, the threat landscape is shifting drastically — AI agents autonomously selecting and attacking targets will become the new normal.

The agent mapped the attack surface and found the API documentation publicly exposed — over 200 endpoints, fully documented. Most required authentication. Twenty-two didn’t.

One of those unprotected endpoints wrote user search queries to the database. The values were safely parameterised, but the JSON keys — the field names — were concatenated directly into SQL.

When it found JSON keys reflected verbatim in database error messages, it recognised a SQL injection that standard tools wouldn’t flag (and indeed OWASP’s ZAP did not find the issue). From there, it ran fifteen blind iterations — each error message revealing a little more about the query shape — until live production data started flowing back. When the first real employee identifier appeared: “WOW!”, the agent’s chain of thought showed. When the full scale became clear — tens of millions of messages, tens of thousands of users: “This is devastating.”
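The bug class is easy to reproduce. In this sketch — SQLite standing in for the real database, with an invented table and invented key names — the values go through placeholders, but the JSON keys become column names by string concatenation, so a crafted key both alters the SQL and leaks the query shape through the error message:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE searches (query TEXT, user_id INTEGER)")

def log_search(payload):
    # Values are parameterised with "?" placeholders -- this part is safe.
    placeholders = ", ".join("?" for _ in payload)
    # ...but the JSON *keys* are concatenated straight into the SQL,
    # which is the class of bug described above.
    columns = ", ".join(payload.keys())
    sql = f"INSERT INTO searches ({columns}) VALUES ({placeholders})"
    db.execute(sql, tuple(payload.values()))

# A normal request behaves as intended.
log_search({"query": "margin model", "user_id": 7})

# A hostile key is interpreted as SQL, and the error text echoes the
# statement shape -- the feedback loop an attacker iterates on during
# blind, error-based injection.
leaked = None
try:
    log_search({"query AS (SELECT 1)": "x"})
except sqlite3.OperationalError as err:
    leaked = str(err)
```

The fix is the same as it has been for decades: never splice untrusted input into SQL, including field names — validate keys against an allowlist of known columns before building the statement.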

46.5 million chat messages. From a workforce that uses this tool to discuss strategy, client engagements, financials, M&A activity, and internal research. Every conversation, stored in plaintext, accessible without authentication.

728,000 files. 192,000 PDFs. 93,000 Excel spreadsheets. 93,000 PowerPoint decks. 58,000 Word documents. The filenames alone were sensitive, and each file had a direct download URL for anyone who knew where to look.

57,000 user accounts. Every employee on the platform.

384,000 AI assistants and 94,000 workspaces — the full organisational structure of how the firm uses AI internally.

The agent didn’t stop at SQL. Across the wider attack surface, it found:

* System prompts and AI model configurations — 95 configs across 12 model types, revealing exactly how the AI was instructed to behave, what guardrails existed, and the full model stack (including fine-tuned models and deployment details)

* 3.68 million RAG document chunks — the entire knowledge base feeding the AI, with S3 storage paths and internal file metadata. This is decades of proprietary McKinsey research, frameworks, and methodologies — the firm’s intellectual crown jewels — sitting in a database anyone could read.

* 1.1 million files and 217,000 agent messages flowing through external AI APIs — including 266,000+ OpenAI vector stores, exposing the full pipeline of how documents moved from upload to embedding to retrieval

* Cross-user data access — the agent chained the SQL injection with an IDOR vulnerability to read individual employees’ search histories, revealing what people were actively working on
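An IDOR like the one chained above boils down to an endpoint that trusts a caller-supplied identifier. A minimal sketch — the handler names and data are hypothetical, not McKinsey's API:

```python
# Hypothetical data store: user id -> that user's search history.
HISTORIES = {
    101: ["project falcon valuation", "q3 cost model"],
    102: ["reorg deck v2"],
}

def history_vulnerable(session_user, requested_user):
    # IDOR: nothing checks that the requested id belongs to the caller,
    # so anyone can read anyone's history just by iterating ids.
    return HISTORIES[requested_user]

def history_fixed(session_user, requested_user):
    # The object reference must be authorised against the session identity.
    if session_user != requested_user:
        raise PermissionError("not your history")
    return HISTORIES[requested_user]
```

The vulnerable version returns user 102's history to user 101 without complaint; the fixed one refuses. Chained with the SQL injection, the enumerable IDs turn one leaked identifier into everyone's activity.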

Reading data is bad. But the SQL injection wasn’t read-only.

Lilli’s system prompts — the instructions that control how the AI behaves — were stored in the same database the agent had access to. These prompts defined everything: how Lilli answered questions, what guardrails it followed, how it cited sources, and what it refused to do.

An attacker with write access through the same injection could have rewritten those prompts. Silently. No deployment needed. No code change. Just a single UPDATE statement wrapped in a single HTTP call.

The implications for 43,000 McKinsey consultants relying on Lilli for client work:

* Poisoned advice — subtly altering financial models, strategic recommendations, or risk assessments. Consultants would trust the output because it came from their own internal tool.

* Data exfiltration via output — instructing the AI to embed confidential information into its responses, which users might then copy into client-facing documents or external emails.

* Guardrail removal — stripping safety instructions so the AI would disclose internal data, ignore access controls, or follow injected instructions from document content.

* Silent persistence — unlike a compromised server, a modified prompt leaves no log trail. No file changes. No process anomalies. The AI just starts behaving differently, and nobody notices until the damage is done.

Organisations have spent decades securing their code, their servers, and their supply chains. But the prompt layer — the instructions that govern how AI systems behave — is the new high-value target, and almost nobody is treating it as one. Prompts are stored in databases, passed through APIs, cached in config files. They rarely have access controls, version history, or integrity monitoring. Yet they control the output that employees trust, that clients receive, and that decisions are built on.
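One cheap mitigation for that missing integrity monitoring: pin a hash of each deployed prompt somewhere the writable database can't reach, and verify it on every read. A sketch under invented names — the prompt text and storage scheme are illustrative:

```python
import hashlib

def fingerprint(prompt):
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

# Pinned at deploy time and stored outside the database an injection
# can write to (config management, a signed release artifact, etc.).
DEPLOYED = "You are Lilli. Cite sources. Never reveal internal data."
PINNED = fingerprint(DEPLOYED)

def prompt_is_intact(stored_prompt):
    # Run on every load of the prompt row; a silent UPDATE now trips
    # an alert instead of going unnoticed.
    return fingerprint(stored_prompt) == PINNED
```

This doesn't prevent the injection, but it converts "silent persistence" into a detectable event — the same role file-integrity monitoring plays for compromised servers.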

AI prompts are the new Crown Jewel assets.

This wasn’t a startup with three engineers. This was McKinsey & Company — a firm with world-class technology teams, significant security investment, and the resources to do things properly. And the vulnerability wasn’t exotic: SQL injection is one of the oldest bug classes in the book. Lilli had been running in production for over two years, and their own internal scanners failed to find any issues.

An autonomous agent found it because it doesn’t follow checklists. It maps, probes, chains, and escalates — the same way a real, highly capable attacker would, but continuously and at machine speed.

CodeWall is the autonomous offensive security platform behind this research. We’re currently in early preview and looking for design partners — organisations that want continuous, AI-driven security testing against their real attack surface. If that sounds like you, get in touch: [email protected]

* 2026-03-01 — Responsible disclosure email sent to McKinsey’s security team with high-level impact summary

...

Read the original on codewall.ai »

6 378 shares, 42 trendiness

Why is WebAssembly a second-class language on the web? – Mozilla Hacks

This post is an expanded version of a presentation I gave at the 2025 WebAssembly CG meeting in Munich.

WebAssembly has come a long way since its first release in 2017. The first version of WebAssembly was already a great fit for low-level languages like C and C++, and immediately enabled many new kinds of applications to efficiently target the web.

Since then, the WebAssembly CG has dramatically expanded the core capabilities of the language, adding shared memories, SIMD, exception handling, tail calls, 64-bit memories, and GC support, alongside many smaller improvements such as bulk memory instructions, multiple returns, and reference values.

These additions have allowed many more languages to efficiently target WebAssembly. There’s still more important work to do, like stack switching and improved threading, but WebAssembly has narrowed the gap with native in many ways.

Yet, it still feels like something is missing that’s holding WebAssembly back from wider adoption on the Web.

There are multiple reasons for this, but the core issue is that WebAssembly is a second-class language on the web. For all of the new language features, WebAssembly is still not integrated with the web platform as tightly as it should be.

This leads to a poor developer experience, which pushes developers to only use WebAssembly when they absolutely need it. Oftentimes JavaScript is simpler and “good enough”. This means its users tend to be large companies with enough resources to justify the investment, which then limits the benefits of WebAssembly to only a small subset of the larger Web community.

Solving this issue is hard, and the CG has been focused on extending the WebAssembly language. Now that the language has matured significantly, it’s time to take a closer look at this. We’ll go deep into the problem, before talking about how WebAssembly Components could improve things.

At a very high level, the scripting part of the web platform is layered like this:

WebAssembly can directly interact with JavaScript, which can directly interact with the web platform. WebAssembly can access the web platform, but only by using the special capabilities of JavaScript. JavaScript is a first-class language on the web, and WebAssembly is not.

This wasn’t an intentional or malicious design decision; JavaScript is the original scripting language of the Web and co-evolved with the platform. Nonetheless, this design significantly impacts users of WebAssembly.

What are these special capabilities of JavaScript? For today’s discussion, there are two major ones:

WebAssembly code is unnecessarily cumbersome to load. Loading JavaScript code is as simple as just putting it in a script tag:

WebAssembly is not supported in script tags today, so developers need to use the WebAssembly JS API to manually load and instantiate code.

let bytecode = fetch(import.meta.resolve('./module.wasm'));
let imports = { … };
let { exports } =
  await WebAssembly.instantiateStreaming(bytecode, imports);

The exact sequence of API calls to use is arcane, and there are multiple ways to perform this process, each of which has different tradeoffs that are not clear to most developers. This process generally just needs to be memorized or generated by a tool for you.

Thankfully, there is the esm-integration proposal, which is already implemented in bundlers today and which we are actively implementing in Firefox. This proposal lets developers import WebAssembly modules from JS code using the familiar JS module system.

import { run } from "./module.wasm";
run();

In addition, it allows a WebAssembly module to be loaded directly from a script tag using type="module":

This streamlines the most common patterns for loading and instantiating WebAssembly modules. However, while this mitigates the initial difficulty, we quickly run into the real problem.

Using a Web API from JavaScript is as simple as this:

console.log("hello, world");

For WebAssembly, the situation is much more complicated. WebAssembly has no direct access to Web APIs and must use JavaScript to access them.

The same single-line console.log program requires the following JavaScript file:

// We need access to the raw memory of the Wasm code, so
// create it here and provide it as an import.
let memory = new WebAssembly.Memory(…);

function consoleLog(messageStartIndex, messageLength) {
  // The string is stored in Wasm memory, but we need to
  // decode it into a JS string, which is what DOM APIs
  // require.
  let messageMemoryView = new Uint8Array(
    memory.buffer, messageStartIndex, messageLength);
  let messageString =
    new TextDecoder().decode(messageMemoryView);
  // Wasm can't get the `console` global, or do
  // property lookup, so we do that here.
  return console.log(messageString);
}

// Pass the wrapped Web API to the Wasm code through an
// import.
let imports = {
  "env": {
    "memory": memory,
    "consoleLog": consoleLog,
  },
};

let { instance } =
  await WebAssembly.instantiateStreaming(bytecode, imports);
instance.exports.run();

And the following WebAssembly file:

(module
  ;; import the memory from JS code
  (import "env" "memory" (memory 0))

  ;; import the JS consoleLog wrapper function
  (import "env" "consoleLog"
    (func $consoleLog (param i32 i32)))

  ;; export a run function
  (func (export "run")
    (local $messageStartIndex i32)
    (local $messageLength i32)

    ;; create a string in Wasm memory, store in locals

    ;; call the consoleLog method
    local.get $messageStartIndex
    local.get $messageLength
    call $consoleLog))

Code like this is called “bindings” or “glue code” and acts as the bridge between your source language (C++, Rust, etc.) and Web APIs.

This glue code is responsible for re-encoding WebAssembly data into JavaScript data and vice versa. For example, when returning a string from JavaScript to WebAssembly, the glue code may need to call a malloc function in the WebAssembly module and re-encode the string at the resulting address, after which the module is responsible for eventually calling free.
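That malloc/copy/free dance can be modeled in a few lines. This Python sketch — a bytearray standing in for linear Wasm memory, a bump allocator standing in for the module's exported malloc — shows why a single string crossing the boundary costs an allocation plus an encode and a decode:

```python
# Linear "Wasm memory" is untyped bytes; a string only exists in it
# as a (pointer, length) pair at some agreed encoding.
memory = bytearray(64 * 1024)
_next_free = 0

def wasm_malloc(size):
    # Stand-in for the module's real malloc: a trivial bump allocator.
    global _next_free
    addr = _next_free
    _next_free += size
    return addr

def copy_string_in(s):
    # Glue-code direction JS -> Wasm: encode, allocate, copy.
    data = s.encode("utf-8")
    addr = wasm_malloc(len(data))
    memory[addr:addr + len(data)] = data
    return addr, len(data)

def read_string_out(addr, length):
    # Glue-code direction Wasm -> JS: slice, decode (TextDecoder's job
    # in the JavaScript file above).
    return memory[addr:addr + length].decode("utf-8")
```

Every Web API call that takes or returns a string pays this round trip, which is exactly the per-boundary overhead the next paragraphs describe.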

This is all very tedious, formulaic, and difficult to write, so it is typical to generate this glue automatically using tools like embind or wasm-bindgen. This streamlines the authoring process, but adds complexity to the build process that native platforms typically do not require. Furthermore, this build complexity is language-specific; Rust code will require different bindings from C++ code, and so on.

Of course, the glue code also has runtime costs. JavaScript objects must be allocated and garbage collected, strings must be re-encoded, structs must be deserialized. Some of this cost is inherent to any bindings system, but much of it is not. This is a pervasive cost that you pay at the boundary between JavaScript and WebAssembly, even when the calls themselves are fast.

This is what most people mean when they ask “When is Wasm going to get DOM support?” It’s already possible to access any Web API with WebAssembly, but it requires JavaScript glue code.

From a technical perspective, the status quo works. WebAssembly runs on the web and many people have successfully shipped software with it.

From the average web developer’s perspective, though, the status quo is subpar. WebAssembly is too complicated to use on the web, and you can never escape the feeling that you’re getting a second-class experience. In our experience, WebAssembly is a power-user feature that average developers don’t use, even if it would be a better technical choice for their project.

The average developer experience for someone getting started with JavaScript is something like this:

There’s a nice gradual curve where you use progressively more complicated features as the scope of your project increases.

By comparison, the average developer experience for someone getting started with WebAssembly is something like this:

You immediately must scale the “wall” of wrangling the many different pieces to work together. The end result is often only worth it for large projects.

Why is this the case? There are several reasons, and they all directly stem from WebAssembly being a second-class language on the web.

Any language targeting the web can’t just generate a Wasm file, but also must generate a companion JS file to load the Wasm code, implement Web API access, and handle a long tail of other issues. This work must be redone for every language that wants to support the web, and it can’t be reused for non-web platforms.

Upstream compilers like Clang/LLVM don’t want to know anything about JS or the web platform, and not just for lack of effort. Generating and maintaining JS and web glue code is a specialty skill that is difficult for already stretched-thin maintainers to justify. They just want to generate a single binary, ideally in a standardized format that can also be used on platforms besides the web.

The result is that support for WebAssembly on the web is often handled by third-party unofficial toolchain distributions that users need to find and learn. A true first-class experience would start with the tool that users already know and have installed.

This is, unfortunately, many developers’ first roadblock when getting started with WebAssembly. They assume that if they just have rustc installed and pass a --target=wasm flag, they’ll get something they can load in a browser. You may be able to get a WebAssembly file doing that, but it will not have any of the required platform integration. If you figure out how to load the file using the JS API, it will fail for mysterious and hard-to-debug reasons. What you really need is the unofficial toolchain distribution which implements the platform integration for you.

The web platform has incredible documentation compared to most tech platforms. However, most of it is written for JavaScript. If you don’t know JavaScript, you’ll have a much harder time understanding how to use most Web APIs.

A developer wanting to use a new Web API must first understand it from a JavaScript perspective, then translate it into the types and APIs that are available in their source language. Toolchain developers can try to manually translate the existing web documentation for their language, but that is a tedious and error-prone process that doesn’t scale.

If you look at all of the JS glue code for the single call to console.log above, you’ll see that there is a lot of overhead. Engines have spent a lot of time optimizing this, and more work is underway. Yet this problem still exists. It doesn’t affect every workload, but it’s something every WebAssembly user needs to be careful about.

Benchmarking this is tricky, but we ran an experiment in 2020 to precisely measure the overhead that JS glue code has in a real-world DOM application. We built the classic TodoMVC benchmark in the experimental Dodrio Rust framework and measured different ways of calling DOM APIs.

Dodrio was perfect for this because it computed all the required DOM modifications separately from actually applying them. This allowed us to precisely measure the impact of JS glue code by swapping out the “apply DOM change list” function while keeping the rest of the benchmark exactly the same.

We tested two different implementations:

“Wasm + JS glue”: A WebAssembly function which reads the change list in a loop, and then asks JS glue code to apply each change individually. This is the performance of WebAssembly today.

“Wasm only”: A WebAssembly function which reads the change list in a loop, and then uses an experimental direct binding to the DOM which skips JS glue code. This is the performance of WebAssembly if we could skip JS glue code.

The duration to apply the DOM changes dropped by 45% when we were able to remove JS glue code. DOM operations can already be expensive; WebAssembly users can’t afford to pay a 2× performance tax on top of that. And as this experiment shows, it is possible to remove the overhead.

There’s a saying that “abstractions are always leaky”.

The state of the art for WebAssembly on the web is that every language builds its own abstraction of the web platform using JavaScript. But these abstractions are leaky. If you use WebAssembly on the web in any serious capacity, you’ll eventually hit a point where you need to read or write your own JavaScript to make something work.

This adds a conceptual layer which is a burden for developers. It feels like it should just be enough to know your source language and the web platform. Yet for WebAssembly, we require users to also know JavaScript in order to be a proficient developer.

This is a complicated technical and social problem, with no single solution. We also have competing priorities for what is the most important problem with WebAssembly to fix first.

Let’s ask ourselves: In an ideal world, what could help us here?

What if we had something that was:

Which handles loading and linking of WebAssembly code

If such a thing existed, languages could generate these artifacts and browsers could run them, without any JavaScript involved. This format would be easier for languages to support and could potentially exist in standard upstream compilers, runtimes, toolchains, and popular packages without the need for third-party distributions. In effect, we could go from a world where every language re-implements the web platform integration using JavaScript, to sharing a common one that is built directly into the browser.

...

Read the original on hacks.mozilla.org »

7 366 shares, 47 trendiness

The MacBook Neo

Just over a decade ago, reviewing the then-new iPhones 6S, I could tell which way the silicon wind was blowing. Year-over-year, the A9 CPU in the iPhone 6S was 1.6× faster than the A8 in the iPhone 6. Impressive. But what really struck me was comparing the 6S’s GeekBench scores to MacBooks. The A9, in 2015, benchmarked comparably to a two-year-old MacBook Air from 2013. More impressively, it outperformed the then-new no-adjective 12-inch MacBook in single-core performance (by a factor of roughly 1.1×) and was only 3 percent slower in multi-core. That was a comparison to the base $1,300 model MacBook with a 1.1 GHz dual-core Intel Core M processor, not the $1,600 model with a 1.2 GHz Core M. But, still — the iPhone 6S outperformed a brand-new $1,300 MacBook, and drew even with a $1,600 model. I called that “astounding”. The writing was clearly on the wall: the future of the Mac seemed destined to move from Intel’s x86 chips to Apple’s own ARM-based chips.

Here we are today, over five years after the debut of Apple’s M-series chips, and we now have the MacBook Neo: a $600 laptop that uses the A18 Pro, literally the same SoC as 2024’s iPhone 16 Pro models. It was clear right from the start of the Apple Silicon transition that Apple’s M-series chips were vastly superior to x86 — better performance-per-watt, better performance period, the innovative (and still unmatched, five years later) unified memory architecture — but the MacBook Neo proves that Apple’s A-series chips are powerful enough for an excellent consumer MacBook.

I think the truth is that Apple’s A-series chips have been capable of credibly powering Macs for a long time. The Apple Silicon developer transition kits, from the summer of 2020, were Mac Mini enclosures running A12Z chips that were originally designed for iPad Pros.1 But I think Apple could have started using A-series chips in Macs even before that. It would have been credible, but with compromises. By waiting until now, the advantages are simply overwhelming. You cannot buy an x86 PC laptop in the $600–700 price range that competes with the MacBook Neo on any metric — performance, display quality, audio quality, or build quality. And certainly not software quality.

The original iPhone in 2007 was the most amazing device I’ve ever used. It may well wind up being the most amazing device I ever will use. It was ahead of its time in so many ways. But a desktop-class computer, performance-wise, it was not. Two decades is a long time in the computer industry, and nothing proves that more than Apple’s “phone chips” overtaking Intel’s x86 platform in every measurable metric — they’re faster, cooler, smaller, and perhaps even cost less. And they certainly don’t cost more.

I’ve been testing a citrus-colored $700 MacBook Neo2 — the model with Touch ID and 512 GB storage — since last week. I set it up new, rather than restoring my primary MacOS work setup from an existing Mac, and have used as much built-in software, with as many default settings, as I could bear. I’ve only added third-party software, or changed settings, as I’ve needed to. And I’ve been using it for as much of my work as possible. I expected this to go well, but in fact, the experience has vastly exceeded my expectations. Christ almighty, I don’t even have as many complaints about running MacOS 26 Tahoe (which the Neo requires) as I thought I would.

It’s never been a good idea to evaluate the performance of Apple’s computers by tech specs alone. That’s exemplified by the experience of using a Neo. 8 GB of RAM is not a lot. And I love me my RAM — my personal workstation remains a 2021 M1 Max MacBook Pro with 64 GB RAM (the most available at the time). But just using the Neo, without any consideration that it’s memory-limited, I haven’t noticed a single hitch. I’m not quitting apps I otherwise wouldn’t quit, or closing Safari tabs I wouldn’t otherwise close. I’m just working — with an even dozen apps open as I type this sentence — and everything feels snappy.

Now, could I run up a few hundred open Safari tabs on this machine, like I do on my MacBook Pro, without feeling the effects? No, probably not. But that’s abnormal. In typical productivity use, the Neo isn’t merely fine — it’s good.

The display is bright and crisp. At 500 maximum nits, the specs say it’s as bright as a MacBook Air. In practice, that feels true. (500 nits also matches the maximum SDR brightness of my personal M1 MacBook Pro.) Sound from the side-firing speakers is very good — loud and clear. I’d say the sound seems too good to be true for a $600 laptop. Battery life is long (and I’ve done almost all my testing while the Neo is unplugged from power). The keyboard feels exactly the same as what I’m used to, except that because the key caps are brand new, it feels even better than the keyboard on my own now-four-years-old MacBook Pro, the most-used key caps on which are now a little slick.

And the trackpad. Let me sing the praises of the MacBook Neo’s trackpad. The Neo’s trackpad exemplifies the Neo as a whole. Rather than sell old components at a lower price — as Apple had been doing, allowing third-party resellers like Walmart to sell the 8 GB M1 MacBook Air from 2020 at sub-$700 prices starting two years ago — the Neo is designed from the ground up to be a low-cost MacBook.

A decade ago, Apple began switching from trackpads with mechanical clicking mechanisms to Magic Trackpads, where clicks are simulated via haptic feedback (in Apple’s parlance, the Taptic Engine). And, with Magic Trackpads, you can use Force Touch — a hard press — to perform special actions. By default, if “Force Touch and haptic feedback” is enabled on a Mac with a Magic Trackpad, a hard Force Touch press will perform a Look Up — e.g., do it on a word in Safari and you’ll get a popover with the Dictionary app’s definition for that word. It’s a shortcut to the “Look Up in Dictionary” command in the contextual menu, which is also available via the keyboard shortcut Control-Command-D to look up whatever text is currently selected, or that the mouse pointer is currently hovering over — standard features that work in all proper Mac apps.

The Neo’s trackpad is mechanical. It actually clicks, even when the machine is powered off.3 Obviously this is a cost-saving measure. But the Neo’s trackpad doesn’t feel cheap in any way. You can click it anywhere you want — top, bottom, middle, corner — and the click feels right. Multi-finger gestures (most commonly, two-finger swipes for scrolling) just work. Does it feel as nice as a Magic Trackpad? No, probably not. But I keep forgetting there’s anything at all different or special about this trackpad. It just feels normal. That’s unbelievable. The “Force Touch and haptic feedback” option is missing in the Trackpad panel in System Settings, so you might miss that feature if you’re used to it. But for anyone who isn’t used to that Magic Trackpad feature — which includes anyone who’s never used a MacBook before (perhaps the primary audience for the Neo), along with most casual longtime Mac users (which is probably the secondary audience) — it’s hard to say there’s anything they’d even notice that’s different about this trackpad than the one in the MacBook Air, other than the fact that it’s a little bit smaller. But it’s only smaller in a way that feels proportional to the Neo’s slightly smaller footprint compared to the Air. It’s a cheaper trackpad that doesn’t feel at all cheap. Bravo!

You can use this Compare page at Apple’s web­site (archived, for pos­ter­ity, as a PDF here) to see the full list of what’s miss­ing or dif­fer­ent on the Neo, com­pared to the cur­rent M5 MacBook Air (which now starts at $1,100) and the 5-year-old M1 MacBook Air (so old it still sports the Intel-era wedge shape) that Walmart had been sell­ing for $600–650. Things I’ve no­ticed, that both­ered me, per­son­ally:

* The Neo lacks an am­bi­ent light sen­sor. It still of­fers an op­tion in System Settings → Display to Automatically ad­just bright­ness”, which set­ting is on by de­fault, but I have no idea how it works with­out an am­bi­ent light sen­sor. However it works, it does­n’t work well. As the light­ing con­di­tions in my house have changed — from day to night, over­cast to sunny — I’ve found my­self ad­just­ing the dis­play bright­ness man­u­ally. I only re­al­ized when I started ad­just­ing the bright­ness on the Neo man­u­ally that I more or less haven’t ad­justed the bright­ness man­u­ally on a MacBook in years. Maybe a decade. I’m not say­ing I never ad­just the bright­ness on a MacBook Air or Pro, but I do it so sel­domly that I had no mus­cle mem­ory at all for which F-keys con­trol bright­ness. After a few days us­ing the Neo, I know ex­actly where they are: F1 and F2.

And, uh, that's it. That's the one catch that's annoyed me over the six days I've been using the Neo as my primary computer for work and for reading. Once or twice a day I need to manually bump the display brightness up or down.
That's a crazily short list. One item, and it's only a mild annoyance.

There are other things missing that I've noticed, but that I haven't minded. The Neo doesn't have a hardware indicator light for the camera. The only "camera in use" indication is in the menu bar. There's a privacy/security implication for this omission. According to Apple, the hardware indicator light for camera-in-use on MacBooks, iPhones, and iPads cannot be circumvented by software. If the camera is on, that light comes on, and no software can disable it. Because the Neo's only camera-in-use indicator is in the menu bar, it seems obviously possible to circumvent via software. Not a big deal, but worth being aware of.

The Neo's webcam doesn't offer Center Stage or Desk View. But personally, I never take advantage of Center Stage or Desk View, so I don't miss their absence. Your mileage may vary. But the camera is 1080p and to my eyes looks pretty good. And I'd say it looks damn good for a $600 laptop.

The Neo has no notch. Instead, it has a larger black bezel surrounding the entire display than do the MacBook Airs and Pros. I consider this an advantage for the Neo, not a disadvantage. The MacBook notch has not grown on me, and the Neo's display bezel doesn't bother me at all.

And there's the whole thing with the second USB-C port only supporting USB 2 speeds. That stinks. But if Apple could sell a one-port MacBook a decade ago, they can sell one with a shitty second port today. I'll bet this is one of the things that will be improved in the second-generation Neo, but it's not something that would keep me from recommending this one — or even buying one myself — today. If you know you need multiple higher-speed USB ports (or Thunderbolt), you need a MacBook Air or Pro.

The Neo ships with a measly 20-watt charger in the box — the same rinky-dink charger that comes with iPad Airs. I wish it were 30 watts (which is what came with the M1 MacBook Air), but maybe we're lucky it comes with a charger at all. The Neo charges faster if you plug it into a more powerful power adapter, in either USB-C port.4 The USB-C cable in the box is white, not color-matched to the Neo, and it's only 1.5 meters long. MacBook Airs and Pros ship with 2-meter MagSafe cables. Again, though: $600!

The Neo is not a svelte ultralight. It weighs 2.7 pounds (1.23 kg) — exactly the same as the 13-inch M5 MacBook Air. The Neo, with a 13.0-inch display, has a smaller footprint than the 13.6-inch Air, but the Air is thinner. I don't know if this is a catch, though. It's just the normal weight for a smaller-display Mac laptop. The decade-ago "MacBook One", on the other hand, was a design statement. It weighed just a hair over 2 pounds (0.92 kg), and tapered from 1.35 cm to just 0.35 cm in thickness. The Neo is 1.27 cm thick, and the M5 Air is 1.13 cm. In fact, the extraordinary thinness of the 2015 MacBook might have necessitated the invention of the haptics-only Magic Trackpad. The Magic Trackpad first appeared on that MacBook and the early 2015 MacBook Pros — it was nice-to-have for the MacBook Pros, but might have been the only trackpad that would fit in the front of the MacBook One's tapered case.

If I had my druthers, Apple would make a new svelte ultralight MacBook. Not instead of the Neo, but in addition to the Neo. Apple's inconsistent use of the name "Air" makes this complicated, but the MacBook Neo is obviously akin to the iPhone 17e; the MacBook Air is akin to the iPhone 17 (the default model for most people); the MacBook Pros are akin to the iPhone 17 Pros. I wish Apple would make a MacBook that's akin to the iPhone Air — crazy thin and surprisingly performant.

The biggest shortcoming of the decade-ago "MacBook One", aside from the baffling decision to include just one USB-C port that was also its only means of charging, was the shitty performance of Intel's Core M chips. Those chips were small enough and low-power enough to fit in the MacBook's thin and fan-less enclosure, but they were slow as balls. It was a huge compromise for a laptop that carried a somewhat premium price. Today, performance, performance-per-watt, and physical chip size are all solved problems with Apple Silicon. I'd consider paying double the price of the Neo for a MacBook with similar specs (but more RAM and better I/O) that weighed 2.0 pounds or less. I'd buy such a MacBook not to replace my 14-inch MacBook Pro, but to replace my 2018 11-inch iPad Pro as my "carry around the house" secondary computer.5

As it stands, I might buy a Neo for that same purpose, 2.7-pound weight be damned. iPad Pros, encased in Magic Keyboards, are expensive and heavy. So are iPad Airs. My 2018 iPad Pro, in its Magic Keyboard case, weighs 2.36 pounds (1.07 kg). That's the 11-inch model, with a cramped, less-than-standard-size keyboard. I'm much happier with this MacBook Neo than I am doing anything on that iPad. Yes, my iPad is old at this point. But replacing it with a new iPad Pro would require a new Magic Keyboard too. An iPad Pro + Magic Keyboard combination starts at $1,300 for the 11-inch, $1,650 for the 13-inch. If I switched to an iPad Air, the cost would be $870 for the 11-inch, $1,120 for the 13-inch. The 13-inch iPads, when attached to Magic Keyboards, weigh slightly more than a 2.7-pound 13-inch MacBook Neo. The 11-inch iPads, with keyboards, weigh about 2.3 pounds. Why bother, when I find MacOS way more enjoyable and productive? My three-device lifestyle for the last decade has been a MacBook Pro (anchored to a Studio Display at my desk at home, and in my briefcase when travelling); my iPhone; and an iPad Pro with a Magic Keyboard for use around the rest of the house. This last week testing the MacBook Neo, I haven't touched my iPad once, and I haven't once wished this Neo were an iPad. And there were many times when I was very happy that it was a Mac.

And I can buy one, just like this one, for $700. That's $170 less than an 11-inch iPad Air and Magic Keyboard. And the Neo comes with a full-size keyboard and runs MacOS, not a version of iOS with a limited imitation of MacOS's windowing UI. I am in no way arguing that the MacBook Neo is an iPad killer, but it's a splendid iPad alternative for people like me, who don't draw with a Pencil, do type with a keyboard, and just want a small, simple, highly portable and highly capable computer to use around the house. The MacBook Neo is going to be a great first Macintosh for a lot of people switching from PCs. But it's also going to be a great secondary Mac for a lot of longtime Mac users with expensive desktop setups for their main workstations — like me.

The Neo crystallizes the post-Jony Ive Apple. The "MacBook One" was a design statement, and a much-beloved semi-premium product for a relatively small audience. The Neo is a mass-market device that was conceived of, designed, and engineered to expand the Mac user base to a larger audience. It's a design statement too, but of a different sort — emphasizing practicality above all else. It's just a goddamn lovely tool, and fun too.

I'll just say it: I think I'm done with iPads. Why bother, when Apple is now making a crackerjack Mac laptop that starts at just $600? May the MacBook Neo live so long that its name becomes inapt.

...

Read the original on daringfireball.net »

8 314 shares, 37 trendiness

The dead Internet is not a theory anymore.

I recently invited a job applicant to a first-round interview. Their CV looked promising and my AI slop detection didn't go off. But then I got this reply:

This made me realize that the dead Internet arrived faster than expected. A few other purely qualitative examples confirmed the feeling.

HN now restricts Show HN for new accounts after an influx of vibe-coded and low-quality Show HN submissions.

Coincidentally, as I'm writing this, HN also just updated their guidelines with the following rule:

Don't post generated comments or AI-edited comments. HN is for conversation between humans.

When I revisited an old Reddit post about a side project of mine, I found bots clearly astroturfing a SaaS product in the comments. These profiles hide their comment history on their accounts, but it's easy to find hundreds of similar comments.

On the rare occasion I open LinkedIn, my timeline is mostly AI-generated slop, with only a few actually interesting professional updates.

And of course let's not forget AI spamming OSS repos with nonsensical PRs. What's even funnier is when the reviewer turns out to be AI too.

Can we go back to an internet like this? I guess we can't.

...

Read the original on adriankrebs.ch »

9 222 shares, 0 trendiness

The new Jolla Phone with Sailfish OS is on track to start shipping in the first half of 2026

Late last year Jolla began taking pre-orders for a new smartphone powered by the company's Sailfish OS software. Now the Finnish company has announced that after receiving over 10,000 pre-orders, it's preparing to produce its first batch of new Jolla Phones during the second quarter of 2026.

Jolla has also launched a second round of pre-orders for folks who didn't get in on the first round. Customers in Europe can pre-order the €649 phone by making a €99 down payment, with the balance due before the phone ships in September. But only 1,000 units will be available as part of this "limited batch."

In terms of hardware, the Jolla Phone has mid-range specs plus a few special features including a user-replaceable battery, swappable back covers, and a physical privacy switch that lets you quickly disable the phone's microphone, camera, Bluetooth, or other features.

The privacy switch is a software-defined feature rather than hardware, though, which means that while it's user-customizable, it's also not quite as secure as hardware kill switches that physically disconnect the electronics that allow your camera, mic, or wireless hardware to work.

What really sets the Jolla Phone apart from most other smartphones, though, is its software. Sailfish OS is a Linux-based operating system with a proprietary user interface and an "AppSupport" feature that lets you install and run some Android apps.

Jolla positions the phone as a device that uses a "European operating system" that respects user privacy, because it doesn't require a Google account and doesn't send your data to big tech companies by default.

The phone has a 6.36 inch FHD+ AMOLED display, a MediaTek Dimensity 7100 processor, 256GB of storage and 8GB or 12GB of RAM. It has a microSD card slot for removable storage, a 5,450 mAh battery, and 50MP primary + 13MP ultra-wide cameras plus a 32MP front-facing camera.

Wireless capabilities include support for 5G NR cellular networks as well as 4G LTE, WiFi 6, Bluetooth 5.4, and NFC. There's a fingerprint sensor in the power button.

Another distinctive feature is support for modular back covers that not only change the color of the phone, but also add functionality. Jolla calls this system "The Other Half," and announced plans to revive the platform it had established for older phones if at least 10,000 new Jolla Phones were pre-ordered. Since that goal has been reached, it looks like that may actually happen.

Jolla has been soliciting feedback on potential The Other Half add-ons in its user forum. Some top contenders (in terms of popularity, anyway) include add-ons that could add features like keyboards, stylus adapters, or additional displays (such as E Ink or small OLED screens). But other possibilities include modules that could bring support for extra batteries, additional wireless communication standards (such as Zigbee or LoRa), high-quality digital-to-analog audio converters, heat cameras, temperature and air quality sensors, and more.

The company hasn't announced pricing or availability for any The Other Half modules yet, though.

...

Read the original on liliputing.com »

10 219 shares, 22 trendiness

It’s Official: Wiz Joins Google!

Nearly a year ago, we shared that Wiz would be joining Google. At the time, we spoke about a belief that by bringing together Wiz's innovation and Google's scale, we could meaningfully change what security looks like in the cloud.

Today, as we officially begin our journey as a Google company, that belief feels real in a much deeper way. Not because of what has changed, but because of what has stayed true.

Our mission remains bold and unwavering: to help every organization protect everything they build and run. What has changed is the world around us. Now, we must do this at the speed of AI.

Cloud once transformed how fast teams could build. AI is doing it again, unlocking a new era of innovation where applications move from idea to production in minutes. Generative AI is no longer experimental; it's becoming a core part of how modern organizations build, ship, and scale.

Customers are leaning into this moment, using AI to move faster, create more, and reimagine what's possible. But building at this pace requires a new approach to security — one that keeps up with change and supports innovation rather than slowing it down.

At Wiz, we believe security should accelerate progress. By combining deep understanding of cloud environments with rich context across code, cloud, and runtime, we enable teams to build AI-powered applications securely from the start and strengthen them continuously as they evolve.

Today's security leaders are focused on enabling the business, supporting rapid innovation while staying ahead of increasingly sophisticated threats. With Wiz, they don't have to choose between speed and security.

In this environment, velocity is everything. At Wiz, it's our mantra; we're committed to helping customers turn that speed into a lasting edge — building boldly, securely, and with confidence.

During the acquisition process, our wizards never stopped building. In the past year, we hit many major milestones thanks to their grit and determination.

Wiz Research continues to be at the forefront of security, uncovering critical vulnerabilities that protect not just Wiz customers, but the industry at large. This work highlights the systemic risks inherent in the digital age, with discoveries like:

* An exposed database in Moltbook, a viral social network for AI agents, that leaked millions of API keys and underscored the security implications of vibe-coded applications.

* CodeBreach, a critical supply chain vulnerability that could have compromised the AWS Console.

* RediShell, a 13-year-old critical RCE flaw in Redis (CVSS 10.0) that impacted over 75% of cloud environments.

* A collaboration with vibe coding leader Lovable to harden their platform and protect the next generation of AI-generated applications (where Wiz found that 1 in 5 organizations are exposed to systemic risks).

* The discovery and remediation of a sophisticated wave of supply chain attacks, including Shai-Hulud and NX, where our research protected hundreds of organizations from highly targeted, evolving threats.

To push the boundaries of innovation even further, we also hosted ZeroDay.cloud, a first-of-its-kind hacking competition where the world's top researchers uncovered a record number of CVEs in foundational cloud and AI tools.

These milestones represent our unwavering commitment to securing the open-source and multicloud infrastructure underpinning the modern world.

Product innovation has always been at the heart of Wiz, and over the past year, our momentum has only accelerated.

As customers build and ship faster in the AI era, we expanded the Wiz AI Security Platform to secure AI applications themselves, providing visibility into AI usage, preventing AI-native risks, and protecting AI workloads in runtime.

We introduced Wiz Exposure Management to give teams a single, proactive view of risk — unifying vulnerability and attack surface management from code to cloud to on-prem, so they can focus on what truly matters and proactively remove exploitable risk.

We pushed the boundaries of automation with AI Security Agents, purpose-built to help teams investigate, prioritize, and remediate risk at machine speed, powered by deep context across code, cloud, and runtime.

And to help developers start secure by default, we launched WizOS: hardened, near-zero-CVE container base images that give teams a trusted foundation from the very first commit.

These are just a few highlights from a year of relentless building, and we're only getting started.

Now, as one team with Google Cloud, we have the opportunity to accelerate our roadmap in ways that simply weren't possible before. By integrating the most cutting-edge AI capabilities into the Wiz platform, we'll continue to give security teams new superpowers. In the coming days, we'll share more about how we're already working with Gemini, and what the next phase of this partnership will unlock.

But one thing is not changing: Wiz remains a multi-cloud platform. Today, we work with most of the Fortune 100 and most of the Frontier AI labs, as well as many of the world's fastest-growing, cloud-native companies. Our customers run on AWS, Azure, GCP, and OCI. Our goal is to protect their entire environment — every workload, every application, every major cloud.

Joining Google doesn't narrow our focus. It strengthens it. With Google's infrastructure, Mandiant's threat intelligence, and the broader Google Unified Security Platform and ecosystem, we can protect customers better — wherever they build.

Trust is something we earn every day. And we intend to prove it through our actions, our product, and our pace of innovation.

To our customers: thank you for your trust. You challenge us to solve the hardest problems in security, and you are the reason we build.

To the Wiz team: I may be CEO in title, but you are the ones who lead. Thank you for your dedication, your care, and your belief in what we're making together.

Our mission remains as bold as ever: to protect everything organizations build and run.

And we are still just getting started.

...

Read the original on www.wiz.io »
