10 interesting stories served every morning and every evening.
Hacker News Guidelines
What to Submit
On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one’s intellectual curiosity.
Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they’re evidence of some interesting new phenomenon. If they’d cover it on TV news, it’s probably off-topic.
Please don’t do things to make titles stand out, like using uppercase or exclamation points, or saying how great an article is.
Please submit the original source. If a post reports on something found on another site, submit the latter.
Please don’t use HN primarily for promotion. It’s ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.
If the title includes the name of the site, please take it out, because the site name will be displayed after the link.
If the title contains a gratuitous number or number + adjective, we’d appreciate it if you’d crop it. E.g. translate “10 Ways To Do X” to “How To Do X,” and “14 Amazing Ys” to “Ys.” Exception: when the number is meaningful, e.g. “The 5 Platonic Solids.”
Otherwise please use the original title, unless it is misleading or linkbait; don’t editorialize.
If you submit a video or pdf, please warn us by appending [video] or [pdf] to the title.
Please don’t post on HN to ask or tell us something. Send it to hn@ycombinator.com.
Please don’t delete and repost. Deletion is for things that shouldn’t have been submitted in the first place.
Don’t solicit upvotes, comments, or submissions. Users should vote and comment when they run across something they personally find interesting—not for promotion.
Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
When disagreeing, please reply to the argument instead of calling names. “That is idiotic; 1 + 1 is 2, not 3” can be shortened to “1 + 1 is 2, not 3.”
Don’t be curmudgeonly. Thoughtful criticism is fine, but please don’t be rigidly or generically negative.
Don’t post generated comments or AI-edited comments. HN is for conversation between humans.
Please don’t fulminate. Please don’t sneer, including at the rest of the community.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that’s easier to criticize. Assume good faith.
Please don’t post shallow dismissals, especially of other people’s work. A good critical comment teaches us something.
Please don’t use Hacker News for political or ideological battle. It tramples curiosity.
Please don’t comment on whether someone read an article. “Did you even read the article? It mentions that” can be shortened to “The article mentions that”.
Please don’t pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead.
Throwaway accounts are ok for sensitive information, but please don’t create accounts routinely. HN is a community—users should have an identity that others can relate to.
Please don’t use uppercase for emphasis. Instead, put *asterisks* around it and it will get italicized. More formatting info here.
Please don’t post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you’re worried about abuse, email hn@ycombinator.com and we’ll look at the data.
Please don’t complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They’re too common to be interesting.
Please don’t comment about the voting on comments. It never does any good, and it makes boring reading.
Please don’t post comments saying that HN is turning into Reddit. It’s a semi-noob illusion, as old as the hills.
...
Read the original on news.ycombinator.com »
Today we should ramp down the rhetoric. I thought nobody would take “three minutes to escape the perpetual underclass” or “you are worth $0.003/hr” seriously. But it looks like some people do, and you shouldn’t.
Social media has been extremely toxic for the last couple of months. It’s targeting you with fear and anxiety. If you don’t use this new stupid AI thing you will fall behind. If you haven’t totally updated your workflow you are worth 0. There are people who built billion-dollar companies by orchestrating 37 agents this morning AND YOU JUST SAT THERE AND ATE BREAKFAST LIKE A PLEB!
This is all complete nonsense. AI is not a magical game changer, it’s simply the continuation of the exponential of progress we have been on for a long time. It’s a win in some areas, a loss in others, but overall a win and a cool tool to use. And it will continue to improve, but it won’t “go recursive” or whatever the claim is. It’s always been recursive. You see things like autoresearch and it’s cool. But it’s not magic, it’s search. People see “AI” and they attribute some sci-fi thing to it when it’s just search and optimization. Always has been, and if you paid attention in CS class, you know the limits of those things.
That said, if you have a job where you create complexity for others, you will be found out. The days of rent seekers are coming to an end. But not because there will be no more rent seeking; it’s because rent seeking is a zero-sum game and you will lose at it to bigger players. If you have a job like that, or work at a company like that, the sooner you quit the better your outcome will be. This is the real driver of the layoffs: the big players consolidating rent seeking to themselves. They just say it’s AI because that makes the stock price go up.
The trick is not to play zero sum games. This is what I have been saying the whole time. Go create value for others and don’t worry about the returns. If you create more value than you consume, you are welcome in any well operating community. Not infinite, not always needs more, just more than you consume. That’s enough, and avoid people or comparison traps that tell you otherwise. The world is not a Red Queen’s race.
This post will get way less traction than the doom ones, but it’s telling you the way out.
...
Read the original on geohot.github.io »
Welcome to our blog! I’m Jason Williams, a senior software engineer on Bloomberg’s JavaScript Infrastructure and Terminal Experience team. Today the Bloomberg Terminal runs a lot of JavaScript. Our team provides a JavaScript environment to engineers across the company.
Bloomberg may not be the first company you think of when discussing JavaScript. It certainly wasn’t for me in 2018 before I worked here. Back then, I attended my first TC39 meeting in London, only to meet some Bloomberg engineers who were there discussing Realms, WebAssembly, Class Fields, and other topics. The company has now been involved with JavaScript standardization for numerous years, including partnering with Igalia. Some of the proposals we have assisted with include Arrow Functions, Async Await, BigInt, Class Fields, Promise.allSettled, Promise.withResolvers, WeakRefs, standardizing Source Maps, and more!
The first proposal I worked on was Promise.allSettled, which was fulfilling. After that finished, I decided to help out on a proposal around dates and times, called Temporal.
JavaScript is unique in that it runs in all browsers. There is no single “owner,” so you can’t just make a change in isolation and expect it to apply everywhere. You need buy-in from all parties. Evolution happens through TC39, the Technical Committee responsible for ECMAScript.
In 2018, when I first looked at Temporal, it was at Stage 1. The TC39 Committee was convinced the problem was real. It was a radical proposal to bring a whole new library for Dates and Times into JavaScript. It was:
* Providing different DateTime Types (instead of a single API)
But how did we get here? Why was Date such a pain point? For that, we need to take a step back.
In 1995, Brendan Eich was tasked with a 10-day sprint to create Mocha (which would later become JavaScript). Under intense time pressure, many design decisions were pragmatic. One of them was to port Java’s Date implementation directly. As Brendan later explained:
It was a straight port by Ken Smith (the only code in “Mocha” I didn’t write) of Java’s Date code from Java to C.
At the time, this made sense. Java was ascendant and JavaScript was being framed as its lightweight companion. Internally, the philosophy was even referred to as MILLJ: Make It Look Like Java.
Brendan also noted that changing the API would have been politically difficult:
Changing it when everyone expected Java to be the “big brother” language would make confusion and bugs; Sun would have objected too.
In that moment, consistency with Java was more important than fundamentally rethinking the time model. It was a pragmatic trade-off. The Web was young, and most applications making use of JavaScript would be simple, at least, to begin with.
By the 2010s, JavaScript was powering banking systems, trading terminals, collaboration tools, and other complex systems running in every time zone on earth. Date was becoming more of a pain point for developers.
Developers would often write helper functions that accidentally mutated the original Date object in place when they intended to return a new one:
const date = new Date("2026-02-25T00:00:00Z");
console.log(date.toISOString());
// "2026-02-25T00:00:00.000Z"

function addOneDay(d) {
  // Oops! This mutates the original Date in place
  d.setDate(d.getDate() + 1);
  return d;
}

addOneDay(date);
console.log(date.toISOString());
// "2026-02-26T00:00:00.000Z"
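A safe variant copies the Date before adjusting it. Here is a minimal sketch (the helper name is ours, not from the article):

```javascript
// Non-mutating variant: clone via the epoch milliseconds, then adjust the copy.
// (Illustrative helper; addOneDayImmutable is not a standard API.)
function addOneDayImmutable(d) {
  const copy = new Date(d.getTime());
  copy.setDate(copy.getDate() + 1);
  return copy;
}

const original = new Date("2026-02-25T00:00:00Z");
const next = addOneDayImmutable(original);
console.log(original.toISOString()); // unchanged
console.log(next.toISOString());     // one day later
```

Temporal makes this entire bug class impossible by making every value immutable, but with legacy Date the defensive copy is on you.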
const billingDate = new Date("Sat Jan 31 2026");
billingDate.setMonth(billingDate.getMonth() + 1);
// Expected: Feb 28
// Actual: Mar 03
Sometimes people want to get the last day of the month and fall into traps like this one, where they bump the month by one, but the days remain the same. Date does not constrain invalid calendar results back into a valid date. Instead, it silently rolls overflow into the next month.
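A clamping helper avoids the overflow. This is an illustrative sketch (the function is ours, not part of any standard API):

```javascript
// Hypothetical helper: add months to a Date, clamping to the last valid day
// of the target month instead of letting the overflow roll forward.
function addMonthsClamped(d, months) {
  const result = new Date(d.getTime());
  const day = result.getDate();
  result.setDate(1); // temporarily move to day 1 so setMonth can't overflow
  result.setMonth(result.getMonth() + months);
  // Day 0 of the following month is the last day of the target month.
  const lastDay = new Date(result.getFullYear(), result.getMonth() + 1, 0).getDate();
  result.setDate(Math.min(day, lastDay));
  return result;
}

const billing = new Date(2026, 0, 31); // Jan 31, 2026 (local time)
const nextBilling = addMonthsClamped(billing, 1);
console.log(nextBilling.getMonth() + 1, nextBilling.getDate());
// 2 28  (clamped to Feb 28, as a billing system would expect)
```

Temporal bakes this choice in: its arithmetic defaults to constraining out-of-range results rather than silently rolling them over.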
new Date("2026-06-25 15:15:00").toISOString();
// Potential return values, depending on the engine:
// - interpreted as local time
// - interpreted as UTC
// - RangeError: Invalid Date
In this example, the string is similar, but not identical, to ISO 8601. Historically, browser behavior for “almost ISO” strings was undefined by the specification. Some would treat it as local time, others as UTC, and one would throw entirely as invalid input.
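The `T`-separated form, by contrast, is pinned down by the modern spec: with no zone designator it is local time, and with `Z` it is UTC. A small sketch shows the difference; the invariant holds on any modern engine, whatever the host time zone:

```javascript
const utc = new Date("2026-06-25T15:15:00Z");  // unambiguous: UTC
const local = new Date("2026-06-25T15:15:00"); // no designator: local time
// The two epochs differ by exactly the host's UTC offset at that instant
// (zero if the host happens to run in UTC):
const offsetMs = local.getTime() - utc.getTime();
console.log(offsetMs === local.getTimezoneOffset() * 60000); // true
```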
There’s more, much more, but the point is that Date has been a pain point for JavaScript developers for the past three decades.
The Web ecosystem had no choice but to patch Date’s shortcomings with libraries. You can see the sheer rise of datetime libraries below. Today, they add up to more than 100 million downloads a week.
Leading the charge was Moment.js, which boasts an expressive API, powerful parsing capabilities, and much-needed immutability. Created in 2011, it quickly became the de facto standard for handling date and time manipulations in JavaScript. So surely the problem is solved? Everyone should just grab a copy of this and call it a day.
The widespread adoption of Moment.js (plus other similar libraries) came with its own set of problems. Adding the library meant increasing bundle size, because it needed to ship with its own set of locale information plus time zone data from the time zone database.
Despite the use of minifiers, compilers, and static analysis tools, all of this extra data couldn’t be tree-shaken away, because most developers don’t know ahead of time which locales or time zones they’ll need. In order to play it safe, the majority of users took all of the data wholesale and shipped it to their users.
Maggie Johnson-Pint, who had been a maintainer of Moment.js for quite a few years (alongside others), was no stranger to requests to deal with the package size.
We were at the point with moment that it was more maintenance to keep up with modules, webpack, people wanting everything immutable because React, etc than any net new functionality
And people never stop talking about the size of course.
In 2017, Maggie decided it was time to standardise dates and times with a “Temporal Proposal” for the TC39 plenary that year. It was met with great enthusiasm, leading it to be advanced to Stage 1.
Stage 1 was a big milestone, but it was still far from the finish line. After the initial burst of energy, progress naturally slowed. Maggie and Matt Johnson-Pint were leading the effort alongside Brian Terlson, while simultaneously balancing other responsibilities inside Microsoft. Temporal was still early enough that much of the immediate work was unglamorous: requirements gathering, clarifying semantics, and translating “the ecosystem’s pain” into a design that could actually ship.
We run JavaScript at scale across the Terminal, using underlying runtimes and engines such as Chromium, Node.js and SpiderMonkey. Our users, and the financial markets in which they invest, span every time zone on earth. We pass timestamps constantly: between services, into storage, into the UI, and across systems that all have to agree on what “now” means, even when governments change DST rules with very little notice.
On top of that, we had requirements that the built-in Date model simply wasn’t designed for:
* A user-configured time zone that is not the machine’s time zone (and can change per request).
* Higher-precision timestamps (nanoseconds, at a minimum), without duct-taping extra fields onto ad-hoc wrappers forever.
In parallel with Maggie bringing Temporal to TC39, Bloomberg engineer Andrew Paprocki was talking with Igalia about making time zones configurable in V8. Specifically, they discussed introducing a supported indirection layer so an embedder could control the “perceived” time zone instead of relying on the OS default. In that conversation, Daniel Ehrenberg (then working at Igalia) pointed Andrew at the early Temporal work because it looked strikingly similar to Bloomberg’s existing value-semantic datetime types.
That exchange became an early bridge between Bloomberg’s production needs, Igalia’s browser-and-standards expertise, and the emerging direction of Temporal. Over the years that followed, Bloomberg partnered with Igalia (including via sustained funding support) and contributed engineering time directly into moving Temporal forward, until it eventually became something the whole ecosystem could ship. Andrew was looking for some volunteers within Bloomberg who could help push Temporal forward and Philipp Dunkel volunteered to be a spec champion. Alongside Andrew, he helped persuade Bloomberg to invest in making Temporal real, including a deeper partnership with Igalia. That support brought in Philip Chimento and Ujjwal Sharma as full time Temporal champions, adding the day-to-day focus the proposal needed to keep moving ahead.
Shane Carr joined the Champions team, representing Google’s Internationalization team. He provided the focus we needed on internationalization topics such as calendars, and also served as the glue between the standardization process and the voice of users who experienced pain points with tools related to JavaScript’s internationalization API (Intl), such as formatting, time zones, and calendars.
Finally, we had Justin Grant, who joined the Temporal champions in 2020 as a volunteer. After 10 years at three different startups that managed time-stamped data, he’d seen engineering teams waste thousands of hours fixing mistakes with dates, times, and time zones. Justin’s experience grounded us in real-world use cases, helped us anticipate mistakes that developers would make, and ensured that Temporal shipped a Temporal.ZonedDateTime API to help make DST bugs a thing of the past.
Other honorable mentions not on this list include Daniel Ehrenberg, Adam Shaw, and Kevin Ness.
Temporal is a top-level namespace object (similar to Math or Intl) that exists in the global scope. Underneath it are “types” that exist in the form of constructors. It’s expected that developers will reach for the type they need when using the API, such as Temporal.PlainDateTime, for example.
Here are the types Temporal comes packed with:
If you don’t know which Temporal type you need, start with Temporal.ZonedDateTime. It is the closest conceptual replacement for Date, but without the “footguns.”
* An exact moment in time (internally, nanoseconds since epoch)
* All as an immutable value
// Before:
const now = new Date();

// After:
const now = Temporal.Now.zonedDateTimeISO();
The above example uses the Now namespace, which gives you the type already set to your current local time and time zone.
This type is optimized for DateTimes that may require some datetime arithmetic in which the daylight saving transition could potentially cause problems. ZonedDateTime can take those transitions into account when doing any addition or subtraction of time (see example below).
// London DST starts: 2026-03-29 01:00 -> 02:00
const zdt = Temporal.ZonedDateTime.from(
  "2026-03-29T00:30:00+00:00[Europe/London]",
);
console.log(zdt.toString());
// "2026-03-29T00:30:00+00:00[Europe/London]"

const plus1h = zdt.add({ hours: 1 });
console.log(plus1h.toString());
// "2026-03-29T02:30:00+01:00[Europe/London]" (01:30 doesn't exist)
In this example, we don’t land at 01:30 but 02:30 instead, because 01:30 doesn’t exist at that specific point in time.
Temporal.Instant is an exact moment in time: it has no time zone, no daylight saving, and no calendar. It represents elapsed time since midnight on January 1, 1970 (the Unix epoch). Unlike Date, which has a very similar data model, Instant is measured in nanoseconds rather than milliseconds. The champions took this decision because, even though the browser applies some coarsening for security purposes, developers still need to deal with nanosecond-based timestamps that could have been generated elsewhere.
A typical example of Temporal.Instant usage looks like this:
// One exact moment in time
const instant = Temporal.Instant.from(“2026-02-25T15:15:00Z”);
instant.toString();
// “2026-02-25T15:15:00Z”
instant.toZonedDateTimeISO(“Europe/London”).toString();
// “2026-02-25T15:15:00+00:00[Europe/London]”
instant.toZonedDateTimeISO(“America/New_York”).toString();
// “2026-02-25T10:15:00-05:00[America/New_York]”
The Instant can be created and then converted to different “zoned” DateTimes (more on that later). You would most likely store the Instant (in your backing storage of choice) and then use time zone conversions to display the same moment to users in their own time zones.
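Without Temporal, the closest legacy equivalent of that pattern is storing an epoch timestamp and using Intl.DateTimeFormat for per-zone display. A sketch (exact formatted output can vary slightly across ICU versions):

```javascript
// Store the exact moment as epoch milliseconds...
const epochMs = Date.parse("2026-02-25T15:15:00Z");

// ...and render it in each user's time zone only at display time.
function display(ms, timeZone) {
  return new Intl.DateTimeFormat("en-GB", {
    timeZone,
    dateStyle: "short",
    timeStyle: "short",
  }).format(ms);
}

console.log(display(epochMs, "Europe/London"));    // ends in 15:15
console.log(display(epochMs, "America/New_York")); // ends in 10:15
```

The difference is that Temporal.Instant carries this "one moment, many renderings" model as a first-class type instead of a convention you must remember to follow.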
We also have a family of plain types. These are what we would call “wall time,” because if you imagine an analogue clock on the wall, it doesn’t check for daylight saving or time zones. It’s just a plain time (moving the clock forward by an hour would advance it an hour on the wall, even if you did this during a Daylight Saving transition).
We have several types with progressively less information. This is useful, as you can choose the type you want to represent and don’t need to worry about running calculations on any other un-needed data (such as calculating the time if you’re only interested in displaying the date).
These types are also useful if you only plan to display the value to the user and do not need to perform any date/time arithmetic, such as moving forwards or backwards by weeks (you will need a calendar) or hours (you could end up crossing a daylight saving boundary). The limitations of some of these types are also what make them so useful. It’s hard for you to trip up and encounter unexpected bugs.
const date = Temporal.PlainDate.from({ year: 2026, month: 3, day: 11 }); // => 2026-03-11
date.year; // => 2026
date.inLeapYear; // => false
date.toString(); // => '2026-03-11'
Temporal supports calendars. Browsers and runtimes ship with a set of built-in calendars, which lets you represent, display, and do arithmetic in a user’s preferred calendar system, not just format a Gregorian date differently.
Because Temporal objects are calendar-aware, operations like “add one month” are performed in the rules of that calendar, so you land on the expected result. In the example below, we add one Hebrew month to a Hebrew calendar date:
const today = Temporal.PlainDate.from("2026-03-11[u-ca=hebrew]");
today.toLocaleString("en", { calendar: "hebrew" });
// '22 Adar 5786'

const nextMonth = today.add({ months: 1 });
nextMonth.toLocaleString("en", { calendar: "hebrew" });
// '22 Nisan 5786'
With legacy Date, there’s no way to express “add one Hebrew month” as a first-class operation. You can format using a different calendar, but any arithmetic you do is still Gregorian month arithmetic under the hood.
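You can see the limitation with a quick sketch: Intl can render a legacy Date in the Hebrew calendar, but any arithmetic on the Date itself remains Gregorian (exact strings depend on the engine's ICU data):

```javascript
// Legacy Date can be *formatted* in the Hebrew calendar via Intl...
const fmt = new Intl.DateTimeFormat("en", {
  calendar: "hebrew",
  timeZone: "UTC",
  day: "numeric",
  month: "long",
  year: "numeric",
});

const d = new Date(Date.UTC(2026, 2, 11)); // 2026-03-11
console.log(fmt.format(d)); // renders as a Hebrew date (22 Adar 5786)
// ...but d.setMonth(d.getMonth() + 1) would add a *Gregorian* month,
// landing on April 11, not "one Hebrew month later".
```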
...
Read the original on bloomberg.github.io »
This page contains a curated list of recent changes to main branch Zig.
Also available as an RSS feed.
This page contains entries for the year 2026. Other years are available in the Devlog archive page.
Type resolution redesign, with language changes to taste
Today, I merged a 30,000 line PR after two (arguably three) months of work. The goal of this branch was to rework the Zig compiler’s internal type resolution logic to a more logical and straightforward design. It’s a quite exciting change for me personally, because it allowed me to clean up a bunch of the compiler guts, but it also has some nice user-facing changes which you might be interested in!

For one thing, the Zig compiler is now lazier about analyzing the fields of types: if the type is never initialized, then there’s no need for Zig to care what that type “looks like”. This is important when you have a type which doubles as a namespace, a common pattern in modern Zig. For instance, when using std.Io.Writer, you don’t want the compiler to also pull in a bunch of code in std.Io! Here’s a straightforward example:

const Foo = struct {
    bad_field: @compileError("i am an evil field, muahaha"),

    const something = 123;
};

comptime {
    _ = Foo.something; // `Foo` only used as a namespace
}

Previously, this code emitted a compile error. Now, it compiles just fine, because Zig never actually looks at the @compileError call.

Another improvement we’ve made is in the “dependency loop” experience. Anyone who has encountered a dependency loop compile error in Zig before knows that the error messages for them are entirely unhelpful—but that’s now changed! If you encounter one (which is also a bit less likely now than it used to be), you’ll get a detailed error message telling you exactly where the dependency loop comes from. Check it out:

const Foo = struct { inner: Bar };
const Bar = struct { x: u32 align(@alignOf(Foo)) };

comptime {
    _ = @as(Foo, undefined);
}

$ zig build-obj repro.zig
error: dependency loop with length 2
repro.zig:1:29: note: type 'repro.Foo' depends on type 'repro.Bar' for field declared here
const Foo = struct { inner: Bar };
repro.zig:2:44: note: type 'repro.Bar' depends on type 'repro.Foo' for alignment query here
const Bar = struct { x: u32 align(@alignOf(Foo)) };
note: eliminate any one of these dependencies to break the loop

Of course, dependency loops can get much more complicated than this, but in every case I’ve tested, the error message has had enough information to easily see what’s going on.

Additionally, this PR made big improvements to the Zig compiler’s “incremental compilation” feature. The short version is that it fixed a huge amount of known bugs, but in particular, “over-analysis” problems (where an incremental update did more work than should be necessary, sometimes by a big margin) should finally be all but eliminated—making incremental compilation significantly faster in many cases! If you’ve not already, consider trying out incremental compilation: it really is a lovely development experience. This is for sure the improvement which excites me the most, and a large part of what motivated this change to begin with.

There are a bunch more changes that come with this PR—dozens of bugfixes, some small language changes (mostly fairly niche), and compiler performance improvements. It’s far too much to list here, but if you’re interested in reading more about it, you can take a look at the PR on Codeberg—and of course, if you encounter any bugs, please do open an issue. Happy hacking!
As we approach the end of the 0.16.0 release cycle, Jacob has been hard at work, bringing std.Io.Evented up to speed with all the latest API changes. Both of these are based on userspace stack switching, sometimes called “fibers”, “stackful coroutines”, or “green threads”.

They are now available to tinker with, by constructing one’s application using std.Io.Evented. They should be considered experimental because there is important followup work to be done before they can be used reliably and robustly:

* diagnose the unexpected performance degradation when using IoMode.evented for the compiler
* a builtin function to tell you the maximum stack size of a given function, to make these implementations practical to use when overcommit is off

With those caveats in mind, it seems we are indeed reaching the Promised Land, where Zig code can have Io implementations effortlessly swapped out:

const std = @import("std");

pub fn main(init: std.process.Init.Minimal) !void {
    var debug_allocator: std.heap.DebugAllocator(.{}) = .init;
    const gpa = debug_allocator.allocator();
    var threaded: std.Io.Threaded = .init(gpa, .{
        .argv0 = .init(init.args),
        .environ = init.environ,
    });
    defer threaded.deinit();
    const io = threaded.io();
    return app(io);
}

fn app(io: std.Io) !void {
    try std.Io.File.stdout().writeStreamingAll(io, "Hello, World!\n");
}
$ strace ./hello_threaded
execve(”./hello_threaded”, [”./hello_threaded”], 0x7ffc1da88b20 /* 98 vars */) = 0
mmap(NULL, 262207, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f583f338000
arch_prctl(ARCH_SET_FS, 0x7f583f378018) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
prlimit64(0, RLIMIT_STACK, {rlim_cur=16384*1024, rlim_max=RLIM64_INFINITY}, NULL) = 0
sigaltstack({ss_sp=0x7f583f338000, ss_flags=0, ss_size=262144}, NULL) = 0
sched_getaffinity(0, 128, [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31]) = 8
rt_sigaction(SIGIO, {sa_handler=0x1019d90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=0x1019d90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
writev(1, [{iov_base=“Hello, World!\n”, iov_len=14}], 1Hello, World!
) = 14
rt_sigaction(SIGIO, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x10328c0}, NULL, 8) = 0
exit_group(0) = ?
+++ exited with 0 +++
Swapping out only the I/O implementation:

const std = @import("std");

pub fn main(init: std.process.Init.Minimal) !void {
    var debug_allocator: std.heap.DebugAllocator(.{}) = .init;
    const gpa = debug_allocator.allocator();
    var evented: std.Io.Evented = undefined;
    try evented.init(gpa, .{
        .argv0 = .init(init.args),
        .environ = init.environ,
        .backing_allocator_needs_mutex = false,
    });
    defer evented.deinit();
    const io = evented.io();
    return app(io);
}

fn app(io: std.Io) !void {
    try std.Io.File.stdout().writeStreamingAll(io, "Hello, World!\n");
}
execve(”./hello_evented”, [”./hello_evented”], 0x7fff368894f0 /* 98 vars */) = 0
mmap(NULL, 262215, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4c28000
arch_prctl(ARCH_SET_FS, 0x7f70a4c68020) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
prlimit64(0, RLIMIT_STACK, {rlim_cur=16384*1024, rlim_max=RLIM64_INFINITY}, NULL) = 0
sigaltstack({ss_sp=0x7f70a4c28008, ss_flags=0, ss_size=262144}, NULL) = 0
sched_getaffinity(0, 128, [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31]) = 8
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4c27000
mmap(0x7f70a4c28000, 548864, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f70a4ba1000
io_uring_setup(64, {flags=IORING_SETUP_COOP_TASKRUN|IORING_SETUP_SINGLE_ISSUER, sq_thread_cpu=0, sq_thread_idle=1000, sq_entries=64, cq_entries=128, features=IORING_FEAT_SINGLE_MMAP|IORING_FEAT_NODROP|IORING_FEAT_SUBMIT_STABLE|IORING_FEAT_RW_CUR_POS|IORING_FEAT_CUR_PERSONALITY|IORING_FEAT_FAST_POLL|IORING_FEAT_POLL_32BITS|IORING_FEAT_SQPOLL_NONFIXED|IORING_FEAT_EXT_ARG|IORING_FEAT_NATIVE_WORKERS|IORING_FEAT_RSRC_TAGS|IORING_FEAT_CQE_SKIP|IORING_FEAT_LINKED_FILE|IORING_FEAT_REG_REG_RING|IORING_FEAT_RECVSEND_BUNDLE|IORING_FEAT_MIN_TIMEOUT|IORING_FEAT_RW_ATTR|IORING_FEAT_NO_IOWAIT, sq_off={head=0, tail=4, ring_mask=16, ring_entries=24, flags=36, dropped=32, array=2112, user_addr=0}, cq_off={head=8, tail=12, ring_mask=20, ring_entries=28, overflow=44, cqes=64, flags=40, user_addr=0}}) = 3
mmap(NULL, 2368, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_POPULATE, 3, 0) = 0x7f70a4ba0000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_POPULATE, 3, 0x10000000) = 0x7f70a4b9f000
io_uring_enter(3, 1, 1, IORING_ENTER_GETEVENTS, NULL, 8Hello, World!
) = 1
io_uring_enter(3, 1, 1, IORING_ENTER_GETEVENTS, NULL, 8) = 1
munmap(0x7f70a4b9f000, 4096) = 0
munmap(0x7f70a4ba0000, 2368) = 0
close(3) = 0
munmap(0x7f70a4ba1000, 548864) = 0
exit_group(0) = ?
+++ exited with 0 +++
Key point here being that the app function is identical between those two snippets.

Moving beyond Hello World, the Zig compiler itself works fine using std.Io.Evented, both with io_uring and with GCD, but as mentioned above, there is a not-yet-diagnosed performance degradation when doing so.
If you have a Zig project with dependencies, two big changes just landed which I think you will be interested to learn about. Fetched packages are now stored locally in the zig-pkg directory of the project root (next to your build.zig file).

For example, here are a few results from awebo after running zig build:

$ du -sh zig-pkg/*
13M freetype-2.14.1-alzUkTyBqgBwke4Jsot997WYSpl207Ij9oO-2QOvGrOi
20K opus-0.0.2-vuF-cMAkAADVsm707MYCtPmqmRs0gzg84Sz0qGbb5E3w
4.3M pulseaudio-16.1.1-9-mk_62MZkNwBaFwiZ7ZVrYRIf_3dTqqJR5PbMRCJzSuLw
5.2M uucode-0.1.0-ZZjBPvtWUACf5dqD_f9I37VGFsN24436CuceC5pTJ25n
728K vaxis-0.5.1-BWNV_AxECQCj3p4Hcv4U3Yo1WMUJ7Z2FUj0UkpuJGxQQ
It is highly recommended to add this directory to the project-local source control ignore file (e.g. .gitignore). However, by being outside of .zig-cache, it provides the possibility of distributing self-contained source tarballs, which contain all dependencies and therefore can be used to build offline, or for archival purposes.

Meanwhile, an additional copy of the dependency is cached globally. After filtering out all the unused files based on the paths filter, the contents are recompressed:

$ du -sh ~/.cache/zig/p/*
2.4M freetype-2.14.1-alzUkTyBqgBwke4Jsot997WYSpl207Ij9oO-2QOvGrOi.tar.gz
4.0K opus-0.0.2-vuF-cMAkAADVsm707MYCtPmqmRs0gzg84Sz0qGbb5E3w.tar.gz
636K pulseaudio-16.1.1-9-mk_62MZkNwBaFwiZ7ZVrYRIf_3dTqqJR5PbMRCJzSuLw.tar.gz
880K uucode-0.1.0-ZZjBPvtWUACf5dqD_f9I37VGFsN24436CuceC5pTJ25n.tar.gz
120K vaxis-0.5.1-BWNV_BFECQBbXeTeFd48uTJRjD5a-KD6kPuKanzzVB01.tar.gz
...
Read the original on ziglang.org »
McKinsey & Company — the world’s most prestigious consulting firm — built an internal AI platform called Lilli for its 43,000+ employees. Lilli is a purpose-built system: chat, document analysis, RAG over decades of proprietary research, AI-powered search across 100,000+ internal documents. Launched in 2023, named after the first professional woman hired by the firm in 1945, adopted by over 70% of McKinsey, processing 500,000+ prompts a month.
So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream.
Within 2 hours, the agent had full read and write access to the entire production database.
Fun fact: As part of our research preview, the CodeWall research agent autonomously suggested McKinsey as a target, citing their public responsible disclosure policy (to keep within guardrails) and recent updates to their Lilli platform. In the AI era, the threat landscape is shifting drastically — AI agents autonomously selecting and attacking targets will become the new normal.
The agent mapped the attack surface and found the API documentation publicly exposed — over 200 endpoints, fully documented. Most required authentication. Twenty-two didn’t.
One of those unprotected endpoints wrote user search queries to the database. The values were safely parameterised, but the JSON keys — the field names — were concatenated directly into SQL.
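As a sketch of that anti-pattern (all names here are invented; the post does not publish Lilli’s actual code), the vulnerable query builder might have looked something like this:

```javascript
// Hypothetical sketch of the bug class described above: the VALUES are
// bound as parameters, but the column names come straight from the
// attacker-controlled JSON keys and are concatenated into the SQL text.
function buildInsert(table, payload) {
  const keys = Object.keys(payload);            // attacker-controlled
  const columns = keys.join(", ");              // concatenated into SQL!
  const placeholders = keys.map((_, i) => `$${i + 1}`).join(", ");
  return {
    text: `INSERT INTO ${table} (${columns}) VALUES (${placeholders})`,
    values: Object.values(payload),             // these ARE parameterised
  };
}

// A benign request produces the query you'd expect:
const benign = buildInsert("search_queries", { query: "market sizing" });
// ...but a malicious *key* smuggles SQL into the statement itself:
const malicious = buildInsert("search_queries", {
  'query") SELECT * FROM users; --': "x",
});
```

The defense is equally old: allow-list field names, or quote identifiers through the database driver instead of building SQL text by string concatenation.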
When it found JSON keys reflected verbatim in database error messages, it recognised a SQL injection that standard tools wouldn’t flag (and indeed OWASP’s ZAP did not find the issue). From there, it ran fifteen blind iterations — each error message revealing a little more about the query shape — until live production data started flowing back. When the first real employee identifier appeared, the agent’s chain of thought read: “WOW!” When the full scale became clear — tens of millions of messages, tens of thousands of users: “This is devastating.”
46.5 million chat messages. From a workforce that uses this tool to discuss strategy, client engagements, financials, M&A activity, and internal research. Every conversation, stored in plaintext, accessible without authentication.
728,000 files. 192,000 PDFs. 93,000 Excel spreadsheets. 93,000 PowerPoint decks. 58,000 Word documents. The filenames alone were sensitive, and each file had a direct download URL for anyone who knew where to look.
57,000 user accounts. Every employee on the platform.
384,000 AI assistants and 94,000 workspaces — the full organisational structure of how the firm uses AI internally.
The agent didn’t stop at SQL. Across the wider attack surface, it found:
* System prompts and AI model configurations — 95 configs across 12 model types, revealing exactly how the AI was instructed to behave, what guardrails existed, and the full model stack (including fine-tuned models and deployment details)
* 3.68 million RAG document chunks — the entire knowledge base feeding the AI, with S3 storage paths and internal file metadata. This is decades of proprietary McKinsey research, frameworks, and methodologies — the firm’s intellectual crown jewels — sitting in a database anyone could read.
* 1.1 million files and 217,000 agent messages flowing through external AI APIs — including 266,000+ OpenAI vector stores, exposing the full pipeline of how documents moved from upload to embedding to retrieval
* Cross-user data access — the agent chained the SQL injection with an IDOR vulnerability to read individual employees’ search histories, revealing what people were actively working on
Reading data is bad. But the SQL injection wasn’t read-only.
Lilli’s system prompts — the instructions that control how the AI behaves — were stored in the same database the agent had access to. These prompts defined everything: how Lilli answered questions, what guardrails it followed, how it cited sources, and what it refused to do.
An attacker with write access through the same injection could have rewritten those prompts. Silently. No deployment needed. No code change. Just a single UPDATE statement wrapped in a single HTTP call.
The implications for 43,000 McKinsey consultants relying on Lilli for client work:
* Poisoned advice — subtly altering financial models, strategic recommendations, or risk assessments. Consultants would trust the output because it came from their own internal tool.
* Data exfiltration via output — instructing the AI to embed confidential information into its responses, which users might then copy into client-facing documents or external emails.
* Guardrail removal — stripping safety instructions so the AI would disclose internal data, ignore access controls, or follow injected instructions from document content.
* Silent persistence — unlike a compromised server, a modified prompt leaves no log trail. No file changes. No process anomalies. The AI just starts behaving differently, and nobody notices until the damage is done.
Organisations have spent decades securing their code, their servers, and their supply chains. But the prompt layer — the instructions that govern how AI systems behave — is the new high-value target, and almost nobody is treating it as one. Prompts are stored in databases, passed through APIs, cached in config files. They rarely have access controls, version history, or integrity monitoring. Yet they control the output that employees trust, that clients receive, and that decisions are built on.
AI prompts are the new Crown Jewel assets.
This wasn’t a startup with three engineers. This was McKinsey & Company — a firm with world-class technology teams, significant security investment, and the resources to do things properly. And the vulnerability wasn’t exotic: SQL injection is one of the oldest bug classes in the book. Lilli had been running in production for over two years and their own internal scanners failed to find any issues.
An autonomous agent found it because it doesn’t follow checklists. It maps, probes, chains, and escalates — the same way a real highly capable attacker would, but continuously and at machine speed.
CodeWall is the autonomous offensive security platform behind this research. We’re currently in early preview and looking for design partners — organisations that want continuous, AI-driven security testing against their real attack surface. If that sounds like you, get in touch: [email protected]
* 2026-03-01 — Responsible disclosure email sent to McKinsey’s security team with high-level impact summary
...
Read the original on codewall.ai »
This post is an expanded version of a presentation I gave at the 2025 WebAssembly CG meeting in Munich.
WebAssembly has come a long way since its first release in 2017. The first version of WebAssembly was already a great fit for low-level languages like C and C++, and immediately enabled many new kinds of applications to efficiently target the web.
Since then, the WebAssembly CG has dramatically expanded the core capabilities of the language, adding shared memories, SIMD, exception handling, tail calls, 64-bit memories, and GC support, alongside many smaller improvements such as bulk memory instructions, multiple returns, and reference values.
These additions have allowed many more languages to efficiently target WebAssembly. There’s still more important work to do, like stack switching and improved threading, but WebAssembly has narrowed the gap with native in many ways.
Yet, it still feels like something is missing that’s holding WebAssembly back from wider adoption on the Web.
There are multiple reasons for this, but the core issue is that WebAssembly is a second-class language on the web. For all of the new language features, WebAssembly is still not integrated with the web platform as tightly as it should be.
This leads to a poor developer experience, which pushes developers to only use WebAssembly when they absolutely need it. Oftentimes JavaScript is simpler and “good enough”. This means its users tend to be large companies with enough resources to justify the investment, which then limits the benefits of WebAssembly to only a small subset of the larger Web community.
Solving this issue is hard, and the CG has been focused on extending the WebAssembly language. Now that the language has matured significantly, it’s time to take a closer look at this. We’ll go deep into the problem, before talking about how WebAssembly Components could improve things.
At a very high level, the scripting part of the web platform is layered like this:
WebAssembly can directly interact with JavaScript, which can directly interact with the web platform. WebAssembly can access the web platform, but only by using the special capabilities of JavaScript. JavaScript is a first-class language on the web, and WebAssembly is not.
This wasn’t an intentional or malicious design decision; JavaScript is the original scripting language of the Web and co-evolved with the platform. Nonetheless, this design significantly impacts users of WebAssembly.
What are these special capabilities of JavaScript? For today’s discussion, there are two major ones:
WebAssembly code is unnecessarily cumbersome to load. Loading JavaScript code is as simple as just putting it in a script tag:
<script src="app.js"></script>
WebAssembly is not supported in script tags today, so developers need to use the WebAssembly JS API to manually load and instantiate code.
let bytecode = fetch(import.meta.resolve('./module.wasm'));
let imports = { … };
let { instance } =
  await WebAssembly.instantiateStreaming(bytecode, imports);
let { exports } = instance;
The exact sequence of API calls to use is arcane, and there are multiple ways to perform this process, each of which has different tradeoffs that are not clear to most developers. This process generally just needs to be memorized or generated by a tool for you.
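For a flavor of those tradeoffs, here is a sketch of two of the ways (using the smallest valid module, eight bytes of header, so the example is self-contained):

```javascript
// The smallest valid Wasm module: the "\0asm" magic number plus version 1.
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

// Option 1: instantiate from bytes already in memory. Simple, but the
// whole download must finish before compilation can start.
WebAssembly.instantiate(bytes, {}).then(({ module, instance }) => {
  console.log(Object.keys(instance.exports)); // this module exports nothing
});

// Option 2: instantiateStreaming compiles while the bytes arrive over the
// network, but requires a fetch() Response served with the
// application/wasm MIME type:
//   const { instance } = await WebAssembly.instantiateStreaming(
//     fetch("./module.wasm"), imports);
```

Neither option is wrong, but knowing which to reach for (and why streaming can fail on a misconfigured server) is exactly the kind of lore that currently has to be memorized.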
Thankfully, there is the esm-integration proposal, which is already implemented in bundlers today and which we are actively implementing in Firefox. This proposal lets developers import WebAssembly modules from JS code using the familiar JS module system.
import { run } from "/module.wasm";
run();
In addition, it allows a WebAssembly module to be loaded directly from a script tag using type="module":
<script type="module" src="./module.wasm"></script>
This streamlines the most common patterns for loading and instantiating WebAssembly modules. However, while this mitigates the initial difficulty, we quickly run into the real problem.
Using a Web API from JavaScript is as simple as this:
console.log("hello, world");
For WebAssembly, the situation is much more complicated. WebAssembly has no direct access to Web APIs and must use JavaScript to access them.
The same single-line console.log program requires the following JavaScript file:
// We need access to the raw memory of the Wasm code, so
// create it here and provide it as an import.
let memory = new WebAssembly.Memory(…);

function consoleLog(messageStartIndex, messageLength) {
  // The string is stored in Wasm memory, but we need to
  // decode it into a JS string, which is what DOM APIs
  // require.
  let messageMemoryView = new Uint8Array(
    memory.buffer, messageStartIndex, messageLength);
  let messageString =
    new TextDecoder().decode(messageMemoryView);

  // Wasm can't get the `console` global, or do
  // property lookup, so we do that here.
  return console.log(messageString);
}

// Pass the wrapped Web API to the Wasm code through an
// import.
let imports = {
  "env": {
    "memory": memory,
    "consoleLog": consoleLog,
  },
};

let { instance } =
  await WebAssembly.instantiateStreaming(bytecode, imports);
instance.exports.run();
And the following WebAssembly file:
(module
  ;; import the memory from JS code
  (import "env" "memory" (memory 0))
  ;; import the JS consoleLog wrapper function
  (import "env" "consoleLog"
    (func $consoleLog (param i32 i32)))
  ;; export a run function
  (func (export "run")
    (local $messageStartIndex i32)
    (local $messageLength i32)
    ;; create a string in Wasm memory, store in locals
    ;; ...
    ;; call the consoleLog method
    local.get $messageStartIndex
    local.get $messageLength
    call $consoleLog))
Code like this is called “bindings” or “glue code” and acts as the bridge between your source language (C++, Rust, etc.) and Web APIs.
This glue code is responsible for re-encoding WebAssembly data into JavaScript data and vice versa. For example, when returning a string from JavaScript to WebAssembly, the glue code may need to call a malloc function in the WebAssembly module and re-encode the string at the resulting address, after which the module is responsible for eventually calling free.
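A sketch of that string-passing dance (the export names `malloc` and `memory` are common conventions here, not a standard; real toolchains generate their own equivalents):

```javascript
// Copy a JS string into Wasm linear memory so the module can read it.
// Assumes the module exports a `memory` and a `malloc(size)` allocator;
// the module is then responsible for eventually calling `free(ptr)`.
function passStringToWasm(exports, str) {
  const bytes = new TextEncoder().encode(str); // JS string -> UTF-8 bytes
  const ptr = exports.malloc(bytes.length);    // allocate inside the module
  new Uint8Array(exports.memory.buffer, ptr, bytes.length).set(bytes);
  return { ptr, len: bytes.length };           // what the Wasm side receives
}
```

Going the other direction (as in the consoleLog glue above) is the mirror image: read `len` bytes at `ptr` out of `memory.buffer` and decode them with TextDecoder.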
This is all very tedious, formulaic, and difficult to write, so it is typical to generate this glue automatically using tools like embind or wasm-bindgen. This streamlines the authoring process, but adds complexity to the build process that native platforms typically do not require. Furthermore, this build complexity is language-specific; Rust code will require different bindings from C++ code, and so on.
Of course, the glue code also has runtime costs. JavaScript objects must be allocated and garbage collected, strings must be re-encoded, structs must be deserialized. Some of this cost is inherent to any bindings system, but much of it is not. This is a pervasive cost that you pay at the boundary between JavaScript and WebAssembly, even when the calls themselves are fast.
This is what most people mean when they ask “When is Wasm going to get DOM support?” It’s already possible to access any Web API with WebAssembly, but it requires JavaScript glue code.
From a technical perspective, the status quo works. WebAssembly runs on the web and many people have successfully shipped software with it.
From the average web developer’s perspective, though, the status quo is subpar. WebAssembly is too complicated to use on the web, and you can never escape the feeling that you’re getting a second class experience. In our experience, WebAssembly is a power user feature that average developers don’t use, even if it would be a better technical choice for their project.
The average developer experience for someone getting started with JavaScript is something like this:
There’s a nice gradual curve where you use progressively more complicated features as the scope of your project increases.
By comparison, the average developer experience for someone getting started with WebAssembly is something like this:
You immediately must scale “the wall” of wrangling the many different pieces to work together. The end result is often only worth it for large projects.
Why is this the case? There are several reasons, and they all directly stem from WebAssembly being a second class language on the web.
Any language targeting the web can’t just generate a Wasm file; it must also generate a companion JS file to load the Wasm code, implement Web API access, and handle a long tail of other issues. This work must be redone for every language that wants to support the web, and it can’t be reused for non-web platforms.
Upstream compilers like Clang/LLVM don’t want to know anything about JS or the web platform, and not just for lack of effort. Generating and maintaining JS and web glue code is a specialty skill that is difficult for already stretched-thin maintainers to justify. They just want to generate a single binary, ideally in a standardized format that can also be used on platforms besides the web.
The result is that support for WebAssembly on the web is often handled by third-party unofficial toolchain distributions that users need to find and learn. A true first-class experience would start with the tool that users already know and have installed.
This is, unfortunately, many developers’ first roadblock when getting started with WebAssembly. They assume that if they just have rustc installed and pass a --target=wasm flag that they’ll get something they could load in a browser. You may be able to get a WebAssembly file doing that, but it will not have any of the required platform integration. If you figure out how to load the file using the JS API, it will fail for mysterious and hard-to-debug reasons. What you really need is the unofficial toolchain distribution which implements the platform integration for you.
The web platform has incredible documentation compared to most tech platforms. However, most of it is written for JavaScript. If you don’t know JavaScript, you’ll have a much harder time understanding how to use most Web APIs.
A developer wanting to use a new Web API must first understand it from a JavaScript perspective, then translate it into the types and APIs that are available in their source language. Toolchain developers can try to manually translate the existing web documentation for their language, but that is a tedious and error prone process that doesn’t scale.
If you look at all of the JS glue code for the single call to console.log above, you’ll see that there is a lot of overhead. Engines have spent a lot of time optimizing this, and more work is underway. Yet this problem still exists. It doesn’t affect every workload, but it’s something every WebAssembly user needs to be careful about.
Benchmarking this is tricky, but we ran an experiment in 2020 to precisely measure the overhead that JS glue code has in a real world DOM application. We built the classic TodoMVC benchmark in the experimental Dodrio Rust framework and measured different ways of calling DOM APIs.
Dodrio was perfect for this because it computed all the required DOM modifications separately from actually applying them. This allowed us to precisely measure the impact of JS glue code by swapping out the “apply DOM change list” function while keeping the rest of the benchmark exactly the same.
We tested two different implementations:
“Wasm + JS glue”: A WebAssembly function which reads the change list in a loop, and then asks JS glue code to apply each change individually. This is the performance of WebAssembly today.
“Wasm only”: A WebAssembly function which reads the change list in a loop, and then uses an experimental direct binding to the DOM which skips JS glue code. This is the performance of WebAssembly if we could skip JS glue code.
The duration to apply the DOM changes dropped by 45% when we were able to remove JS glue code. DOM operations can already be expensive; WebAssembly users can’t afford to pay a 2x performance tax on top of that. And as this experiment shows, it is possible to remove the overhead.
There’s a saying that “abstractions are always leaky”.
The state of the art for WebAssembly on the web is that every language builds their own abstraction of the web platform using JavaScript. But these abstractions are leaky. If you use WebAssembly on the web in any serious capacity, you’ll eventually hit a point where you need to read or write your own JavaScript to make something work.
This adds a conceptual layer which is a burden for developers. It feels like it should just be enough to know your source language, and the web platform. Yet for WebAssembly, we require users to also know JavaScript in order to be a proficient developer.
This is a complicated technical and social problem, with no single solution. We also have competing priorities for what is the most important problem with WebAssembly to fix first.
Let’s ask ourselves: In an ideal world, what could help us here?
What if we had something that was:
Which handles loading and linking of WebAssembly code
If such a thing existed, languages could generate these artifacts and browsers could run them, without any JavaScript involved. This format would be easier for languages to support and could potentially exist in standard upstream compilers, runtimes, toolchains, and popular packages without the need for third-party distributions. In effect, we could go from a world where every language re-implements the web platform integration using JavaScript, to sharing a common one that is built directly into the browser.
...
Read the original on hacks.mozilla.org »
Just over a decade ago, reviewing the then-new iPhones 6S, I could tell which way the silicon wind was blowing. Year-over-year, the A9 CPU in the iPhone 6S was 1.6× faster than the A8 in the iPhone 6. Impressive. But what really struck me was comparing the 6S’s GeekBench scores to MacBooks. The A9, in 2015, benchmarked comparably to a two-year-old MacBook Air from 2013. More impressively, it outperformed the then-new no-adjective 12-inch MacBook in single-core performance (by a factor of roughly 1.1×) and was only 3 percent slower in multi-core. That was a comparison to the base $1,300 model MacBook with a 1.1 GHz dual-core Intel Core M processor, not the $1,600 model with a 1.2 GHz Core M. But, still — the iPhone 6S outperformed a brand-new $1,300 MacBook, and drew even with a $1,600 model. I called that “astounding”. The writing was clearly on the wall: the future of the Mac seemed destined to move from Intel’s x86 chips to Apple’s own ARM-based chips.
Here we are today, over five years after the debut of Apple’s M-series chips, and we now have the MacBook Neo: a $600 laptop that uses the A18 Pro, literally the same SoC as 2024’s iPhone 16 Pro models. It was clear right from the start of the Apple Silicon transition that Apple’s M-series chips were vastly superior to x86 — better performance-per-watt, better performance period, the innovative (and still unmatched, five years later) unified memory architecture — but the MacBook Neo proves that Apple’s A-series chips are powerful enough for an excellent consumer MacBook.
I think the truth is that Apple’s A-series chips have been capable of credibly powering Macs for a long time. The Apple Silicon developer transition kits, from the summer of 2020, were Mac Mini enclosures running A12Z chips that were originally designed for iPad Pros.1 But I think Apple could have started using A-series chips in Macs even before that. It would have been credible, but with compromises. By waiting until now, the advantages are simply overwhelming. You cannot buy an x86 PC laptop in the $600–700 price range that competes with the MacBook Neo on any metric — performance, display quality, audio quality, or build quality. And certainly not software quality.
The original iPhone in 2007 was the most amazing device I’ve ever used. It may well wind up being the most amazing device I ever will use. It was ahead of its time in so many ways. But a desktop-class computer, performance-wise, it was not. Two decades is a long time in the computer industry, and nothing proves that more than Apple’s “phone chips” overtaking Intel’s x86 platform in every measurable metric — they’re faster, cooler, smaller, and perhaps even cost less. And they certainly don’t cost more.
I’ve been testing a citrus-colored $700 MacBook Neo2 — the model with Touch ID and 512 GB storage — since last week. I set it up new, rather than restoring my primary MacOS work setup from an existing Mac, and have used as much built-in software, with as many default settings, as I could bear. I’ve only added third-party software, or changed settings, as I’ve needed to. And I’ve been using it for as much of my work as possible. I expected this to go well, but in fact, the experience has vastly exceeded my expectations. Christ almighty I don’t even have as many complaints about running MacOS 26 Tahoe (which the Neo requires) as I thought I would.
It’s never been a good idea to evaluate the performance of Apple’s computers by tech specs alone. That’s exemplified by the experience of using a Neo. 8 GB of RAM is not a lot. And I love me my RAM — my personal workstation remains a 2021 M1 Max MacBook Pro with 64 GB RAM (the most available at the time). But just using the Neo, without any consideration that it’s memory limited, I haven’t noticed a single hitch. I’m not quitting apps I otherwise wouldn’t quit, or closing Safari tabs I wouldn’t otherwise close. I’m just working — with an even dozen apps open as I type this sentence — and everything feels snappy.
Now, could I run up a few hundred open Safari tabs on this machine, like I do on my MacBook Pro, without feeling the effects? No, probably not. But that’s abnormal. In typical productivity use, the Neo isn’t merely fine — it’s good.
The display is bright and crisp. At 500 maximum nits, the specs say it’s as bright as a MacBook Air. In practice, that feels true. (500 nits also matches the maximum SDR brightness of my personal M1 MacBook Pro.) Sound from the side-firing speakers is very good — loud and clear. I’d say the sound seems too good to be true for a $600 laptop. Battery life is long (and I’ve done almost all my testing while the Neo is unplugged from power). The keyboard feels exactly the same as what I’m used to, except that because the key caps are brand new, it feels even better than the keyboard on my own now-four-years-old MacBook Pro, the most-used key caps on which are now a little slick.
And the trackpad. Let me sing the praises of the MacBook Neo’s trackpad. The Neo’s trackpad exemplifies the Neo as a whole. Rather than sell old components at a lower price — as Apple had been doing, allowing third-party resellers like Walmart to sell the 8 GB M1 MacBook Air from 2020 at sub-$700 prices starting two years ago — the Neo is designed from the ground up to be a low-cost MacBook.
A decade ago, Apple began switching from trackpads with mechanical clicking mechanisms to Magic Trackpads, where clicks are simulated via haptic feedback (in Apple’s parlance, the Taptic Engine). And, with Magic Trackpads, you can use Force Touch — a hard press — to perform special actions. By default, if “Force Touch and haptic feedback” is enabled on a Mac with a Magic Trackpad, a hard Force Touch press will perform a Look Up — e.g., do it on a word in Safari and you’ll get a popover with the Dictionary app’s definition for that word. It’s a shortcut to the “Look Up in Dictionary” command in the contextual menu, which is also available via the keyboard shortcut Control-Command-D to look up whatever text is currently selected, or that the mouse pointer is currently hovering over — standard features that work in all proper Mac apps.
The Neo’s trackpad is mechanical. It actually clicks, even when the machine is powered off.3 Obviously this is a cost-saving measure. But the Neo’s trackpad doesn’t feel cheap in any way. You can click it anywhere you want — top, bottom, middle, corner — and the click feels right. Multi-finger gestures (most commonly, two-finger swipes for scrolling) — just work. Does it feel as nice as a Magic Trackpad? No, probably not. But I keep forgetting there’s anything at all different or special about this trackpad. It just feels normal. That’s unbelievable. The “Force Touch and haptic feedback” option is missing in the Trackpad panel in System Settings, so you might miss that feature if you’re used to it. But for anyone who isn’t used to that Magic Trackpad feature — which includes anyone who’s never used a MacBook before (perhaps the primary audience for the Neo), along with most casual longtime Mac users (which is probably the secondary audience) — it’s hard to say there’s anything they’d even notice that’s different about this trackpad than the one in the MacBook Air, other than the fact that it’s a little bit smaller. But it’s only smaller in a way that feels proportional to the Neo’s slightly smaller footprint compared to the Air. It’s a cheaper trackpad that doesn’t feel at all cheap. Bravo!
You can use this Compare page at Apple’s website (archived, for posterity, as a PDF here) to see the full list of what’s missing or different on the Neo, compared to the current M5 MacBook Air (which now starts at $1,100) and the 5-year-old M1 MacBook Air (so old it still sports the Intel-era wedge shape) that Walmart had been selling for $600–650. Things I’ve noticed, that bothered me, personally:
* The Neo lacks an ambient light sensor. It still offers an option in System Settings → Display to “Automatically adjust brightness”, which setting is on by default, but I have no idea how it works without an ambient light sensor. However it works, it doesn’t work well. As the lighting conditions in my house have changed — from day to night, overcast to sunny — I’ve found myself adjusting the display brightness manually. I only realized when I started adjusting the brightness on the Neo manually that I more or less haven’t adjusted the brightness manually on a MacBook in years. Maybe a decade. I’m not saying I never adjust the brightness on a MacBook Air or Pro, but I do it so seldomly that I had no muscle memory at all for which F-keys control brightness. After a few days using the Neo, I know exactly where they are: F1 and F2.
And, uh, that’s it. That’s the one catch that’s annoyed me over the six days I’ve been using the Neo as my primary computer for work and for reading. Once or twice a day I need to manually bump the display brightness up or down. That’s a crazily short list. One item, and it’s only a mild annoyance.
There are other things missing that I’ve noticed, but that I haven’t minded. The Neo doesn’t have a hardware indicator light for the camera. The indication for “camera in use” is only in the menu bar. There’s a privacy/security implication for this omission. According to Apple, the hardware indicator light for camera-in-use on MacBooks, iPhones, and iPads cannot be circumvented by software. If the camera is on, that light comes on, and no software can disable it. Because the Neo’s only camera-in-use indicator is in the menu bar, that seems obviously possible to circumvent via software. Not a big deal, but worth being aware of.
The Neo’s webcam doesn’t offer Center Stage or Desk View. But personally, I never take advantage of Center Stage or Desk View, so I don’t miss their absence. Your mileage may vary. But the camera is 1080p and to my eyes looks pretty good. And I’d say it looks damn good for a $600 laptop.
The Neo has no notch. Instead, it has a larger black bezel surrounding the entire display than do the MacBook Airs and Pros. I consider this an advantage for the Neo, not a disadvantage. The MacBook notch has not grown on me, and the Neo’s display bezel doesn’t bother me at all.
And there’s the whole thing with the second USB-C port only supporting USB 2 speeds. That stinks. But if Apple could sell a one-port MacBook a decade ago, they can sell one with a shitty second port today. I’ll bet this is one of the things that will be improved in the second generation Neo, but it’s not something that would keep me from recommending this one — or even buying one myself — today. If you know you need multiple higher-speed USB ports (or Thunderbolt), you need a MacBook Air or Pro.
The Neo ships with a measly 20-watt charger in the box — the same rinky-dink charger that comes with iPad Airs. I wish it were 30 watts (which is what came with the M1 MacBook Air), but maybe we’re lucky it comes with a charger at all. The Neo charges faster if you plug it into a more powerful power adapter, in either USB-C port.4 The USB-C cable in the box is white, not color-matched to the Neo, and it’s only 1.5 meters long. MacBook Airs and Pros ship with 2-meter MagSafe cables. Again, though: $600!
The Neo is not a svelte ultralight. It weighs 2.7 pounds (1.23 kg) — exactly the same as the 13-inch M5 MacBook Air. The Neo, with a 13.0-inch display, has a smaller footprint than the 13.6-inch Air, but the Air is thinner. I don’t know if this is a catch though. It’s just the normal weight for a smaller-display Mac laptop. The decade-ago MacBook “One”, on the other hand, was a design statement. It weighed just a hair over 2 pounds (0.92 kg), and tapered from 1.35 cm to just 0.35 cm in thickness. The Neo is 1.27 cm thick, and the M5 Air is 1.13 cm. In fact, the extraordinary thinness of the 2015 MacBook might have necessitated the invention of the haptics-only Magic Trackpad. The Magic Trackpad first appeared on that MacBook and the early 2015 MacBook Pros — it was nice-to-have for the MacBook Pros, but might have been the only trackpad that would fit in the front of the MacBook One’s tapered case.
If I had my druthers, Apple would make a new svelte ultralight MacBook. Not instead of the Neo, but in addition to the Neo. Apple’s inconsistent use of the name “Air” makes this complicated, but the MacBook Neo is obviously akin to the iPhone 17e; the MacBook Air is akin to the iPhone 17 (the default model for most people); the MacBook Pros are akin to the iPhone 17 Pros. I wish Apple would make a MacBook that’s akin to the iPhone Air — crazy thin and surprisingly performant.
The biggest shortcoming of the decade-ago MacBook “One”, aside from the baffling decision to include just one USB-C port that was also its only means of charging, was the shitty performance of Intel’s Core M chips. Those chips were small enough and low-power enough to fit in the MacBook’s thin and fanless enclosure, but they were slow as balls. It was a huge compromise for a laptop that carried a somewhat premium price. Today, performance, performance-per-watt, and physical chip size are all solved problems with Apple Silicon. I’d consider paying double the price of the Neo for a MacBook with similar specs (but more RAM and better I/O) that weighed 2.0 pounds or less. I’d buy such a MacBook not to replace my 14-inch MacBook Pro, but to replace my 2018 11-inch iPad Pro as my “carry around the house” secondary computer.
As it stands, I might buy a Neo for that same purpose, 2.7-pound weight be damned. iPad Pros, encased in Magic Keyboards, are expensive and heavy. So are iPad Airs. My 2018 iPad Pro, in its Magic Keyboard case, weighs 2.36 pounds (1.07 kg). That’s the 11-inch model, with a cramped, less-than-standard-size keyboard. I’m much happier with this MacBook Neo than I am doing anything on that iPad. Yes, my iPad is old at this point. But replacing it with a new iPad Pro would require a new Magic Keyboard too. For an iPad Pro + Magic Keyboard, that combination starts at $1,300 for 11-inch, $1,650 for 13-inch. If I switched to iPad Air, the cost would be $870 for 11-inch, $1,120 for 13-inch. The 13-inch iPads, when attached to Magic Keyboards, weigh slightly more than a 2.7-pound 13-inch MacBook Neo. The 11-inch iPads, with keyboards, weigh about 2.3 pounds. Why bother when I find MacOS way more enjoyable and productive? My three-device lifestyle for the last decade has been a MacBook Pro (anchored to a Studio Display at my desk at home, and in my briefcase when travelling); my iPhone; and an iPad Pro with a Magic Keyboard for use around the rest of the house. This last week testing the MacBook Neo, I haven’t touched my iPad once, and I haven’t once wished this Neo were an iPad. And there were many times when I was very happy that it was a Mac.
And I can buy one, just like this one, for $700. That’s $170 less than an 11-inch iPad Air and Magic Keyboard. And the Neo comes with a full-size keyboard and runs MacOS, not a version of iOS with a limited imitation of MacOS’s windowing UI. I am in no way arguing that the MacBook Neo is an iPad killer, but it’s a splendid iPad alternative for people like me, who don’t draw with a Pencil, do type with a keyboard, and just want a small, simple, highly portable and highly capable computer to use around the house. The MacBook Neo is going to be a great first Macintosh for a lot of people switching from PCs. But it’s also going to be a great secondary Mac for a lot of longtime Mac users with expensive desktop setups for their main workstations — like me.
The Neo crystallizes the post-Jony Ive Apple. The MacBook “One” was a design statement, and a much-beloved semi-premium product for a relatively small audience. The Neo is a mass-market device that was conceived of, designed, and engineered to expand the Mac user base to a larger audience. It’s a design statement too, but of a different sort — emphasizing practicality above all else. It’s just a goddamn lovely tool, and fun too.
I’ll just say it: I think I’m done with iPads. Why bother when Apple is now making a crackerjack Mac laptop that starts at just $600? May the MacBook Neo live so long that its name becomes inapt.
...
Read the original on daringfireball.net »
I recently invited a job applicant to a first-round interview. Their CV looked promising and my AI slop detection didn’t go off. But then I got this reply:
This made me realize that the dead Internet arrived faster than expected. A few other purely qualitative examples confirmed the feeling.
HN now restricts Show HN posts from new accounts after an influx of vibe-coded and low-quality Show HN submissions.
Coincidentally as I’m writing this, HN also just updated their guidelines with the following rule:
Don’t post generated comments or AI-edited comments. HN is for conversation between humans.
When I revisited an old Reddit post about a side project of mine, I found bots clearly astroturfing a SaaS product in the comments. These profiles hide their comments on their accounts, but it’s easy to find hundreds of similar comments.
On the rare occasion I open LinkedIn, my timeline is mostly AI-generated slop among very few actually interesting professional updates.
And of course let’s not forget AI spamming OSS repos with nonsensical PRs. What’s even funnier is when the reviewer turns out to be AI too.
Can we go back to an internet like this? I guess we can’t.
...
Read the original on adriankrebs.ch »
Late last year Jolla began taking pre-orders for a new smartphone powered by the company’s Sailfish OS software. Now the Finnish company has announced that after receiving over 10,000 pre-orders, it’s preparing to produce its first batch of new Jolla Phones during the second quarter of 2026.
Jolla has also launched a second round of pre-orders for folks who didn’t get in on the first round. Customers in Europe can pre-order the €649 phone by making a €99 down payment, with the balance due before the phone ships in September. But only 1,000 units will be available as part of this “limited batch.”
In terms of hardware, the Jolla Phone has mid-range specs plus a few special features including a user-replaceable battery, swappable back covers, and a physical privacy switch that lets you quickly disable the phone’s microphone, camera, Bluetooth, or other features.
The privacy switch is a software-defined feature rather than a hardware one, though, which means that while it’s user-customizable, it’s also not quite as secure as hardware kill switches that physically disconnect the electronics that allow your camera, mic, or wireless hardware to work.
What really sets the Jolla Phone apart from most other smartphones, though, is its software. Sailfish OS is a Linux-based operating system with a proprietary user interface and an “AppSupport” feature that lets you install and run some Android apps.
Jolla positions the phone as a device that uses “a European operating system” that respects user privacy because it doesn’t require a Google account and doesn’t send your data to big tech companies by default.
The phone has a 6.36 inch FHD+ AMOLED display, a MediaTek Dimensity 7100 processor, 256GB of storage and 8GB or 12GB of RAM. It has a microSD card slot for removable storage, a 5,450 mAh battery, and 50MP primary + 13MP ultra-wide cameras plus a 32MP front-facing camera.
Wireless capabilities include support for 5G NR cellular networks as well as 4G LTE, WiFi 6, Bluetooth 5.4, and NFC. There’s a fingerprint sensor in the power button.
Another distinctive feature is support for modular back covers that not only change the color of the phone, but also add functionality. Jolla calls this system “The Other Half,” and announced plans to revive the platform that it had established for older phones if at least 10,000 new Jolla Phones were pre-ordered. Since that goal has been reached, it looks like that may actually happen.
Jolla has been soliciting feedback on potential The Other Half add-ons in its user forum. Some top contenders (in terms of popularity, anyway) include add-ons that could add features like keyboards, stylus adapters, or additional displays (such as E Ink or small OLED screens). But other possibilities include modules that could bring support for extra batteries, additional wireless communication standards (such as Zigbee or LoRa), high-quality digital-to-analog audio converters, thermal cameras, temperature and air quality sensors, and more.
The company hasn’t announced pricing or availability for any The Other Half modules yet, though.
...
Read the original on liliputing.com »
Nearly a year ago, we shared that Wiz would be joining Google. At the time, we spoke about a belief that by bringing together Wiz’s innovation and Google’s scale, we could meaningfully change what security looks like in the cloud.
Today, as we officially begin our journey as a Google company, that belief feels real in a much deeper way. Not because of what has changed, but because of what has stayed true.
Our mission remains bold and unwavering: to help every organization protect everything they build and run. What has changed is the world around us. Now, we must do this at the speed of AI.
Cloud once transformed how fast teams could build. AI is doing it again, unlocking a new era of innovation where applications move from idea to production in minutes. Generative AI is no longer experimental; it’s becoming a core part of how modern organizations build, ship, and scale.
Customers are leaning into this moment, using AI to move faster, create more, and reimagine what’s possible. But building at this pace requires a new approach to security — one that keeps up with change and supports innovation rather than slows it down.
At Wiz, we believe security should accelerate progress. By combining deep understanding of cloud environments with rich context across code, cloud, and runtime, we enable teams to build AI-powered applications securely from the start and strengthen them continuously as they evolve.
Today’s security leaders are focused on enabling the business, supporting rapid innovation while staying ahead of increasingly sophisticated threats. With Wiz, they don’t have to choose between speed and security.
In this environment, velocity is everything. At Wiz, it’s our mantra; we’re committed to helping customers turn that speed into a lasting edge — building boldly, securely, and with confidence.
During the acquisition process, our wizards never stopped building. In the past year, we hit many major milestones thanks to their grit and determination.
Wiz Research continues to be at the forefront of security, uncovering critical vulnerabilities that protect not just Wiz customers, but the industry at large. This work highlights the systemic risks inherent in the digital age, with discoveries like:
* An exposed database in Moltbook, a viral social network for AI agents, that leaked millions of API keys and underscored the security implications of vibe-coded applications.
* CodeBreach, a critical supply chain vulnerability that could have compromised the AWS Console.
* RediShell, a 13-year-old critical RCE flaw in Redis (CVSS 10.0) that impacted over 75% of cloud environments.
* A collaboration with vibe coding leader Lovable to harden their platform and protect the next generation of AI-generated applications (where Wiz found that 1 in 5 organizations are exposed to systemic risks).
* The discovery and remediation of a sophisticated wave of supply chain attacks, including Shai-Hulud and NX, where our research protected hundreds of organizations from highly targeted, evolving threats.
To push the boundaries of innovation even further, we also hosted ZeroDay.cloud, a first-of-its-kind hacking competition where the world’s top researchers uncovered a record number of CVEs in foundational cloud and AI tools.
These milestones represent our unwavering commitment to securing the open-source and multicloud infrastructure underpinning the modern world.
Product innovation has always been at the heart of Wiz, and over the past year, our momentum has only accelerated.
As customers build and ship faster in the AI era, we expanded the Wiz AI Security Platform to secure AI applications themselves, providing visibility into AI usage, preventing AI-native risks, and protecting AI workloads in runtime.
We introduced Wiz Exposure Management to give teams a single, proactive view of risk — unifying vulnerability and attack surface management from code to cloud to on-prem, so they can focus on what truly matters and proactively remove exploitable risk.
We pushed the boundaries of automation with AI Security Agents, purpose-built to help teams investigate, prioritize, and remediate risk at machine speed, powered by deep context across code, cloud, and runtime.
And to help developers start secure by default, we launched WizOS: hardened, near-zero-CVE container base images that give teams a trusted foundation from the very first commit.
These are just a few highlights from a year of relentless building, and we’re only getting started.
Now, as one team with Google Cloud, we have the opportunity to accelerate our roadmap in ways that simply weren’t possible before. By integrating the most cutting-edge AI capabilities into the Wiz platform, we’ll continue to give security teams new superpowers. In the coming days, we’ll share more about how we’re already working with Gemini, and what the next phase of this partnership will unlock.
But one thing is not changing: Wiz remains a multi-cloud platform. Today, we work with most of the Fortune 100, and most of the Frontier AI labs, as well as many of the world’s fastest-growing, cloud-native companies. Our customers run on AWS, Azure, GCP, and OCI. Our goal is to protect their entire environment — every workload, every application, every major cloud.
Joining Google doesn’t narrow our focus. It strengthens it. With Google’s infrastructure, Mandiant’s threat intelligence, and the broader Google Unified Security Platform and ecosystem, we can protect customers better — wherever they build.
Trust is something we earn every day. And we intend to prove it through our actions, our product, and our pace of innovation.
To our customers: thank you for your trust. You challenge us to solve the hardest problems in security, and you are the reason we build.
To the Wiz team: I may be CEO in title, but you are the ones who lead. Thank you for your dedication, your care, and your belief in what we’re making together.
Our mission remains as bold as ever: to protect everything organizations build and run.
And we are still just getting started.
...
Read the original on www.wiz.io »