10 interesting stories served every morning and every evening.
Trees take quite a while to grow. If someone 50 years ago planted a row of oaks or a chestnut tree on your plot of land, you have something that no amount of money or effort can replicate. The only way is to wait. Tree-lined roads, old gardens, houses sheltered by decades of canopy: if you want to start fresh on an empty plot, you will not be able to get that.
Because some things just take time.
We know this intuitively. We pay premiums for Swiss watches, Hermès bags and old properties precisely because of the time embedded in them. Either because of the time it took to build them or because of their age. We require age minimums for driving, voting, and drinking because we believe maturity only comes through lived experience.
Yet right now we also live in a time of instant gratification, and it’s entering how we build software and companies. As much as we can speed up code generation, the real defining element of a successful company or an Open Source project will continue to be tenacity. The ability of leadership or the maintainers to stick to a problem for years, to build relationships, to work through challenges fundamentally defined by human lifetimes.
The current generation of startup founders and programmers is obsessed with speed. Fast iteration, rapid deployment, doing everything as quickly as possible. For many things, that’s fine. You can go fast, leave some quality on the table, and learn something along the way.
But there are things where speed is actively harmful, where the friction exists for a reason. Compliance is one of those cases. There’s a strong desire to eliminate everything that processes like SOC2 require, and an entire industry of turnkey solutions has sprung up to help —
Delve being just one example; there are more.
There’s a feeling that all the things that create friction in your life should be automated away. That human involvement should be replaced by AI-based decision-making. Because it is the friction of the process that is the problem. When in fact many times the friction, or that things just take time, is precisely the point.
There’s a reason we have cooling-off periods for some important decisions in one’s life. We recognize that people need time to think about what they’re doing, and that doing something right once doesn’t mean much because you need to be able to do it over a longer period of time.
AI writes code fast, which isn’t news anymore. What’s interesting is that we’re pushing this force downstream: we seemingly have this desire to ship faster than ever and to run more experiments, and that creates a new desire — to remove all the remaining friction of reviews, of designing and configuring infrastructure, of anything that slows the pipeline. If the machines are so great, why do we even need checklists or permission systems? Express desire, enjoy result.
Because we now believe it is important for us to just do everything faster. But increasingly, I also feel like this means that the shelf life of much of the software being created today — software that people and businesses should depend on — can be measured only in months rather than decades, and the relationships alongside it.
In one of last year’s earlier YC batches, there was already a handful of companies that just disappeared without even saying what they learned or saying goodbye to their customers. They just shut down their public presence and moved on to other things. And to me, that is not a sign of healthy iteration. That is a sign of breaking the basic trust you need to build a relationship with customers. A proper shutdown takes time and effort, and our current environment treats that as time not wisely spent. Better to just move on to the next thing.
This is extending to Open Source projects as well. All of a sudden, everything is an Open Source project, but many of them only have commits for a week or so, and then they go away because the motivation of the creator already waned. And in the name of experimentation, that is all good and well, but what makes a good Open Source project is that you think and truly believe that the person that created it is either going to stick with it for a very long period of time, or they are able to set up a strategy for succession, or they have created enough of a community that these projects will stand the test of time in one form or another.
Relatedly, I’m also increasingly skeptical of anyone who sells me something that supposedly saves my time — when all that I see is that everybody who is like me, fully onboarded into AI and agentic tools, seemingly has less and less time available, because we fall into a trap where we’re immediately filling it with more things.
We all sell each other the idea that we’re going to save time, but that is not what’s happening. Any time saved gets immediately captured by competition. Someone who actually takes a breath is outmaneuvered by someone who fills every freed-up hour with new output. There is no easy way to bank the time and it just disappears.
I feel this acutely. I’m very close to the red-hot center of where economic activity around AI is taking place, and more than anything, I have less and less time, even when I try to purposefully scale back and create the space. For me this is a problem. It’s a problem because even with the best intentions, I actually find it very hard to create quality when we are quickly commoditizing software, and the machines make it so appealing.
I keep coming back to the trees. I’ve been maintaining Open Source projects for close to two decades now. The last startup I worked on, I spent 10 years at. That’s not because I’m particularly disciplined or virtuous. It’s because I, or someone else, planted something, and then I kept showing up, and eventually the thing had roots that went deeper than my enthusiasm on any given day. That’s what time does! It turns some idea or plan into a commitment, and a commitment into something that can shelter and grow other people.
Nobody is going to mass-produce a 50-year-old oak. And nobody is going to conjure trust, or quality, or community out of a weekend sprint. The things I value most — the projects, the relationships, the communities — are all things that took years to become what they are. No tool, no matter how fast, was going to get them there sooner.
We recently planted a new tree with Colin. I want it to grow into a large one. I know that’s going to take time, and I’m not in a rush.
...
Read the original on lucumr.pocoo.org »
Age verification is no longer a narrow mechanism for a few adult websites. Across Europe, the USA, the UK, Australia, and elsewhere, it is expanding into social media, messaging, gaming, search, and other mainstream services.
The common framing says these systems exist to protect children. That concern is real. Children are exposed to harmful content, manipulative recommendation systems, predatory behavior, and compulsive platform design. Even adults are manipulated, quite successfully, with techniques that can influence national elections.
But from a technical and political point of view, age verification is not just a child-safety feature. It is an access control architecture. It changes the default condition of the network from open access to permissioned access. Instead of receiving content unless something is blocked, users increasingly have to prove something about themselves before a service is allowed to respond.
That shift becomes clearer when age assurance moves down into the operating system. In some US proposals, the model is no longer a one-off check at a website. It becomes a persistent age-status layer maintained by the OS and exposed to applications through a system-level interface. At that point, age verification stops looking like a limited safeguard and starts looking like a general identity layer for the whole device.
This is no longer only a proprietary-platform story either. Even the Linux desktop stack is beginning to absorb this pressure. systemd has reportedly added an optional birthDate field to userdb in response to age-assurance laws. Regulation is beginning to shape the data model of personal computing, so that higher-level components can build age-aware behavior on top.
Content moderation is about classification and filtering. It asks whether some content should be blocked, labeled, delayed, or handled differently. Guardianship is something else. It is the contextual responsibility of parents, teachers, schools, and other trusted adults to decide what is appropriate for a child, when exceptions make sense, and how supervision should evolve over time. Moderation is partly technical. Guardianship is relational, local, and situated in specific contexts.
I am also a parent. I understand the fear behind these proposals because I live with it too. Children do face real online risks. But recognizing that does not oblige us to accept any solution placed in front of us, least of all one that weakens privacy for everyone while shifting responsibility away from families, schools, and the people who actually have to guide children through digital life.
Age-verification laws collapse these two questions into one centralized answer. The result is predictable. A platform, browser vendor, app store, operating-system provider, or identity intermediary is asked to enforce what is presented as a child-protection policy, even though no centralized actor can replace the judgment of a parent, a school, or a local community.
It also fails on its own terms. The bypasses are obvious: VPNs, borrowed accounts, purchased credentials, fake credentials, and tricks against age-estimation systems. A control that is easy to evade but expensive to impose is not a serious compromise: it is an error or, one may say, a corporate data-grab.
The price is high and paid by everyone. More identity checks. More metadata. More logging. More vendors in the middle. More friction for people who lack the right device, the right papers, or the right digital skills. This is not a minor safety feature. It is a new control layer for the network.
And once that layer exists, it rarely stays confined to age. Infrastructure built for one attribute is easily reused for others: location, citizenship, legal status, platform policy, or whatever the next panic demands. This is how a limited check becomes a general gate.
Keep guardianship where it belongs: with parents, teachers, schools, and communities that can make contextual decisions, authorize exceptions, and adjust over time.
The operating system can help here, but only as a local policy surface under the control of users and guardians. It should not become a universal age-broadcasting layer for apps and remote services. That is the architectural line that matters.
Most of the harms invoked in this debate do not come from the mere existence of content online. They come from recommendation systems, dark patterns, addictive metrics, and business models that reward amplification without responsibility. If the goal is to protect minors, that is where regulation should bite.
If we are serious about reducing harm, we should stop asking how to identify everyone and start asking how to strengthen local control without turning the network into a checkpoint.
It is encouraging to see this article circulating widely, as it may contribute to a shift in how policymakers approach the issue. Given its growing visibility, I will keep a concise record here of the sequence of its coverage across media outlets, as well as pilot implementations across the world.
My first account of the problem emerged from a dialogue with Brave’s developer Kyle den Hartog at a cypherpunk retreat in Berlin. It was right after facilitating the digital identity track of the event that I published a rather technical piece on the topic.
Later, as age verification measures began to take hold, and in alignment with our community facilitators at the Dyne.org foundation, we decided to discontinue Discord as a channel for participation, as the platform moved to impose age verification.
Then the systemd dispute unfolded, and as founder of Devuan GNU/Linux — a Debian fork without systemd that has, since 2016, shown fewer bugs and security advisories — I found myself being the first distro maintainer to state that we would not implement age verification. The tech journalist Lunduke picked it up immediately, setting off a wave of similar declarations across the distribution maintainer community.
That was the moment I realised the need to set out, in clear terms, the reasons behind this choice, and the grounds for a form of conscientious objection should such laws ever be enforced on our projects at Dyne.org. I then wrote a piece for Wired Italy, in Italian, my mother tongue, which is due to be published by the magazine in the coming days (link TBD).
While awaiting publication in Wired, I translated the article and published it here, in English, through our think and do tank. The piece you have just read quickly reached the front page of Hacker News, drawing nearly 400 comments from concerned readers and technical experts, a valuable body of material to build on.
As the discussion gains momentum, I am engaging with colleagues at the City of Lugano and the Plan₿ Foundation, where I have recently taken on the role of Scientific Director. The proposal is to move from analysis to action by establishing a city-wide pilot that explores technologies for locally managed guardianship, offering a constructive example for Switzerland.
We are approaching this with confidence and preparing for a rollout for Lugano within the next two years. At the same time, within the Swiss Confederation there are signs of a more grounded direction, as reflected in “The Internet Initiative” placing responsibility on Big Tech and bringing together representatives from all major Swiss political parties.
My next steps include reaching out to contacts in Europe to help broaden the discussion and contribute to a more balanced public debate, in the face of sustained pressure from corporate lobbies advancing data-extractive measures.
And you can play a meaningful role as well: engage with the issue, bring your technical and political understanding to it, and help sustain attention so that those who make up the internet are not excluded from decisions that affect it. I hope this material and the reasoning behind it can be useful in that direction. Do let us at Dyne.org know if we can assist in making visible successful local pilots that implement child protection in a sound and proportionate way.
If you’d like to read further, I’ve written more about the problems of European Digital Identity implementation plans and architecture.
I’ve been working on privacy and identity technology for over a decade, primarily in projects funded by the European Commission.
Among my efforts are decodeproject.eu and reflowproject.eu, and various academic papers, including SD-BLS, recently published by IEEE. Additionally, with our team at The Forkbomb Company we’ve developed digital identity products such as DIDROOM.com and CREDIMI.io.
...
Read the original on news.dyne.org »
We write and maintain tinygrad, the fastest growing neural network framework
It’s extremely simple, and breaks down the most complex networks into 3 OpTypes.
ElementwiseOps are UnaryOps, BinaryOps, and TernaryOps. They operate on 1-3 tensors and run elementwise. example: SQRT, LOG2, ADD, MUL, WHERE, etc…
ReduceOps operate on one tensor and return a smaller tensor. example: SUM, MAX
MovementOps are virtual ops that operate on one tensor and move the data around. Copy-free with ShapeTracker. example: RESHAPE, PERMUTE, EXPAND, etc…
But how…where are your CONVs and MATMULs? Read the code to solve this mystery.
We are now funded and hiring full time software engineers. Very talented interns okay.
See our bounty page to judge if you might be a good fit. Bounties pay you while judging that fit.
We are also hiring for operations and hardware, but if you haven’t contributed to tinygrad your application won’t be considered.
We sell a computer called the tinybox. It comes in red, green, and soon, exa.
...
Read the original on tinygrad.org »
Over the last couple of years, we’ve seen significant growth of the e18e community and a rise in performance focused contributions because of it. A large part of this is the “cleanup” initiative, where the community has been pruning packages which are redundant, outdated, or unmaintained.
One of the most common topics that comes up as part of this is “dependency bloat” - the idea that npm dependency trees are getting larger over time, often with long since redundant code which the platform now provides natively.
In this post, I want to briefly look at what I think are the three main types of bloat in our dependency trees, why they exist, and how we can start to address them.
The graph above is a common sight in many npm dependency trees - a small utility function for something which seems like it should be natively available, followed by many similarly small deep dependencies.
So why is this a thing? Why do we need is-string instead of typeof checks? Why do we need hasown instead of Object.hasOwn (or Object.prototype.hasOwnProperty)? Three things:
Support for very old engines
Somewhere in the world, some people apparently exist who need to support ES3 - think IE6/7, or extremely early versions of Node.js.
For these people, much of what we take for granted today simply does not exist — a whole set of basic ES5 features is missing from ES3 engines.
These unfortunate souls, still running old engines, either need to reimplement everything themselves or be provided with polyfills.
Alternatively, what’d be really nice is if they upgraded.
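To make the contrast concrete, here is a minimal sketch (my own, not from the article) of what the native equivalents look like on any modern engine; it deliberately ignores the cross-realm and boxed-string cases discussed further down.

// is-string, for same-realm primitive strings, is just a typeof check:
const isString = (val) => typeof val === 'string';

// hasown is Object.hasOwn (ES2022) where available, or the classic prototype call:
const hasOwn = (obj, key) =>
  typeof Object.hasOwn === 'function'
    ? Object.hasOwn(obj, key)
    : Object.prototype.hasOwnProperty.call(obj, key);

console.log(isString('hello'));     // true
console.log(hasOwn({ a: 1 }, 'a')); // true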
The second reason for some of these packages is “safety”.
Basically, inside Node itself, there is a concept of “primordials”. These are essentially just global objects wrapped at startup and imported by Node from then on, to avoid Node itself being broken by someone mutating the global namespace.
For example, if Node itself uses Map and we re-define what Map is - we can break Node. To avoid this, Node keeps a reference to the original Map which it imports rather than accessing the global.
You can read more about this here in the Node repo.
This makes a lot of sense for an engine, since it really shouldn’t fall over if a script messes up the global namespace.
Some maintainers also believe this is the correct way to build packages, too. This is why we have dependencies like math-intrinsics in the graph above, which basically re-exports the various Math.* functions to avoid mutation.
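Here is a small, self-contained sketch of that idea (an assumed example, not Node's actual primordials code): capture references before any untrusted code runs, and later mutation of the globals can no longer break you.

// Capture "primordials" at startup, the way Node and packages like
// math-intrinsics do, so later tampering with globals has no effect here.
const SafeMap = Map;
const mathMax = Math.max;

// Some other script misbehaves and mutates the global namespace...
globalThis.Map = function () { throw new Error('Map was hijacked'); };
Math.max = () => -Infinity;

// ...but code holding the captured references keeps working.
const m = new SafeMap([['answer', 42]]);
console.log(m.get('answer')); // 42
console.log(mathMax(1, 2));   // 2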
Lastly, we have cross-realm values. These are basically values you have passed from one realm to another - for example, from a web page to a child or vice versa.
In this situation, a new RegExp(pattern) created in an iframe does not use the same RegExp class as the one in the parent page. This means window.RegExp !== iframeWindow.RegExp, which of course means val instanceof RegExp would be false if the value came from the iframe (another realm).
For example, I am a maintainer of chai, and we have this exact issue. We need to support assertions happening across realms (since a test runner may run tests in a VM or iframe), so we can’t rely on instanceof checks. For that reason, we use Object.prototype.toString.call(val) === '[object RegExp]' to check if something is a regex, which works across realms since it doesn’t rely on the constructor.
In the graph above, is-string is basically doing this same job in case we passed a new String(val) from one realm to another.
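Here is a hedged sketch of the same cross-realm effect, using node:vm as a stand-in for an iframe; the mechanism is identical, since each realm gets its own copies of the built-in constructors.

// A value created in another realm is not an instance of this realm's classes.
const vm = require('node:vm');

const foreignRegex = vm.runInNewContext('new RegExp("^#!")');

console.log(foreignRegex instanceof RegExp); // false — built by another realm's RegExp
console.log(Object.prototype.toString.call(foreignRegex) === '[object RegExp]'); // true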
All of this makes sense for a very small group of people. If you’re supporting very old engines, passing values across realms, or want protection from someone mutating the environment - these packages are exactly what you need.
The problem is that the vast majority of us don’t need any of this. We’re running a version of Node from the last 10 years, or using an evergreen browser. We don’t need to support pre-ES5 environments, we don’t pass values across frames, and we uninstall packages which break the environment.
These layers of niche compatibility somehow made their way into the “hot path” of everyday packages. The tiny group of people who actually need this stuff should be the ones seeking out special packages for it. Instead, it is reversed and we all pay the cost.
Some folks believe that packages should be broken up to an almost atomic level, creating a collection of small building blocks which can later be re-used to build other higher level things.
This kind of architecture means we end up with dependency graphs where the most granular snippets of code each have their own package. shebang-regex is a good example of just how small these get.
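The package's entire published source amounts to a one-line regular expression export; the snippet below is a close paraphrase rather than a verbatim copy.

// shebang-regex, paraphrased: the whole package is one exported regex
// that matches a leading "#!" line and captures whatever follows it.
const shebangRegex = /^#!(.*)/;

export default shebangRegex;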
By splitting code up to this atomic level, the theory is that we can then create higher level packages simply by joining the dots.
Some examples of these atomic packages to give you an idea of the granularity:
* arrify - Converts a value to an array (Array.isArray(val) ? val : [val])
* cli-boxes - A JSON file containing the edges of a box
* path-key - Get the PATH environment variable key for the current platform (PATH on Unix, Path on Windows)
* onetime - Ensure a function is only called once
* is-wsl - Check if process.platform is linux and os.release() contains microsoft
If we wanted to build a new CLI for example, we could pull a few of these in and not worry about implementation. We don’t need to do env['PATH'] || env['Path'] ourselves, we can just pull a package for that.
In reality, most or all of these packages did not end up as the reusable building blocks they were meant to be. They’re either largely duplicated across various versions in a wider tree, or they’re single-use packages which only one other package uses.
Let’s take a look at some of the most granular packages:
* shebang-regex is used almost solely by shebang-command by the same maintainer
* cli-boxes is used almost solely by boxen and ink by the same maintainer
* onetime is used almost solely by restore-cursor by the same maintainer
Each of these having only one consumer means they’re the equivalent of inlined code, but they cost us more to acquire (npm requests, tar extraction, bandwidth, etc.).
Taking a look at nuxt’s dependency tree, we can see a few of these building blocks duplicated across it.
Inlining them doesn’t mean we no longer duplicate the code, but it does mean we don’t pay the cost of things like version resolution, conflicts, cost of acquisition, etc.
Inlining makes duplication almost free, while packaging makes it expensive.
The more packages we have, the larger our supply chain surface area is. Every package is a potential point of failure for maintenance, security, and so on.
For example, a maintainer of many of these packages was compromised last year. This meant hundreds of tiny building blocks were compromised, which meant the higher level packages we actually install were also compromised.
Logic as simple as Array.isArray(val) ? val : [val] probably doesn’t need its own package, security, maintenance, and so on. It can just be inlined and we can avoid the risk of it being compromised.
Similar to the first pillar, this philosophy made its way into the “hot path” and probably shouldn’t have. Again, we all pay the cost to no real benefit.
If you’re building an app, you might want to use some “future” features your chosen engine doesn’t support yet. In this situation, a polyfill can come in handy - it provides a fallback implementation where the feature should be, so you can use it as if it were natively supported.
For example, temporal-polyfill polyfills the new Temporal API so we can use Temporal regardless of if the engine supports it or not.
Now, if you’re building a library instead, what should you do?
In general, no library should load a polyfill as that is a consumer’s concern and a library shouldn’t be mutating the environment around it. As an alternative, some maintainers choose to use what’s called a ponyfill (sticking to the unicorns, sparkles and rainbows theme).
A ponyfill is basically a polyfill you import rather than one which mutates the environment.
This kinda works since it means a library can use future tech by importing an implementation of it which passes through to the native one if it exists, and uses the fallback otherwise. None of this mutates the environment, so it is safe for libraries to use.
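As a concrete illustration (an assumed example, not taken from any particular package), a ponyfill module might look like the sketch below: it prefers the native implementation, exports a fallback otherwise, and never touches the global.

// Ponyfill sketch: export an implementation, never patch the environment.
// The JSON round-trip fallback is deliberately naive, just to show the shape.
const clone = typeof structuredClone === 'function'
  ? structuredClone
  : (value) => JSON.parse(JSON.stringify(value));

export default clone;

// A library then imports clone from this module instead of assuming the
// global structuredClone exists.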
For example, fastly provides @fastly/performance-observer-polyfill, which contains both a polyfill and ponyfill for PerformanceObserver.
These ponyfills did their job at the time - they allowed the library author to use future tech without mutating the environment and without forcing the consumer to know which polyfills to install.
The problem comes when these ponyfills outstay their welcome. When the feature they fill in for is now supported by all engines we care about, the ponyfill should be removed. However, this often doesn’t happen and the ponyfill remains in place long after it’s needed.
We’re now left with many, many packages which rely on ponyfills for features we’ve all had for a decade now.
Unless these packages are being kept alive because of Pillar 1, they’re usually still used just because nobody ever thought to remove them.
When all long-term support versions of engines have the feature, the ponyfill should be removed.
Much of this bloat is so deeply nested in dependency trees today that it is a fairly hefty task to unravel it all and get to a good place. It will take time, and it will take a lot of effort from maintainers and consumers.
Having said that, I do think we can make significant progress on this front if we all work together.
Start asking yourself, “why do I have this package?” and “do I really need it?”.
If you find something which seems redundant, raise an issue with the maintainer asking if it can be removed.
If you encounter a direct dependency which has many of these issues, have a look for an alternative which doesn’t. A good start for that is the module-replacements project.
knip is a great project which can help you find unused dependencies, dead code, and much more. In this case, it’s particularly useful for finding and removing dependencies you no longer use.
This doesn’t solve the problems above necessarily, but is a great starting point to help clean up the dependency tree before doing more involved work.
You can read more about how knip deals with unused dependencies in their documentation.
The e18e CLI has a super useful analyze mode to determine which dependencies are no longer needed, or have community recommended replacements.
Running the analyze mode produces a report for your project, which we can use to quickly identify which direct dependencies can be cleaned up. We can then use the migrate command to automatically migrate some of these dependencies — for example, from chalk to picocolors, a much smaller package which provides the same functionality.
In the future, this CLI will even recommend based on your environment - for example, it could suggest the native styleText instead of a colours library if you’re running a new enough Node.
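That native option already exists in current Node releases — util.styleText landed around Node 20.12/21.7 — so for many projects the colours dependency can already be dropped. A minimal usage sketch:

// Terminal colours without a dependency (Node 20.12+ / 21.7+).
import { styleText } from 'node:util';

console.log(styleText('green', 'build passed'));
console.log(styleText(['bold', 'red'], 'build failed'));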
npmgraph is a great tool to visualize your dependency tree and investigate where bloat is coming from.
For example, in the bottom half of ESLint’s dependency graph as of writing this post, the find-up branch is isolated, in that nothing else uses its deep dependencies. For something as simple as an upwards file-system traversal, maybe we don’t need 6 packages. We can then go look for an alternative, such as empathic, which has a much smaller dependency graph and achieves the same thing.
The module replacements project is being used as a central data set for the wider community to document which packages can be replaced with native functionality, or more performant alternatives.
If you’re ever in need of an alternative or just want to check your dependencies, this data set is great for that.
Similarly, if you come across packages in your tree which are made redundant by native functionality, or just have better battle-tested alternatives, this project is definitely a great place to contribute that so others can benefit from it.
Paired with the data, there’s also a codemods project which provides codemods to automatically migrate some of these packages to their suggested replacements.
We all pay the cost for an incredibly small group of people to have an unusual architecture they like, or a level of backwards compatibility they need.
This isn’t necessarily a fault of the people who made these packages, as each person should be able to build however they want. Many of them are an older generation of influential JavaScript developers - building packages in a darker time where many of the nice APIs and cross-compatibility we have today didn’t exist. They built the way they did because it was possibly the best way at the time.
The problem is that we never moved on from that. We still download all of this bloat today even though we’ve had these features for several years.
I think we can solve this by reversing things. This small group should pay the cost - they should have their own special stack pretty much only they use. Everyone else gets the modern, lightweight, and widely supported code.
Hopefully things like e18e and npmx can help with that through documentation, tooling, etc. You can also help by taking a closer look at your dependencies and asking “why?”. Raise issues with your dependencies asking if, and why, they still need these packages.
We can fix it.
...
Read the original on 43081j.com »
Professional video editing, right in your browser. A powerful NLE editor with GPU compositing, keyframe animation, and real-time preview. No installs required.
Everything you need to edit — built on WebGPU and Rust/WASM for performance that rivals native apps:
* WebGPU-powered compositing via Rust/WASM delivers near-native performance for real-time previews and exports.
* Canvas-rendered timeline with unlimited video and audio tracks, linked clips, and cross-transitions.
* Animate any property with bezier easing curves. Transform, opacity, effects — everything is keyframeable.
* Apply brightness, contrast, saturation, blur, and hue rotation — all GPU-computed with instant preview.
* Everything runs in the browser. Your media stays local with the File System Access API — nothing leaves your machine.
...
Read the original on tooscut.app »
Opinions are mixed on this post. Sometimes I miss the mark with my blunt tone. In hindsight I can see why parts come across as mean-spirited. I’ve chosen my words poorly. Feedback noted, I will strive to be more positive.
The Nero reference was for the sake of a dumb pun and a slight on AI imagery, not a serious attempt to compare Dahl. Sorry for my stupidity.
If another toxic Hacker News thread is all that this post spawns, I sincerely apologise.
I visited deno.com yesterday. I wanted to know if the hundreds of hours I’d spent mastering Deno was a sunk cost. Do I continue building for the runtime, or go back to Node?
[Screenshot: deno.com 404 not found error page stating “Sorry, there was an issue loading this page”]
Well I guess that pretty much sums up why a good chunk of Deno employees left the company over the last week.
Layoffs are what American corpo culture calls firing half the staff. Totally normal practice for a sustainable business. Mass layoffs are deemed better for the morale of those who remain than a weekly culling before Friday beers.
The Romans loved a good decimation.† If I were a purveyor of slop and tortured metaphors, I’d have adorned this post with a deepfake of Ryan Dahl fiddling as Deno burned. But I’m not, so the solemn screenshot will suffice.
† I read Rome, Inc. recently. Not a great book, I’m just explaining the reference.
A year ago I wrote about Deno’s decline. The facts, undeterred by my subjective scorn, painted a harsh picture; Deno Land Inc. was failing.
Deno incorporated with $4.9M of seed capital five years ago. They raised a further $21M series A a year later. Napkin math suggests a five year runway for an unprofitable company (I have no idea, I just made that up.)
Coincidentally, after my blog post topped Hacker News — always a pleasure for my inbox — Ryan Dahl (Deno CEO) clapped back on the official Deno blog:
There’s been some criticism lately about Deno - about Deploy, KV, Fresh, and our momentum in general. You may have seen some of the criticism online; it’s made the rounds in the usual places, and attracted a fair amount of attention.
Some of that criticism is valid. In fact, I think it’s fair to say we’ve had a hand in causing some amount of fear and uncertainty by being too quiet about what we’re working on, and the future direction of our company and products. That’s on us.
Reports of Deno’s Demise Have Been Greatly Exaggerated - Ryan Dahl
Dahl mentioned that adoption had doubled following Deno 2.0.
Since the release of Deno 2 last October - barely over six months ago! - Deno adoption has more than doubled according to our monthly active user metrics.
User base doubling sounds like a flex for a lemonade stand unless you give numbers. I imagine Sequoia Capital expected faster growth regardless. The harsh truth is that Deno’s offerings have failed to capture developers’ attention. I can’t pretend to know why — I was a fanboy myself — but far too few devs care about Deno. On the rare occasions Deno gets attention on the orange site, the comments page reads like in memoriam.
I don’t even think the problem was that Deno Deploy, the main source of revenue, sucked. Deploy was plagued by highly inconsistent isolate start times. Solicited feedback was ignored. Few cared. It took an issue from Wes Bos, one of the most followed devs in the game, for anyone at Deno to wake up. Was Deploy simply a ghost town?
Deno rushed the Deploy relaunch for the end of 2025 and it became “generally available” last month. Anyone using it? Anyone care? The Deno layoffs this week suggest only a miracle would have saved jobs. The writing was on the wall.
Speaking of ghost towns, the JSR YouTube channel is so lonely I feel bad for linking it. I only do because it shows just how little interest some Deno-led projects mustered.
JSR floundered partly because Deno couldn’t afford to invest in better infrastructure. But like everything else in the Deno ecosystem, users just weren’t interested. What makes a comparable project like NPMX flourish so quickly? Evidently, developers don’t want to replace Node and NPM. They just want what they already have but better; a drop-in improvement without friction.
To Deno and Dahl’s credit, they recognised this with the U-turn on HTTP imports. But the resulting packaging mess made things worse. JSR should have been NPMX. Deno should have gone all-in on package.json but instead we got mixed messaging and confused docs.
I could continue but it would just be cruel to dissect further. I’ve been heavily critical of Deno in the past but I really wanted it to succeed. There were genuinely good people working at Deno who lost their job and that sucks. I hope the Deno runtime survives. It’s a breath of fresh air. Bun has far more bugs and compatibility issues than anyone will admit. Node still has too much friction around TypeScript and ECMAScript modules.
So where does Deno go from here? Over to you, Ryan.
Tradition dictates an official PR statement following layoffs. Seems weird not to have one prepared in advance. That said, today is Friday, the day to bury bad news. I may be publishing this mere hours before we hear what happens next…
Given Dahl’s recent tweets and blog post, a pivot to AI might be Deno’s gamble. By the way, it’s rather telling that all the ex-employees posted their departures on Bluesky. What that tells you depends on whether you enjoy your social media alongside Grok undressing women upon request. I digress. Idle speculation has led to baseless rumours of an OpenAI acquisition. I’m not convinced that makes sense but neither does the entire AI industry.
I’m not trying to hate on Dahl but c’mon bro you’re the CEO. What’s next for Deno? Give anyone a reason to care. Although if you’re planning a 10× resurgence with automated Mac Minis, I regret asking.
...
Read the original on dbushell.com »
I really dislike Apple’s choice to clutter macOS Tahoe’s menus with icons. It makes menus hard to scan, and a bunch of the icons Apple has chosen make no sense and are inconsistent between system applications.
Steve Troughton-Smith is my hero for finding a Terminal command to disable them:
Here’s one for the icons-in-menus haters on macOS Tahoe:
It even preserves the couple of instances you do want icons, like for window zoom/resize.
Your apps will respect this change after relaunching. I ran this a few minutes ago and already appreciate the change. I really think Apple should roll this change back in macOS 27, or offer a proper setting to disable these icons for those of us who find them distracting.
...
Read the original on 512pixels.net »
Fastest graph database tested on the LDBC Social Network Benchmark, both embedded and as a server, with a lower memory footprint than other in-memory databases. Built in Rust with vectorized execution, adaptive chunking and SIMD-optimized operations.
GQL, Cypher, Gremlin, GraphQL, SPARQL and SQL/PGQ. Choose the query language that fits the project and expertise level.
Dual data model support for both Labeled Property Graphs and RDF triples. Choose the model that fits the domain.
Embed directly into applications with zero external dependencies, or run as a standalone server with REST API and web UI. From edge devices to production clusters.
Core database engine written in Rust with no required C dependencies. Optional allocators (jemalloc/mimalloc) and TLS use C libraries for performance. Memory-safe by design with fearless concurrency.
Python (PyO3), Node.js/TypeScript (napi-rs), Go (CGO), C (FFI), C# (.NET 8 P/Invoke), Dart (dart:ffi) and WebAssembly (wasm-bindgen). Use Grafeo from your language of choice.
AI integrations (LangChain, LlamaIndex, MCP), interactive notebook widgets, browser-based graphs via WebAssembly, standalone server with web UI and benchmarking tools.
...
Read the original on grafeo.dev »
Named after floccus — the cloud formation that looks exactly like popcorn.
A free, open-source local AWS emulator. No account. No feature gates. No CI restrictions. Just docker compose up.
LocalStack’s community edition is being sunset in March 2026 — requiring auth tokens, dropping CI support, and freezing security updates. Floci is the no-strings-attached alternative.
# docker-compose.yml
services:
  floci:
    image: hectorvent/floci:latest
    ports:
      - "4566:4566"
    volumes:
      - ./data:/app/data
docker compose up
All services are available at http://localhost:4566. Use any AWS region — credentials can be anything.
export AWS_ENDPOINT_URL=http://localhost:4566
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
# Try it
aws s3 mb s3://my-bucket
aws sqs create-queue --queue-name my-queue
aws dynamodb list-tables
Point your existing AWS SDK at http://localhost:4566 — no other changes needed.
// Java (AWS SDK v2)
DynamoDbClient client = DynamoDbClient.builder()
    .endpointOverride(URI.create("http://localhost:4566"))
    .region(Region.US_EAST_1)
    .credentialsProvider(StaticCredentialsProvider.create(
        AwsBasicCredentials.create("test", "test")))
    .build();
# Python (boto3)
import boto3
client = boto3.client("s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test")
// Node.js (AWS SDK v3)
import { S3Client } from "@aws-sdk/client-s3";
const client = new S3Client({
  endpoint: "http://localhost:4566",
  region: "us-east-1",
  credentials: { accessKeyId: "test", secretAccessKey: "test" },
  forcePathStyle: true,
});
All settings are overridable via environment variables (FLOCI_ prefix).
MIT — use it however you want.
...
Read the original on github.com »