10 interesting stories served every morning and every evening.




1 764 shares, 34 trendiness

Some Things Just Take Time

Trees take quite a while to grow. If some­one 50 years ago planted a row of oaks or a chest­nut tree on your plot of land, you have some­thing that no amount of money or ef­fort can repli­cate. The only way is to wait. Tree-lined roads, old gar­dens, houses shel­tered by decades of canopy: if you want to start fresh on an empty plot, you will not be able to get that.

Because some things just take time.

We know this in­tu­itively. We pay pre­mi­ums for Swiss watches, Hermès bags and old prop­er­ties pre­cisely be­cause of the time em­bed­ded in them. Either be­cause of the time it took to build them or be­cause of their age. We re­quire age min­i­mums for dri­ving, vot­ing, and drink­ing be­cause we be­lieve ma­tu­rity only comes through lived ex­pe­ri­ence.

Yet right now we also live in a time of in­stant grat­i­fi­ca­tion, and it’s en­ter­ing how we build soft­ware and com­pa­nies. As much as we can speed up code gen­er­a­tion, the real defin­ing el­e­ment of a suc­cess­ful com­pany or an Open Source pro­ject will con­tinue to be tenac­ity. The abil­ity of lead­er­ship or the main­tain­ers to stick to a prob­lem for years, to build re­la­tion­ships, to work through chal­lenges fun­da­men­tally de­fined by hu­man life­times.

The cur­rent gen­er­a­tion of startup founders and pro­gram­mers is ob­sessed with speed. Fast it­er­a­tion, rapid de­ploy­ment, do­ing every­thing as quickly as pos­si­ble. For many things, that’s fine. You can go fast, leave some qual­ity on the table, and learn some­thing along the way.

But there are things where speed is actively harmful, where the friction exists for a reason. Compliance is one of those cases. There's a strong desire to eliminate everything that processes like SOC2 require, and an entire industry of turnkey solutions has sprung up to help. Delve is just one example; there are more.

There’s a feel­ing that all the things that cre­ate fric­tion in your life should be au­to­mated away. That hu­man in­volve­ment should be re­placed by AI-based de­ci­sion-mak­ing. Because it is the fric­tion of the process that is the prob­lem. When in fact many times the fric­tion, or that things just take time, is pre­cisely the point.

There’s a rea­son we have cool­ing-off pe­ri­ods for some im­por­tant de­ci­sions in one’s life. We rec­og­nize that peo­ple need time to think about what they’re do­ing, and that do­ing some­thing right once does­n’t mean much be­cause you need to be able to do it over a longer pe­riod of time.

AI writes code fast, which isn't news anymore. What's interesting is that we're pushing this force downstream: we seemingly have this desire to ship faster than ever and to run more experiments. That creates a new desire: to remove all the remaining friction of reviews, of designing and configuring infrastructure, of anything that slows the pipeline. If the machines are so great, why do we even need checklists or permission systems? Express desire, enjoy result.

Because we now be­lieve it is im­por­tant for us to just do every­thing faster. But in­creas­ingly, I also feel like this means that the shelf life of much of the soft­ware be­ing cre­ated to­day — soft­ware that peo­ple and busi­nesses should de­pend on — can be mea­sured only in months rather than decades, and the re­la­tion­ships along­side.

In one of last year’s ear­lier YC batches, there was al­ready a hand­ful that just dis­ap­peared with­out even say­ing what they learned or say­ing good­bye to their cus­tomers. They just shut down their pub­lic pres­ence and moved on to other things. And to me, that is not a sign of healthy it­er­a­tion. That is a sign of break­ing the ba­sic trust you need to build a re­la­tion­ship with cus­tomers. A proper shut­down takes time and ef­fort, and our cur­rent en­vi­ron­ment treats that as time not wisely spent. Better to just move on to the next thing.

This is ex­tend­ing to Open Source pro­jects as well. All of a sud­den, every­thing is an Open Source pro­ject, but many of them only have com­mits for a week or so, and then they go away be­cause the mo­ti­va­tion of the cre­ator al­ready waned. And in the name of ex­per­i­men­ta­tion, that is all good and well, but what makes a good Open Source pro­ject is that you think and truly be­lieve that the per­son that cre­ated it is ei­ther go­ing to stick with it for a very long pe­riod of time, or they are able to set up a strat­egy for suc­ces­sion, or they have cre­ated enough of a com­mu­nity that these pro­jects will stand the test of time in one form or an­other.

Relatedly, I'm also increasingly skeptical of anyone who sells me something that supposedly saves my time, when all I see is that everybody who is, like me, fully onboarded into AI and agentic tools seemingly has less and less time available, because we fall into a trap of immediately filling it with more things.

We all sell each other the idea that we’re go­ing to save time, but that is not what’s hap­pen­ing. Any time saved gets im­me­di­ately cap­tured by com­pe­ti­tion. Someone who ac­tu­ally takes a breath is out­ma­neu­vered by some­one who fills every freed-up hour with new out­put. There is no easy way to bank the time and it just dis­ap­pears.

I feel this acutely. I’m very close to the red-hot cen­ter of where eco­nomic ac­tiv­ity around AI is tak­ing place, and more than any­thing, I have less and less time, even when I try to pur­pose­fully scale back and cre­ate the space. For me this is a prob­lem. It’s a prob­lem be­cause even with the best in­ten­tions, I ac­tu­ally find it very hard to cre­ate qual­ity when we are quickly com­modi­tiz­ing soft­ware, and the ma­chines make it so ap­peal­ing.

I keep coming back to the trees. I've been maintaining Open Source projects for close to two decades now. The last startup I worked on, I spent 10 years at. That's not because I'm particularly disciplined or virtuous. It's because I, or someone else, planted something, and then I kept showing up, and eventually the thing had roots that went deeper than my enthusiasm on any given day. That's what time does! It turns some idea or plan into a commitment, and a commitment into something that can shelter and grow other people.

Nobody is go­ing to mass-pro­duce a 50-year-old oak. And no­body is go­ing to con­jure trust, or qual­ity, or com­mu­nity out of a week­end sprint. The things I value most — the pro­jects, the re­la­tion­ships, the com­mu­ni­ties — are all things that took years to be­come what they are. No tool, no mat­ter how fast, was go­ing to get them there sooner.

We re­cently planted a new tree with Colin. I want it to grow into a large one. I know that’s go­ing to take time, and I’m not in a rush.

...

Read the original on lucumr.pocoo.org »

2 753 shares, 41 trendiness

Do Not Turn Child Protection Into Internet Access Control

Age ver­i­fi­ca­tion is no longer a nar­row mech­a­nism for a few adult web­sites. Across Europe, the USA, the UK, Australia, and else­where, it is ex­pand­ing into so­cial me­dia, mes­sag­ing, gam­ing, search, and other main­stream ser­vices.

The common framing says these systems exist to protect children. That concern is real. Children are exposed to harmful content, manipulative recommendation systems, predatory behavior, and compulsive platform design. Even adults are manipulated, quite successfully, with techniques that can influence national elections.

But from a tech­ni­cal and po­lit­i­cal point of view, age ver­i­fi­ca­tion is not just a child-safety fea­ture. It is an ac­cess con­trol ar­chi­tec­ture. It changes the de­fault con­di­tion of the net­work from open ac­cess to per­mis­sioned ac­cess. Instead of re­ceiv­ing con­tent un­less some­thing is blocked, users in­creas­ingly have to prove some­thing about them­selves be­fore a ser­vice is al­lowed to re­spond.

That shift be­comes clearer when age as­sur­ance moves down into the op­er­at­ing sys­tem. In some US pro­pos­als, the model is no longer a one-off check at a web­site. It be­comes a per­sis­tent age-sta­tus layer main­tained by the OS and ex­posed to ap­pli­ca­tions through a sys­tem-level in­ter­face. At that point, age ver­i­fi­ca­tion stops look­ing like a lim­ited safe­guard and starts look­ing like a gen­eral iden­tity layer for the whole de­vice.

This is no longer only a pro­pri­etary-plat­form story ei­ther. Even the Linux desk­top stack is be­gin­ning to ab­sorb this pres­sure. sys­temd has re­port­edly added an op­tional birth­Date field to userdb in re­sponse to age-as­sur­ance laws. Regulation is be­gin­ning to shape the data model of per­sonal com­put­ing, so that higher-level com­po­nents can build age-aware be­hav­ior on top.

Content mod­er­a­tion is about clas­si­fi­ca­tion and fil­ter­ing. It asks whether some con­tent should be blocked, la­beled, de­layed, or han­dled dif­fer­ently. Guardianship is some­thing else. It is the con­tex­tual re­spon­si­bil­ity of par­ents, teach­ers, schools, and other trusted adults to de­cide what is ap­pro­pri­ate for a child, when ex­cep­tions make sense, and how su­per­vi­sion should evolve over time. Moderation is partly tech­ni­cal. Guardianship is re­la­tional, lo­cal, and sit­u­ated in spe­cific con­texts.

I am also a par­ent. I un­der­stand the fear be­hind these pro­pos­als be­cause I live with it too. Children do face real on­line risks. But rec­og­niz­ing that does not oblige us to ac­cept any so­lu­tion placed in front of us, least of all one that weak­ens pri­vacy for every­one while shift­ing re­spon­si­bil­ity away from fam­i­lies, schools, and the peo­ple who ac­tu­ally have to guide chil­dren through dig­i­tal life.

Age-verification laws col­lapse these two ques­tions into one cen­tral­ized an­swer. The re­sult is pre­dictable. A plat­form, browser ven­dor, app store, op­er­at­ing-sys­tem provider, or iden­tity in­ter­me­di­ary is asked to en­force what is pre­sented as a child-pro­tec­tion pol­icy, even though no cen­tral­ized ac­tor can re­place the judg­ment of a par­ent, a school, or a lo­cal com­mu­nity.

It also fails on its own terms. The by­passes are ob­vi­ous: VPNs, bor­rowed ac­counts, pur­chased cre­den­tials, fake cre­den­tials, and tricks against age-es­ti­ma­tion sys­tems. A con­trol that is easy to evade but ex­pen­sive to im­pose is not a se­ri­ous com­pro­mise: it is an er­ror or, one may say, a cor­po­rate data-grab.

The price is high and paid by every­one. More iden­tity checks. More meta­data. More log­ging. More ven­dors in the mid­dle. More fric­tion for peo­ple who lack the right de­vice, the right pa­pers, or the right dig­i­tal skills. This is not a mi­nor safety fea­ture. It is a new con­trol layer for the net­work.

And once that layer ex­ists, it rarely stays con­fined to age. Infrastructure built for one at­tribute is eas­ily reused for oth­ers: lo­ca­tion, cit­i­zen­ship, le­gal sta­tus, plat­form pol­icy, or what­ever the next panic de­mands. This is how a lim­ited check be­comes a gen­eral gate.

Keep guardian­ship where it be­longs: with par­ents, teach­ers, schools, and com­mu­ni­ties that can make con­tex­tual de­ci­sions, au­tho­rize ex­cep­tions, and ad­just over time.

The op­er­at­ing sys­tem can help here, but only as a lo­cal pol­icy sur­face un­der the con­trol of users and guardians. It should not be­come a uni­ver­sal age-broad­cast­ing layer for apps and re­mote ser­vices. That is the ar­chi­tec­tural line that mat­ters.

Most of the harms in­voked in this de­bate do not come from the mere ex­is­tence of con­tent on­line. They come from rec­om­men­da­tion sys­tems, dark pat­terns, ad­dic­tive met­rics, and busi­ness mod­els that re­ward am­pli­fi­ca­tion with­out re­spon­si­bil­ity. If the goal is to pro­tect mi­nors, that is where reg­u­la­tion should bite.

If we are se­ri­ous about re­duc­ing harm, we should stop ask­ing how to iden­tify every­one and start ask­ing how to strengthen lo­cal con­trol with­out turn­ing the net­work into a check­point.

It is encouraging to see this article circulating widely, as it may contribute to a shift in how policymakers approach the issue. Given its growing visibility, I will keep a concise record here of the sequence of its coverage across media outlets, as well as pilot implementations across the world.

My first ac­count on the prob­lem emerged from a di­a­logue with Brave’s de­vel­oper Kyle den Hartog at a cypher­punk re­treat in Berlin. It was right af­ter fa­cil­i­tat­ing the dig­i­tal iden­tity track of the event that I pub­lished a rather tech­ni­cal piece on the topic.

Later, as age ver­i­fi­ca­tion mea­sures be­gan to take hold, and in align­ment with our com­mu­nity fa­cil­i­ta­tors at the Dyne.org foun­da­tion, we de­cided to dis­con­tinue Discord as a chan­nel for par­tic­i­pa­tion, as the plat­form moved to im­pose age ver­i­fi­ca­tion.

Then the systemd dispute unfolded, and I found myself, as founder of the project, the first distro maintainer to state that we would not implement age verification in Devuan GNU/Linux, a Debian fork without systemd that has, since 2016, shown fewer bugs and security advisories. The tech journalist Lunduke picked it up immediately, setting off a wave of similar declarations across the distribution maintainer community.

That was the mo­ment I re­alised the need to set out, in clear terms, the rea­sons be­hind this choice, and the grounds for a form of con­sci­en­tious ob­jec­tion should such laws ever be en­forced on our pro­jects at Dyne.org. I then wrote a piece for Wired Italy, in Italian, my mother tongue, which is due to be pub­lished by the mag­a­zine in the com­ing days (link TBD).

While await­ing pub­li­ca­tion in Wired, I trans­lated the ar­ti­cle and pub­lished it here, in English, through our think and do tank. The piece you have just read quickly reached the front page of Hacker News, draw­ing nearly 400 com­ments from con­cerned read­ers and tech­ni­cal ex­perts, a valu­able body of ma­te­r­ial to build on.

As the dis­cus­sion gains mo­men­tum, I am en­gag­ing with col­leagues at the City of Lugano and the Plan₿ Foundation, where I have re­cently taken on the role of Scientific Director. The pro­posal is to move from analy­sis to ac­tion by es­tab­lish­ing a city-wide pi­lot that ex­plores tech­nolo­gies for lo­cally man­aged guardian­ship, of­fer­ing a con­struc­tive ex­am­ple for Switzerland.

We are approaching this with confidence and preparing for a rollout in Lugano within the next two years. At the same time, within the Swiss Confederation there are signs of a more grounded direction, as reflected in "The Internet Initiative", which places responsibility on Big Tech and brings together representatives from all major Swiss political parties.

My next steps in­clude reach­ing out to con­tacts in Europe to help broaden the dis­cus­sion and con­tribute to a more bal­anced pub­lic de­bate, in the face of sus­tained pres­sure from cor­po­rate lob­bies ad­vanc­ing data-ex­trac­tive mea­sures.

And you can play a mean­ing­ful role as well: en­gage with the is­sue, bring your tech­ni­cal and po­lit­i­cal un­der­stand­ing to it, and help sus­tain at­ten­tion so that those who make up the in­ter­net are not ex­cluded from de­ci­sions that af­fect it. I hope this ma­te­r­ial and the rea­son­ing be­hind it can be use­ful in that di­rec­tion. Do let us at Dyne.org know if we can as­sist in mak­ing vis­i­ble suc­cess­ful lo­cal pi­lots that im­ple­ment child pro­tec­tion in a sound and pro­por­tion­ate way.

If you would like to read further, I've written more about the problems of European Digital Identity implementation plans and architecture.

I’ve been work­ing on pri­vacy and iden­tity tech­nol­ogy for over a decade, pri­mar­ily in pro­jects funded by the European Commission.

My efforts include decodeproject.eu and reflowproject.eu, as well as various academic papers, including SD-BLS, recently published by IEEE. Additionally, with our team at The Forkbomb Company, we've developed digital identity products such as DIDROOM.com and CREDIMI.io.

...

Read the original on news.dyne.org »

3 532 shares, 27 trendiness

A simple and powerful neural network framework

We write and main­tain tiny­grad, the fastest grow­ing neural net­work frame­work

It's extremely simple, and breaks down the most complex networks into 3 OpTypes.

ElementwiseOps are UnaryOps, BinaryOps, and TernaryOps. They operate on 1-3 tensors and run elementwise. Examples: SQRT, LOG2, ADD, MUL, WHERE, etc.

ReduceOps operate on one tensor and return a smaller tensor. Examples: SUM, MAX.

MovementOps are virtual ops that operate on one tensor and move the data around, copy-free with ShapeTracker. Examples: RESHAPE, PERMUTE, EXPAND, etc.

But how…where are your CONVs and MATMULs? Read the code to solve this mys­tery.

We are now funded and hiring full-time software engineers. Very talented interns okay.

See our bounty page to judge if you might be a good fit. Bounties pay you while judg­ing that fit.

We are also hir­ing for op­er­a­tions and hard­ware, but if you haven’t con­tributed to tiny­grad your ap­pli­ca­tion won’t be con­sid­ered.

We sell a com­puter called the tiny­box. It comes in red, green, and soon, exa.

...

Read the original on tinygrad.org »

4 417 shares, 103 trendiness

Hormuz Minesweeper

...

Read the original on hormuz.pythonic.ninja »

5 381 shares, 32 trendiness

The Three Pillars of JavaScript Bloat

Over the last couple of years, we've seen significant growth of the e18e community and a rise in performance-focused contributions because of it. A large part of this is the "cleanup" initiative, where the community has been pruning packages which are redundant, outdated, or unmaintained.

One of the most common topics that comes up as part of this is "dependency bloat" - the idea that npm dependency trees are getting larger over time, often with long-since-redundant code which the platform now provides natively.

In this post, I want to briefly look at what I think are the three main types of bloat in our de­pen­dency trees, why they ex­ist, and how we can start to ad­dress them.

The graph above is a com­mon sight in many npm de­pen­dency trees - a small util­ity func­tion for some­thing which seems like it should be na­tively avail­able, fol­lowed by many sim­i­larly small deep de­pen­den­cies.

So why is this a thing? Why do we need is-string in­stead of typeof checks? Why do we need ha­sown in­stead of Object.hasOwn (or Object.prototype.hasOwnProperty)? Three things:

Support for very old en­gines

Somewhere in the world, some peo­ple ap­par­ently ex­ist who need to sup­port ES3 - think IE6/7, or ex­tremely early ver­sions of Node.js.

For these peo­ple, much of what we take for granted to­day does not ex­ist. For ex­am­ple, they don’t have any of the fol­low­ing:

These are all ES5 fea­tures, mean­ing they sim­ply don’t ex­ist in ES3 en­gines.

For these un­for­tu­nate souls who are still run­ning old en­gines, they need to reim­ple­ment every­thing them­selves, or be pro­vided with poly­fills.

Alternatively, what’d be re­ally nice is if they up­graded.

The sec­ond rea­son for some of these pack­ages is safety”.

Basically, in­side Node it­self, there is a con­cept of primordials”. These are es­sen­tially just global ob­jects wrapped at startup and im­ported by Node from then on, to avoid Node it­self be­ing bro­ken by some­one mu­tat­ing the global name­space.

For ex­am­ple, if Node it­self uses Map and we re-de­fine what Map is - we can break Node. To avoid this, Node keeps a ref­er­ence to the orig­i­nal Map which it im­ports rather than ac­cess­ing the global.

You can read more about this here in the Node repo.

This makes a lot of sense for an en­gine, since it re­ally should­n’t fall over if a script messes up the global name­space.
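The pattern can be sketched in a few lines of plain JavaScript (a minimal illustration of the idea, not Node's actual primordials machinery):

```javascript
// Capture a reference to the built-in before any untrusted code runs.
const SafeMap = Map;

// Later, someone clobbers the global...
globalThis.Map = function BrokenMap() {
  throw new Error('Map was replaced');
};

// ...but code holding the captured reference is unaffected.
const m = new SafeMap([['answer', 42]]);
console.log(m.get('answer')); // 42

// Restore the global so the rest of the environment keeps working.
globalThis.Map = SafeMap;
```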

Some main­tain­ers also be­lieve this is the cor­rect way to build pack­ages, too. This is why we have de­pen­den­cies like math-in­trin­sics in the graph above, which ba­si­cally re-ex­ports the var­i­ous Math.* func­tions to avoid mu­ta­tion.

Lastly, we have cross-realm val­ues. These are ba­si­cally val­ues you have passed from one realm to an­other - for ex­am­ple, from a web page to a child or vice versa.

In this situation, a new RegExp(pattern) in an iframe is not the same RegExp class as the one in the parent page. This means window.RegExp !== iframeWindow.RegExp, which of course means val instanceof RegExp would be false if it came from the iframe (another realm).

For example, I am a maintainer of chai, and we have this exact issue. We need to support assertions happening across realms (since a test runner may run tests in a VM or iframe), so we can't rely on instanceof checks. For that reason, we use Object.prototype.toString.call(val) === '[object RegExp]' to check if something is a regex, which works across realms since it doesn't rely on the constructor.

In the graph above, is-string is ba­si­cally do­ing this same job in case we passed a new String(val) from one realm to an­other.
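Here is what that check looks like in practice - a simplified sketch of what chai and is-string do, not their exact published source:

```javascript
// Cross-realm-safe checks: instead of `val instanceof RegExp` (which fails
// for values created in another realm), inspect the internal tag that
// Object.prototype.toString reports, which is the same in every realm.
function isRegExp(val) {
  return Object.prototype.toString.call(val) === '[object RegExp]';
}

function isString(val) {
  return typeof val === 'string' ||
    Object.prototype.toString.call(val) === '[object String]';
}

console.log(isRegExp(/abc/));            // true
console.log(isString(new String('hi'))); // true
console.log(isString('hi'));             // true
```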

All of this makes sense for a very small group of peo­ple. If you’re sup­port­ing very old en­gines, pass­ing val­ues across realms, or want pro­tec­tion from some­one mu­tat­ing the en­vi­ron­ment - these pack­ages are ex­actly what you need.

The prob­lem is that the vast ma­jor­ity of us don’t need any of this. We’re run­ning a ver­sion of Node from the last 10 years, or us­ing an ever­green browser. We don’t need to sup­port pre-ES5 en­vi­ron­ments, we don’t pass val­ues across frames, and we unin­stall pack­ages which break the en­vi­ron­ment.

These layers of niche compatibility somehow made their way into the "hot path" of everyday packages. The tiny group of people who actually need this stuff should be the ones seeking out special packages for it. Instead, it is reversed and we all pay the cost.

Some folks be­lieve that pack­ages should be bro­ken up to an al­most atomic level, cre­at­ing a col­lec­tion of small build­ing blocks which can later be re-used to build other higher level things.

This kind of ar­chi­tec­ture means we end up with graphs like this:

As you can see, the most gran­u­lar snip­pets of code have their own pack­ages. For ex­am­ple, she­bang-regex is the fol­low­ing at the time of writ­ing this post:

By split­ting code up to this atomic level, the the­ory is that we can then cre­ate higher level pack­ages sim­ply by join­ing the dots.

Some ex­am­ples of these atomic pack­ages to give you an idea of the gran­u­lar­ity:

* ar­rify - Converts a value to an ar­ray (Array.isArray(val) ? val : [val])

* cli-boxes - A JSON file con­tain­ing the edges of a box

* path-key - Get the PATH en­vi­ron­ment vari­able key for the cur­rent plat­form (PATH on Unix, Path on Windows)

* one­time - Ensure a func­tion is only called once

* is-wsl - Check if process.plat­form is linux and os.re­lease() con­tains mi­crosoft

If we wanted to build a new CLI for example, we could pull a few of these in and not worry about implementation. We don't need to do env['PATH'] || env['Path'] ourselves, we can just pull a package for that.
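To give a sense of how little code is behind some of these, here are hand-inlined versions of two of the packages above (sketches based on their documented behavior, not the exact published source):

```javascript
// arrify: convert a value to an array.
const arrify = (val) => (Array.isArray(val) ? val : [val]);

// path-key: the name of the PATH environment variable on this platform.
// (The real package also handles case-insensitive lookups on Windows.)
const pathKey = (platform = process.platform) =>
  platform === 'win32' ? 'Path' : 'PATH';

console.log(arrify('x'));      // ['x']
console.log(pathKey('linux')); // 'PATH'
console.log(pathKey('win32')); // 'Path'
```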

In re­al­ity, most or all of these pack­ages did not end up as the reusable build­ing blocks they were meant to be. They’re ei­ther largely du­pli­cated across var­i­ous ver­sions in a wider tree, or they’re sin­gle-use pack­ages which only one other pack­age uses.

Let’s take a look at some of the most gran­u­lar pack­ages:

* she­bang-regex is used al­most solely by she­bang-com­mand by the same main­tainer

* cli-boxes is used al­most solely by boxen and ink by the same main­tainer

* one­time is used al­most solely by re­store-cur­sor by the same main­tainer

Each of these having only one consumer means they're the equivalent of inlined code, but they cost us more to acquire (npm requests, tar extraction, bandwidth, etc.).

Taking a look at nux­t’s de­pen­dency tree, we can see a few of these build­ing blocks du­pli­cated:

Inlining them does­n’t mean we no longer du­pli­cate the code, but it does mean we don’t pay the cost of things like ver­sion res­o­lu­tion, con­flicts, cost of ac­qui­si­tion, etc.

Inlining makes du­pli­ca­tion al­most free, while pack­ag­ing makes it ex­pen­sive.

The more pack­ages we have, the larger our sup­ply chain sur­face area is. Every pack­age is a po­ten­tial point of fail­ure for main­te­nance, se­cu­rity, and so on.

For ex­am­ple, a main­tainer of many of these pack­ages was com­pro­mised last year. This meant hun­dreds of tiny build­ing blocks were com­pro­mised, which meant the higher level pack­ages we ac­tu­ally in­stall were also com­pro­mised.

Logic as sim­ple as Array.isArray(val) ? val : [val] prob­a­bly does­n’t need its own pack­age, se­cu­rity, main­te­nance, and so on. It can just be in­lined and we can avoid the risk of it be­ing com­pro­mised.

Similar to the first pillar, this philosophy made its way into the "hot path" and probably shouldn't have. Again, we all pay the cost to no real benefit.

If you're building an app, you might want to use some "future" features your chosen engine doesn't support yet. In this situation, a polyfill can come in handy - it provides a fallback implementation where the feature should be, so you can use it as if it were natively supported.

For ex­am­ple, tem­po­ral-poly­fill poly­fills the new Temporal API so we can use Temporal re­gard­less of if the en­gine sup­ports it or not.

Now, if you’re build­ing a li­brary in­stead, what should you do?

In gen­eral, no li­brary should load a poly­fill as that is a con­sumer’s con­cern and a li­brary should­n’t be mu­tat­ing the en­vi­ron­ment around it. As an al­ter­na­tive, some main­tain­ers choose to use what’s called a pony­fill (sticking to the uni­corns, sparkles and rain­bows theme).

A pony­fill is ba­si­cally a poly­fill you im­port rather than one which mu­tates the en­vi­ron­ment.

This kinda works since it means a li­brary can use fu­ture tech by im­port­ing an im­ple­men­ta­tion of it which passes through to the na­tive one if it ex­ists, and uses the fall­back oth­er­wise. None of this mu­tates the en­vi­ron­ment, so it is safe for li­braries to use.
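A ponyfill for something like Object.hasOwn might look like this (a hypothetical sketch of the pattern, not any specific package):

```javascript
// Export the native implementation when the engine has it, otherwise a
// fallback - but never assign anything to the global Object.
const hasOwn =
  typeof Object.hasOwn === 'function'
    ? Object.hasOwn
    : (obj, key) => Object.prototype.hasOwnProperty.call(obj, key);

console.log(hasOwn({ a: 1 }, 'a')); // true
console.log(hasOwn({}, 'a'));       // false
```

Consumers import hasOwn from the package; nothing about the environment is mutated, so it stays safe for library use.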

For ex­am­ple, fastly pro­vides @fastly/performance-observer-polyfill, which con­tains both a poly­fill and pony­fill for PerformanceObserver.

These pony­fills did their job at the time - they al­lowed the li­brary au­thor to use fu­ture tech with­out mu­tat­ing the en­vi­ron­ment and with­out forc­ing the con­sumer to know which poly­fills to in­stall.

The prob­lem comes when these pony­fills out­stay their wel­come. When the fea­ture they fill in for is now sup­ported by all en­gines we care about, the pony­fill should be re­moved. However, this of­ten does­n’t hap­pen and the pony­fill re­mains in place long af­ter it’s needed.

We’re now left with many, many pack­ages which rely on pony­fills for fea­tures we’ve all had for a decade now.

Unless these pack­ages are be­ing kept alive be­cause of Pillar 1, they’re usu­ally still used just be­cause no­body ever thought to re­move them.

When all long-term sup­port ver­sions of en­gines have the fea­ture, the pony­fill should be re­moved.

Much of this bloat is so deeply nested in de­pen­dency trees to­day that it is a fairly hefty task to un­ravel it all and get to a good place. It will take time, and it will take a lot of ef­fort from main­tain­ers and con­sumers.

Having said that, I do think we can make sig­nif­i­cant progress on this front if we all work to­gether.

Start asking yourself, "why do I have this package?" and "do I really need it?".

If you find some­thing which seems re­dun­dant, raise an is­sue with the main­tainer ask­ing if it can be re­moved.

If you en­counter a di­rect de­pen­dency which has many of these is­sues, have a look for an al­ter­na­tive which does­n’t. A good start for that is the mod­ule-re­place­ments pro­ject.

knip is a great pro­ject which can help you find and re­move un­used de­pen­den­cies, dead code, and much more. In this case, it can be a great tool to help you find and re­move de­pen­den­cies you no longer use.

This does­n’t solve the prob­lems above nec­es­sar­ily, but is a great start­ing point to help clean up the de­pen­dency tree be­fore do­ing more in­volved work.

You can read more about how knip deals with un­used de­pen­den­cies in their doc­u­men­ta­tion.

The e18e CLI has a su­per use­ful an­a­lyze mode to de­ter­mine which de­pen­den­cies are no longer needed, or have com­mu­nity rec­om­mended re­place­ments.

For ex­am­ple, if you get some­thing like this:

Using this, we can quickly iden­tify which di­rect de­pen­den­cies can be cleaned up. We can also then use the mi­grate com­mand to au­to­mat­i­cally mi­grate some of these de­pen­den­cies:

In this case, it will mi­grate from chalk to pic­o­col­ors, a much smaller pack­age which pro­vides the same func­tion­al­ity.

In the fu­ture, this CLI will even rec­om­mend based on your en­vi­ron­ment - for ex­am­ple, it could sug­gest the na­tive style­Text in­stead of a colours li­brary if you’re run­ning a new enough Node.

npm­graph is a great tool to vi­su­al­ize your de­pen­dency tree and in­ves­ti­gate where bloat is com­ing from.

For ex­am­ple, let’s take a look at the bot­tom half of ESLint’s de­pen­dency graph as of writ­ing this post:

We can see in this graph that the find-up branch is iso­lated, in that noth­ing else uses its deep de­pen­den­cies. For some­thing as sim­ple as an up­wards file-sys­tem tra­ver­sal, maybe we don’t need 6 pack­ages. We can then go look for an al­ter­na­tive, such as em­pathic which has a much smaller de­pen­dency graph and achieves the same thing.

The mod­ule re­place­ments pro­ject is be­ing used as a cen­tral data set for the wider com­mu­nity to doc­u­ment which pack­ages can be re­placed with na­tive func­tion­al­ity, or more per­for­mant al­ter­na­tives.

If you’re ever in need of an al­ter­na­tive or just want to check your de­pen­den­cies, this data set is great for that.

Similarly, if you come across pack­ages in your tree which are made re­dun­dant by na­tive func­tion­al­ity, or just have bet­ter bat­tle-tested al­ter­na­tives, this pro­ject is def­i­nitely a great place to con­tribute that so oth­ers can ben­e­fit from it.

Paired with the data, there’s also a code­mods pro­ject which pro­vides code­mods to au­to­mat­i­cally mi­grate some of these pack­ages to their sug­gested re­place­ments.

We all pay the cost for an in­cred­i­bly small group of peo­ple to have an un­usual ar­chi­tec­ture they like, or a level of back­wards com­pat­i­bil­ity they need.

This is­n’t nec­es­sar­ily a fault of the peo­ple who made these pack­ages, as each per­son should be able to build how­ever they want. Many of them are an older gen­er­a­tion of in­flu­en­tial JavaScript de­vel­op­ers - build­ing pack­ages in a darker time where many of the nice APIs and cross-com­pat­i­bil­ity we have to­day did­n’t ex­ist. They built the way they did be­cause it was pos­si­bly the best way at the time.

The prob­lem is that we never moved on from that. We still down­load all of this bloat to­day even though we’ve had these fea­tures for sev­eral years.

I think we can solve this by re­vers­ing things. This small group should pay the cost - they should have their own spe­cial stack pretty much only they use. Everyone else gets the mod­ern, light­weight, and widely sup­ported code.

Hopefully things like e18e and npmx can help with that through documentation, tooling, etc. You can also help by taking a closer look at your dependencies and asking "why?". Raise issues with your dependencies, asking if, and why, they still need these packages.

We can fix it.

...

Read the original on 43081j.com »

6 308 shares, 18 trendiness

Video Editor

Professional video editing, right in your browser. A powerful NLE editor with GPU compositing, keyframe animation, and real-time preview. No installs required.

Everything you need to edit, built on WebGPU and Rust/WASM for performance that rivals native apps:

WebGPU-powered compositing via Rust/WASM delivers near-native performance for real-time previews and exports.

Canvas-rendered timeline with unlimited video and audio tracks, linked clips, and cross-transitions.

Animate any property with bezier easing curves. Transform, opacity, effects: everything is keyframeable.

Apply brightness, contrast, saturation, blur, and hue rotation, all GPU-computed with instant preview.

Everything runs in the browser. Your media stays local with the File System Access API; nothing leaves your machine.

...

Read the original on tooscut.app »

7 274 shares, 8 trendiness

404 Deno CEO not found

Opinions are mixed on this post. Sometimes I miss the mark with my blunt tone. In hind­sight I can see why parts come across as mean-spir­ited. I’ve cho­sen my words poorly. Feedback noted, I will strive to be more pos­i­tive.

The Nero reference was for the sake of a dumb pun and a slight on AI imagery, not a serious attempt to compare Dahl to Nero. Sorry for my stupidity.

If an­other toxic Hacker News thread is all that this post spawns, I sin­cerely apol­o­gise.

I vis­ited deno.com yes­ter­day. I wanted to know if the hun­dreds of hours I’d spent mas­ter­ing Deno was a sunk cost. Do I con­tinue build­ing for the run­time, or go back to Node?

deno.com 404 not found er­ror page stat­ing: Sorry, there was an is­sue load­ing this page

Well I guess that pretty much sums up why a good chunk of Deno em­ploy­ees left the com­pany over the last week.

Layoffs are what American corpo culture calls firing half the staff. Totally normal practice for a sustainable business. Mass layoffs are deemed better for the morale of those who remain than a weekly culling before Friday beers.

The Romans loved a good dec­i­ma­tion.† If I were a pur­veyor of slop and tor­tured metaphors, I’d have adorned this post with a deep­fake of Ryan Dahl fid­dling as Deno burned. But I’m not, so the solemn screen­shot will suf­fice.

† I read Rome, Inc. re­cently. Not a great book, I’m just ex­plain­ing the ref­er­ence.

A year ago I wrote about Deno’s de­cline. The facts, un­de­terred by my sub­jec­tive scorn, painted a harsh pic­ture; Deno Land Inc. was fail­ing.

Deno in­cor­po­rated with $4.9M of seed cap­i­tal five years ago. They raised a fur­ther $21M se­ries A a year later. Napkin math sug­gests a five year run­way for an un­prof­itable com­pany (I have no idea, I just made that up.)

Coincidentally, after my blog post topped Hacker News — always a pleasure for my inbox — Ryan Dahl (Deno CEO) clapped back on the official Deno blog:

There’s been some crit­i­cism lately about Deno - about Deploy, KV, Fresh, and our mo­men­tum in gen­eral. You may have seen some of the crit­i­cism on­line; it’s made the rounds in the usual places, and at­tracted a fair amount of at­ten­tion.

Some of that crit­i­cism is valid. In fact, I think it’s fair to say we’ve had a hand in caus­ing some amount of fear and un­cer­tainty by be­ing too quiet about what we’re work­ing on, and the fu­ture di­rec­tion of our com­pany and prod­ucts. That’s on us.

Reports of Deno’s Demise Have Been Greatly Exaggerated - Ryan Dahl

Dahl men­tioned that adop­tion had dou­bled fol­low­ing Deno 2.0.

Since the re­lease of Deno 2 last October - barely over six months ago! - Deno adop­tion has more than dou­bled ac­cord­ing to our monthly ac­tive user met­rics.

User base dou­bling sounds like a flex for a lemon­ade stand un­less you give num­bers. I imag­ine Sequoia Capital ex­pected faster growth re­gard­less. The harsh truth is that Deno’s of­fer­ings have failed to cap­ture de­vel­op­ers’ at­ten­tion. I can’t pre­tend to know why — I was a fan­boy my­self — but far too few devs care about Deno. On the rare oc­ca­sions Deno gets at­ten­tion on the or­ange site, the com­ments page reads like in memo­riam.

I don’t even think the prob­lem was that Deno Deploy, the main source of rev­enue, sucked. Deploy was plagued by highly in­con­sis­tent iso­late start times. Solicited feed­back was ig­nored. Few cared. It took an is­sue from Wes Bos, one of the most fol­lowed devs in the game, for any­one at Deno to wake up. Was Deploy sim­ply a ghost town?

Deno rushed the Deploy relaunch for the end of 2025 and it became "generally available" last month. Anyone using it? Anyone care? The Deno layoffs this week suggest only a miracle would have saved jobs. The writing was on the wall.

Speaking of ghost towns, the JSR YouTube chan­nel is so lonely I feel bad for link­ing it. I only do be­cause it shows just how lit­tle in­ter­est some Deno-led pro­jects mus­tered.

JSR floundered partly because Deno couldn't afford to invest in better infrastructure. But like everything else in the Deno ecosystem, users just weren't interested. What makes a comparable project like NPMX flourish so quickly? Evidently, developers don't want to replace Node and NPM. They just want what they already have, but better: a drop-in improvement without friction.

To Deno and Dahl’s credit, they recog­nised this with the U-turn on HTTP im­ports. But the re­sult­ing pack­ag­ing mess made things worse. JSR should have been NPMX. Deno should have gone all-in on pack­age.json but in­stead we got mixed mes­sag­ing and con­fused docs.

I could continue but it would just be cruel to dissect further. I've been heavily critical of Deno in the past, but I really wanted it to succeed. There were genuinely good people working at Deno who lost their jobs, and that sucks. I hope the Deno runtime survives. It's a breath of fresh air. Bun has far more bugs and compatibility issues than anyone will admit. Node still has too much friction around TypeScript and ECMAScript modules.

So where does Deno go from here? Over to you, Ryan.

Tradition dic­tates an of­fi­cial PR state­ment fol­low­ing lay­offs. Seems weird not to have one pre­pared in ad­vance. That said, to­day is Friday, the day to bury bad news. I may be pub­lish­ing this mere hours be­fore we hear what hap­pens next…

Given Dahl’s re­cent tweets and blog post, a pivot to AI might be Deno’s gam­ble. By the way, it’s rather telling that all the ex-em­ploy­ees posted their de­par­tures on Bluesky. What that tells you de­pends on whether you en­joy your so­cial me­dia along­side Grok un­dress­ing women upon re­quest. I di­gress. Idle spec­u­la­tion has led to base­less ru­mours of an OpenAI ac­qui­si­tion. I’m not con­vinced that makes sense but nei­ther does the en­tire AI in­dus­try.

I’m not try­ing to hate on Dahl but c’­mon bro you’re the CEO. What’s next for Deno? Give any­one a rea­son to care. Although if you’re plan­ning a 10× resur­gence with au­to­mated Mac Minis, I re­gret ask­ing.

...

Read the original on dbushell.com »

8 242 shares, 13 trendiness

Hide macOS Tahoe's Menu Icons With This One Simple Trick

I re­ally dis­like Apple’s choice to clut­ter ma­cOS Tahoe’s menus with icons. It makes menus hard to scan, and a bunch of the icons Apple has cho­sen make no sense and are in­con­sis­tent be­tween sys­tem ap­pli­ca­tions.

Steve Troughton-Smith is my hero for find­ing a Terminal com­mand to dis­able them:

Here’s one for the icons-in-menus haters on ma­cOS Tahoe:

It even preserves the couple of instances where you do want icons, like for window zoom/resize.

Your apps will re­spect this change af­ter re­launch­ing. I ran this a few min­utes ago and al­ready ap­pre­ci­ate the change. I re­ally think Apple should roll this change back in ma­cOS 27, or of­fer a proper set­ting to dis­able these icons for those of us who find them dis­tract­ing.

...

Read the original on 512pixels.net »

9 234 shares, 9 trendiness

High-Performance Graph Database

Fastest graph data­base tested on the LDBC Social Network Benchmark, both em­bed­ded and as a server, with a lower mem­ory foot­print than other in-mem­ory data­bases. Built in Rust with vec­tor­ized ex­e­cu­tion, adap­tive chunk­ing and SIMD-optimized op­er­a­tions.

GQL, Cypher, Gremlin, GraphQL, SPARQL and SQL/PGQ. Choose the query lan­guage that fits the pro­ject and ex­per­tise level.

Dual data model sup­port for both Labeled Property Graphs and RDF triples. Choose the model that fits the do­main.

Embed di­rectly into ap­pli­ca­tions with zero ex­ter­nal de­pen­den­cies, or run as a stand­alone server with REST API and web UI. From edge de­vices to pro­duc­tion clus­ters.

Core data­base en­gine writ­ten in Rust with no re­quired C de­pen­den­cies. Optional al­lo­ca­tors (jemalloc/mimalloc) and TLS use C li­braries for per­for­mance. Memory-safe by de­sign with fear­less con­cur­rency.

Python (PyO3), Node.js/TypeScript (napi-rs), Go (CGO), C (FFI), C# (.NET 8 P/Invoke), Dart (dart:ffi) and WebAssembly (wasm-bindgen). Use Grafeo from the lan­guage of choice.

AI in­te­gra­tions (LangChain, LlamaIndex, MCP), in­ter­ac­tive note­book wid­gets, browser-based graphs via WebAssembly, stand­alone server with web UI and bench­mark­ing tools.

...

Read the original on grafeo.dev »

10 230 shares, 15 trendiness

hectorvent/floci: Light, fluffy, and always free

Named af­ter floc­cus — the cloud for­ma­tion that looks ex­actly like pop­corn.

A free, open-source lo­cal AWS em­u­la­tor. No ac­count. No fea­ture gates. No CI re­stric­tions. Just docker com­pose up.

LocalStack’s com­mu­nity edi­tion sun­set in March 2026 — re­quir­ing auth to­kens, drop­ping CI sup­port, and freez­ing se­cu­rity up­dates. Floci is the no-strings-at­tached al­ter­na­tive.

# docker-compose.yml
services:
  floci:
    image: hectorvent/floci:latest
    ports:
      - "4566:4566"
    volumes:
      - ./data:/app/data

docker compose up

All ser­vices are avail­able at http://​lo­cal­host:4566. Use any AWS re­gion — cre­den­tials can be any­thing.

export AWS_ENDPOINT_URL=http://localhost:4566
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test

# Try it
aws s3 mb s3://my-bucket
aws sqs create-queue --queue-name my-queue
aws dynamodb list-tables

Point your ex­ist­ing AWS SDK at http://​lo­cal­host:4566 — no other changes needed.

// Java (AWS SDK v2)
DynamoDbClient client = DynamoDbClient.builder()
    .endpointOverride(URI.create("http://localhost:4566"))
    .region(Region.US_EAST_1)
    .credentialsProvider(StaticCredentialsProvider.create(
        AwsBasicCredentials.create("test", "test")))
    .build();

# Python (boto3)
import boto3
client = boto3.client("s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test")

// Node.js (AWS SDK v3)
import { S3Client } from "@aws-sdk/client-s3";

const client = new S3Client({
  endpoint: "http://localhost:4566",
  region: "us-east-1",
  credentials: { accessKeyId: "test", secretAccessKey: "test" },
  forcePathStyle: true,
});

All set­tings are over­rid­able via en­vi­ron­ment vari­ables (FLOCI_ pre­fix).

MIT — use it how­ever you want.

...

Read the original on github.com »
