10 interesting stories served every morning and every evening.

Thienan Tran

thienantran.com

Talking to 35 Strangers at the Gym

Published: May 1, 2026 Updated: May 4, 2026

Background

A cou­ple months ago, I was the Wizard of Loneliness. I had grad­u­ated from col­lege al­most two years prior and, while I had luck­ily found a job, I was un­suc­cess­ful in find­ing friends.

Each night, I would look up “how to make friends after college” and find the same advice given every time: “do your hobby with other people, frequently”.

On paper, the gym seemed like the perfect opportunity to meet people since I would go there nearly every day; however, according to Reddit, there are a number of people who want to be left alone and can be irritated if you interrupt their workout to talk.

I am deeply afraid of ir­ri­tat­ing some­one or be­ing in awk­ward sit­u­a­tions. Here’s a list of things that I did as a re­sult of that fear:

Hesitated for a couple minutes before waking up my roommate when the fire alarm went off

Pretended I didn’t know a childhood friend when they said hi because I didn’t know how to act around people I used to know

Ignored people I knew from class instead of saying hi because I didn’t know for sure if they remembered me, even though the class had only 10 people in it

So you can un­der­stand when I say that walk­ing up to some­one and start­ing a con­ver­sa­tion with them at the gym of all places is kinda ter­ri­fy­ing for me.

Unfortunately, there was no other good op­tion. My other hobby is pro­gram­ming, but the Syracuse Development group only meets up once a month, and ac­tiv­i­ties sug­gested by r/​Syra­cuse like vol­ley­ball and trivia night re­quire you to al­ready have friends. I did­n’t have a choice. If I wanted friends, I would have to put in the work at the gym.

Problem Statement

I am lonely and have no friends.

Procedure

I de­cided to run a lit­tle ex­per­i­ment to find some friends.

Each day, for one month, I picked out one per­son to ap­proach. Usually it would be some­one I saw fre­quently at the gym.

If they were in the mid­dle of an ex­er­cise, I waited for them to fin­ish their set.

Then, I would ap­proach them, stand near them and wave to get their at­ten­tion, and then give them my open­ing line.

Initially, my opening line for everyone was “Hey, I see you here all the time. You’re pretty strong. What’s your split?” After a week or so, I began customizing the opening line per person based on what I found interesting about them.

For in­stance, some­one was wear­ing a Boston hat and I was cu­ri­ous whether they went to school in Boston like I did, so I asked them about it. After the open­ing line, I tried to talk to them for 5 – 10 min­utes un­til they let me go. I tried not to be the one to end it be­cause I have a habit of end­ing con­ver­sa­tions early, but I did leave them alone if they ob­vi­ously did not want to talk.

Results

Here’s the raw data. I split it up by week and put it into these col­lapsi­ble things be­cause it takes up a lot of space. Click on each week to see the data for that week.

Description is a short de­scrip­tion of the per­son.

Length is how long the con­ver­sa­tion was. A short con­ver­sa­tion is 0 – 2 min­utes, a medium con­ver­sa­tion is 5 – 7 min­utes, and a long con­ver­sa­tion is 10+ min­utes.

Notes are just any­thing in­ter­est­ing about the con­ver­sa­tion or the per­son I was talk­ing to.

Aftermath is what hap­pened af­ter that con­ver­sa­tion.

Reflection

The first couple days were extremely difficult. I had been conditioned to believe that initiating a conversation with a stranger was weird, and it was tough to break free from that. As a result, for the first few people, I would always make a detour at the last second, e.g. a trip to the water fountain. I chickened out! The solution was to approach the person as quickly as possible so that I didn’t have time to think about running away.

Luckily, most peo­ple were re­cep­tive. I got a rush of dopamine when­ever some­one re­sponded pos­i­tively to my con­ver­sa­tion, so talk­ing to new peo­ple be­came strangely ad­dic­tive. I kept talk­ing to more and more new peo­ple each day un­til I talked to a whop­ping seven (SIX SEVENNN) new peo­ple in one day (this is why Week 3 has a lot of en­tries). It was crazy.

Something in­ter­est­ing I learned early on was that even if some­one had head­phones on, there was a good chance they were open to con­ver­sa­tion. I mean, I had my ear­buds in and I was will­ing to talk to any­body. Most peo­ple were just lis­ten­ing to mu­sic and took the head­phones off to talk.

People did­n’t al­ways re­spond pos­i­tively though. In Week 1 and Week 2, I came across a num­ber of peo­ple who were re­ally short with their re­sponses and did­n’t try to con­tinue the con­ver­sa­tion. They gave off the vibe that they did­n’t want to talk to me. It was re­ally awk­ward and al­most made me end the ex­per­i­ment.

But over time, I came to ac­cept that it’s ok if they did­n’t want to talk to me. That’s just one of the things you have to ex­pect when you do some­thing like this.

And be­ing in an awk­ward sit­u­a­tion is ac­tu­ally not that bad. It sucks in the mo­ment, but then you just take a few min­utes to calm down and then you move on with your life. You’re ok.

However, I did end up pulling back in Week 4 and Week 5. I felt like con­stantly talk­ing to more new peo­ple was pro­duc­ing di­min­ish­ing re­turns. I had al­ready es­tab­lished a con­nec­tion with many peo­ple at the gym, so it was a bet­ter use of my lim­ited time (remember I still have to work out!) to nur­ture those ex­ist­ing con­nec­tions into mean­ing­ful ones.

I ended up pri­or­i­tiz­ing the 5 – 6 peo­ple who seemed the most in­ter­ested in me.

One of these people is someone I will refer to as “the other Asian guy”. I got a lot closer to him than expected. We realized we had the same workout routine, so we became gym buddies and started working out together. A few weeks later, he invited me to his apartment, where he cooked me a smash burger. His girlfriend showed me graphic pictures of what she was learning in PA school too. Then, we watched a movie with their cat. I’m really grateful that they were kind enough to have me over as a guest.

Also, some­thing new hap­pened: in­stead of scar­ing peo­ple away, I had a pos­i­tive im­pact on some­one.

These texts were from one of the peo­ple I pri­or­i­tized, the male SU stu­dent. He had re­cently moved to Syracuse and was strug­gling to make new friends. He re­lated to a cou­ple of my videos where I talked about the same strug­gles and was su­per ap­pre­cia­tive that I talked to him that day. The fol­low­ing week, we tried out Kofta Burger af­ter a rec­om­men­da­tion from my friend who lives down­town.

The burger was de­li­cious and we had a great time.

Despite my suc­cesses, my work is­n’t done. I re­al­ized near the end of the month that what I truly wanted was to con­sis­tently hang out with peo­ple on the week­ends. Unfortunately, most of the friends I’ve made are busy on the week­end. They’re tak­ing trips to visit loved ones, go­ing to the bar (I’m not that into drink­ing), or run­ning er­rands, so it’s hard to plan any­thing.

But I guess that’s a bet­ter prob­lem to have than eter­nal lone­li­ness.

A few months ago, I was googling “how to make friends after college” every night. Now I have people to text, people to wave to at the gym, and people who notice when I don’t show up for a few days. AND I became a more resilient person who is unafraid to do hard and scary things.

No more Wizard of Loneliness for me!

Heh this blew up on HackerNews. I want to give some more con­text for peo­ple who are un­sure if the gym was the right place to do this. And this is all in hind­sight; I did not re­al­ize this un­til now.

The gym I go to, Crunch Fitness, has a so­cial as­pect to it. While many peo­ple keep to them­selves, it’s com­mon to see peo­ple chat­ting. Sometimes they’re chat­ting in be­tween sets. Other times, they’re chat­ting on the tread­mill. The staff go out of their way to in­ter­act with us, and of­ten the peo­ple who did­n’t want to talk to me talk to other peo­ple! I guess they are more open with their friends.

The people at the gym are also really supportive. I forgot to mention this, but once, when I was doing hip thrusts, I messed up and didn’t rerack the machine correctly. I fell on my butt and the machine made a huge CLANK sound when it fell. Everybody turned to look at me. I was really embarrassed. But then, one guy came and helped me return the machine to the starting position while another guy swung by to make sure I was ok. He assured me that it happened to everyone and to not let it get to me. I didn’t know either of these people! They just wanted to help.

I don’t disagree that the gym is primarily a place to work out, but I think that it’s also a place where you can find community. Maybe my gym is special in how social it is, or maybe people are friendlier than they appear to be. I’m betting on the latter.

GameStop makes $55.5bn takeover offer for eBay

www.bbc.co.uk

“eBay should be worth - and will be worth - a lot more money,” Cohen told the Wall Street Journal. “It could be a legit competitor to Amazon,” he added.

Under the proposed deal to buy eBay, Cohen would become the chief executive of the new firm and receive no salary or bonuses, being “compensated solely based on the performance of the combined company”.

GameStop, which cur­rently has a stock mar­ket val­u­a­tion of around $11.9bn, said it has a com­mit­ment let­ter from TD Securities to pro­vide around $20bn in debt to help fi­nance the takeover.

Cohen said he planned to cut costs at eBay by $2bn within a year of a deal be­ing com­pleted.

This would mainly fall across eBay’s sales and marketing division, which GameStop said had “failed to attract more users to a marketplace with near-universal brand recognition”.

The proposal does not sound like “a terribly good offer” as it would saddle eBay with GameStop’s debt, said Sucharita Kodali, a retail analyst at research firm Forrester.

It makes sense for GameStop be­cause it could lift its val­u­a­tion by be­ing linked with a larger com­pany like eBay, she told the BBC.

“The truth is, we are not necessarily putting two strong companies together,” Kodali added.

Shares in eBay rose by 5% on Monday in New York, while GameStop fell by more than 9%.

GameStop’s shops would give eBay a national network for its “live commerce” and other business operations, Cohen said.

Cohen, who be­came the GameStop boss in 2023, has crit­i­cised its slow shift into e-com­merce.

Trademark Violation: Fake Notepad++ for Mac

notepad-plus-plus.org

2026-05-01

Several users have re­cently re­ported a web­site pre­tend­ing to of­fer an of­fi­cial ma­cOS ver­sion of Notepad++:

notepad-plus-plus-mac.org

Let me be blunt:

This site has ab­solutely noth­ing to do with Notepad++.

It’s not au­tho­rized, not en­dorsed, and not af­fil­i­ated with the pro­ject in any way.

The owner is using the Notepad++ trademark (the name) without permission.

This is mis­lead­ing, in­ap­pro­pri­ate, and frankly dis­re­spect­ful to both the pro­ject and its users. It has al­ready fooled peo­ple - in­clud­ing tech me­dia - into be­liev­ing this is an of­fi­cial re­lease.

To be crys­tal clear:

Notepad++ has never re­leased a ma­cOS ver­sion.

Anyone claim­ing oth­er­wise is sim­ply rid­ing on the Notepad++ name.

As men­tioned in my GitHub post, I have al­ready con­tacted the owner of the fake official” web­site, and I am still wait­ing for a re­ply.

In the meantime, if you see someone posting “Notepad++ is finally on Mac!” on Reddit, Twitter, Mastodon, Discord, StackOverflow, or any tech blogs/forums, please reply with:

“This is not an official Notepad++ release. It’s an unauthorized project misusing the Notepad++ trademark,” and include a link to this announcement.

Thank you to the users who raised the alarm. Your vig­i­lance helps pro­tect the pro­ject from peo­ple who think they can bor­row the Notepad++ iden­tity as they please.

– Don Ho

Removable batteries in smartphones will be mandatory starting in 2027

www.ecopv-eu.com

Starting in 2027, there will be a no­tice­able change for smart­phones in the EU: The re­mov­able bat­tery is mak­ing a come­back. What used to be stan­dard is re­turn­ing due to le­gal re­quire­ments for new mod­els.

What ex­actly can we ex­pect?

Starting February 18, 2027, new smart­phones and tablets must be de­signed so that end users can re­move and re­place the bat­tery them­selves us­ing stan­dard tools. Adhesive bonds that re­quire heat to be re­moved will then be largely pro­hib­ited.

Specifically, this means the fol­low­ing for new mod­els start­ing in February 2027:

Easy re­place­ment: Batteries must be re­place­able us­ing stan­dard tools (e.g., screw­drivers).

No bar­ri­ers: The use of ad­he­sives that can only be re­moved with heat or sol­vents is pro­hib­ited.

Tools: If a spe­cial tool is re­quired for re­place­ment, the man­u­fac­turer must pro­vide it free of charge.

Spare parts guar­an­tee: Replacement bat­ter­ies must be avail­able to end users at a rea­son­able price for at least 5 years.

Why is the EU in­tro­duc­ing this?

The main dri­ver is the tran­si­tion to a true cir­cu­lar econ­omy. Currently, smart­phones are of­ten re­placed as soon as bat­tery per­for­mance de­clines, which wastes enor­mous amounts of re­sources.

Waste pre­ven­tion: Millions of tons of elec­tronic waste are gen­er­ated in the EU every year. Easily re­place­able bat­ter­ies sig­nif­i­cantly ex­tend the lifes­pan of de­vices.

Cost sav­ings: Many users shy away from ex­pen­sive re­pairs or buy­ing new de­vices. The EU es­ti­mates that con­sumers could save tens of bil­lions of eu­ros in to­tal by 2030 thanks to longer us­age cy­cles.

Resource con­ser­va­tion: Batteries con­tain valu­able raw ma­te­ri­als such as lithium and cobalt. If they are eas­ily re­mov­able, they can be sorted by type and re­cy­cled more ef­fi­ciently.

Fire safety: Batteries that are per­ma­nently glued in place are of­ten dam­aged dur­ing shred­ding, which re­peat­edly leads to dan­ger­ous fires in sort­ing fa­cil­i­ties. Clean re­moval sig­nif­i­cantly in­creases safety in the re­cy­cling process.

What does this mean for users?

DIY re­pairs: Instead of pay­ing a lot of money to visit a re­pair ser­vice, you sim­ply buy the re­place­ment part and swap it out your­self.

Higher re­sale value: Used cell phones can be resold much more eas­ily and for a higher price with a brand-new bat­tery.

Longer soft­ware sup­port: Since the hard­ware lasts longer, there is also in­creased pres­sure on man­u­fac­tur­ers to of­fer se­cu­rity up­dates for a longer pe­riod.

Will this make smart­phones thicker or less wa­ter­proof?

That is the key chal­lenge for de­sign­ers.

Modern de­vices are of­ten bonded to­gether to make them par­tic­u­larly thin and wa­ter­proof.

Removable bat­ter­ies make this de­sign more dif­fi­cult, but not im­pos­si­ble.

Manufacturers are al­ready work­ing on so­lu­tions, such as:

new seals in­stead of ad­he­sive,

more ro­bust cas­ings with screw mech­a­nisms,

mod­u­lar in­ter­nal struc­tures.

Many users fear that cell phones will break im­me­di­ately if they get wet in the rain or fall into wa­ter. That’s not true: It is en­tirely fea­si­ble to make smart­phones wa­ter­proof de­spite hav­ing a re­mov­able bat­tery. The prin­ci­ple is sim­i­lar to that of rugged out­door phones. A rub­ber gas­ket run­ning around the bat­tery cover, which is pressed into place by screws or a se­cure clip, en­sures that the in­te­rior of the hous­ing is sealed.

It is there­fore quite pos­si­ble that smart­phones will be­come slightly thicker, but sig­nif­i­cant in­creases are un­likely, as de­sign re­mains a key sell­ing point.

Are there any ex­cep­tions to the re­place­ment re­quire­ment?

Yes, but only in spe­cific cases:

Specialized hard­ware: Devices used in highly spe­cial­ized fields (e.g., med­ical di­ag­nos­tics or ex­plo­sion-proof in­dus­trial cell phones) are also ex­empt if a re­place­able bat­tery would com­pro­mise safety.

Extremely long lifes­pan: To avoid the re­place­ment re­quire­ment, a bat­tery would have to be ex­tremely durable. The bat­tery must re­tain at least 80% of its orig­i­nal ca­pac­ity af­ter 1,000 charge cy­cles. That is sig­nif­i­cantly more than many bat­ter­ies on the mar­ket to­day can achieve (often around 500 – 800 cy­cles).

Simultaneous wa­ter pro­tec­tion: In ad­di­tion to dura­bil­ity, the de­vice must be wa­ter- and dust-tight ac­cord­ing to IP67.

Another innovation: the “battery passport”

In ad­di­tion, the EU is in­tro­duc­ing a dig­i­tal bat­tery pass­port.

Users and recycling facilities can access important data via a printed QR code. It stores information about the battery’s carbon footprint, the proportion of recycled materials, its chemical composition, and its “state of health”. This represents a huge step forward, particularly for the second-hand market and professional recyclers.

Conclusion

The new EU regulation marks the end of the “disposable” era for smartphones. Starting in 2027, users will benefit from longer device lifespans, easier repairs, and lower costs.

Even though manufacturers will have to adapt their designs to maintain water resistance and aesthetics, the benefits for the environment and consumers (including less electronic waste and greater transparency) outweigh these changes.

Contact us for com­pre­hen­sive ad­vice on your com­pli­ance is­sues re­lat­ing to elec­tri­cal and elec­tronic equip­ment, pack­ag­ing, bat­ter­ies, and PV pan­els.

www.ecopv-eu.com/​en/​con­tact/ |  E-Mail: info@ecopv-eu.com


US healthcare marketplaces shared citizenship and race data with ad tech giants

techcrunch.com

In Brief

Posted: 7:30 AM PDT · May 4, 2026

Almost all of the 20 U.S. state gov­ern­ment-run health in­sur­ance mar­ket­places shared res­i­dents’ ap­pli­ca­tion in­for­ma­tion with ad­ver­tis­ing and tech gi­ants, in­clud­ing Google, LinkedIn, Meta, and Snap, ac­cord­ing to a new in­ves­ti­ga­tion by Bloomberg.

The re­port dri­ves home the pri­vacy prob­lems cre­ated by pixel-sized track­ers, which al­low web­site own­ers to col­lect in­for­ma­tion about their vis­i­tors, of­ten for web an­a­lyt­ics and iden­ti­fy­ing bugs. A com­mon tool in dig­i­tal ad­ver­tis­ing, these track­ers also al­low the col­lec­tion of per­sonal in­for­ma­tion if mis­con­fig­ured and placed on web­sites that con­tain sen­si­tive con­tent, such as health­care data.

Per Bloomberg, New York’s health insurance exchange shared information with several tech companies about a person’s application, including whether they provided details about incarcerated family members.

The health insurance exchange for Washington, D.C. also asked residents about their sex and race, data that TikTok’s pixel tracker attempted to redact. Some races were masked and others were not, the publication reported. A spokesperson for the Washington, D.C. exchange told Bloomberg that residents’ email address, phone number, and country identifiers were also shared with TikTok.

Washington, D.C. paused its roll­out of the TikTok tracker, and Virginia re­moved the Meta tracker from its web­site af­ter Bloomberg found it was shar­ing res­i­dents’ ZIP codes with the tech gi­ant.

This is not a new prob­lem, and has pre­vi­ously caught out tele­health star­tups and health­care gi­ants alike. Several com­pa­nies and health­care gi­ants have had to no­tify mil­lions that they in­ad­ver­tently col­lected and shared their health in­for­ma­tion with tech gi­ants, whose prof­its are de­rived from us­ing con­sumer data for ad­ver­tis­ing.

But Bloomberg’s in­ves­ti­ga­tion shows that these pixel track­ers can af­fect large swathes of the pop­u­la­tion when placed on gov­ern­ment web­sites. The pub­li­ca­tion noted that more than seven mil­lion Americans pur­chased health in­sur­ance for this year through a state health in­sur­ance ex­change.


I am worried about Bun

wwj.dev

Bun is great soft­ware.

I use it all the time. It is fast and prac­ti­cal, and the team ships con­stantly. It makes TypeScript a joy to work with in small scripts, apps, tests, and tool­ing. That is why this is frus­trat­ing. I want Bun to win. I want a se­ri­ous Node.js al­ter­na­tive. I want faster in­stalls, faster tests, bet­ter bundling, and less tool­chain bloat.

But I am wor­ried about Bun now.

Anthropic owns Bun

Anthropic ac­quired Bun in December 2025.

The an­nounce­ment said every­thing I wanted to hear: Bun stays open source and MIT-licensed, the same team keeps work­ing on it, and the roadmap keeps fo­cus­ing on high-per­for­mance JavaScript tool­ing and Node.js com­pat­i­bil­ity.

It also said this:

Claude Code ships as a Bun executable to millions of users. If Bun breaks, Claude Code breaks. Anthropic has direct incentive to keep Bun excellent.

In December, that sounded re­as­sur­ing. Anthropic had a huge prod­uct built on Bun. That meant Anthropic had a di­rect in­cen­tive to keep Bun fast, sta­ble, and ex­cel­lent. I still think that ar­gu­ment has merit, but now cracks are show­ing.

Bun is still a great JavaScript run­time, but now it’s in the hands of a com­pany that does­n’t seem to care at all about their soft­ware.

Anthropic mod­els are still great

This is not an “Anthropic bad” post. Well, not entirely. I still think Anthropic’s models are great. Claude Opus (4.6 I guess) is still one of the best model families for coding, writing, reasoning, and general dev work. The model quality is not my concern here. My concern is the product layer around the models. Claude Code kind of sucks to use today.

Claude Code used to be great

Claude Code felt in­cred­i­ble a year ago. It was one of the first AI cod­ing tools that con­vinced me de­vel­oper work­flows would change from mostly au­to­com­plete to agents. It could read a pro­ject, make fo­cused ed­its, run com­mands, fix mis­takes, and keep go­ing. It felt like a tool built by peo­ple who un­der­stood how devs ac­tu­ally work. Combined with Anthropic’s mod­els, which up un­til re­cently (GPT-5.5) were best-in-class, Claude Code felt un­beat­able.

Though Claude Code was already getting worse even in December, it was still good, and that made the Bun acquisition make sense to me. If Anthropic was building the future of coding tools, and Bun was the runtime underneath those tools, maybe Bun had found the best possible home. I was always a little worried about how Bun was going to become a sustainable business given it was VC funded. So the acquisition made sense, and I was optimistic.

Claude Code is bad

There are so many good cod­ing agents out there right now. Cursor, Augment, Codex, OpenCode, T3 Code, Pi, prob­a­bly more. For a long time Cursor was my main dri­ver, be­cause while Claude Code was get­ting worse over time Cursor (the CLI) was so good at us­ing Anthropic mod­els. Recently, I had to stop us­ing Cursor for rea­sons. I had­n’t used Claude Code in a cou­ple months, so I picked it back up and was ac­tu­ally shocked at how bad it has be­come.

In April 2026, devs started com­plain­ing about Claude Code qual­ity, limit be­hav­ior, third-party har­ness re­stric­tions, con­fus­ing billing, and slow com­mu­ni­ca­tion.

Anthropic pub­lished an en­gi­neer­ing post­mortem that blamed prod­uct-layer is­sues, in­clud­ing a re­duced de­fault rea­son­ing ef­fort, a stale-ses­sion bug, and a prompt change that hurt cod­ing qual­ity. I ap­pre­ci­ate the post­mortem. It is bet­ter than pre­tend­ing noth­ing hap­pened. Honestly, it was pos­si­bly the first time Anthropic men­tioned any­thing be­ing their own fault.

Then there was the OpenClaw mess. TechCrunch re­ported that Anthropic told Claude Code sub­scribers they would need to pay ex­tra for OpenClaw and other third-party har­nesses. That is al­ready bad enough. But the weird part came later.

Gigazine covered reports that simply having OpenClaw in git history could cause Claude Code to refuse a request or bill extra. That article quotes Theo saying a recent commit mentioning OpenClaw in a JSON blob could trigger the behavior, even in an empty repo while calling claude -p “hi” directly. If you’re interested in watching the clip, it’s incredible.

Theo’s read, and one I find plau­si­ble, is that this looks like a prod­uct where no­body is care­fully dog­food­ing the ac­tual code-level ex­pe­ri­ence be­fore ship­ping changes. Maybe that is un­fair, I don’t know what ac­tu­ally goes on at Anthropic. But from the out­side, Claude Code looks like a tool mov­ing in the wrong di­rec­tion. More re­stric­tions, billing weird­ness, sur­prise be­hav­ior based on text in com­mits.

That is text­book en­shit­ti­fi­ca­tion.

That is why Bun wor­ries me

Bun is em­bed­ded in Claude Code. Claude Code ap­pears to be en­shit­ti­fy­ing. So now I have to worry that Bun could en­shit­tify too. Not be­cause Bun is bad. Bun is not bad. Bun is ex­cel­lent. Not be­cause the Bun team stopped car­ing. I do not be­lieve that.

The problem is that as Bun and its team get further integrated into Anthropic, so will Anthropic’s policies: the same policies that have led to the collapse of Claude Code. Will we see issues start popping up in Bun that make it seem like the team doesn’t even dogfood their own product? I don’t know, but I’m not sure I want to continue using it just in case.

I’ll stick with pnpm for now

The upsetting thing is that Bun provides a lot more than pnpm offers, so I end up having to reach for additional dependencies to cover the gap. Things like built-in TypeScript support instead of needing a build step, a bundler instead of Vite, testing instead of vitest. It’s not that the dependencies are bad, but getting them all wrapped into a single toolchain is very nice.

pnpm is not a re­place­ment for Node.js. It is not a re­place­ment for Bun ei­ther. pnpm is just a pack­age man­ager. But for most of my day-to-day work, the part of Bun I reach for most is pack­age man­age­ment. I want in­stalls to be fast. I want monore­pos to work well. I want disk us­age to be sane. Bun gives me that, and so does pnpm.

So for my pro­jects that are cur­rently us­ing Bun, I am mov­ing away from Bun and us­ing pnpm. When some­one asks me what I rec­om­mend for a JavaScript or TypeScript pro­ject to­day, my an­swer is pnpm.

I don’t rec­om­mend mov­ing away from Bun

Even though I per­son­ally am mov­ing some pro­jects away from Bun, don’t take my ad­vice as gospel. I’m just some guy on the in­ter­net. You should de­cide what is best for you. For new pro­jects, pnpm makes sense. For ex­ist­ing pro­jects, you might want to stick with Bun un­less and un­til you have a good rea­son to leave.

I hope I am wrong

I hope Bun stays great. I hope the Bun team keeps ship­ping ex­cel­lent work. I hope Anthropic gives them room to make the right calls for the JavaScript ecosys­tem. Bun can still come out of this stronger. Anthropic has money, dis­tri­b­u­tion, and a real rea­son to care about Bun’s per­for­mance and sta­bil­ity. But I do not trust the sit­u­a­tion as much as I did in December. Claude Code used to feel like proof that Anthropic un­der­stood dev tools. Now it feels like a warn­ing that Anthropic does­n’t know what it takes to main­tain and im­prove a prod­uct over time.

Bun is still great. I just do not know where it goes from here. A year from now things could be completely different, so I will follow up and see if my prediction is right.

Incident with Issues and Webhooks

www.githubstatus.com

Days Without GitHub Incident

www.dayswithoutgithubincident.com

openai.com

How Monero’s proof of work works

blog.alcazarsec.com

Monero’s proof of work is called RandomX.

Monero does not ask min­ers to run the same tiny hash func­tion over and over. It asks them to run a small ran­dom pro­gram on a vir­tual ma­chine, hit mem­ory hard while do­ing it, and then hash the re­sult.

Bitcoin’s proof of work is great for spe­cial­ized chips be­cause the work never changes. RandomX was built to do the op­po­site. It tries to make ef­fi­cient min­ing look as much like a nor­mal CPU work­load as pos­si­ble.

Short ver­sion

Here is the short­est use­ful sum­mary:

Monero takes the can­di­date block header plus a nonce.

It also uses an older block hash as a medium-term key.

That key builds a large shared mem­ory dataset.

The can­di­date block in­put is hashed to cre­ate a seed for a spe­cial vir­tual ma­chine.

The VM runs in­te­ger math, float­ing-point math, branches, and lots of mem­ory ac­cesses across 8 chained pro­grams.

The fi­nal ma­chine state is hashed into a 256-bit out­put.

If that out­put is be­low the net­work tar­get, the block is valid.
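
The steps above can be sketched end to end in a toy model. To be clear, this is not RandomX: blake2b stands in for every real primitive, the “dataset” is a tiny list instead of roughly 2 GiB, and the eight “programs” are faked by rehashing state with dataset-dependent reads. It only shows how the pieces feed into each other.

```python
import hashlib

def toy_dataset(key: bytes, items: int = 1024) -> list[bytes]:
    """Stand-in for the key -> cache -> dataset expansion (real one is ~2 GiB)."""
    return [hashlib.blake2b(key + i.to_bytes(4, "little"), digest_size=8).digest()
            for i in range(items)]

def toy_pow(key: bytes, block_blob: bytes, nonce: int) -> bytes:
    dataset = toy_dataset(key)
    # Seed derived from the candidate block input (header blob + nonce).
    state = hashlib.blake2b(block_blob + nonce.to_bytes(8, "little"),
                            digest_size=64).digest()
    # Stand-in for the 8 chained VM programs: each round mixes the state
    # with a dataset entry selected by the current state itself.
    for _ in range(8):
        idx = int.from_bytes(state[:4], "little") % len(dataset)
        state = hashlib.blake2b(state + dataset[idx], digest_size=64).digest()
    # Final machine state hashed down to a 256-bit output.
    return hashlib.blake2b(state, digest_size=32).digest()

def meets_target(h: bytes, target: int) -> bool:
    return int.from_bytes(h, "big") < target

h = toy_pow(b"older-block-hash", b"candidate-block-header", nonce=0)
print(h.hex(), meets_target(h, 1 << 252))
```

Note how the expensive environment (the dataset) depends only on the key, while each attempt just varies the nonce, which is exactly the split the real scheme relies on.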

The in­ter­est­ing part is not the yes-or-no rule at the end. Every proof-of-work sys­tem has that. The in­ter­est­ing part is how Monero makes each hash at­tempt ex­pen­sive in the ex­act ways nor­mal CPUs are good at and cus­tom chips hate.

Why Monero does not use a sim­ple hash

If your proof of work is just run this fixed func­tion on new in­puts un­til you get a lucky out­put,” hard­ware de­sign­ers have a clear job: build sil­i­con that runs that ex­act func­tion as cheaply and as fast as pos­si­ble.

That is what hap­pened to Bitcoin with SHA-256 ASICs.

Monero did not want that path. Long be­fore RandomX, the pro­ject was ex­plicit that spe­cial­ized min­ing hard­ware cre­ates cen­tral­iza­tion pres­sure. Fewer man­u­fac­tur­ers mat­ter more. Large farms mat­ter more. Ordinary users mat­ter less.

Monero’s ear­lier an­swer was the CryptoNight fam­ily. Later, in late 2019, Monero switched to RandomX, which its own re­lease notes de­scribed as a new proof of work based on ran­dom in­struc­tions, adapted to CPUs.”

So the de­sign tar­get changed from make mem­ory mat­ter to make a whole CPU mat­ter.

The core idea be­hind RandomX

RandomX starts from one ob­ser­va­tion: CPUs are not just arith­metic boxes. They are flex­i­ble ma­chines built to run chang­ing code and jug­gle a lot of hard­ware fea­tures at once.

A mod­ern CPU has:

mul­ti­ple cache lev­els

in­te­ger units

float­ing-point units

branch han­dling

out-of-or­der ex­e­cu­tion

spec­u­la­tive ex­e­cu­tion

mem­ory con­trollers

Normal cryp­to­graphic hashes do not use much of that va­ri­ety. They mostly push data through a fixed pipeline.

RandomX tries to bind proof of work to those broader CPU strengths. Its de­sign doc­u­ment says the work must be dy­namic. That means the miner is not just feed­ing in new data. The miner is also get­ting new code to run.

That is why RandomX is based on ran­dom code ex­e­cu­tion.

What a miner is ac­tu­ally com­put­ing

At the Monero level, RandomX takes two im­por­tant in­puts:

a key K

a hash­ing in­put H

For Monero, K comes from an older block hash, called the key block. The RandomX ref­er­ence README rec­om­mends chang­ing this key every 2048 blocks with a 64-block de­lay, and that is how Monero wires it in.

That de­tail mat­ters be­cause min­ers do not re­build the heavy shared mem­ory struc­tures for every nonce. They re­build them only when the key changes, roughly every 2.8 days.

H is the can­di­date block hash­ing blob with a cho­sen nonce. That is the part min­ers keep chang­ing over and over.

So you can think of Monero min­ing like this:

the net­work gives you a medium-term en­vi­ron­ment through K

your can­di­date block gives you a per-at­tempt in­put through H

The en­vi­ron­ment changes slowly. The at­tempt changes con­stantly.
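The key schedule can be sketched in a few lines. This is one plausible wiring of the "every 2048 blocks, with a 64-block delay" rule, not code copied from Monero's source; treat the exact rounding and the early-height clamp as assumptions.

```python
KEY_EPOCH = 2048   # blocks between key changes
KEY_LAG = 64       # delay before a new key takes effect

def key_block_height(height: int) -> int:
    # Round down to the previous epoch boundary, lagged by 64 blocks.
    # Early heights are clamped to the genesis key (an assumption here).
    if height <= KEY_EPOCH + KEY_LAG:
        return 0
    return (height - KEY_LAG - 1) & ~(KEY_EPOCH - 1)

# The key block only moves once per epoch, so the heavy cache and
# dataset rebuild happens roughly every 2048 blocks (~2.8 days at
# ~2 minutes per block).
assert key_block_height(3000) == 2048
assert key_block_height(5000) == 4096
```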

Step 1: build the cache from the key

RandomX first takes the key K and runs Argon2d on it.

Argon2d is bet­ter known as a pass­word-hash­ing and key-de­riva­tion func­tion. It is use­ful here for the same rea­son it is use­ful there: it is mem­ory-hard. It forces the ma­chine to touch a lot of mem­ory in a way that is an­noy­ing to cheat.

In the de­fault RandomX pa­ra­me­ters, this pro­duces a 256 MiB cache.

That cache is the smaller of the two big mem­ory struc­tures in RandomX. It is not the struc­ture min­ers want to use di­rectly for max­i­mum speed. It is the struc­ture used to build the big­ger one.
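For a sense of scale, the Argon2d step can be summarized as a handful of parameters. The 256 MiB size is from the text above; the iteration and lane counts follow the reference configuration and should be read as illustrative, not authoritative.

```python
# Default Argon2d parameters for building the RandomX cache (the 256 MiB
# figure is the one that matters; the others are reference-config values
# reproduced here as assumptions).
ARGON2_MEMORY_KIB = 262_144   # 262144 KiB = 256 MiB
ARGON2_ITERATIONS = 3
ARGON2_LANES = 1

cache_bytes = ARGON2_MEMORY_KIB * 1024
assert cache_bytes == 256 * 1024 * 1024   # exactly 256 MiB
```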

Step 2: ex­pand the cache into the dataset

From that 256 MiB cache, RandomX builds the dataset.

The de­fault dataset size is:

2,147,483,648 bytes base size

33,554,368 bytes ex­tra size

Together that is about 2080 MiB, a lit­tle over 2 GiB.

This odd-look­ing size is de­lib­er­ate. It is big enough to spill out of on-chip mem­ory and into DRAM, and the ex­tra non-power-of-two tail makes life more an­noy­ing for hard­ware de­sign­ers.

The dataset is read-only dur­ing hash­ing. RandomX uses it to force reg­u­lar DRAM traf­fic. The de­sign doc says each pro­gram it­er­a­tion reads one 64-byte dataset item, and across a whole hash re­sult that be­comes 16,384 dataset reads.

That gives RandomX one of its main bot­tle­necks: mem­ory ac­cess, not just arith­metic.
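The arithmetic behind those sizes is worth checking by hand. The numbers below come straight from the figures quoted above: the base and extra dataset sizes, 64-byte items, 2048 iterations per program, and 8 chained programs per hash.

```python
BASE_SIZE = 2_147_483_648    # 2 GiB base size
EXTRA_SIZE = 33_554_368      # the non-power-of-two tail
ITEM_SIZE = 64               # each dataset item is 64 bytes

total = BASE_SIZE + EXTRA_SIZE
assert total == 2_181_038_016            # about 2080 MiB (64 bytes short)
assert total % ITEM_SIZE == 0
assert total // ITEM_SIZE == 34_078_719  # dataset items

# One 64-byte dataset read per iteration, 2048 iterations per program,
# 8 chained programs per hash:
reads_per_hash = 2048 * 8
assert reads_per_hash == 16_384
```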

Step 3: ini­tial­ize the scratch­pad from the block in­put

Now RandomX turns to the per-hash in­put H.

It com­putes Hash512(H) us­ing Blake2b. That 64-byte re­sult seeds an AES-based gen­er­a­tor, which fills the scratch­pad.

The scratchpad is the VM's working memory. Unlike the large dataset, it is meant to live in CPU cache, not DRAM.

Its default size is 2 MiB, split into nested regions that mimic CPU cache levels:

16 KiB L1

256 KiB L2

2 MiB L3

This is one of the smartest parts of RandomX. It uses two very dif­fer­ent mem­ory struc­tures at once:

a big dataset to hit DRAM

a smaller scratch­pad to be­have like cache-heavy code

That lets it pres­sure both the mem­ory sub­sys­tem and the CPU core.
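The seeding step itself is easy to reproduce, since Hash512 is Blake2b with a 64-byte digest. In the real algorithm an AES-based generator then expands this seed to fill the 2 MiB scratchpad; the sketch below stops at the seed, so everything past that line is a placeholder.

```python
import hashlib

SCRATCHPAD_SIZE = 2 * 1024 * 1024    # 2 MiB, sized to live in cache

def scratchpad_seed(H: bytes) -> bytes:
    # Hash512(H): Blake2b with a 64-byte output. This 64-byte result
    # seeds the AES-based generator that fills the scratchpad.
    return hashlib.blake2b(H, digest_size=64).digest()

seed = scratchpad_seed(b"candidate block hashing blob with nonce")
assert len(seed) == 64
```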

Step 4: gen­er­ate a ran­dom pro­gram

After the scratch­pad is ready, RandomX gen­er­ates a pro­gram for its vir­tual ma­chine.

This is not a C pro­gram or a JavaScript pro­gram. It is a com­pact VM pro­gram with its own in­struc­tion set.

Two de­tails mat­ter a lot:

Every in­struc­tion is 8 bytes long.

Any 8-byte word is a valid in­struc­tion.

That sec­ond choice is a big deal. It means RandomX can gen­er­ate pro­grams by just fill­ing a buffer with ran­dom bytes. There is no slow parser and no com­pli­cated syn­tax check­ing.

Each pro­gram con­tains 256 in­struc­tions.

Those in­struc­tions are cho­sen to look like work real CPUs are built for:

in­te­ger math

64-bit mul­ti­plies

float­ing-point op­er­a­tions

128-bit vec­tor op­er­a­tions

mem­ory loads and stores

oc­ca­sional branches

The floating-point side is not cosmetic. RandomX uses IEEE 754 double precision operations, including division and square root, and it uses all four standard rounding modes. That makes the VM harder to collapse into a tiny "mostly integer" custom design.
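Because any 8-byte word is a valid instruction, "generating a program" really is just filling a buffer with random bytes and carving each word into fields. The field layout below (opcode byte, register selectors, 32-bit immediate) is a common way to slice the word and is an assumption of this sketch, not the exact spec layout.

```python
import os

INSTRUCTION_SIZE = 8     # every instruction is one 8-byte word
PROGRAM_LENGTH = 256     # instructions per program

def generate_program(raw: bytes) -> list[dict]:
    # No parser, no validation pass: every 8-byte word decodes to
    # *some* instruction, so random bytes are already a valid program.
    assert len(raw) == INSTRUCTION_SIZE * PROGRAM_LENGTH
    program = []
    for i in range(0, len(raw), INSTRUCTION_SIZE):
        word = raw[i : i + INSTRUCTION_SIZE]
        program.append({
            "opcode": word[0],           # selects the operation
            "dst": word[1] % 8,          # destination register r0..r7
            "src": word[2] % 8,          # source register
            "imm32": int.from_bytes(word[4:8], "little"),  # immediate
        })
    return program

program = generate_program(os.urandom(INSTRUCTION_SIZE * PROGRAM_LENGTH))
assert len(program) == 256
```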

Step 5: run the pro­gram loop

The VM ex­e­cutes the 256-instruction pro­gram in a loop for 2048 it­er­a­tions.

During each it­er­a­tion it:

reads and writes scratch­pad mem­ory

prefetches and loads dataset items

mixes in­te­ger and float­ing-point reg­is­ter state

takes a low-prob­a­bil­ity branch when the con­di­tion says so

The de­sign doc says an av­er­age it­er­a­tion reads about 504 bytes from mem­ory and writes about 256 bytes.

That number is a clue to what RandomX is really doing. It is not "a hash with extra steps." It is trying to behave like messy, mixed, real software.
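Multiplying those per-iteration averages out gives a feel for the total memory traffic behind a single hash attempt. This is back-of-the-envelope arithmetic from the figures above, treating the averages as exact.

```python
ITERATIONS = 2048      # loop iterations per program
PROGRAMS = 8           # chained programs per hash
READ_PER_ITER = 504    # average bytes read per iteration
WRITE_PER_ITER = 256   # average bytes written per iteration

total_iters = ITERATIONS * PROGRAMS
bytes_read = total_iters * READ_PER_ITER
bytes_written = total_iters * WRITE_PER_ITER

assert bytes_read == 8_257_536        # ~7.9 MiB read per hash attempt
assert bytes_written == 4 * 2**20     # exactly 4 MiB written per hash
```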

Why the branches are there

Branches are easy to over­look, but they mat­ter.

If the code were fully straight-line, spe­cial­ized hard­ware could op­ti­mize away more of it. Branches make sta­tic sim­pli­fi­ca­tion harder.

RandomX uses branches sparingly. A branch is taken with probability about 1/256, and the design intentionally makes these branches usually predict as "not taken." That means they are cheap on CPUs most of the time, while still getting in the way of over-optimized hardware shortcuts.

The im­por­tant point is not that RandomX found some magic branch pre­dic­tor trick. It did not. The point is that even a lit­tle real con­trol flow makes the work­load look more like ac­tual code and less like a clean hard­ware pipeline.
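The 1/256 figure falls out naturally from testing 8 bits of register state. The condition below is only a model that reproduces the probability; the actual RandomX branch condition is more involved.

```python
import random

def branch_taken(register_value: int) -> bool:
    # Taken only when the low 8 bits are all zero: probability 1/256
    # for uniformly random register state. (Illustrative condition,
    # not the real RandomX one.)
    return (register_value & 0xFF) == 0

rng = random.Random(0)
trials = 200_000
taken = sum(branch_taken(rng.getrandbits(64)) for _ in range(trials))
ratio = taken / trials
# ratio lands close to 1/256 ~= 0.0039
assert 0.003 < ratio < 0.005
```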

Why there are 8 chained pro­grams per hash
