10 interesting stories served every morning and every evening.




1 1,461 shares, 78 trendiness

20 Years of Digital Life, Gone in an Instant, thanks to Apple

Summary: A major brick-and-mortar store sold an Apple Gift Card that Apple seemingly took offence to, and locked out my entire Apple ID, effectively bricking my devices and my iCloud Account, Apple Developer ID, and everything associated with it, and I have no recourse. Can you help? Email paris AT paris.id.au (and read on for the details). ❤️

I am writing this as a desperate measure. After nearly 30 years as a loyal customer, authoring technical books on Apple's own programming languages (Objective-C and Swift), and spending tens upon tens upon tens of thousands of dollars on devices, apps, conferences, and services, I have been locked out of my personal and professional digital life with no explanation and no recourse.

My Apple ID, which I have held for around 25 years (it was originally a username, before they had to be email addresses; it's from the iTools era), has been permanently disabled. This isn't just an email address; it is my core digital identity. It holds terabytes of family photos, my entire message history, and is the key to syncing my work across the ecosystem.

The Trigger: The only recent activity on my account was an attempt to redeem a $500 Apple Gift Card to pay for my 6TB iCloud+ storage plan. The code failed. The vendor suggested that the card number was likely compromised and agreed to reissue it. Shortly after, my account was locked. An Apple Support representative suggested that this was the cause of the issue, indicating that something was likely untoward about this card. The card was purchased from a major brick-and-mortar retailer (Australians, think Woolworths scale; Americans, think Walmart scale), so if I cannot rely on the provenance of that, and have no recourse, what am I meant to do? We have even sent the receipt, indicating the card's serial number and purchase location, to Apple.

The Consequence: My account is flagged as closed “in accordance with the Apple Media Services Terms and Conditions”.

The Damage: I effectively have over $30,000 worth of previously-active “bricked” hardware. My iPhone, iPad, Watch, and Macs cannot sync, update, or function properly. I have lost access to thousands of dollars in purchased software and media. Apple representatives claim that only the “Media and Services” side of my account is blocked, but now my devices have signed me out of iMessage (and I can't sign back in), and I can't even sign out of the blocked iCloud account because… it's barred from the sign-out API, as far as I can tell. I can't even log in to the “Secure File Transfer” system Apple uses to exchange information, because it relies on an Apple ID. Most of the ways Apple has suggested seeking help from them involve signing in to an Apple service to upload something, or communicate with them. This doesn't work as the account is locked.

I can't even download my iCloud Photos, as: there are repeated auth-errors on my account, so I can't make Photos work; and I don't have a 6TB device to sync them to, even if I could.

No Information: Support staff refused to tell me why the account was banned or provide specific details on the decision.

No Escalation: When I begged for an escalation to Executive Customer Relations (ECR), noting that I would lose the ability to do my job and that my devices were useless, I was told that an additional escalation “won't lead to a different outcome”. Many of the reps I've spoken to have suggested strange things; one of the strangest was telling me that I could physically go to Apple's Australian HQ at Level 3, 20 Martin Place, Sydney, and plead my case. They even put me on hold for 5 minutes while they looked up the address.

Most insultingly, the official advice from the Senior Advisor was to “create a new Apple account… and update the payment information”.

The Legal Catch: Apple's Terms and Conditions rely on “Termination of Access”. By closing my account, they have revoked my license to use their services.

The Technical Trap: If I follow their advice and create a new account on my current devices (which are likely hardware-flagged due to the gift card error), the new account will likely be linked to the banned one and disabled for circumventing security measures.

The Developer Risk: As a professional Apple Developer, attempting to “dodge” a ban by creating a new ID could lead to my Developer Program membership being permanently blacklisted, amongst other things.

I am not a casual user. I have literally written the book on Apple development (taking over the Learning Cocoa with Objective-C series, which Apple themselves used to write, for O'Reilly Media, and then 20+ books following that). I help run the longest-running Apple developer event not run by Apple themselves, /dev/world. I have effectively been an evangelist for this company's technology for my entire professional life. We had an app on the App Store on Day 1 in every sense of the word.

I am asking for a human at Apple to review this case. I suspect an automated fraud flag regarding the bad gift card triggered a nuclear response that frontline support cannot override. I have escalated this through my many friends in WWDR and SRE at Apple, with no success.

I am desperate to resolve this and restore my digital life. If you can help, please email paris AT paris.id.au

...

Read the original on hey.paris »

2 539 shares, 21 trendiness

OpenAI are quietly adopting skills, now available in ChatGPT and Codex CLI

One of the things that most excited me about Anthropic's new Skills mechanism back in October is how easy it looked for other platforms to implement. A skill is just a folder with a Markdown file and some optional extra resources and scripts, so any LLM tool with the ability to navigate and read from a filesystem should be capable of using them. It turns out OpenAI are doing exactly that, with skills support quietly showing up in both their Codex CLI tool and now also in ChatGPT itself.

I learned about this from Elias Judin this morning. It turns out the Code Interpreter feature of ChatGPT now has a new /home/oai/skills folder which you can access simply by prompting:

I tried that myself and got back this zip file. Here's a UI for exploring its content (more about that tool).

So far they cover spreadsheets, docx and PDFs. Interestingly their chosen approach for PDFs and documents is to convert them to rendered per-page PNGs and then pass those through their vision-enabled GPT models, presumably to maintain information from layout and graphics that would be lost if they just ran text extraction.

Elias shared copies in a GitHub repo. They look very similar to Anthropic's implementation of the same kind of idea, currently published in their anthropics/skills repository.

I tried it out by prompting:

Create a PDF with a summary of the rimu tree situation right now and what it means for kakapo breeding season

Sure enough, GPT-5.2 Thinking started with:

It took just over eleven minutes to produce this PDF, which was long enough that I had Claude Code for web build me a custom PDF viewing tool while I waited.

The reason it took so long is that it was fastidious about looking at and tweaking its own work. I appreciated that at one point it tried rendering the PDF and noticed that the macrons in kākāpō were not supported by the chosen font, so it switched to something else:

Meanwhile, two weeks ago OpenAI's open source Codex CLI tool landed a PR titled “feat: experimental support for skills”. The most recent docs for that are in docs/skills.md.

The documentation suggests that any folder in ~/.codex/skills will be treated as a skill.
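As a sketch, a skill is just a directory containing a SKILL.md (the frontmatter below mirrors Anthropic's published skills format; treat the exact schema Codex expects as an assumption):

~/.codex/skills/datasette-plugin/SKILL.md

---
name: datasette-plugin
description: Writing Datasette plugins using Python and pluggy
---

Markdown instructions for the model go here, with any optional
scripts or extra resources stored alongside this file.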

I dug around and found the code that generates the prompt that drives the skill system in codex-rs/core/src/skills/render.rs—here's a Gist with a more readable version of that prompt.

I used Claude Opus 4.5's skill authoring skill to create this skill for creating Datasette plugins, then installed it into my Codex CLI skills folder like this:

git clone https://github.com/datasette/skill \
  ~/.codex/skills/datasette-plugin

You have to run Codex with the --enable skills option. I ran this:

cd /tmp
mkdir datasette-cowsay
cd datasette-cowsay
codex --enable skills -m gpt-5.2

- datasette-plugins — Writing Datasette plugins using Python + pluggy (file: /Users/simon/.codex/skills/datasette-plugin/SKILL.md)

- Discovery — How to find/identify available skills (no SKILL.md path provided in the list)

Write a Datasette plugin in this folder adding a /-/cowsay?text=hello page that displays a pre with cowsay from PyPI saying that text

It worked perfectly! Here's the plugin code it wrote and here's a copy of the full Codex CLI transcript, generated with my terminal-to-html tool.

You can try that out yourself if you have uvx installed like this:

uvx --with https://github.com/simonw/datasette-cowsay/archive/refs/heads/main.zip \
  datasette

Then open http://127.0.0.1:8001/-/cowsay?text=This+is+pretty+fun in your browser.

When I first wrote about skills in October I said “Claude Skills are awesome, maybe a bigger deal than MCP”. The fact that it's just turned December and OpenAI have already leaned into them in a big way reinforces to me that I called that one correctly.

Skills are based on a very light specification, if you could even call it that, but I still think it would be good for these to be formally documented somewhere. This could be a good initiative for the new Agentic AI Foundation (previously) to take on.

...

Read the original on simonwillison.net »

3 272 shares, 33 trendiness

What Is the Nicest Thing A Stranger Has Ever Done for You?

So there I was, pedaling my bicycle as fast as I could down a long, straight stretch of road, feeling great. I'd just discovered the pleasures of riding a road bike, and I loved every minute that I could get away. Always a data geek, I tracked my mileage, average speed, heart rate, etc. It was a beautiful Indian summer Sunday afternoon in September. I was in my late 30s, still a baby. Out of nowhere, my chain came off right in the middle of the sprint I was timing. In true masculine fashion, I threw a fit, cursing and hitting the brakes as hard as I could. At this point, I found out that experienced riders don't do that because I flew right over the handlebars, landing on the pavement amid speeding cars. I momentarily lost consciousness, and when I regained my senses, I knew I'd screwed up badly. The pain in my shoulder was nauseating. I couldn't move my arm, and I had to just roll off the road onto the shoulder. I just lay there, hurting, unable to think clearly. Within seconds, it seemed, a man materialized beside me.

He was exceptionally calm. He didn't ask me if I was OK, since I clearly wasn't. It was obvious that he knew what he was doing. He made certain I could breathe, paused long enough to dial 911, and then started pulling stuff out of a medical bag (WTF?) to clean the extensive road rash I had. In a minute, he asked for my home phone number so he could call my wife to let her know I was going to be riding in an ambulance to the hospital. He told her he was an emergency room doctor who just happened to be right behind me when I crashed. He explained that he would stay with me until the medics arrived and that he would call ahead to make sure one of the doctors on duty would “take good care of me.”

When he hung up, he asked me if I'd heard the conversation. I told him that I had and that I couldn't believe how lucky I was under the circumstances. He agreed. To keep my mind off the pain, he just kept chatting, telling me that because I was arriving by ambulance, I'd be treated immediately. He told me that I'd be getting “the good drugs” to take care of the pain. That sounded awesome.

I don't remember telling him goodbye. I certainly didn't ask him his name or find out anything about him. He briefed the EMTs when they arrived and stood there until the ambulance doors closed. The ER was indeed ready for me when the ambulance got there. They treated me like a VIP. I got some Dilaudid for the pain, and it was indeed the good stuff. They covered the road rash with Tegaderm and took x-rays, which revealed that I'd torn my collarbone away from my shoulder blade. That was going to require a couple of surgeries and lots of physical therapy. I had a concussion and was glad that I had a helmet on.

All of this happened almost 25 years ago. I've had plenty of other bike wrecks, but that remains the worst one. My daughter is a nurse, and she's like a magnet for car crashes, having stopped multiple times to render aid. She doesn't do it with a smile on her face, though; emergency medicine isn't her gig, and if anyone asks her if she's a doctor, her stock answer is “I'm a YMCA member.”

The guy who helped me that day was an absolute angel. I have no idea what I would have done without him. I didn't even have a cell phone at the time. But he was there at a time when I couldn't have needed him any more badly. He helped me and then got in his car and completed his trip. I think of that day often, especially when the American medical system makes me mad, which happens regularly these days.

I've enjoyed the kindness of a lot of strangers over the years, particularly during the long hike my wife and I did for our honeymoon (2,186 miles) when we hitchhiked to a town in NJ in the rain and got a ride from the first car to pass. Another time, in Connecticut, a man gave us a $100 bill and told us to have a nice dinner at the restaurant atop Mt. Greylock, the highest mountain in Massachusetts. In Virginia, a moth flew into my wife's ear, and I mean all the way into her ear until it was bumping into her eardrum. We hiked several miles to the road and weren't there for a minute before a man stopped and took us to urgent care, 30 miles away.

When you get down in the dumps, I hope you have some memories like that to look back on, to restore your faith in humanity. There are a lot of really good people in the world.


...

Read the original on louplummer.lol »

4 253 shares, 12 trendiness

Google Removes Sci-Hub Domains from U.S. Search Results Due to Dated Court Order

Google has removed dozens of new Sci-Hub domain names from its search results in the United States. Unlike typical DMCA takedowns, the removals were triggered by a dated court order that was not enforced for several years. This appears to be one of the first times Google has deindexed an entire pirate site in the U.S. based on a ‘site blocking' style injunction.


In 2017, American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, won a lawsuit against Sci-Hub and its operator, Alexandra Elbakyan.

‘The Pirate Bay of Science' had failed to appear at a Virginia federal court, resulting in an easy win for the publisher and a $4.8 million default judgment award for damages.

More important, perhaps, was the broad permanent injunction that the Virginia federal court signed off on in 2017. This order effectively gave ACS free rein to take down existing and newly registered Sci-Hub domain names.

The injunction also required all parties “in active concert or participation” with Sci-Hub to “cease facilitating access” to these domain names, including search engines, hosting providers, ISPs, and domain name registrars, the order clarified.

On paper, this injunction enabled ACS to request American ISPs and search engines to ‘block' existing and future Sci-Hub domains. However, there was no sign that the publisher was doing so. Aside from a few suspended domains, Sci-Hub remained widely accessible.

Whether ACS did not feel the need to enforce the order against search engines and other intermediaries, or whether these companies actively objected to the requested actions, was unknown. And as time passed, the injunction became a distant memory, at least for a few years.

Earlier this week we spotted a unique request in the Lumen Database, where the 2018 injunction was cited. The notice in question asks Google to deindex 34 (sub)domains linked to Sci-Hub.

None of these domains were referenced in the 2018 injunction but are indeed linked to Sci-Hub. Many of the partially redacted domains appear to be domain variations of the scihubtw.tw mirror network, such as edu.scihubtw.tw and freeus.scihubtw.tw.

It's surprising to see this type of enforcement seven years after the injunction was issued, but the request is legitimate. Google is certainly taking it seriously and has deindexed these domains from its search results in America. In other countries, the same domains remain accessible.

The December 2 notice was sent by UK law firm Wiggin LLP, which sent a similar request in September this year, targeting a few dozen other Sci-Hub domains. In total, we spotted seven notices, with the earliest dating back to 2022.

The results of these removals are also clearly visible in Google search. Those who search for Sci-Hub in the U.S. will see the following notice at the bottom of the results.

It's not clear why it took five years before ACS urged Google to take action in response to the injunction. However, these removals are similar to Google's removal of pirate site domains in other countries in response to ISP-blocking orders. Voluntary cooperation by Google was uncovered shortly before ACS first notified the search engine.

Google's voluntary cooperation with ISP blocking orders in Australia, the Netherlands, France, the UK, and elsewhere also brings up an important question. Is Google cooperating with the permanent injunction in the U.S. because it feels legally compelled to do so, or is that a voluntary gesture too?

The 2018 injunction requires all parties “in active concert or participation” with Sci-Hub to take action. While search engines are mentioned as an example, Google and other tech companies have previously argued that neutral third-party services are not necessarily “in active concert or participation”.

It is likely that Google maintains this stance, opting to voluntarily comply with orders targeting other third parties. That would mirror its response to site-blocking orders elsewhere.

We contacted Google hoping to hear answers to these questions, but the company did not respond to our request for comment.

...

Read the original on torrentfreak.com »

5 231 shares, 9 trendiness

bidicalc

In any normal spreadsheet, when you change values that are the input to some formulas, the outputs are automatically updated. Could it also work the other way? What if you could also change the output, and have the inputs be updated to match the formula?

For the past few months I've been really curious about this idea. But there were so many questions:

* Would it even be possible at all?
* Could it work with very complex formulas? With exponents? With advanced math functions like log(), abs(), etc?
* How would the UX work? In a normal spreadsheet, when you click on a cell that has a formula, you get to change the formula's expression. I would need a way to let the user change either the formula's expression or the cell's numeric value.
* What should happen if there are multiple possible solutions? Like in the example above, if you set A3 to 100, should the result be 50/50, 20/80, -10000/10100? When there is an infinite number of possible solutions, how to pick one?
* Could it work with chained formulas? Could I build a long chain of formulas, update the final value and find the matching inputs all the way backwards?

Ok, now let's just skip to the good part! Today I'm happy to introduce:

Variables: A simple number entered in a cell is a variable: 1.0. It may be changed by the solver.

Constant: A number prefixed by a hash # is a constant. It will not be changed by the solver.

Text: Cells can be in text mode. To input text, wrap in double quotes: “Distance (km)”.

Formula: Formulas can be entered in a cell (the traditional = prefix is optional), for example:

The result of formulas will be automatically updated when an input they depend on changes. This is the usual forward update. The magic of bidicalc is that once a formula has been computed, you can change the result. Bidicalc will walk “upstream” to change variable cells so that the formula's result matches the change you made. This is the backward update. To change a cell formula's expression instead of its result, click on the F icon.

pow(a, b): exponentiation, a raised to the power of b. exp(x): exponential, the value of e^x.

The solver will try its best to find a solution. However it can fail in different ways:

* The solution is incorrect. This is a bug and should not happen: please report it on GitHub, thank you!
* The solver reports “no solution”, but there is one. This could be a bug in the solver, or you have found a particularly difficult root finding problem that has solutions that are very difficult to find using floating point arithmetic. Please report it on GitHub so I can use it to improve the solver 😃
* The solution is technically correct but unexpected. This can happen for a large class of problems, typically when there are a lot of free variables (the problem is heavily underdetermined) and the solution manifold is weird. For example, try to solve a*b*c = 1 to see this in action. To combat this, you can: set some variables to constants using the hash syntax, i.e.: #50; wait for me to implement more features like domain restrictions of variables; or suggest improvements to the open-source solver on GitHub.

Keep in mind this is an experiment I made for fun because I like math and spreadsheets. If you need to do root finding to compute the load tolerance of a one million ton suspended bridge please don't use bidicalc 😄

How does it work? Even a normal spreadsheet is a fairly complex beast. But the novel thing about bidicalc is the backwards solver. Mathematically, updating a spreadsheet “backward” is a (potentially underdetermined) root finding problem, because we are trying to find a vector of unknowns x such that F(x) = G, where F is the function computed by the cells' formulas, and G is the objective value entered in the cell. Note that F is not necessarily a single formula, but the result of composing an upstream graph of cells into a single function.

The actual root-finding solver is a custom algorithm that I made. It is a general purpose algorithm that will find one root of any continuous-almost-everywhere function for which a complete syntactic expression is known. It uses a mix of continuous constraint propagation on interval union arithmetic, directional Newton's method and dichotomic search. It is of course limited by floating point precision and available computation time.

Bidicalc is written in TypeScript and entirely open-source under the AGPL licence. This means that you can freely reuse, modify, and share bidicalc as long as you make the complete source code of your modified version available under the same licence. If you are interested in buying bidicalc under a different licence please get in touch. I haven't taken the time to write a full deep-dive mathematical explanation of how it works, but if you are interested in that please let me know. I might find some time to do it if there is interest from fellow math nerds.

If I kept improving bidicalc until it was perfect I would have never released it. So currently it is imperfect and could be improved in a number of ways:

* Domain restriction for variables. Currently the solver may assign any value in the interval (-∞, +∞). I'd like to add special syntax so that variable cells can be restricted by the user to a specific interval. This would allow guiding the solver and saying that you only want this cell to be positive, or to be between 1 and 100, for example.
* Solver improvements. The algorithm works well enough for simple problems so I'm happy to publish in this current state, but it could always be improved. There are a million ways to improve it in the future so that it finds better solutions, particularly for highly underdetermined cases.
* float64 gradients support. Due to a pretty obscure technical limitation of tensorflowjs (that I use to compute gradients), the backward solver is partially limited to single precision, even though the forward solver uses double precision via native JS numbers.
* UX improvements. I am not very good at front-end dev 😄. I have learned vuejs to be able to make the UX for bidicalc but I'm not great at it. A spreadsheet interface is actually a massive state machine of complex and subtle behavior; it's a very interesting project and tricky to get right. As you can see, I've decided to skip the usual spreadsheet design principle that cells have two selection states: soft selected, which enables dragging, selection, etc., and hard selected, which enables changing the content of the cell. bidicalc is simply a CSS grid of elements.
* Move cell computation off the main thread. The solver is single threaded and happens in the UI thread. It should be moved to a web worker to avoid locking the UI.

My name is Victor Poughon, I enjoy math and open-source software. If you want to see me do more stuff like this consider sponsoring me on GitHub or Buying me a coffee.

...

Read the original on victorpoughon.github.io »

6 218 shares, 20 trendiness

Useful patterns for building HTML tools

I've started using the term HTML tools to refer to HTML applications that I've been building which combine HTML, JavaScript, and CSS in a single file and use them to provide useful functionality. I have built over 150 of these in the past two years, almost all of them written by LLMs. This article presents a collection of useful patterns I've discovered along the way.

First, some examples to show the kind of thing I'm talking about:

pypi-changelog lets you generate (and copy to clipboard) diffs between different PyPI package releases.

bluesky-thread provides a nested view of a discussion thread on Bluesky.

These are some of my recent favorites. I have dozens more like this that I use on a regular basis.

You can explore my collection on tools.simonwillison.net—the “by month” view is useful for browsing the entire collection.

If you want to see the code and prompts, almost all of the examples in this post include a link in their footer to “view source” on GitHub. The GitHub commits usually contain either the prompt itself or a link to the transcript used to create the tool.

These are the characteristics I have found to be most productive in building tools of this nature:

A single file: inline JavaScript and CSS in a single HTML file means the least hassle in hosting or distributing them, and crucially means you can copy and paste them out of an LLM response.

Avoid React, or anything with a build step. The problem with React is that JSX requires a build step, which makes everything massively less convenient. I prompt “no react” and skip that whole rabbit hole entirely.

Load dependencies from a CDN. The fewer dependencies the better, but if there's a well known library that helps solve a problem I'm happy to load it from CDNjs or jsDelivr or similar.

Keep them small. A few hundred lines means the maintainability of the code doesn't matter too much: any good LLM can read them and understand what they're doing, and rewriting them from scratch with help from an LLM takes just a few minutes.

The end result is a few hundred lines of code that can be cleanly copied and pasted into a GitHub repository.

The easiest way to build one of these tools is to start in ChatGPT or Claude or Gemini. All three have features where they can write a simple HTML+JavaScript application and show it to you directly.

Claude calls this “Artifacts”, ChatGPT and Gemini both call it “Canvas”. Claude has the feature enabled by default, ChatGPT and Gemini may require you to toggle it on in their “tools” menus.

Try this prompt in Gemini or ChatGPT:

Build a canvas that lets me paste in JSON and converts it to YAML. No React.

Or this prompt in Claude:

Build an artifact that lets me paste in JSON and converts it to YAML. No React.

I always add “No React” to these prompts, because otherwise they tend to build with React, resulting in a file that is harder to copy and paste out of the LLM and use elsewhere. I find that attempts which use React take longer to display (since they need to run a build step) and are more likely to contain crashing bugs for some reason, especially in ChatGPT.

All three tools have “share” links that provide a URL to the finished application. Examples:

Coding agents such as Claude Code and Codex CLI have the advantage that they can test the code themselves while they work on it using tools like Playwright. I often upgrade to one of those when I'm working on something more complicated, like my Bluesky thread viewer tool shown above.

I also frequently use asynchronous coding agents like Claude Code for web to make changes to existing tools. I shared a video about that in Building a tool to copy-paste share terminal sessions using Claude Code for web.

Claude Code for web and Codex Cloud run directly against my simonw/tools repo, which means they can publish or upgrade tools via Pull Requests (here are dozens of examples) without me needing to copy and paste anything myself.

Any time I use an additional JavaScript library as part of my tool I like to load it from a CDN.

The three major LLM platforms support specific CDNs as part of their Artifacts or Canvas features, so often if you tell them “Use PDF.js” or similar they'll be able to compose a URL to a CDN that's on their allow-list.

Sometimes you'll need to go and look up the URL on cdnjs or jsDelivr and paste it into the chat.

CDNs like these have been around for long enough that I've grown to trust them, especially for URLs that include the package version.

The alternative to CDNs is to use npm and have a build step for your projects. I find this reduces my productivity at hacking on individual tools and makes it harder to self-host them.

I don't like leaving my HTML tools hosted by the LLM platforms themselves for a couple of reasons. First, LLM platforms tend to run the tools inside a tight sandbox with a lot of restrictions. They're often unable to load data or images from external URLs, and sometimes even features like linking out to other sites are disabled.

The end-user experience often isn't great either. They show warning messages to new users, often take additional time to load and delight in showing promotions for the platform that was used to create the tool.

They're also not as reliable as other forms of static hosting. If ChatGPT or Claude are having an outage I'd like to still be able to access the tools I've created in the past.

Being able to easily self-host is the main reason I like insisting on “no React” and using CDNs for dependencies—the absence of a build step makes hosting tools elsewhere a simple case of copying and pasting them out to some other provider.

My preferred provider here is GitHub Pages because I can paste a block of HTML into a file on github.com and have it hosted on a permanent URL a few seconds later. Most of my tools end up in my simonw/tools repository which is configured to serve static files at tools.simonwillison.net.

One of the most useful input/output mechanisms for HTML tools comes in the form of copy and paste.

I frequently build tools that accept pasted content, transform it in some way and let the user copy it back to their clipboard to paste somewhere else.

Copy and paste on mobile phones is fiddly, so I frequently include “Copy to clipboard” buttons that populate the clipboard with a single touch.
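The modern clipboard API makes those buttons close to a one-liner. A minimal sketch (element IDs are invented for the example):

<button id="copy">Copy to clipboard</button>
<pre id="output">converted text goes here</pre>
<script>
document.getElementById("copy").addEventListener("click", () => {
  // navigator.clipboard is available in secure contexts (https or localhost)
  navigator.clipboard.writeText(document.getElementById("output").textContent);
});
</script>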

Most operating system clipboards can carry multiple formats of the same copied data. That's why you can paste content from a word processor in a way that preserves formatting, but if you paste the same thing into a text editor you'll get the content with formatting stripped.

These rich copy operations are available in JavaScript paste events as well, which opens up all sorts of opportunities for HTML tools.
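A paste handler can enumerate every representation on the clipboard, along these lines:

document.addEventListener("paste", (event) => {
  for (const type of event.clipboardData.types) {
    // typical entries: "text/plain", "text/html", "Files"
    console.log(type, event.clipboardData.getData(type));
  }
});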

hacker-news-thread-export lets you paste in a URL to a Hacker News thread and gives you a copyable condensed version of the entire thread, suitable for pasting into an LLM to get a useful summary.

paste-rich-text lets you copy from a page and paste to get the HTML—particularly useful on mobile where view-source isn't available.

alt-text-extractor lets you paste in images and then copy out their alt text.

The key to building interesting HTML tools is understanding what's possible. Building custom debugging tools is a great way to explore these options.

clipboard-viewer is one of my most useful. You can paste anything into it (text, rich text, images, files) and it will loop through and show you every type of paste data that's available on the clipboard.

This was key to building many of my other tools, because it showed me the invisible data that I could use to bootstrap other interesting pieces of functionality.

keyboard-debug shows the keys (and KeyCode values) currently being held down.

cors-fetch reveals if a URL can be accessed via CORS.

HTML tools may not have access to server-side databases for storage but it turns out you can store a lot of state directly in the URL.

I like this for tools I may want to bookmark or share with other people.
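The pattern is a few lines of vanilla JavaScript (the text parameter is a stand-in for whatever state your tool needs):

// read state from the URL on load
const params = new URLSearchParams(location.search);
const text = params.get("text") ?? "";

// write state back without piling up history entries
function saveState(value) {
  history.replaceState(null, "", "?" + new URLSearchParams({ text: value }));
}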

icon-editor is a custom 24x24 icon editor I built to help hack on icons for the GitHub Universe badge. It persists your in-progress icon design in the URL so you can easily bookmark and share it.

The localStorage browser API lets HTML tools store data persistently on the user's device, without exposing that data to the server.

I use this for larger pieces of state that don't fit comfortably in a URL, or for secrets like API keys which I really don't want anywhere near my server—even static hosts might have server logs that are outside of my influence.

word-counter is a simple tool I built to help me write to specific word counts, for things like conference abstract submissions. It uses localStorage to save as you type, so your work isn't lost if you accidentally close the tab.

render-markdown uses the same trick—I sometimes use this one to craft blog posts and I don't want to lose them.

haiku is one of a number of LLM demos I've built that request an API key from the user (via the prompt() function) and then store that in localStorage. This one uses Claude Haiku to write haikus about what it can see through the user's webcam.
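That API key pattern, roughly (the storage key name is invented for the example):

let apiKey = localStorage.getItem("anthropic-api-key");
if (!apiKey) {
  // first visit: ask once, then persist on this device only
  apiKey = prompt("Enter your Anthropic API key");
  localStorage.setItem("anthropic-api-key", apiKey);
}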

CORS stands for Cross-origin resource sharing. It's a relatively low-level detail which controls if JavaScript running on one site is able to fetch data from APIs hosted on other domains.

APIs that provide open CORS headers are a goldmine for HTML tools. It's worth building a collection of these over time.

Here are some I like:

* iNaturalist for fetching sightings of animals, including URLs to photos

* GitHub because anything in a public repository in GitHub has a CORS-enabled anonymous API for fetching that content from the raw.githubusercontent.com domain, which is behind a caching CDN so you don't need to worry too much about rate limits or feel guilty about adding load to their infrastructure (see the sketch just after this list)

* Bluesky for all sorts of operations

* Mastodon has generous CORS policies too, as used by applications like phanpy.social
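As an example of that GitHub pattern (the file path here is a placeholder):

const url = "https://raw.githubusercontent.com/simonw/tools/main/README.md";
const response = await fetch(url); // CORS-enabled, served via a caching CDN
const text = await response.text();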

GitHub Gists are a personal favorite here, because they let you build apps that can persist state to a permanent Gist through making a cross-origin API call.

species-observation-map uses iNaturalist to show a map of recent sightings of a particular species.

zip-wheel-explorer fetches a .whl file for a Python package from PyPI, unzips it (in browser memory) and lets you navigate the files.

github-issue-to-markdown fetches issue details and comments from the GitHub API (including expanding any permanent code links) and turns them into copyable Markdown.

terminal-to-html can optionally save the user's converted terminal session to a Gist.

bluesky-quote-finder displays quotes of a specified Bluesky post, which can then be sorted by likes or by time.

All three of OpenAI, Anthropic and Gemini offer JSON APIs that can be accessed via CORS directly from HTML tools.

Unfortunately you still need an API key, and if you bake that key into your visible HTML anyone can steal it and use it to rack up charges on your account.

I use the localStorage secrets pattern to store API keys for these services. This sucks from a user experience perspective—telling users to go and create an API key and paste it into a tool is a lot of friction—but it does work.
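Calling Claude from the browser looks roughly like this (model names drift over time, and note the explicit opt-in header Anthropic requires for direct browser access):

const response = await fetch("https://api.anthropic.com/v1/messages", {
  method: "POST",
  headers: {
    "x-api-key": localStorage.getItem("anthropic-api-key"),
    "anthropic-version": "2023-06-01",
    // opt-in header required for CORS calls from browsers
    "anthropic-dangerous-direct-browser-access": "true",
    "content-type": "application/json",
  },
  body: JSON.stringify({
    model: "claude-3-5-haiku-latest",
    max_tokens: 100,
    messages: [{ role: "user", content: "Write a haiku about webcams" }],
  }),
});
const data = await response.json();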

haiku uses the Claude API to write a haiku about an image from the user's webcam.

gemini-bbox demonstrates Gemini 2.5's ability to return complex shaped image masks for objects in images, see Image segmentation using Gemini 2.5.

You don't need to upload a file to a server in order to make use of the <input type="file"> element. JavaScript can access the content of that file directly, which opens up a wealth of opportunities for useful functionality.
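For example, reading a chosen file entirely client-side:

document.querySelector("input[type=file]").addEventListener("change", async (event) => {
  const file = event.target.files[0];
  const text = await file.text(); // or file.arrayBuffer() for binary formats
  console.log(file.name, text.length);
});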

ocr is the first tool I built for my collection, described in Running OCR against PDFs and images directly in your browser. It uses PDF.js and Tesseract.js to allow users to open a PDF in their browser which it then converts to an image-per-page and runs through OCR.

social-media-cropper lets you open (or paste in) an existing image and then crop it to common dimensions needed for different social media platforms—2:1 for Twitter and LinkedIn, 1.4:1 for Substack etc.

ffmpeg-crop lets you open and preview a video file in your browser, drag a crop box within it and then copy out the ffmpeg command needed to produce a cropped copy on your own machine.

An HTML tool can generate a file for download without needing help from a server.

The JavaScript library ecosystem has a huge range of packages for generating files in all kinds of useful formats.
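The standard trick for downloads is a Blob plus a temporary link, something like:

function download(filename, text) {
  const blob = new Blob([text], { type: "text/plain" });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = filename; // triggers a download instead of navigation
  link.click();
  URL.revokeObjectURL(link.href);
}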

Pyodide is a distribution of Python that's compiled to WebAssembly and designed to run directly in browsers. It's an engineering marvel and one of the most underrated corners of the Python world.

It also cleanly loads from a CDN, which means there's no reason not to use it in HTML tools!

Even better, the Pyodide project includes micropip—a mechanism that can load extra pure-Python packages from PyPI via CORS.
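A minimal load looks like this (pin whichever Pyodide version is current; the one below is illustrative):

<script src="https://cdn.jsdelivr.net/pyodide/v0.26.2/full/pyodide.js"></script>
<script type="module">
  const pyodide = await loadPyodide();
  await pyodide.loadPackage("micropip"); // micropip ships with Pyodide
  const micropip = pyodide.pyimport("micropip");
  await micropip.install("cowsay"); // pure-Python wheel fetched from PyPI via CORS
  pyodide.runPython(`
import cowsay
print(cowsay.get_output_string("cow", "hello from the browser"))
  `);
</script>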

pyodide-bar-chart demonstrates running Pyodide, Pandas and matplotlib to render a bar chart directly in the browser.

numpy-pyodide-lab is an experimental interactive tutorial for Numpy.

apsw-query demonstrates the APSW SQLite library running in a browser, using it to show EXPLAIN QUERY plans for SQLite queries.

Pyodide is possible thanks to WebAssembly. WebAssembly means that a vast collection of software originally written in other languages can now be loaded in HTML tools as well.

Squoosh.app was the first example I saw that convinced me of the power of this pattern—it makes several best-in-class image compression libraries available directly in the browser.

I’ve used WebAssembly for a few of my own tools:

The biggest advantage of having a single public collection of 100+ tools is that it's easy for my LLM assistants to recombine them in interesting ways.

Sometimes I'll copy and paste a previous tool into the context, but when I'm working with a coding agent I can reference them by name—or tell the agent to search for relevant examples before it starts work.

The source code of any working tool doubles as clear documentation of how something can be done, including patterns for using editing libraries. An LLM with one or two existing tools in their context is much more likely to produce working code.

And then, after it had found and read the source code for zip-wheel-explorer:

Build a new tool pypi-changelog.html which uses the PyPI API to get the wheel URLs of all available versions of a package, then it displays them in a list where each pair has a “Show changes” clickable in between them - clicking on that fetches the full contents of the wheels and displays a nicely rendered diff representing the difference between the two, as close to a standard diff format as you can get with JS libraries from CDNs, and when that is displayed there is a “Copy” button which copies that diff to the clipboard

See Running OCR against PDFs and images directly in your browser for another detailed example of remixing tools to create something new.

I like keeping (and publishing) records of everything I do with LLMs, to help me grow my skills at using them over time.

For HTML tools I built by chatting with an LLM platform directly I use the “share” feature for those platforms.

For Claude Code or Codex CLI or other coding agents I copy and paste the full transcript from the terminal into my terminal-to-html tool and share that using a Gist.

In either case I include links to those transcripts in the commit message when I save the finished tool to my repository. You can see those in my tools.simonwillison.net colophon.

I've had so much fun exploring the capabilities of LLMs in this way over the past year and a half, and building tools in this way has been invaluable in helping me understand both the potential for building tools with HTML and the capabilities of the LLMs that I'm building them with.

If you're interested in starting your own collection I highly recommend it! All you need to get started is a free GitHub repository with GitHub Pages enabled (Settings -> Pages -> Source -> Deploy from a branch -> main) and you can start copying in .html pages generated in whatever manner you like.

...

Read the original on simonwillison.net »

7 213 shares, 28 trendiness

I Tried Gleam for Advent of Code, and I Get the Hype

I do Advent of Code every year.

For the last seven years, including this one, I have managed to get all the stars. I do not say that to brag. I say it because it explains why I keep coming back.

It is one of the few tech traditions I never get bored of, even after doing it for a long time. I like the time pressure. I like the community vibe. I like that every December I can pick one language and go all in.

Advent of Code is usually 25 days. This year Eric decided to do 12 days instead.

So instead of 50 parts, it was 24.

That sounds like a relaxed year. It was not, but not in a bad way.

The easier days were harder than the easy days in past years, but they were also really engaging and fun to work through. The hard days were hard, especially the last three, but they were still the good kind of hard. They were problems I actually wanted to wrestle with.

It also changes the pacing in a funny way. In a normal year, by day 10 you have a pretty comfy toolbox. This year it felt like the puzzles were already demanding that toolbox while I was still building it.

That turned out to be a perfect setup for learning a new language.

Gleam is easy to like quickly.

The syntax is clean. The compiler is helpful, and the error messages are super duper good. Rust good.

Most importantly, the language strongly nudges you into a style that fits Advent of Code really well. Parse some text. Transform it a few times. Fold. Repeat.

One thing I did not expect was how good the editor experience would be. The LSP worked much better than I expected. It basically worked perfectly the whole time. I used the Gleam extension for IntelliJ and it was great.

I also just like FP.

FP is not always easier, but it is often easier. When it clicks, you stop writing instructions and you start describing the solution.

The first thing I fell in love with was echo.

It is basically a print statement that does not make you earn it. You can echo any value. You do not have to format anything. You do not have to build a string. You can just drop it into a pipeline and keep going.

This is the kind of thing I mean:
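A stand-in along those lines (not the original snippet):

import gleam/int
import gleam/list

pub fn main() {
  [1, 2, 3]
  |> list.map(fn(x) { x * 2 })
  |> echo // prints [2, 4, 6] and passes the list straight through
  |> list.fold(0, int.add)
  |> echo // prints 12
}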

You can quickly inspect values at multiple points without breaking the flow.

I did miss string interpolation, especially early on. echo made up for a lot of that.

It mostly hit when I needed to generate text, not when I needed to inspect values. The day where I generated an LP file for glpsol is the best example. It is not hard code, but it is a lot of string building. Without interpolation it turns into a bit of a mess of <>s.

This is a small excerpt from my LP generator:
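Reconstructed in spirit (the names here are illustrative, not the original code):

import gleam/int

fn constraint_line(name: String, expr: String, bound: Int) -> String {
  // every piece of text has to be glued together with <>
  "  " <> name <> ": " <> expr <> " >= " <> int.to_string(bound) <> "\n"
}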

It works. It is just the kind of code where you really feel missing interpolation.

Grids are where you normally either crash into out of bounds bugs, or you litter your code with bounds checks you do not care about.

In my day 4 solution I used a dict as a grid. The key ergonomic part is that dict.get gives you an option-like result, which makes neighbour checking safe by default.

This is the neighbour function from my solution:
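In sketch form (the real offsets and value types may differ):

import gleam/dict.{type Dict}
import gleam/list

fn neighbours(grid: Dict(#(Int, Int), String), x: Int, y: Int) -> List(String) {
  [#(x - 1, y), #(x + 1, y), #(x, y - 1), #(x, y + 1)]
  // dict.get returns a Result, so out-of-bounds lookups just drop out
  |> list.filter_map(fn(pos) { dict.get(grid, pos) })
}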

That last line is the whole point.

No bounds checks. No sentinel values. Out of bounds just disappears.

I expected to write parsers and helpers, and I did. What I did not expect was how often Gleam already had the exact list function I needed.

I read the input, chunked it into rows, transposed it, and suddenly the rest of the puzzle became obvious.

In a lot of languages you end up writing your own transpose yet again. In Gleam it is already there.

Another example is list.combination_pairs.

In day 8 I needed all pairs of 3D points. In an imperative language you would probably write nested loops and then question your off by one logic.

In Gleam it is a one liner:
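Something like this, with points being the parsed list of coordinates:

// every unordered pair, no nested loops, no off-by-one doubts
let pairs = list.combination_pairs(points)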

Sometimes FP is not about being clever. It is about having the right function name.

If I had to pick one feature that made me want to keep writing Gleam after AoC, it is fold_until.

Early exit without hacks is fantastic in puzzles.

In day 8 part 2 I kept merging sets until the first set in the list contained all boxes. When that happens, I stop.

The core shape looks like this:
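A simplified sketch of that shape (my real helpers and types differed):

import gleam/list
import gleam/set.{type Set}

fn merge_until_complete(sets: List(Set(Int)), all_boxes: Set(Int)) -> Set(Int) {
  list.fold_until(sets, set.new(), fn(acc, s) {
    let merged = set.union(acc, s)
    case set.size(merged) == set.size(all_boxes) {
      // stop walking the list the moment every box is covered
      True -> list.Stop(merged)
      False -> list.Continue(merged)
    }
  })
}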

It is small, explicit, and it reads like intent.

I also used fold_until in day 10 part 1 to find the smallest combination size that works.

Even though I enjoyed Gleam a lot, I did hit a few recurring friction points.

None of these are deal breakers. They are just the kind of things you notice when you do 24 parts in a row.

This one surprised me on day 1.

For AoC you read a file every day. In this repo I used simplifile everywhere because you need something. It is fine, I just did not expect basic file IO to be outside the standard library.

Day 2 part 2 pushed me into regex and I had to add gleam_regexp.

This is the style I used, building a regex from a substring:
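Roughly this style (the actual day 2 pattern was different):

import gleam/regexp

fn repeats_only(sub: String, line: String) -> Bool {
  // build the pattern from a substring; sub is assumed to be regex-safe here
  let assert Ok(re) = regexp.from_string("^(" <> sub <> ")+$")
  regexp.check(re, line)
}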

Again, totally fine. It just surprised me.

You can do [first, ..rest] and you can do [first, second].

But you cannot do [first, ..middle, last].

It is not the end of the world, but it would have made some parsing cleaner.

In Gleam a lot of comparisons are not booleans. You get an order value.

This is great for sorting. It is also very explicit. It can be a bit verbose when you just want a simple boolean check.

In day 5 I ended up writing patterns like this:
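In this spirit (simplified):

import gleam/int
import gleam/order

fn is_before(a: Int, b: Int) -> Bool {
  // int.compare returns an Order, not a Bool, so even "a < b" becomes a case
  case int.compare(a, b) {
    order.Lt -> True
    order.Eq | order.Gt -> False
  }
}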

I used bigi a few times this year.

On the Erlang VM, integers are arbitrary precision, so you usually do not care about overflow. That is one of the nicest things about the BEAM.

If you want your Gleam code to also target JavaScript, you do care. JavaScript has limits, and suddenly using bigi becomes necessary for some puzzles.

I wish that was just part of Int, with a single consistent story across targets.

Day 10 part 1 was my favorite part of the whole event.

The moment I saw the toggling behavior, it clicked as XOR. Represent the lights as a number. Represent each button as a bitmask. Find the smallest combination of bitmasks that XOR to the target.

This is the fold from my solution:
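In sketch form (combo being one candidate list of button masks):

import gleam/int
import gleam/list

// XOR all the masks in a candidate combination together; if the result
// equals the target light pattern, this combination of buttons works
fn apply_combo(combo: List(Int)) -> Int {
  list.fold(combo, 0, int.bitwise_exclusive_or)
}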

It felt clean, it felt fast, and it felt like the representation did most of the work.

I knew brute force was out. It was clearly a system of linear equations.

In previous years I would reach for Z3, but there are no Z3 bindings for Gleam. I tried to stay in Gleam, and I ended up generating an LP file and shelling out to glpsol using shellout.

It worked, and honestly the LP format is beautiful.

Here is the call:
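Approximately (file names are mine; this assumes shellout's labelled-argument API):

import shellout

fn solve_lp() -> Result(String, #(Int, String)) {
  // run GLPK's glpsol on the generated LP file and capture its output
  shellout.command(run: "glpsol", with: ["--lp", "day10.lp", "-o", "day10.sol"], in: ".", opt: [])
}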

It is a hack, but it is a pragmatic hack, and that is also part of Advent of Code.

Day 11 part 2 is where I was happy I was writing Gleam.

The important detail was that the memo key is not just the node. It is the node plus your state.

In my case the key was:

Once I got the memo threading right, it ran instantly.

The last day was the only puzzle I did not fully enjoy.

Not because it was bad. It just felt like it relied on assumptions about the input, and I am one of those people that does not love doing that.

I overthought it for a bit, then I learned it was more of a troll problem. The “do the areas of the pieces, when fully interlocked, fit on the board” heuristic was enough.

In my solution it is literally this:

Sometimes you build a beautiful mental model and then the right answer is a single inequality.

I am very happy I picked Gleam this year.

It has sharp edges, mostly around where the standard library draws the line and a few language constraints that show up in puzzle code. But it also has real strengths.

Pipelines feel good. Options and Results make unsafe problems feel safe. The list toolbox is better than I expected. fold_until is incredible. Once you stop trying to write loops and you let it be functional, the solutions start to feel clearer.

I cannot wait to try Gleam in a real project. I have been thinking about using it to write a webserver, and I am genuinely excited to give it a go.

And of course, I cannot wait for next year's Advent of Code.

If you want to look at the source for all 12 days, it is here:

...

Read the original on blog.tymscar.com »

8 208 shares, 10 trendiness

We are providing scene stills from “君たちはどう生きるか” (The Boy and the Heron)

...

Read the original on www.ghibli.jp »

9 194 shares, 60 trendiness

Should You Trust Your VPN Location?

In a large-scale analysis of 20 popular VPNs, IPinfo found that 17 of those VPNs exit traffic from different countries than they claim. Some claim 100+ countries, but many of them point to the same handful of physical data centers in the US or Europe. That means the majority of VPN providers we analyzed don't route your traffic via the countries they claim to, and they claim many more countries than they actually support. Analyzing over 150,000 exit IPs across 137 possible exit countries, and comparing what providers claim to what IPinfo measures, shows that:

* 17 in 20 providers had traffic exiting in a different country.
* 38 countries were “virtual-only” in our dataset (claimed by at least one provider, but never observed as the actual traffic exit country for any provider we tested).
* We were only able to verify all provider-announced locations for 3 providers out of the 20.
* Across ~150,000 VPN exit IPs tested, ProbeNet, our internet measurement platform, detected roughly 8,000 cases where widely-used IP datasets placed the server in the wrong country — sometimes thousands of kilometers off.

This report walks through what we saw across VPN and IP data providers, provides a closer look at two particularly interesting countries, explores why measurement-based IP data matters if you care where your traffic really goes, and shares how we ran the investigation.

Which VPNs Matched Reality (And Which Didn't)

Here is the overlap between the number of listed countries each VPN provider claims to offer versus the countries with real VPN traffic that we measured — lower percentages indicate providers whose claimed lists best match our data:

It's important to note that we used the most commonly and widely supported technologies in this research, to make comparison between providers as fair as possible while giving us significant data to analyze, so this will not be the full coverage for each provider. These are some of the most visible names in the market. They also tend to have very long country lists on their websites. Notably, three well-known providers had zero mismatches across all the countries we tested: Mullvad, IVPN, and Windscribe.

Country mismatches don't automatically mean some providers offer “bad VPNs,” but they do mean that if you're choosing a VPN because it claims “100+ countries,” you should know that a significant share of those flags may be labels, or virtual locations.

What “Virtual Locations” Really Mean

When a VPN lets you connect to, for example, “Bahamas” or “Somalia,” that doesn't always mean traffic routes through there. In many cases, it's somewhere entirely different, like Miami or London, but presented as if traffic is in the country you picked. This setup is known as a virtual location:

* The IP registry data also says “Country X” — because the provider self-declared it that way.
* But the network measurements (latency and routing) show the traffic actually exits in “Country Y” — often thousands of kilometers away.

The problem? Without active network measurement, most IP datasets will rely on what the IP's owner told the internet registry or published in WHOIS/geofeeds: a self-reported country tag. If that record is wrong or outdated, the mistake spreads everywhere. That's where IPinfo's ProbeNet comes in: by running live RTT tests from 1,200+ points of presence worldwide, we anchor each IP to its real-world location, not just its declared one.

Across the dataset, we found 97 countries where at least one VPN brand only ever appeared as virtual or unmeasurable in our data. In other words, for a noticeable slice of the world map, some “locations” in VPNs never show up as true exits in our measurements. We also found 38 countries where every mention behaved this way: at least one VPN claimed them, but none ever produced a stable, measurable exit in that country in our sample.

You can think of these 38 as the “unmeasurable” countries in this study — places that exist in server lists, config files, and IP geofeeds, but never once appeared as the actual exit country in our measurements. They're not randomly scattered — they cluster in specific parts of the map. By region, that includes:

This doesn't prove there is zero VPN infrastructure in those countries globally. It does show that, across the providers and locations we measured, the dominant pattern is to serve those locations from elsewhere. Here are two of the most interesting examples of how this looks at the IP level.

Case Studies: Two Countries That Only Exist on the Map

To make this concrete, let's look at two countries where every provider in our dataset turned out to be virtual: Bahamas and Somalia.

Bahamas: All-Inclusive, Hosted in the US

In our measurements, five providers offered locations labeled as “Bahamas”: NordVPN, ExpressVPN, Private Internet Access, FastVPN, and IPVanish. For all of them, measured traffic was in the United States, usually with sub-millisecond RTT to US probes.
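The physics behind that claim is simple: light in fiber covers roughly 200 km per millisecond, so a round trip of t ms bounds the server to about (t/2) × 200 km of the probe. A toy version of the check, with invented probe readings:

# toy RTT-based location check (probe RTTs in ms are made up)
probes = {"Miami, US": 0.4, "Nassau, BS": 18.0, "London, GB": 95.0}

def max_distance_km(rtt_ms: float) -> float:
    # one-way time multiplied by the speed of light in fiber (~200 km/ms)
    return rtt_ms / 2 * 200

closest = min(probes, key=probes.get)
print(closest, f"within ~{max_distance_km(probes[closest]):.0f} km")
# 0.4 ms caps the server at ~40 km from the Miami probe, so a
# "Bahamas" label is physically impossible for this endpoint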

Somalia: Mogadishu, via France and the UK

Somalia appears in our sample for only two providers: NordVPN and ProtonVPN. Both go out of their way to label Mogadishu explicitly in their naming (e.g. "SO, Mogadishu"), but the RTTs we measured are exactly what you'd expect for traffic in Western Europe and completely inconsistent with traffic in East Africa. The actual traffic exits in Nice and London, not Somalia.
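The same latency logic, run from many vantage points, is how a measured country gets assigned: whichever probe sees the lowest RTT is, physically, the closest one to the server. A minimal sketch with hypothetical probes and RTT values; ProbeNet's real pipeline combines many more signals than this:

```python
# Hypothetical RTTs to one "SO, Mogadishu" exit from three probes.
probe_rtts_ms = {
    ("FR", "Nice"): 0.4,       # sub-millisecond: effectively local to this probe
    ("GB", "London"): 17.0,
    ("KE", "Nairobi"): 160.0,  # far too high for traffic actually in East Africa
}

# The lowest-RTT probe anchors the measured country.
country, city = min(probe_rtts_ms, key=probe_rtts_ms.get)
print(f"measured country: {country} "
      f"(nearest probe: {city}, {probe_rtts_ms[(country, city)]} ms)")
```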

When Legacy IP Providers Agree With the Wrong VPN Locations

So far, we've talked about VPN claims versus our measurements. But other IP data providers don't run active RTT tests. They rely on self-declared IP data sources and often assume that if an IP is tagged as "Country X," it must actually be there. In these cases, legacy IP datasets typically "follow" the VPN provider's story: if the VPN markets the endpoint as Country X, the legacy dataset also places it in Country X.

To quantify that, we looked at 736 VPN exits where ProbeNet's measured country disagreed with one or more widely used legacy IP datasets. We then compared the country ProbeNet measured (backed by RTT and routing) with the country reported by those other datasets, and computed the distance between them. The gaps are large.

How Far Off Were the Other IP Datasets?
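Putting a kilometer figure on a country-level disagreement requires a distance between the measured location and the declared one. A minimal sketch using the standard haversine great-circle formula, with approximate coordinates for the UK and Mauritius (the exact coordinates and method behind the report's numbers may differ):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6,371 km

# Measured in the United Kingdom vs. declared in Mauritius:
# roughly 9,700 km, in line with the UK-Mauritius gap discussed below.
print(round(haversine_km(51.5, -0.1, -20.3, 57.6)))
```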

The median error between ProbeNet and the legacy datasets was roughly 3,100 km. On the ProbeNet side, we have strong latency evidence that our measured country is the right one: the median minimum RTT to a probe in the measured country was 0.27 ms, and about 90% of these locations had a sub-millisecond RTT from at least one probe. That's what you expect when traffic is genuinely in that country, not thousands of kilometers away.

An IP Example You Can Test Yourself

This behavior is much more tangible if you can see it on a single IP. Here's one VPN exit IP where ProbeNet places the server in the United Kingdom, backed by sub-millisecond RTT from local probes, while other widely used legacy IP datasets place the same IP in Mauritius, 9,691 kilometers away.

If you want to check this yourself, you can plug an IP into a public measurement tool like https://ping.sx/ and run pings or traceroutes from different regions. Tools like this provide a clear visual of where latency is lowest. ProbeNet uses the same basic idea, but at a different scale: we maintain a network of 1,200+ points of presence (PoPs) around the world, so we can usually get even closer to the real physical location than public tools with smaller networks.

If you'd like to play with more real IPs (not necessarily VPNs) where ProbeNet and IPinfo get the country right and other datasets don't, you can find a fuller set of examples on our IP geolocation accuracy page.
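Here's a rough do-it-yourself version of that check as a Python sketch. The target below is a documentation placeholder address, not the VPN exit discussed above (which isn't reproduced here); substitute whatever IP you want to test, and ideally run it from machines in several regions, which is exactly what ping.sx automates:

```python
import re
import subprocess

TARGET = "203.0.113.7"  # placeholder (TEST-NET-3); substitute the IP to test

# -c is the Linux/macOS count flag; Windows uses -n and a different output format.
result = subprocess.run(["ping", "-c", "5", TARGET],
                        capture_output=True, text=True)

# Both Linux and macOS summaries contain "min/avg/max...= <min>/..."
match = re.search(r"min/avg/max.*?= ([\d.]+)", result.stdout)
if match:
    min_rtt_ms = float(match.group(1))
    # Fiber propagation (~200,000 km/s) bounds how far away the target can be:
    # each millisecond of round-trip time allows at most ~100 km of distance.
    print(f"min RTT {min_rtt_ms} ms -> at most ~{min_rtt_ms * 100:.0f} km away")
else:
    print("no reply (expected for the placeholder address)")
```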
Why This Happens and How It Impacts Trust

It's worth separating technical reasons from trust issues. There are legitimate technical reasons to use virtual or hubbed infrastructure:

- Risk & regulation. Hosting in certain countries can expose both the provider and users to local surveillance or seizure.
- Infrastructure quality. Some regions simply don't have the same density of reliable data centers or high-capacity internet links, so running servers there is harder and riskier.
- Performance & cost. Serving "Bahamas" from Miami or "Cambodia" from Singapore can be cheaper, faster, and easier to maintain.

From this perspective, a virtual location can be a reasonable compromise: you get a regional IP and content unblocking without the downsides of hosting in a fragile environment.

Where It Becomes a Trust Problem

- Lack of disclosure. Marking something clearly as "Virtual Bahamas (US-based)" is transparent. Listing "Bahamas" alongside "Germany" without any hint that one is virtual and the other is physical blurs the line between marketing and reality.
- Scale of the mismatch. It's one thing to have a few virtual locations in hard-to-host places. It's another when dozens of countries exist only as labels across your entire footprint, or when more than half of your tested locations are actually somewhere else.
- Downstream reliance. Journalists, activists, and NGOs may pick locations based on safety assumptions. Fraud systems, compliance workflows, and geo-restricted services may treat "Somalia" vs. "France" as a meaningful difference. If both the VPN UI and the IP data say "Somalia" while the traffic is physically in France, everyone is making decisions on a false premise.

That last point leads directly into the IP data problem we are focused on solving.

So How Much Should You Trust Your VPN?

If you're a VPN user, here are some practical takeaways from this work:

- Treat "100+ countries" as a marketing number, not a guarantee. In our sample, 97 countries existed only as claims, not reality, across 17 providers.
- Check how your provider talks about locations. Do they clearly label "virtual" servers? Do they document where those servers are actually hosted? Or do they quietly mix virtual and physical locations in one long list?
- If you rely on IP data professionally, ask where it comes from. A static "99.x% accurate worldwide" claim doesn't tell you how an IP data provider handles fast-moving, high-stakes environments like VPN infrastructure.

Ultimately, this isn't an argument against VPNs, or even against virtual locations. It's an argument for honesty and evidence. If a VPN provider wants you to trust that map of flags, they should be willing, and able, to show that it matches the real network underneath.

Most legacy IP data providers rely on regional internet registry (RIR) allocation data and heuristics around routing and address blocks. They will often accept self-declared data like customer feedback, corrections, and geofeeds, without a clear way to verify it. IPinfo takes a different approach:

Proprietary ProbeNet with 1,200+ points of presence

We maintain an internet measurement platform of PoPs in locations around the world.

Active measurements

For each visible IP on the internet, including both IPv4 and IPv6 addresses, we measure RTT from multiple probes.

Evidence-based geolocation

We combine these measurements with IPinfo's other signals to assign a country (and a more granular location) that's grounded in how the internet actually behaves.

This measurement-first approach is unique in the IP data space. Once we realized how much inaccuracy came from self-declared data, we started investing heavily in research and in building ProbeNet to run active measurements at scale. Our goal is to make IP data as evidence-based as possible, verified by observing how the internet actually behaves.

Our Methodology for This Report

We approached this VPN investigation the way a skeptical but well-equipped user would: start from the VPNs' own claims, then test them.

Step 1: Collecting the Claims

For each of the 20 VPN providers, we pulled together three kinds of data:

- Marketing promises: the "servers in X countries" claims and country lists from their websites. When a country was clearly listed there, we treated it as a location they actively promote.
- Configurations and location lists: configurations for different protocols like OpenVPN or WireGuard, collected along with location information available in provider command-line tools, mobile applications, or APIs.
- Unique provider-location entries: we ended up with over 6,000,000 data points and a list of provider + location combinations we could actually try to connect to, with multiple IPs each.

Step 2: Observing Where the Traffic Really Goes

Next, we used IPinfo infrastructure and ProbeNet to dial into those locations and watch what actually happens. We connected to each VPN "location" and captured the exit IP addresses. For each exit IP address, we used IPinfo + ProbeNet's active measurements to determine a measured country, plus the round-trip time (RTT) from the nearest probe (often under 1 ms), which is a strong hint about physical proximity.

Now we had two views for each location:

- Expected/claimed country: what the VPN claims in its UI, configs, or website.
- Measured country: where IPinfo + ProbeNet actually see the exit IP.

For each location where a country was clearly specified, we asked a very simple question: does the expected country match the measured country? If yes, we counted it as a match. If not, it became a mismatch: a location where the app says one country, but the traffic exits somewhere else.

We deliberately used a very narrow definition of "mismatch." For a location to be counted, two things had to be true: the provider had to clearly claim a specific country (on their website, in their app, or in configs), and we had to have direct active measurements from ProbeNet for the exit IPs behind that location. We ignored any locations where the marketing was ambiguous, where we hadn't measured the exit directly, or where we only had weaker hints like hostname strings, registry data, or third-party IP databases. Those signals can be useful and true, but we wanted our numbers to be as hard to argue with as possible.

The result is that the mismatch rates we show here are conservative. With a looser methodology that also leaned on those additional hints, the numbers would almost certainly be higher, not lower.
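In pseudo-runnable form, that conservative counting rule looks roughly like the sketch below; the field names and records are illustrative, not the real pipeline schema:

```python
from dataclasses import dataclass

@dataclass
class Location:
    claimed_country: str | None   # None: marketing was ambiguous
    measured_country: str | None  # None: exit never directly measured

def classify(loc: Location) -> str:
    # Weaker hints (hostnames, registry tags, third-party databases)
    # are excluded entirely rather than counted either way.
    if loc.claimed_country is None or loc.measured_country is None:
        return "ignored"
    return "match" if loc.claimed_country == loc.measured_country else "mismatch"

for loc in [
    Location("BS", "US"),   # claimed Bahamas, measured United States
    Location("DE", "DE"),   # claimed and measured Germany
    Location("SO", None),   # claimed Somalia, no direct measurement
]:
    print(loc, "->", classify(loc))
```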

...

Read the original on ipinfo.io »

10 178 shares, 14 trendiness

YouTube’s CEO limits his kids’ social media use — other tech bosses do the same

YouTube’s CEO Neal Mohan is the lat­est in a line of tech bosses who have ad­mit­ted to lim­it­ing their chil­dren’s so­cial me­dia use, as the harms of be­ing on­line for young peo­ple have be­come more ev­i­dent.

Mohan, who took over YouTube's leadership in 2023, was just named Time's 2025 CEO of the Year. He said in an interview with the magazine that his children's use of media platforms is controlled and restricted.

"We do limit their time on YouTube and other platforms and other forms of media. On weekdays we tend to be more strict, on weekends we tend to be less so. We're not perfect by any stretch," Mohan said in one TikTok video posted by Time Magazine on Thursday.

He stressed that "everything in moderation" is what works best for him and his wife, and that this extends to other online services and platforms. Mohan has three children: two sons and one daughter.

Experts have continued to sound the alarm on how excessive smartphone and social media use has harmed children and teenagers. Jonathan Haidt, NYU professor and author of "The Anxious Generation," has advocated for children to have no smartphones before the age of 14 and no access to social media before the age of 16.

"Let them have a flip phone, but remember, a smartphone isn't really a phone. They could make phone calls on it, but it's a multi-purpose device by which the world can get to your children," Haidt said in an interview with CNBC's Tania Bryer earlier this year.

This week, Australia be­came the first coun­try to for­mally bar users un­der the age of 16 from ac­cess­ing ma­jor so­cial me­dia plat­forms. Ahead of the leg­is­la­tion’s pas­sage last year, a YouGov sur­vey found that 77% of Australians backed the un­der-16 so­cial me­dia ban. Still, the roll­out has faced some re­sis­tance since be­com­ing law.

Mohan said in a more extensive interview with Time on Wednesday that he feels a "paramount responsibility" to young people and to giving parents greater control over how their kids use the platform. YouTube Kids was launched in 2015 as a child-friendly version of the Google-owned platform.

He said his goal is to make it "easy for all parents" to manage their children's YouTube use in a way that is "suitable to their household," especially as every parent has a different approach.

...

Read the original on www.cnbc.com »
