10 interesting stories served every morning and every evening.
Summary: A major brick-and-mortar store sold an Apple Gift Card that Apple seemingly took offence to, and Apple locked my entire Apple ID, effectively bricking my devices, my iCloud account, my Apple Developer ID, and everything associated with them, and I have no recourse. Can you help? Email paris AT paris.id.au (and read on for the details). ❤️
I am writing this as a desperate measure. After nearly 30 years as a loyal customer, authoring technical books on Apple’s own programming languages (Objective-C and Swift), and spending tens upon tens upon tens of thousands of dollars on devices, apps, conferences, and services, I have been locked out of my personal and professional digital life with no explanation and no recourse.
My Apple ID, which I have held for around 25 years (it was originally a username, before they had to be email addresses; it’s from the iTools era), has been permanently disabled. This isn’t just an email address; it is my core digital identity. It holds terabytes of family photos, my entire message history, and is the key to syncing my work across the ecosystem.
The Trigger: The only recent activity on my account was an attempt to redeem a $500 Apple Gift Card to pay for my 6TB iCloud+ storage plan. The code failed. The vendor suggested that the card number was likely compromised and agreed to reissue it. Shortly after, my account was locked. An Apple Support representative suggested that this was the cause of the issue, indicating that something was likely untoward about this card. The card was purchased from a major brick-and-mortar retailer (Australians, think Woolworths scale; Americans, think Walmart scale), so if I cannot rely on the provenance of that, and have no recourse, what am I meant to do? We have even sent Apple the receipt, indicating the card’s serial number and purchase location.
The Consequence: My account is flagged as “closed in accordance with the Apple Media Services Terms and Conditions”.

The Damage: I effectively have over $30,000 worth of previously-active “bricked” hardware. My iPhone, iPad, Watch, and Macs cannot sync, update, or function properly. I have lost access to thousands of dollars in purchased software and media. Apple representatives claim that only the “Media and Services” side of my account is blocked, but now my devices have signed me out of iMessage (and I can’t sign back in), and I can’t even sign out of the blocked iCloud account because… it’s barred from the sign-out API, as far as I can tell.

I can’t even log in to the “Secure File Transfer” system Apple uses to exchange information, because it relies on an Apple ID. Most of the ways Apple has suggested seeking help from them involve signing in to an Apple service to upload something, or communicate with them. This doesn’t work as the account is locked.
I can’t even download my iCloud Photos, as:

- There are repeated auth errors on my account, so I can’t make Photos work;
- I don’t have a 6TB device to sync them to, even if I could.
No Information: Support staff refused to tell me why the account was banned or provide specific details on the decision.

No Escalation: When I begged for an escalation to Executive Customer Relations (ECR), noting that I would lose the ability to do my job and that my devices were useless, I was told that “an additional escalation won’t lead to a different outcome”. Many of the reps I’ve spoken to have suggested strange things; one of the strangest was telling me that I could physically go to Apple’s Australian HQ at Level 3, 20 Martin Place, Sydney, and plead my case. They even put me on hold for 5 minutes while they looked up the address.
Most insultingly, the official advice from the Senior Advisor was to “create a new Apple account… and update the payment information”.
The Legal Catch: Apple’s Terms and Conditions rely on “Termination of Access.” By closing my account, they have revoked my license to use their services.

The Technical Trap: If I follow their advice and create a new account on my current devices (which are likely hardware-flagged due to the gift card error), the new account will likely be linked to the banned one and disabled for circumventing security measures.

The Developer Risk: As a professional Apple Developer, attempting to “dodge” a ban by creating a new ID could lead to my Developer Program membership being permanently blacklisted, amongst other things.
I am not a casual user. I have literally written the book on Apple development (taking over the Learning Cocoa with Objective-C series, which Apple themselves used to write, for O’Reilly Media, and then 20+ books following that). I help run the longest-running Apple developer event not run by Apple themselves, /dev/world. I have effectively been an evangelist for this company’s technology for my entire professional life. We had an app on the App Store on Day 1, in every sense of the word.
I am asking for a human at Apple to review this case. I suspect an automated fraud flag regarding the bad gift card triggered a nuclear response that frontline support cannot override. I have escalated this through my many friends in WWDR and SRE at Apple, with no success.
I am desperate to resolve this and restore my digital life. If you can help, please email paris AT paris.id.au
...
Read the original on hey.paris »
One of the things that most excited me about Anthropic’s new Skills mechanism back in October is how easy it looked for other platforms to implement. A skill is just a folder with a Markdown file and some optional extra resources and scripts, so any LLM tool with the ability to navigate and read from a filesystem should be capable of using them. It turns out OpenAI are doing exactly that, with skills support quietly showing up in both their Codex CLI tool and now also in ChatGPT itself.
I learned about this from Elias Judin this morning. It turns out the Code Interpreter feature of ChatGPT now has a new /home/oai/skills folder which you can access simply by prompting:
I tried that myself and got back this zip file. Here’s a UI for exploring its content (more about that tool).
So far they cover spreadsheets, docx and PDFs. Interestingly their chosen approach for PDFs and documents is to convert them to rendered per-page PNGs and then pass those through their vision-enabled GPT models, presumably to maintain information from layout and graphics that would be lost if they just ran text extraction.
Elias shared copies in a GitHub repo. They look very similar to Anthropic’s implementation of the same kind of idea, currently published in their anthropics/skills repository.
I tried it out by prompting:
Create a PDF with a summary of the rimu tree situation right now and what it means for kakapo breeding season
Sure enough, GPT-5.2 Thinking started with:
It took just over eleven minutes to produce this PDF, which was long enough that I had Claude Code for web build me a custom PDF viewing tool while I waited.
The reason it took so long is that it was fastidious about looking at and tweaking its own work. I appreciated that at one point it tried rendering the PDF and noticed that the macrons in kākāpō were not supported by the chosen font, so it switched to something else:
Meanwhile, two weeks ago OpenAI’s open source Codex CLI tool landed a PR titled feat: experimental support for skills.md. The most recent docs for that are in docs/skills.md.
The documentation suggests that any folder in ~/.codex/skills will be treated as a skill.
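The shape of a skill is minimal. As an illustration (hypothetical content; the exact frontmatter fields Codex reads may differ from Anthropic's convention), a `~/.codex/skills/datasette-plugin/SKILL.md` could look something like this:

```markdown
---
name: datasette-plugin
description: How to write Datasette plugins using Python and pluggy
---

# Writing Datasette plugins

Plugins register hooks with pluggy. Keep the plugin in a single module,
declare an entry point in pyproject.toml, and test with pytest.
```

The frontmatter gives the model a cheap way to decide whether the skill is relevant before reading the full document.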
I dug around and found the code that generates the prompt that drives the skill system in codex-rs/core/src/skills/render.rs—here’s a Gist with a more readable version of that prompt.
I used Claude Opus 4.5’s skill authoring skill to create this skill for creating Datasette plugins, then installed it into my Codex CLI skills folder like this:
git clone https://github.com/datasette/skill \
~/.codex/skills/datasette-plugin
You have to run Codex with the --enable skills option. I ran this:
cd /tmp
mkdir datasette-cowsay
cd datasette-cowsay
codex --enable skills -m gpt-5.2
- datasette-plugins — Writing Datasette plugins using Python + pluggy (file: /Users/simon/.codex/skills/datasette-plugin/SKILL.md)
- Discovery — How to find/identify available skills (no SKILL.md path provided in the list)
Write a Datasette plugin in this folder adding a /-/cowsay?text=hello page that displays a pre with cowsay from PyPI saying that text
It worked perfectly! Here’s the plugin code it wrote and here’s a copy of the full Codex CLI transcript, generated with my terminal-to-html tool.
You can try that out yourself if you have uvx installed like this:
uvx --with https://github.com/simonw/datasette-cowsay/archive/refs/heads/main.zip \
datasette
http://127.0.0.1:8001/-/cowsay?text=This+is+pretty+fun
When I first wrote about skills in October I said Claude Skills are awesome, maybe a bigger deal than MCP. The fact that it’s just turned December and OpenAI have already leaned into them in a big way reinforces to me that I called that one correctly.
Skills are based on a very light specification, if you could even call it that, but I still think it would be good for these to be formally documented somewhere. This could be a good initiative for the new Agentic AI Foundation (previously) to take on.
...
Read the original on simonwillison.net »
So there I was, pedaling my bicycle as fast as I could down a long, straight stretch of road, feeling great. I’d just discovered the pleasures of riding a road bike, and I loved every minute that I could get away. Always a data geek, I tracked my mileage, average speed, heart rate, etc. It was a beautiful Indian summer Sunday afternoon in September. I was in my late 30s, still a baby. Out of nowhere, my chain came off right in the middle of the sprint I was timing. In true masculine fashion, I threw a fit, cursing and hitting the brakes as hard as I could. At this point, I found out that experienced riders don’t do that because I flew right over the handlebars, landing on the pavement amid speeding cars. I momentarily lost consciousness, and when I regained my senses, I knew I’d screwed up badly. The pain in my shoulder was nauseating. I couldn’t move my arm, and I had to just roll off the road onto the shoulder. I just lay there, hurting, unable to think clearly. Within seconds, it seemed, a man materialized beside me.
He was exceptionally calm. He didn’t ask me if I was OK, since I clearly wasn’t. It was obvious that he knew what he was doing. He made certain I could breathe, paused long enough to dial 911, and then started pulling stuff out of a medical bag (WTF?) to clean the extensive road rash I had. In a minute, he asked for my home phone number so he could call my wife to let her know I was going to be riding in an ambulance to the hospital. He told her he was an emergency room doctor who just happened to be right behind me when I crashed. He explained that he would stay with me until the medics arrived and that he would call ahead to make sure one of the doctors on duty would “take good care of me.”
When he hung up, he asked me if I’d heard the conversation. I told him that I had and that I couldn’t believe how lucky I was under the circumstances. He agreed. To keep my mind off the pain, he just kept chatting, telling me that because I was arriving by ambulance, I’d be treated immediately. He told me that I’d be getting the “good drugs” to take care of the pain. That sounded awesome.
I don’t remember telling him goodbye. I certainly didn’t ask him his name or find out anything about him. He briefed the EMTs when they arrived and stood there until the ambulance doors closed. The ER was indeed ready for me when the ambulance got there. They treated me like a VIP. I got some Dilaudid for the pain, and it was indeed the good stuff. They covered the road rash with Tegaderm and took x-rays, which revealed that I’d torn my collarbone away from my shoulder blade. That was going to require a couple of surgeries and lots of physical therapy. I had a concussion and was glad that I had a helmet on.
All of this happened almost 25 years ago. I’ve had plenty of other bike wrecks, but that remains the worst one. My daughter is a nurse, and she’s like a magnet for car crashes, having stopped multiple times to render aid. She doesn’t do it with a smile on her face, though; emergency medicine isn’t her gig, and if anyone asks her if she’s a doctor, her stock answer is “I’m a YMCA member.”
The guy who helped me that day was an absolute angel. I have no idea what I would have done without him. I didn’t even have a cell phone at the time. But he was there at a time when I couldn’t have needed him any more badly. He helped me and then got in his car and completed his trip. I think of that day often, especially when the American medical system makes me mad, which happens regularly these days.
I’ve enjoyed the kindness of a lot of strangers over the years, particularly during the long hike my wife and I did for our honeymoon (2,186 miles) when we hitchhiked to a town in NJ in the rain and got a ride from the first car to pass. Another time, in Connecticut, a man gave us a $100 bill and told us to have a nice dinner at the restaurant atop Mt. Greylock, the highest mountain in Massachusetts. In Virginia, a moth flew into my wife’s ear, and I mean all the way into her ear until it was bumping into her eardrum. We hiked several miles to the road and weren’t there for a minute before a man stopped and took us to urgent care, 30 miles away.
When you get down in the dumps, I hope you have some memories like that to look back on, to restore your faith in humanity. There are a lot of really good people in the world.
...
Read the original on louplummer.lol »
Google has removed dozens of new Sci-Hub domain names from its search results in the United States. Unlike typical DMCA takedowns, the removals were triggered by a dated court order that was not enforced for several years. This appears to be one of the first times Google has deindexed an entire pirate site in the U.S. based on a ‘site blocking’ style injunction.
In 2017, American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, won a lawsuit against Sci-Hub and its operator, Alexandra Elbakyan.
The ‘Pirate Bay of Science’ had failed to appear at a Virginia federal court, resulting in an easy win for the publisher and a $4.8 million default judgment award for damages.
More important, perhaps, was the broad permanent injunction that the Virginia federal court signed off on in 2017. This order effectively gave ACS free rein to take down existing and newly registered Sci-Hub domain names.
The injunction also required all parties “in active concert or participation” with Sci-Hub to “cease facilitating access” to these domain names, including search engines, hosting providers, ISPs, and domain name registrars, the order clarified.
On paper, this injunction enabled ACS to request American ISPs and search engines to ‘block’ existing and future Sci-Hub domains. However, there was no sign that the publisher was doing so. Aside from a few suspended domains, Sci-Hub remained widely accessible.
Whether ACS did not feel the need to enforce the order against search engines and other intermediaries or if these companies actively objected to the requested actions was unknown. And as time passed, the injunction became a distant memory, at least for a few years.
Earlier this week we spotted a unique request in the Lumen Database, where the 2018 injunction was cited. The notice in question asks Google to deindex 34 (sub)domains linked to Sci-Hub.
None of these domains were referenced in the 2018 injunction but are indeed linked to Sci-Hub. Many of the partially redacted domains appear to be domain variations of the scihubtw.tw mirror network, such as edu.scihubtw.tw and freeus.scihubtw.tw.
It’s surprising to see this type of enforcement seven years after the injunction was issued, but the request is legitimate. Google is certainly taking it seriously and has deindexed these domains from its search results in America. In other countries, the same domains remain accessible.
The December 2 notice was sent by UK law firm Wiggin LLP, which sent a similar request in September this year, targeting a few dozen other Sci-Hub domains. In total, we spotted seven notices, with the earliest dating back to 2022.
The results of these removals are also clearly visible in Google search. Those who search for Sci-Hub in the U.S. will see the following notice at the bottom of the results.
It’s not clear why it took five years before ACS urged Google to take action in response to the injunction. However, these removals are similar to Google’s removal of pirate site domains in other countries in response to ISP-blocking orders. Voluntary cooperation by Google was uncovered shortly before ACS first notified the search engine.
Google’s voluntary cooperation with ISP blocking orders in Australia, the Netherlands, France, the UK, and elsewhere also brings up an important question. Is Google cooperating with the permanent injunction in the U.S. because it feels legally compelled to do so, or is that a voluntary gesture too?
The 2018 injunction requires all parties “in active concert or participation” with Sci-Hub to take action. While search engines are mentioned as an example, Google and other tech companies have previously argued that neutral third-party services are not necessarily “in active concert or participation”.
It is likely that Google maintains this stance, opting to voluntarily comply with orders targeting other third parties. That would mirror its response to site-blocking orders elsewhere.
We contacted Google hoping to hear answers to these questions, but the company did not respond to our request for comment.
...
Read the original on torrentfreak.com »
In any normal spreadsheet, when you change values that are the input to some formulas, the outputs are automatically updated. Could it also work the other way? What if you could also change the output, and have the inputs be updated to match the formula?

For the past few months I’ve been really curious about this idea. But there were so many questions:

- Would it even be possible at all?
- Could it work with very complex formulas? With exponents? With advanced math functions like log(), abs(), etc?
- How would the UX work? In a normal spreadsheet, when you click on a cell that has a formula, you get to change the formula’s expression. I would need a way to let the user change either the formula’s expression or the cell’s numeric value.
- What should happen if there are multiple possible solutions? Like in the example above, if you set A3 to 100, should the result be 50/50, 20/80, -10000/10100? When there is an infinite number of possible solutions, how to pick one?
- Could it work with chained formulas? Could I build a long chain of formulas, update the final value and find the matching inputs all the way backwards?

Ok, now let’s just skip to the good part! Today I’m happy to introduce: bidicalc.

Variables
A simple number entered in a cell is a variable: 1.0. It may be changed by the solver.

Constant

A number prefixed by a hash # is a constant. It will not be changed by the solver.

Text

Cells can be in text mode. To input text, wrap in double quotes: “Distance (km)”.

Formula
Formulas can be entered in a cell (the traditional = prefix is optional). The result of formulas will be automatically updated when an input they depend on changes. This is the usual forward update.

The magic of bidicalc is that once a formula has been computed, you can change the result. Bidicalc will walk “upstream” to change variable cells so that the formula’s result matches the change you made. This is the backward update. To change a cell formula’s expression instead of its result, click on the F icon.

Supported functions include:

- pow(a, b): exponentiation, a raised to the power of b
- exp(x): the exponential of x

The solver will try its best to find a solution. However it can fail in different ways:

The solution is incorrect. This is a bug and should not happen: please report it on GitHub, thank you!

The solver reports “no solution”, but there is one. This could be a bug in the solver, or you have found a particularly difficult root finding problem that has solutions that are very difficult to find using floating point arithmetic. Please report it on GitHub so I can use it to improve the solver 😃

The solution is technically correct but unexpected.
This can happen for a large class of problems, typically when there are a lot of free variables (the problem is heavily underdetermined) and the solution manifold is weird. For example, try to solve a*b*c = 1 to see this in action. To combat this, you can:

- Set some variables to constants using the hash syntax, i.e.: #50.
- Wait for me to implement more features like domain restrictions of variables.
- Suggest improvements to the open-source solver on GitHub.

Keep in mind this is an experiment I made for fun because I like math and spreadsheets. If you need to do root finding to compute the load tolerance of a one million ton suspended bridge please don’t use bidicalc 😄

How does it work? Even a normal spreadsheet is a fairly complex beast. But the novel thing about bidicalc is the backwards solver. Mathematically, updating a spreadsheet “backward” is a (potentially underdetermined) root finding problem, because we are trying to find a vector of unknowns x such that F(x) = G, where F is the function computed by the cells’ formulas, and G is the objective value entered in the cell. Note that F is not necessarily a single formula, but the result of composing an upstream graph of cells into a single function.

The actual root-finding solver is a custom algorithm that I made. It is a general-purpose algorithm that will find one root of any continuous-almost-everywhere function for which a complete syntactic expression is known. It uses a mix of continuous constraint propagation on interval union arithmetic, directional Newton’s method and dichotomic search. It is of course limited by floating point precision and available computation time.

Bidicalc is written in TypeScript and entirely open-source under the AGPL licence. This means that you can freely reuse, modify, and share bidicalc as long as you make the complete source code of your modified version available under the same licence.
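The backward update being root finding can be illustrated with the simplest of the techniques mentioned, dichotomic (bisection) search. This is a minimal hand-written sketch, not bidicalc's actual solver:

```javascript
// Minimal dichotomic (bisection) root finder: solve F(x) = G for x in
// [lo, hi], assuming F is continuous on the interval and F(lo) - G and
// F(hi) - G have opposite signs. Sketch only, not bidicalc's solver.
function solveBackward(F, G, lo, hi, iterations = 100) {
  let flo = F(lo) - G;
  for (let i = 0; i < iterations; i++) {
    const mid = (lo + hi) / 2;
    const fmid = F(mid) - G;
    if (fmid === 0) return mid;
    if ((flo < 0) !== (fmid < 0)) {
      hi = mid; // sign change between lo and mid: root lies in [lo, mid]
    } else {
      lo = mid; // otherwise the root lies in [mid, hi]
      flo = fmid;
    }
  }
  return (lo + hi) / 2;
}

// Hypothetical "cell" A3 = A1 * A1 + 1; set the output to 50, recover A1:
const a1 = solveBackward((x) => x * x + 1, 50, 0, 100);
console.log(a1); // ≈ 7, since 7 * 7 + 1 = 50
```

Bisection alone only handles one unknown with a known bracketing interval, which is exactly why the real solver combines it with constraint propagation and Newton steps for the underdetermined, multi-variable case.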
If you are interested in buying bidicalc under a different licence please get in touch.

I haven’t taken the time to write a full deep-dive mathematical explanation of how it works, but if you are interested in that please let me know. I might find some time to do it if there is interest from fellow math nerds.

If I kept improving bidicalc until it was perfect I would have never released it. So currently it is imperfect and could be improved in a number of ways:

- Domain restriction for variables. Currently the solver may assign any value in the interval (-∞, +∞). I’d like to add special syntax so that variable cells can be restricted by the user to a specific interval. This would allow guiding the solver and saying that you only want this cell to be positive, or to be between 1 and 100, for example.
- Solver improvements. The algorithm works well enough for simple problems so I’m happy to publish in this current state, but it could always be improved. There are a million ways to improve it in the future so that it finds better solutions, particularly for highly underdetermined cases.
- float64 gradients support. Due to a pretty obscure technical limitation of tensorflowjs (which I use to compute gradients), the backward solver is partially limited to single precision, even though the forward solver uses double precision via native JS numbers.
- UX improvements. I am not very good at front-end dev 😄. I have learned vuejs to be able to make the UX for bidicalc but I’m not great at it. A spreadsheet interface is actually a massive state machine of complex and subtle behavior; it’s a very interesting project and tricky to get right. As you can see, I’ve decided to skip the usual spreadsheet design principle that cells have two selection states: soft selected, which enables dragging, selection, etc., and hard selected, which enables changing the content of the cell. bidicalc is simply a CSS grid of elements.
- Move cell computation off the main thread. The solver is single threaded and happens in the UI thread. It should be moved to a web worker to avoid locking the UI.

My name is Victor Poughon, I enjoy math and open-source software. If you want to see me do more stuff like this consider sponsoring me on GitHub or Buying me a coffee.
...
Read the original on victorpoughon.github.io »
I’ve started using the term HTML tools to refer to HTML applications that I’ve been building which combine HTML, JavaScript, and CSS in a single file and use them to provide useful functionality. I have built over 150 of these in the past two years, almost all of them written by LLMs. This article presents a collection of useful patterns I’ve discovered along the way.
First, some examples to show the kind of thing I’m talking about:
pypi-changelog lets you generate (and copy to clipboard) diffs between different PyPI package releases.
bluesky-thread provides a nested view of a discussion thread on Bluesky.
These are some of my recent favorites. I have dozens more like this that I use on a regular basis.
You can explore my collection on tools.simonwillison.net—the by month view is useful for browsing the entire collection.
If you want to see the code and prompts, almost all of the examples in this post include a link in their footer to “view source” on GitHub. The GitHub commits usually contain either the prompt itself or a link to the transcript used to create the tool.
These are the characteristics I have found to be most productive in building tools of this nature:
A single file: inline JavaScript and CSS in a single HTML file means the least hassle in hosting or distributing them, and crucially means you can copy and paste them out of an LLM response.
Avoid React, or anything with a build step. The problem with React is that JSX requires a build step, which makes everything massively less convenient. I prompt “no react” and skip that whole rabbit hole entirely.
Load dependencies from a CDN. The fewer dependencies the better, but if there’s a well known library that helps solve a problem I’m happy to load it from CDNjs or jsdelivr or similar.
Keep them small. A few hundred lines means the maintainability of the code doesn’t matter too much: any good LLM can read them and understand what they’re doing, and rewriting them from scratch with help from an LLM takes just a few minutes.
The end result is a few hundred lines of code that can be cleanly copied and pasted into a GitHub repository.
The easiest way to build one of these tools is to start in ChatGPT or Claude or Gemini. All three have features where they can write a simple HTML+JavaScript application and show it to you directly.
Claude calls this “Artifacts”, ChatGPT and Gemini both call it “Canvas”. Claude has the feature enabled by default, ChatGPT and Gemini may require you to toggle it on in their “tools” menus.
Try this prompt in Gemini or ChatGPT:
Build a canvas that lets me paste in JSON and converts it to YAML. No React.
Or this prompt in Claude:
Build an artifact that lets me paste in JSON and converts it to YAML. No React.
I always add “No React” to these prompts, because otherwise they tend to build with React, resulting in a file that is harder to copy and paste out of the LLM and use elsewhere. I find that attempts which use React take longer to display (since they need to run a build step) and are more likely to contain crashing bugs for some reason, especially in ChatGPT.
All three tools have “share” links that provide a URL to the finished application. Examples:
Coding agents such as Claude Code and Codex CLI have the advantage that they can test the code themselves while they work on it using tools like Playwright. I often upgrade to one of those when I’m working on something more complicated, like my Bluesky thread viewer tool shown above.
I also frequently use asynchronous coding agents like Claude Code for web to make changes to existing tools. I shared a video about that in Building a tool to copy-paste share terminal sessions using Claude Code for web.
Claude Code for web and Codex Cloud run directly against my simonw/tools repo, which means they can publish or upgrade tools via Pull Requests (here are dozens of examples) without me needing to copy and paste anything myself.
Any time I use an additional JavaScript library as part of my tool I like to load it from a CDN.
The three major LLM platforms support specific CDNs as part of their Artifacts or Canvas features, so often if you tell them “Use PDF.js” or similar they’ll be able to compose a URL to a CDN that’s on their allow-list.
Sometimes you’ll need to go and look up the URL on cdnjs or jsDelivr and paste it into the chat.
CDNs like these have been around for long enough that I’ve grown to trust them, especially for URLs that include the package version.
The alternative to CDNs is to use npm and have a build step for your projects. I find this reduces my productivity at hacking on individual tools and makes it harder to self-host them.
I don’t like leaving my HTML tools hosted by the LLM platforms themselves for a couple of reasons. First, LLM platforms tend to run the tools inside a tight sandbox with a lot of restrictions. They’re often unable to load data or images from external URLs, and sometimes even features like linking out to other sites are disabled.
The end-user experience often isn’t great either. They show warning messages to new users, often take additional time to load and delight in showing promotions for the platform that was used to create the tool.
They’re also not as reliable as other forms of static hosting. If ChatGPT or Claude are having an outage I’d like to still be able to access the tools I’ve created in the past.
Being able to easily self-host is the main reason I like insisting on “no React” and using CDNs for dependencies—the absence of a build step makes hosting tools elsewhere a simple case of copying and pasting them out to some other provider.
My preferred provider here is GitHub Pages because I can paste a block of HTML into a file on github.com and have it hosted on a permanent URL a few seconds later. Most of my tools end up in my simonw/tools repository which is configured to serve static files at tools.simonwillison.net.
One of the most useful input/output mechanisms for HTML tools comes in the form of copy and paste.
I frequently build tools that accept pasted content, transform it in some way and let the user copy it back to their clipboard to paste somewhere else.
Copy and paste on mobile phones is fiddly, so I frequently include “Copy to clipboard” buttons that populate the clipboard with a single touch.
Most operating system clipboards can carry multiple formats of the same copied data. That’s why you can paste content from a word processor in a way that preserves formatting, but if you paste the same thing into a text editor you’ll get the content with formatting stripped.
These rich copy operations are available in JavaScript paste events as well, which opens up all sorts of opportunities for HTML tools.
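As a rough sketch of how a tool can take advantage of those richer formats (hand-written illustration, not code from any of the tools below), a paste handler can inspect event.clipboardData and prefer HTML over plain text:

```javascript
// Pick the richest available clipboard format from a {mimeType: content}
// map, preferring HTML over plain text. Pure helper, easy to test.
function pickRichest(formats) {
  for (const type of ["text/html", "text/plain"]) {
    if (type in formats) return { type, content: formats[type] };
  }
  return null;
}

// Browser-only wiring, guarded so the helper above also runs outside a DOM:
if (typeof document !== "undefined") {
  document.addEventListener("paste", (event) => {
    const formats = {};
    for (const type of event.clipboardData.types) {
      formats[type] = event.clipboardData.getData(type);
    }
    const richest = pickRichest(formats);
    if (richest) console.log(`Got ${richest.type}:`, richest.content);
  });
}
```

The same event also exposes files (for pasted images) via event.clipboardData.files.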
hacker-news-thread-export lets you paste in a URL to a Hacker News thread and gives you a copyable condensed version of the entire thread, suitable for pasting into an LLM to get a useful summary.
paste-rich-text lets you copy from a page and paste to get the HTML—particularly useful on mobile where view-source isn’t available.
alt-text-extractor lets you paste in images and then copy out their alt text.
The key to building interesting HTML tools is understanding what’s possible. Building custom debugging tools is a great way to explore these options.
clipboard-viewer is one of my most useful. You can paste anything into it (text, rich text, images, files) and it will loop through and show you every type of paste data that’s available on the clipboard.
This was key to building many of my other tools, because it showed me the invisible data that I could use to bootstrap other interesting pieces of functionality.
keyboard-debug shows the keys (and keyCode values) currently being held down.
cors-fetch reveals if a URL can be accessed via CORS.
HTML tools may not have access to server-side databases for storage but it turns out you can store a lot of state directly in the URL.
I like this for tools I may want to bookmark or share with other people.
icon-editor is a custom 24x24 icon editor I built to help hack on icons for the GitHub Universe badge. It persists your in-progress icon design in the URL so you can easily bookmark and share it.
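A minimal sketch of the state-in-the-URL idea. This is an assumed encoding, base64-encoded JSON in the URL fragment, not necessarily what icon-editor actually does:

```javascript
// Serialize tool state into a URL fragment so a bookmark captures it.
function encodeState(state) {
  return "#" + btoa(JSON.stringify(state));
}

// Restore state from a fragment; returns null when there is none.
function decodeState(hash) {
  if (!hash || hash.length < 2) return null;
  return JSON.parse(atob(hash.slice(1)));
}

// In a browser you would set location.hash = encodeState(state) on
// every change, and call decodeState(location.hash) on page load.
```

Note that btoa only handles Latin-1 text; state containing arbitrary Unicode would need an extra encoding step.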
The localStorage browser API lets HTML tools store data persistently on the user’s device, without exposing that data to the server.
I use this for larger pieces of state that don’t fit comfortably in a URL, or for secrets like API keys which I really don’t want anywhere near my server; even static hosts might have server logs that are outside of my influence.
word-counter is a simple tool I built to help me write to specific word counts, for things like conference abstract submissions. It uses localStorage to save as you type, so your work isn’t lost if you accidentally close the tab.
render-markdown uses the same trick—I sometimes use this one to craft blog posts and I don’t want to lose them.
haiku is one of a number of LLM demos I’ve built that request an API key from the user (via the prompt() function) and then store that in localStorage. This one uses Claude Haiku to write haikus about what it can see through the user’s webcam.
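The prompt()-plus-localStorage pattern can be sketched like this. The storage object and ask function are injected so the logic stands alone; in a real tool you would pass window.localStorage and window.prompt, and the storage key name here is made up for illustration:

```javascript
// Ask the user for an API key once, then remember it on their device.
// storage: any localStorage-like object; ask: any prompt-like function.
function getApiKey(storage, ask, storageKey = "anthropic-api-key") {
  let key = storage.getItem(storageKey);
  if (!key) {
    key = ask("Enter your API key:");
    if (key) storage.setItem(storageKey, key);
  }
  return key;
}

// Browser usage: getApiKey(window.localStorage, window.prompt)
```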
CORS stands for Cross-origin resource sharing. It’s a relatively low-level detail that controls whether JavaScript running on one site is able to fetch data from APIs hosted on other domains.
APIs that provide open CORS headers are a goldmine for HTML tools. It’s worth building a collection of these over time.
Here are some I like:
* iNaturalist for fetching sightings of animals, including URLs to photos
* GitHub because anything in a public repository in GitHub has a CORS-enabled anonymous API for fetching that content from the raw.githubusercontent.com domain, which is behind a caching CDN so you don’t need to worry too much about rate limits or feel guilty about adding load to their infrastructure.
* Bluesky for all sorts of operations
* Mastodon has generous CORS policies too, as used by applications like phanpy.social
GitHub Gists are a personal favorite here, because they let you build apps that can persist state to a permanent Gist through making a cross-origin API call.
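The raw.githubusercontent.com pattern mentioned above looks roughly like this; the repository path is only an example:

```javascript
// Build the CORS-enabled raw URL for a file in a public GitHub repo.
function rawGitHubUrl(owner, repo, branch, path) {
  return `https://raw.githubusercontent.com/${owner}/${repo}/${branch}/${path}`;
}

// In an HTML tool this can be fetched directly, no server in between:
// const url = rawGitHubUrl("simonw", "tools", "main", "README.md");
// const text = await fetch(url).then((r) => r.text());
```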
species-observation-map uses iNaturalist to show a map of recent sightings of a particular species.
zip-wheel-explorer fetches a .whl file for a Python package from PyPI, unzips it (in browser memory) and lets you navigate the files.
github-issue-to-markdown fetches issue details and comments from the GitHub API (including expanding any permanent code links) and turns them into copyable Markdown.
terminal-to-html can optionally save the user’s converted terminal session to a Gist.
bluesky-quote-finder displays quotes of a specified Bluesky post, which can then be sorted by likes or by time.
All three of OpenAI, Anthropic and Gemini offer JSON APIs that can be accessed via CORS directly from HTML tools.
Unfortunately you still need an API key, and if you bake that key into your visible HTML anyone can steal it and use it to rack up charges on your account.
I use the localStorage secrets pattern to store API keys for these services. This sucks from a user experience perspective—telling users to go and create an API key and paste it into a tool is a lot of friction—but it does work.
haiku uses the Claude API to write a haiku about an image from the user’s webcam.
gemini-bbox demonstrates Gemini 2.5’s ability to return complex shaped image masks for objects in images, see Image segmentation using Gemini 2.5.
You don’t need to upload a file to a server in order to make use of the <input type=file> element. JavaScript can access the content of that file directly, which opens up a wealth of opportunities for useful functionality.
ocr is the first tool I built for my collection, described in Running OCR against PDFs and images directly in your browser. It uses PDF.js and Tesseract.js to allow users to open a PDF in their browser which it then converts to an image-per-page and runs through OCR.
social-media-cropper lets you open (or paste in) an existing image and then crop it to common dimensions needed for different social media platforms—2:1 for Twitter and LinkedIn, 1.4:1 for Substack etc.
ffmpeg-crop lets you open and preview a video file in your browser, drag a crop box within it and then copy out the ffmpeg command needed to produce a cropped copy on your own machine.
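The mechanism underlying all three tools can be sketched in a few lines: File inherits Blob’s .text() and .arrayBuffer() methods, so a picked file can be read entirely client-side:

```javascript
// Read a user-selected file without any upload: the content never
// leaves the browser. Works for File and for plain Blob objects.
async function readFileAsText(file) {
  return await file.text();
}

// In a browser, attach it to a file input's change event:
// document.querySelector("input[type=file]").addEventListener("change", async (e) => {
//   console.log(await readFileAsText(e.target.files[0]));
// });
```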
An HTML tool can generate a file for download without needing help from a server.
The JavaScript library ecosystem has a huge range of packages for generating files in all kinds of useful formats.
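The generate-and-download pattern looks roughly like this. The Blob construction is the portable part; the link-clicking half only runs in a browser, so it is shown commented out:

```javascript
// Package generated content as a Blob, ready to offer for download.
function makeDownloadBlob(content, mimeType = "text/plain") {
  return new Blob([content], { type: mimeType });
}

// Browser-only half of the pattern: mint an object URL and click a
// temporary <a download> element.
// function triggerDownload(blob, filename) {
//   const a = document.createElement("a");
//   a.href = URL.createObjectURL(blob);
//   a.download = filename;
//   a.click();
//   URL.revokeObjectURL(a.href);
// }
```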
Pyodide is a distribution of Python that’s compiled to WebAssembly and designed to run directly in browsers. It’s an engineering marvel and one of the most underrated corners of the Python world.
It also cleanly loads from a CDN, which means there’s no reason not to use it in HTML tools!
Even better, the Pyodide project includes micropip—a mechanism that can load extra pure-Python packages from PyPI via CORS.
pyodide-bar-chart demonstrates running Pyodide, Pandas and matplotlib to render a bar chart directly in the browser.
numpy-pyodide-lab is an experimental interactive tutorial for Numpy.
apsw-query demonstrates the APSW SQLite library running in a browser, using it to show EXPLAIN QUERY plans for SQLite queries.
Pyodide is possible thanks to WebAssembly. WebAssembly means that a vast collection of software originally written in other languages can now be loaded in HTML tools as well.
Squoosh.app was the first example I saw that convinced me of the power of this pattern—it makes several best-in-class image compression libraries available directly in the browser.
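To make the mechanism concrete, here is a deliberately tiny, hand-assembled WebAssembly module (it exports a single add function) loaded with the same APIs a tool would use for a large compiled library. This is my own minimal illustration, not code from Squoosh:

```javascript
// A complete wasm binary: magic header, a (i32, i32) -> i32 type,
// one function of that type, an export named "add", and a body that
// adds its two arguments.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // code
]);
const { add } = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes)).exports;
```

Real tools usually fetch the .wasm file from a CDN and use the async WebAssembly.instantiateStreaming instead, but the loading model is the same.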
I’ve used WebAssembly for a few of my own tools.
The biggest advantage of having a single public collection of 100+ tools is that it’s easy for my LLM assistants to recombine them in interesting ways.
Sometimes I’ll copy and paste a previous tool into the context, but when I’m working with a coding agent I can reference them by name—or tell the agent to search for relevant examples before it starts work.
The source code of any working tool doubles as clear documentation of how something can be done, including patterns for using editing libraries. An LLM with one or two existing tools in its context is much more likely to produce working code.
And then, after it had found and read the source code for zip-wheel-explorer:
Build a new tool pypi-changelog.html which uses the PyPI API to get the wheel URLs of all available versions of a package, then it displays them in a list where each pair has a “Show changes” clickable in between them - clicking on that fetches the full contents of the wheels and displays a nicely rendered diff representing the difference between the two, as close to a standard diff format as you can get with JS libraries from CDNs, and when that is displayed there is a “Copy” button which copies that diff to the clipboard
See Running OCR against PDFs and images directly in your browser for another detailed example of remixing tools to create something new.
I like keeping (and publishing) records of everything I do with LLMs, to help me grow my skills at using them over time.
For HTML tools I built by chatting with an LLM platform directly I use the “share” feature for those platforms.
For Claude Code or Codex CLI or other coding agents I copy and paste the full transcript from the terminal into my terminal-to-html tool and share that using a Gist.
In either case I include links to those transcripts in the commit message when I save the finished tool to my repository. You can see those in my tools.simonwillison.net colophon.
I’ve had so much fun exploring the capabilities of LLMs this way over the past year and a half. Building these tools has been invaluable in helping me understand both the potential of building tools with HTML and the capabilities of the LLMs I’m building them with.
If you’re interested in starting your own collection I highly recommend it! All you need to get started is a free GitHub repository with GitHub Pages enabled (Settings -> Pages -> Source -> Deploy from a branch -> main) and you can start copying in .html pages generated in whatever manner you like.
...
Read the original on simonwillison.net »
I do Advent of Code every year.
For the last seven years, including this one, I have managed to get all the stars. I do not say that to brag. I say it because it explains why I keep coming back.
It is one of the few tech traditions I never get bored of, even after doing it for a long time. I like the time pressure. I like the community vibe. I like that every December I can pick one language and go all in.
Advent of Code is usually 25 days. This year Eric decided to do 12 days instead.
So instead of 50 parts, it was 24.
That sounds like a relaxed year. It was not, but not in a bad way.
The easier days were harder than the easy days in past years, but they were also really engaging and fun to work through. The hard days were hard, especially the last three, but they were still the good kind of hard. They were problems I actually wanted to wrestle with.
It also changes the pacing in a funny way. In a normal year, by day 10 you have a pretty comfy toolbox. This year it felt like the puzzles were already demanding that toolbox while I was still building it.
That turned out to be a perfect setup for learning a new language.
Gleam is easy to like quickly.
The syntax is clean. The compiler is helpful, and the error messages are super duper good. Rust good.
Most importantly, the language strongly nudges you into a style that fits Advent of Code really well. Parse some text. Transform it a few times. Fold. Repeat.
One thing I did not expect was how good the editor experience would be. The LSP worked much better than I expected. It basically worked perfectly the whole time. I used the Gleam extension for IntelliJ and it was great.
I also just like FP.
FP is not always easier, but it is often easier. When it clicks, you stop writing instructions and you start describing the solution.
The first thing I fell in love with was echo.
It is basically a print statement that does not make you earn it. You can echo any value. You do not have to format anything. You do not have to build a string. You can just drop it into a pipeline and keep going.
This is the kind of thing I mean:
You can quickly inspect values at multiple points without breaking the flow.
I did miss string interpolation, especially early on. echo made up for a lot of that.
It mostly hit when I needed to generate text, not when I needed to inspect values. The day where I generated an LP file for glpsol is the best example. It is not hard code, but it is a lot of string building. Without interpolation it turns into a bit of a mess of <>s.
This is a small excerpt from my LP generator:
It works. It is just the kind of code where you really feel the missing interpolation.
Grids are where you normally either crash into out of bounds bugs, or you litter your code with bounds checks you do not care about.
In my day 4 solution I used a dict as a grid. The key ergonomic part is that dict.get gives you an option-like result, which makes neighbour checking safe by default.
This is the neighbour function from my solution:
That last line is the whole point.
No bounds checks. No sentinel values. Out of bounds just disappears.
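For readers who don’t know Gleam, the same trick can be sketched in JavaScript with a Map standing in for the dict. The grid encoding here (string "x,y" keys) is my own assumption, not the author’s code:

```javascript
// Dict-as-grid: out-of-bounds lookups return undefined and are
// filtered away, so no explicit bounds checks are needed.
function neighbours(grid, x, y) {
  const deltas = [[-1, 0], [1, 0], [0, -1], [0, 1]];
  return deltas
    .map(([dx, dy]) => grid.get(`${x + dx},${y + dy}`))
    .filter((cell) => cell !== undefined);
}
```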
I expected to write parsers and helpers, and I did. What I did not expect was how often Gleam already had the exact list function I needed.
I read the input, chunked it into rows, transposed it, and suddenly the rest of the puzzle became obvious.
In a lot of languages you end up writing your own transpose yet again. In Gleam it is already there.
Another example is list.combination_pairs.
In day 8 I needed all pairs of 3D points. In an imperative language you would probably write nested loops and then question your off by one logic.
In Gleam it is a one liner:
Sometimes FP is not about being clever. It is about having the right function name.
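A JavaScript rendering of what list.combination_pairs gives you for free; the nested loops below are exactly the code the library call hides:

```javascript
// Every unordered pair of distinct elements from a list, in order.
function combinationPairs(items) {
  const pairs = [];
  for (let i = 0; i < items.length; i++) {
    for (let j = i + 1; j < items.length; j++) {
      pairs.push([items[i], items[j]]);
    }
  }
  return pairs;
}
```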
If I had to pick one feature that made me want to keep writing Gleam after AoC, it is fold_until.
Early exit without hacks is fantastic in puzzles.
In day 8 part 2 I kept merging sets until the first set in the list contained all boxes. When that happens, I stop.
The core shape looks like this:
It is small, explicit, and it reads like intent.
I also used fold_until in day 10 part 1 to find the smallest combination size that works.
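The shape of fold_until can be sketched in JavaScript as a fold with an early-exit signal, mirroring Gleam’s Continue/Stop pair. This is a hypothetical helper for illustration, not the author’s code:

```javascript
// Fold over a list, letting the step function signal "stop here".
// step returns { done, value }: value is the next accumulator, and
// done = true short-circuits the rest of the list.
function foldUntil(items, initial, step) {
  let acc = initial;
  for (const item of items) {
    const { done, value } = step(acc, item);
    acc = value;
    if (done) break;
  }
  return acc;
}
```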
Even though I enjoyed Gleam a lot, I did hit a few recurring friction points.
None of these are deal breakers. They are just the kind of things you notice when you do 24 parts in a row.
This one surprised me on day 1.
For AoC you read a file every day. In this repo I used simplifile everywhere because you need something. It is fine, I just did not expect basic file IO to be outside the standard library.
Day 2 part 2 pushed me into regex and I had to add gleam_regexp.
This is the style I used, building a regex from a substring:
Again, totally fine. It just surprised me.
You can do [first, ..rest] and you can do [first, second].
But you cannot do [first, ..middle, last].
It is not the end of the world, but it would have made some parsing cleaner.
In Gleam a lot of comparisons are not booleans. You get an order value.
This is great for sorting. It is also very explicit. It can be a bit verbose when you just want a simple check.
In day 5 I ended up writing patterns like this:
I used bigi a few times this year.
On the Erlang VM, integers are arbitrary precision, so you usually do not care about overflow. That is one of the nicest things about the BEAM.
If you want your Gleam code to also target JavaScript, you do care. JavaScript has limits, and suddenly using bigi becomes necessary for some puzzles.
I wish that was just part of Int, with a single consistent story across targets.
Day 10 part 1 was my favorite part of the whole event.
The moment I saw the toggling behavior, it clicked as XOR. Represent the lights as a number. Represent each button as a bitmask. Find the smallest combination of bitmasks that XOR to the target.
This is the fold from my solution:
It felt clean, it felt fast, and it felt like the representation did most of the work.
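The representation can be sketched outside Gleam too. This JavaScript version is my own illustration, not the author’s solution: it tries combination sizes in increasing order and XORs masks toward the target, so the first success is minimal:

```javascript
// Can some combination of exactly `size` masks XOR to the target?
function canHitTarget(masks, target, size, start = 0, acc = 0) {
  if (size === 0) return acc === target;
  for (let i = start; i < masks.length; i++) {
    if (canHitTarget(masks, target, size - 1, i + 1, acc ^ masks[i])) {
      return true;
    }
  }
  return false;
}

// Smallest number of button presses whose XOR equals the light pattern.
function smallestComboSize(masks, target) {
  for (let size = 1; size <= masks.length; size++) {
    if (canHitTarget(masks, target, size)) return size;
  }
  return null; // target unreachable
}
```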
I knew brute force was out. It was clearly a system of linear equations.
In previous years I would reach for Z3, but there are no Z3 bindings for Gleam. I tried to stay in Gleam, and I ended up generating an LP file and shelling out to glpsol using shellout.
It worked, and honestly the LP format is beautiful.
Here is the call:
It is a hack, but it is a pragmatic hack, and that is also part of Advent of Code.
Day 11 part 2 is where I was happy I was writing Gleam.
The important detail was that the memo key is not just the node. It is the node plus your state.
In my case the key was:
Once I got the memo threading right, it ran instantly.
The last day was the only puzzle I did not fully enjoy.
Not because it was bad. It just felt like it relied on assumptions about the input, and I am one of those people who do not love doing that.
I overthought it for a bit, then I learned it was more of a troll problem. The “do the areas of the pieces, when fully interlocked, fit on the board” heuristic was enough.
In my solution it is literally this:
Sometimes you build a beautiful mental model and then the right answer is a single inequality.
I am very happy I picked Gleam this year.
It has sharp edges, mostly around where the standard library draws the line and a few language constraints that show up in puzzle code. But it also has real strengths.
Pipelines feel good. Options and Results make unsafe problems feel safe. The list toolbox is better than I expected. fold_until is incredible. Once you stop trying to write loops and you let it be functional, the solutions start to feel clearer.
I cannot wait to try Gleam in a real project. I have been thinking about using it to write a webserver, and I am genuinely excited to give it a go.
And of course, I cannot wait for next year’s Advent of Code.
If you want to look at the source for all 12 days, it is here:
...
Read the original on blog.tymscar.com »
In a large-scale analysis of 20 popular VPNs, IPinfo found that 17 of those VPNs exit traffic from different countries than they claim. Some claim 100+ countries, but many of them point to the same handful of physical data centers in the US or Europe. That means the majority of VPN providers we analyzed don’t route your traffic via the countries they claim to, and they claim many more countries than they actually support. Analyzing over 150,000 exit IPs across 137 possible exit countries, and comparing what providers claim to what IPinfo measures, shows that:

* 17 in 20 providers had traffic exiting in a different country.
* 38 countries were “virtual-only” in our dataset (claimed by at least one provider, but never observed as the actual traffic exit country for any provider we tested).
* We were only able to verify all provider-announced locations for 3 providers out of the 20.
* Across ~150,000 VPN exit IPs tested, ProbeNet, our internet measurement platform, detected roughly 8,000 cases where widely-used IP datasets placed the server in the wrong country — sometimes thousands of kilometers off.

This report walks through what we saw across VPN and IP data providers, provides a closer look at two particularly interesting countries, explores why measurement-based IP data matters if you care where your traffic really goes, and shares how we ran the investigation.

Which VPNs Matched Reality (And Which Didn’t)

Here is the overlap between the number of listed countries each VPN provider claims to offer versus the countries with real VPN traffic that we measured — lower percentages indicate providers whose claimed lists best match our data:
It’s important to note that we used the most commonly and widely supported technologies in this research, to make comparison between providers as fair as possible while giving us significant data to analyze, so this will not be the full coverage for each provider. These are some of the most visible names in the market. They also tend to have very long country lists on their websites. Notably, three well-known providers had zero mismatches across all the countries we tested: Mullvad, IVPN, and Windscribe.

Country mismatches don’t automatically mean a provider offers a “bad VPN,” but they do mean that if you’re choosing a VPN because it claims “100+ countries,” you should know that a significant share of those flags may be labels, or virtual locations.

What “Virtual Locations” Really Mean

When a VPN lets you connect to, for example, “Bahamas” or “Somalia,” that doesn’t always mean traffic routes through there. In many cases, it’s somewhere entirely different, like Miami or London, but presented as if traffic is in the country you picked. This setup is known as a virtual location:

* The IP registry data also says “Country X” — because the provider self-declared it that way.
* But the network measurements (latency and routing) show the traffic actually exits in “Country Y” — often thousands of kilometers away.

The problem? Without active network measurement, most IP datasets will rely on what the IP’s owner told the internet registry or published in WHOIS/geofeeds: a self-reported country tag. If that record is wrong or outdated, the mistake spreads everywhere. That’s where IPinfo’s ProbeNet comes in: by running live RTT tests from 1,200+ points of presence worldwide, we anchor each IP to its real-world location, not just its declared one.

Across the dataset, we found 97 countries where at least one VPN brand only ever appeared as virtual or unmeasurable in our data.
In other words, for a noticeable slice of the world map, some “locations” in VPNs never show up as true exits in our measurements. We also found 38 countries where every mention behaved this way: at least one VPN claimed them, but none ever produced a stable, measurable exit in that country in our sample.

You can think of these 38 as the “unmeasurable” countries in this study — places that exist in server lists, config files, and IP geofeeds, but never once appeared as the actual exit country in our measurements. They’re not randomly scattered — they cluster in specific parts of the map.

This doesn’t prove there is zero VPN infrastructure in those countries globally. It does show that, across the providers and locations we measured, the dominant pattern is to serve those locations from elsewhere. Here are two of the most interesting examples of how this looks at the IP level.

Case Studies: Two Countries That Only Exist on the Map

To make this concrete, let’s look at two countries where every provider in our dataset turned out to be virtual: the Bahamas and Somalia.

Bahamas: All-Inclusive, Hosted in the US

In our measurements, five providers offered locations labeled as “Bahamas”: NordVPN, ExpressVPN, Private Internet Access, FastVPN, and IPVanish. For all of them, measured traffic was in the United States, usually with sub-millisecond RTT to US probes.
Somalia: Mogadishu, via France and the UK

Somalia appears in our sample for only two providers: NordVPN and ProtonVPN. Both label Mogadishu explicitly in their naming, but the measured RTTs are exactly what you’d expect for traffic in Western Europe, and completely inconsistent with traffic in East Africa. Both providers go out of their way in the labels (e.g. “SO, Mogadishu”), but the actual traffic is in Nice and London, not Somalia.
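The physics behind these verdicts is simple to sketch: a signal cannot beat the speed of light, so a given RTT puts a hard cap on how far away the server can be, and the great-circle distance to the claimed country shows whether the claim is even possible. A rough JavaScript illustration (coordinates approximate, my own sketch rather than IPinfo’s code):

```javascript
// Great-circle (haversine) distance between two lat/lon points, in km.
function haversineKm(lat1, lon1, lat2, lon2) {
  const R = 6371; // mean Earth radius, km
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Hard physical cap: one-way travel time is at most RTT / 2, and light
// covers roughly 300 km per millisecond.
function maxDistanceKm(rttMs) {
  return (rttMs / 2) * 300;
}

// London to Mogadishu is roughly 6,900 km, but even a generous 1 ms
// RTT caps the server's distance at about 150 km.
```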
When Legacy IP Providers Agree With the Wrong VPN Locations

So far, we’ve talked about VPN claims versus our measurements. But other IP data providers don’t run active RTT tests. They rely on self-declared IP data sources, and often assume that if an IP is tagged as “Country X,” it must actually be there. In these cases, the legacy IP datasets typically “follow” the VPN provider’s story: if the VPN markets the endpoint as Country X, the legacy IP dataset also places it in Country X.

To quantify that, we looked at 736 VPN exits where ProbeNet’s measured country disagreed with one or more widely used legacy IP datasets. We then compared the country IPinfo’s ProbeNet measured (backed by RTT and routing) with the country reported by these other IP datasets, and computed the distance between them. The gaps are large.

How Far Off Were the Other IP Datasets?
The median error between ProbeNet and the legacy datasets was roughly 3,100 km. On the ProbeNet side, we have strong latency evidence that our measured country is the right one:

* The median minimum RTT to a probe in the measured country was 0.27 ms.
* About 90% of these locations had a sub-millisecond RTT from at least one probe.

That’s what you expect when traffic is genuinely in that country, not thousands of kilometers away.

An IP Example You Can Test Yourself

This behavior is much more tangible if you can see it on a single IP. Here’s one VPN exit IP where ProbeNet places the server in the United Kingdom, backed by sub-millisecond RTT from local probes, while other widely used legacy IP datasets place the same IP in Mauritius, 9,691 kilometers away.

If you want to check this yourself, you can plug it into a public measurement tool like https://ping.sx/ and run pings or traceroutes from different regions. Tools like this one provide a clear visual for where latency is lowest. ProbeNet uses the same basic idea, but at a different scale: we maintain a network of 1,200+ points of presence (PoPs) around the world, so we can usually get even closer to the real physical location than public tools with smaller networks.

If you’d like to play with more real IPs (not necessarily VPNs) where ProbeNet and IPinfo get the country right and other datasets don’t, you can find a fuller set of examples on our IP geolocation accuracy page.

Why This Happens and How It Impacts Trust

It’s worth separating technical reasons from trust issues. There are technical reasons to use virtual or hubbed infrastructure:

* Risk & regulation. Hosting in certain countries can expose both the provider and users to local surveillance or seizure.
* Infrastructure quality. Some regions simply don’t have the same density of reliable data centers or high-capacity internet links, so running servers there is harder and riskier.
* Performance & cost. Serving “Bahamas” from Miami or “Cambodia” from Singapore can be cheaper, faster, and easier to maintain.

From this perspective, a virtual location can be a reasonable compromise: you get a regional IP and content unblocking without the downsides of hosting in a fragile environment.

Where It Becomes a Trust Problem

* Lack of disclosure. Marking something clearly as “Virtual Bahamas (US-based)” is transparent. Listing “Bahamas” alongside “Germany” without any hint that one is virtual and the other is physical blurs the line between marketing and reality.
* Scale of the mismatch. It’s one thing to have a few virtual locations in hard-to-host places. It’s another when dozens of countries exist only as labels across your entire footprint, or when more than half of your tested locations are actually somewhere else.
* Downstream reliance. Journalists, activists, and NGOs may pick locations based on safety assumptions. Fraud systems, compliance workflows, and geo-restricted services may treat “Somalia” vs “France” as a meaningful difference. If both the VPN UI and the IP data say “Somalia” while the traffic is physically in France, everyone is making decisions on a false premise.

That last point leads directly into the IP data problem that we are focused on solving.

So How Much Should You Trust Your VPN?

If you’re a VPN user, here are some practical takeaways from this work:

* Treat “100+ countries” as a marketing number, not a guarantee. In our sample, 97 countries existed only as claims, not reality, across 17 providers.
* Check how your provider talks about locations. Do they clearly label “virtual” servers? Document where they’re actually hosted? Or do they quietly mix virtual and physical locations in one long list?
* If you rely on IP data professionally, ask where it comes from.
A static “99.x% accurate worldwide” claim doesn’t tell you how an IP data provider handles fast-moving, high-stakes environments like VPN infrastructure.

Ultimately, this isn’t an argument against VPNs, or even against virtual locations. It’s an argument for honesty and evidence. If a VPN provider wants you to trust that map of flags, they should be willing, and able, to show that it matches the real network underneath.

Most legacy IP data providers rely on regional internet registry (RIR) allocation data and heuristics around routing and address blocks. These providers will often accept self-declared data like customer feedback, corrections, and geofeeds, without a clear way to verify them. IPinfo takes a different approach:

* Proprietary ProbeNet with 1,200+ points of presence: We maintain an internet measurement platform of PoPs in locations around the world.
* Active measurements: For each visible IP on the internet, including both IPv4 and IPv6 addresses, we measure RTT from multiple probes.
* Evidence-based geolocation: We combine these measurements with IPinfo’s other signals to assign a country (and more granular location) that’s grounded in how the internet actually behaves.

This measurement-first approach is unique in the IP data space. Once we realized how much inaccuracy came from self-declared data, we started investing heavily in research and building ProbeNet to use active measurements at scale. Our goal is to make IP data as evidence-based as possible, verifying with observation how the internet actually behaves.

Our Methodology for This Report

We approached this VPN investigation the way a skeptical but well-equipped user would: start from the VPNs’ own claims, then test them.

For each of the 20 VPN providers, we pulled together three kinds of data:

* Marketing promises: The “servers in X countries” claims and country lists from their websites. When a country was clearly listed there, we treated it as a location they actively promote.
* Configurations and location lists: Configurations for different protocols like OpenVPN or WireGuard were collected, along with location information available in provider command-line tools, mobile applications, or APIs.
* Unique provider–location entries: We ended up with over 6,000,000 data points and a list of provider + location combinations we could actually try to connect to, with multiple IPs each.

Step 2: Observing Where the Traffic Really Goes

Next, we used IPinfo infrastructure and ProbeNet to dial into those locations and watch what actually happens:

* We connected to each VPN “location” and captured the exit IP addresses.
* For each exit IP address, we used IPinfo + ProbeNet’s active measurements to determine a measured country, plus the round-trip time (RTT) from that probe (often under 1 ms), which is a strong hint about physical proximity.

Now we had two views for each location:

* Expected/claimed country: what the VPN claims in its UI, configs, or website
* Measured country: where IPinfo + ProbeNet actually see the exit IP

For each location where a country was clearly specified, we asked a very simple question: does the expected country match the measured country?

If yes, we counted it as a match. If not, it became a mismatch: a location where the app says one country, but the traffic exits somewhere else.

We deliberately used a very narrow definition of “mismatch.” For a location to be counted, two things had to be true: the provider had to clearly claim a specific country (on their website, in their app, or in configs), and we had direct active measurements from ProbeNet for the exit IPs behind that location.

We ignored any locations where the marketing was ambiguous, where we hadn’t measured the exit directly, or where we only had weaker hints like hostname strings, registry data, or third-party IP databases. Those signals can be useful and true, but we wanted our numbers to be as hard-to-argue-with as possible.

The result is that the mismatch rates we show here are conservative. With a looser methodology that also leaned on those additional hints, the numbers would almost certainly be higher, not lower.
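The core inference in that methodology, deciding the measured country from probe latencies, can be sketched in a few lines of JavaScript. This is a simplified illustration of the idea, not ProbeNet’s actual algorithm:

```javascript
// Pick the likeliest country: the one hosting the lowest-RTT probe.
// probeResults: [{ country, rttMs }, ...]
function likelyCountry(probeResults) {
  return probeResults.reduce((best, p) => (p.rttMs < best.rttMs ? p : best))
    .country;
}

// Light speed bounds the feasible distance: one-way time is at most
// half the RTT, and light covers roughly 300 km per millisecond.
function maxDistanceKm(rttMs) {
  return (rttMs / 2) * 300;
}
```

A 0.27 ms median RTT, as reported above, bounds the server to within a few dozen kilometers of the probe, which is why sub-millisecond measurements are such strong evidence.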
...
Read the original on ipinfo.io »
YouTube’s CEO Neal Mohan is the latest in a line of tech bosses who have admitted to limiting their children’s social media use, as the harms of being online for young people have become more evident.
Mohan, who took the helm of YouTube’s leadership in 2023, was just named Time’s 2025 CEO of the Year. He said in an interview with the magazine that his children’s use of media platforms is controlled and restricted.
“We do limit their time on YouTube and other platforms and other forms of media. On weekdays we tend to be more strict, on weekends we tend to be less so. We’re not perfect by any stretch,” Mohan said in one TikTok video posted by Time Magazine on Thursday.
He stressed “everything in moderation” is what works best for him and his wife, and that extends to other online services and platforms. Mohan has three children: two sons and one daughter.
Experts have continued to sound the alarm on how excessive smartphones and social media use has harmed children and teenagers. Jonathan Haidt, NYU professor and author of “The Anxious Generation,” has advocated for children to not have smartphones before the age of 14 and no access to social media before the age of 16.
“Let them have a flip phone, but remember, a smartphone isn’t really a phone. They could make phone calls on it, but it’s a multi-purpose device by which the world can get to your children,” Haidt said in an interview with CNBC’s Tania Bryer earlier this year.
This week, Australia became the first country to formally bar users under the age of 16 from accessing major social media platforms. Ahead of the legislation’s passage last year, a YouGov survey found that 77% of Australians backed the under-16 social media ban. Still, the rollout has faced some resistance since becoming law.
Mohan said in a more extensive interview with Time on Wednesday that he feels a “paramount responsibility” to young people and giving parents greater control over how their kids use the platform. YouTube Kids was launched in 2015 as a child-friendly version of the Google-owned platform.
He said his goal is “to make it easy for all parents” to manage their children’s YouTube use “in a way that is suitable to their household,” especially as every parent has a different approach.
...
Read the original on www.cnbc.com »