10 interesting stories served every morning and every evening.
C:\philes\the watchers: how openai, the US government, and persona built an identity surveillance machine that files reports on you to the feds
we are in direct written correspondence with persona’s CEO, rick song. he has been responsive and engaged in good faith.
rick has committed to answering the 18 questions in 0x14 in writing. all correspondence will be published in full as part 2 of this series. the core findings, including openai-watchlistdb.withpersona.com and its 27 months of certificate transparency history, remain unaddressed.
no laws were broken. all findings come from passive recon using public sources - Shodan, CT logs, DNS, HTTP headers, and unauthenticated files served by the target's own web server. no systems were accessed, no credentials were used, no data was modified. retrieving publicly served files is not unauthorized access - see Van Buren v. United States (593 U.S. 374, 2021), hiQ Labs v. LinkedIn (9th Cir. 2022).
this is protected journalism and security research under the First Amendment, ECHR Art. 10, CFAA safe harbor (DOJ Policy 2022), California Shield Law, GDPR Art. 85, and Israeli Basic Law: Human Dignity and Liberty.
the authors are not affiliated with any government, intelligence service, or competitor of any entity named herein. no financial interest. no compensation. this research exists in the public interest and was distributed across multiple jurisdictions, dead drops, and third-party archives before publication.
any attempt to suppress or retaliate against this publication - legal threats, DMCA abuse, employment interference, physical intimidation, or extrajudicial action - will be treated as confirmation of its findings and will trigger additional distribution. killing the messenger does not kill the message.
for the record: all authors of this document are in good health, of sound mind, and have no plans to hurt themselves, disappear, or die unexpectedly. if that changes suddenly - it wasn’t voluntary. this document, its evidence, and a list of names are held by multiple trusted third parties with instructions to publish everything in the event that anything happens to any of us. we mean anything.
to Persona and OpenAI's legal teams: actually audit your supposed "FedRAMP" compliance, and answer the questions in 0x14. that's the appropriate response. everything else is the wrong one.
from: the world
to: openai, persona, the US government, ICE, the open internet
date: 2026-02-16
subject: the watchers
they told us the future would be convenient. sign up, verify your identity, talk to the machine. easy. frictionless. the brochure said “trust and safety.” the source code said SelfieSuspiciousEntityDetection.
funny how that works. you hand over your passport to use a chatbot and somewhere in a datacenter in iowa, a facial recognition algorithm is checking whether you look like a politically exposed person. your selfie gets a similarity score. your name hits a watchlist. a cron job re-screens you every few weeks just to make sure you haven’t become a terrorist since the last time you asked GPT to write a cover letter.
so what do you do? well, we looked. found source code on a government endpoint with the door wide open. facial recognition, watchlists, SAR filings, intelligence codenames, and much more.
oh, and we revealed the names of every single person responsible for this!!
following the work of eva and others on ID verification bypasses, we decided to start looking into persona, yet another KYC service that uses facial recognition to verify identities. the original goal was to add an age-verification bypass to eva's existing k-id platform.
after trying to write a few exploits, vmfunc decided to browse their infra on Shodan. it all started with a single IP: 34.49.93.177, sitting on Google Cloud in Kansas City. one open port. one SSL certificate. two hostnames that tell a story nobody was supposed to read:
openai-watchlistdb.withpersona.com
openai-watchlistdb-testing.withpersona.com
not “openai-verify”, not “openai-kyc”, watchlistdb. a database. (or is it?)
what was initially meant to be a passive recon investigation quickly turned into a rabbit-hole deep dive into how commercial AI and federal government operations work together to violate our privacy every waking second. we didn't even have to write or run a single exploit - the entire architecture was sitting on the doorstep!! 53 megabytes of unprotected source maps on a FedRAMP government endpoint, exposing the entire codebase of a platform that files Suspicious Activity Reports with FinCEN, compares your selfie to watchlist photos using facial recognition, screens you against 14 categories of adverse media from terrorism to espionage, and tags reports with codenames from active intelligence programs.
2,456 source files containing the full TypeScript codebase, every permission, every API endpoint, every compliance rule, every screening algorithm. sitting unauthenticated on the public internet. on a government platform no less.
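a source map is just JSON: a `sources` array of original file paths and, when the build embeds them, a `sourcesContent` array holding the original code. recovering a codebase from an exposed `.js.map` is a dict comprehension, not an exploit. a minimal sketch, with an inline toy map standing in for a downloaded bundle (the webpack path and constant are invented for illustration):

```python
import json

def extract_sources(source_map_text):
    """Return {original_path: original_source} from a JS source map.

    Standard .js.map files carry original file paths in "sources" and,
    when the build embeds them, the original code in "sourcesContent".
    """
    m = json.loads(source_map_text)
    sources = m.get("sources", [])
    contents = m.get("sourcesContent") or [None] * len(sources)
    return {path: body for path, body in zip(sources, contents) if body is not None}

# toy stand-in for a downloaded bundle.js.map (path and content invented)
sample = json.dumps({
    "version": 3,
    "sources": ["webpack://app/src/screening/watchlist.ts"],
    "sourcesContent": ["export const RESCREEN_INTERVAL_DAYS = 30;"],
    "mappings": "AAAA",
})

files = extract_sources(sample)
for path, body in files.items():
    print(path, "->", len(body), "bytes")
```

pointed at a real map, the same loop writes out every original TypeScript file the bundler ever saw.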
no systems were breached. no credentials were used. every finding in this document comes from publicly accessible sources: shodan, certificate transparency logs, DNS resolution, HTTP response headers, published API documentation, public web pages, and unauthenticated JavaScript source maps served by the target’s own web server.
the infrastructure told its own story. we just listened. then we read the source code.
IP: 34.49.93.177
ASN: AS396982 (Google LLC)
provider: Google Cloud
region: global
city: Kansas City, US
open ports: 443/tcp
last seen: 2026-02-05
hostnames:
- 177.93.49.34.bc.googleusercontent.com
- openai-watchlistdb.withpersona.com
- openai-watchlistdb-testing.withpersona.com
SSL cert:
subject: CN=openai-watchlistdb.withpersona.com
issuer: C=US, O=Google Trust Services, CN=WR3
valid: Jan 24 01:24:11 2026 - Apr 24 02:20:06 2026
SANs: openai-watchlistdb.withpersona.com
openai-watchlistdb-testing.withpersona.com
serial: FDFFBF37ED89BBD710D9967B7CD92B52
HTTP response (all paths, all methods):
status: 404
body: “fault filter abort”
headers: via: 1.1 google
content-type: text/plain
Alt-Svc: h3=":443"
the “fault filter abort” response is an Envoy proxy fault injection filter. standard in GCP/Istio service mesh deployments. the service only routes requests matching specific internal criteria (likely mTLS client certificates, specific source IPs, or API key headers). everything else just dies at the edge.
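the fingerprint is mechanical enough to check in code. a small sketch that matches a response against exactly the pattern we observed (the literal 404, the "fault filter abort" body, the google edge header) - nothing more general than that:

```python
def looks_like_envoy_fault_filter(status, body, headers):
    """Heuristic match for Envoy's fault-injection filter signature.

    "fault filter abort" is the literal body Envoy emits when a fault
    filter aborts a request; "via: 1.1 google" marks the GCP edge.
    This matches only the pattern observed on this host.
    """
    return (
        status == 404
        and body.strip() == "fault filter abort"
        and headers.get("via", "").endswith("google")
    )

# the response every path and every method returned on 34.49.93.177
is_fault = looks_like_envoy_fault_filter(
    404, "fault filter abort", {"via": "1.1 google", "content-type": "text/plain"}
)
print(is_fault)
```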
though obviously this is not a misconfiguration; this is just a locked-down backend service that was never meant to have a public face. the only reason we even know it exists is certificate transparency logs and DNS.
Persona (withpersona.com) is a San Francisco-based identity verification company. their normal infrastructure runs behind Cloudflare:
withpersona.com -> 162.159.141.40, 172.66.1.36 (CF)
inquiry.withpersona.com -> 162.159.141.40, 172.66.1.36 (CF)
app.withpersona.com -> 162.159.141.40, 172.66.1.36 (CF)
api.withpersona.com -> 162.159.141.40, 172.66.1.36 (CF)
they also run a wildcard DNS record: *.withpersona.com points to Cloudflare (cloudflare.withpersona.com.cdn.cloudflare.net). we confirmed this by resolving completely fabricated subdomains:
totallynonexistent12345.withpersona.com -> 162.159.141.40 (CF)
asdflkjhasdf.withpersona.com -> 162.159.141.40 (CF)
HOWEVER, here’s where it gets interesting. OpenAI’s watchlist service breaks out of this wildcard:
openai-watchlistdb.withpersona.com -> 34.49.93.177 (GCP)
openai-watchlistdb-testing.withpersona.com -> 34.49.93.177 (GCP)
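spotting a breakout from a wildcard is mechanical: resolve a name that cannot exist, treat its answer as the wildcard target, and flag every real name whose A records land elsewhere. a sketch over the resolutions above (IPs hard-coded from our own lookups, not re-resolved):

```python
def wildcard_breakouts(resolutions, probe_name):
    """Flag hostnames whose A records share nothing with the wildcard's.

    resolutions: {hostname: set of IPs}, including one fabricated probe
    hostname whose answer reveals the wildcard target.
    """
    wildcard_ips = resolutions[probe_name]
    return {
        name: ips
        for name, ips in resolutions.items()
        if name != probe_name and not (ips & wildcard_ips)
    }

resolutions = {
    "totallynonexistent12345.withpersona.com": {"162.159.141.40"},  # fabricated probe
    "api.withpersona.com": {"162.159.141.40", "172.66.1.36"},       # Cloudflare, as expected
    "openai-watchlistdb.withpersona.com": {"34.49.93.177"},         # GCP - breaks out
}
breakouts = wildcard_breakouts(resolutions, "totallynonexistent12345.withpersona.com")
print(breakouts)
```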
a dedicated Google Cloud instance, not behind Cloudflare and not on Persona's shared infrastructure. seemingly purpose-built and isolated.
you would never do this for a simple "check this name against a list" API call. you do this when the data requires compartmentalization, when the compliance requirements for the data you're collecting demand that level of isolation, when the damage of a breach is bad enough to warrant dedicated infrastructure.
CT logs tell us exactly when this service went live and how it evolved.
november 2023. this service has been running for over two years.
OpenAI didn’t announce “Verified Organization” requirements until mid-2025. they didn’t publicly require ID verification for advanced model access until GPT-5. but the watchlist screening infrastructure was operational 18 months before any of that was disclosed.
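the dates come from CT logs, and crt.sh hands them to you as JSON (one entry per logged certificate, with `not_before`, `not_after`, and the covered names in `name_value`). finding when a hostname first appeared is a min() over the entries. a sketch with stub entries shaped like a `crt.sh/?q=%25.withpersona.com&output=json` response - dates mirror the findings above, not a fresh pull:

```python
from datetime import datetime

def first_issuance(entries, name):
    """Earliest not_before among CT entries covering `name`.

    entries follow the crt.sh JSON shape: "name_value" holds the
    newline-separated covered names, "not_before" the validity start.
    """
    dates = [
        datetime.fromisoformat(e["not_before"])
        for e in entries
        if name in e["name_value"].split("\n")
    ]
    return min(dates) if dates else None

# stub entries; dates mirror the article's timeline, not a live pull
entries = [
    {"name_value": "openai-watchlistdb.withpersona.com",
     "not_before": "2023-11-06T00:00:00"},
    {"name_value": "openai-watchlistdb.withpersona.com\n"
                   "openai-watchlistdb-testing.withpersona.com",
     "not_before": "2026-01-24T01:24:11"},
]
first = first_issuance(entries, "openai-watchlistdb.withpersona.com")
print(first)
```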
we can pinpoint when they started considering going “public” with the collaboration.
https://withpersona.com/customers/openai has existed since September 17th, 2024. likewise, OpenAI's Privacy Policy has included the following passage since its November 4th, 2024 update:
“Other Information You Provide: We collect other information that you provide to us, such as when you participate in our events or surveys, or when you provide us or a vendor operating on our behalf with information to establish your identity or age (collectively, “Other Information You Provide”).”
the excuses used in the public post are classical, though instead of using children as the scapegoat for invading our privacy, this time it was "[…] To offer safe AGI, we need to make sure bad people aren't using our services […]".
only… they quickly used this opportunity to go from comparing users against a single federal watchlist to building a watchlist of all users themselves.
in fact, this is nothing new; OpenAI Forum user OnceAndTwice had already mentioned it back in June last year.
Persona’s API documentation (docs.withpersona.com) is public. when a customer like OpenAI runs a government ID verification, the API returns a complete identity dossier:
personal identity:
- full legal name (including native script)
- date of birth, place of birth
- nationality, sex, height
address:
- street, city, state, postal code, country
government document:
- document type and number
- issuing authority
- issue and expiration dates
- visa status
- vehicle class/endorsements/restrictions
media:
- FRONT PHOTO of ID document (URL)
- BACK PHOTO of ID document (URL)
- SELFIE PHOTO (URL + byte size)
- VIDEO of identity capture (URL)
metadata:
- entity confidence score
- all verification check results with pass/fail reasons
- capture method used
- timestamps (created, submitted, completed, redacted)
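persona returns this dossier as JSON:API-style attributes. the attribute names below are illustrative placeholders, adapted loosely from the public docs rather than verified verbatim - the point is the flattening: one API response, one complete identity record. a sketch:

```python
def summarize_dossier(verification):
    """Flatten a (hypothetical) Persona verification payload.

    Attribute names here are illustrative placeholders; the real ones
    live in docs.withpersona.com. The structure - one response, one
    full identity record - is the point.
    """
    a = verification["data"]["attributes"]
    return {
        "identity": {k: a.get(k) for k in ("name-first", "name-last", "birthdate", "sex")},
        "document": {k: a.get(k) for k in ("id-class", "document-number", "expiration-date")},
        "media": {k: a.get(k) for k in ("front-photo-url", "selfie-photo-url")},
    }

# entirely fabricated example payload
stub = {"data": {"attributes": {
    "name-first": "Jane", "name-last": "Doe", "birthdate": "1990-01-01", "sex": "F",
    "id-class": "pp", "document-number": "X1234567", "expiration-date": "2030-01-01",
    "front-photo-url": "https://files.example/front.jpg",
    "selfie-photo-url": "https://files.example/selfie.jpg",
}}}
dossier = summarize_dossier(stub)
print(dossier["identity"], dossier["media"])
```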
Persona’s own case study states that OpenAI “screens millions monthly” and “automatically screens over 99% of users behind the scenes in seconds.”
behind the scenes. in seconds. millions. with customizable filters ranging from simple partial name matches to advanced facial recognition algorithms.
again, none of this is even a secret; it's "hidden" in plain sight.
...
Read the original on vmfunc.re »
Yesterday, California Attorney General Rob Bonta filed for an immediate halt to what he says is a widespread price-fixing scheme run by the largest online retailer in America, Amazon. “Amazon tells vendors what prices it wants to see to maintain its own profitability,” Bonta alleged. “Amazon can do this because it is the world’s largest, most powerful online retailer.”
His claim is that Amazon has been forcing vendors who sell on and off the platform to raise prices, and cooperating with other major online retailers to do so.
Vendors, cowed by Amazon's overwhelming bargaining leverage and fearing punishment, comply, agreeing to raise prices on competitors' websites (often with the awareness and cooperation of the competing retailer) or to remove products from competing websites altogether. The scheme, Bonta argues, should be immediately enjoined.
Amazon is scheduled for a series of trials in January of 2027, but Bonta’s legal move is a big deal, because he’s asking a court to bring Amazon to heel now, a year early. The only way a judge can do that is if he concludes Amazon is likely to lose, which means that Bonta believes his evidence is so strong it’s basically a foregone conclusion Amazon will be held liable for fostering serious harm to consumers.
The scale of the scheme is almost unfathomable; according to its latest investor reports, Amazon earned $426 billion of revenue in its 2025 North America online shopping business, which is about $3000 for every household in America. As Stacy Mitchell noted, prices for third party goods on the online platform, roughly 60% of its total sales, have been going up at 7% a year, more than twice the rate of inflation. And because this scheme impacts goods sold off of Amazon’s website as well, there’s a reasonable chance that it has had an impact on price levels overall in America. With a similar Pepsi-Walmart alleged conspiracy revealed earlier this year, it’s becoming increasingly clear that consolidation and price-fixing are linked to inflation.
How exactly does the scheme work? Long-standing readers of BIG may remember a piece in 2021 titled "Amazon Prime is an Economy-Distorting Lie" in which I laid out what's happening. At the time, the D.C. Attorney General, a lawyer named Karl Racine, sued Amazon for prohibiting vendors that sold on its website from offering discounts outside of Amazon. Such anti-discounting provisions raise prices for consumers, and prevent new platforms from emerging to challenge Amazon.
The key leverage point for Amazon is the scale of its Prime program, which has 200 million members nationwide. As Scott Galloway noted a few years ago, more U.S. households belong to Prime than decorate a Christmas tree or go to church.
Prime members get ‘free shipping,’ which means they tend not to shop around. They just accept the price and vendor they are given on Amazon through what’s called the “Buy Box.”
So which vendor gets the ‘Buy Box’ and thus the sale to the Prime member? Here’s what I wrote in 2021.
Amazon awards the Buy Box to merchants based on a number of factors. One factor is whether a product is ‘Prime eligible,’ which is to say offered to Prime members with free shipping. In order to become Prime eligible, a seller often must use Amazon’s warehousing and logistics service, Fulfillment by Amazon (FBA). In other words, Amazon ties the ability to access Prime customers to whether a seller pays Amazon for managing its inventory. This strategy has worked - Amazon now fulfills roughly two thirds of the products bought on its platform. The high prices of overall marketplace access fees, including FBA, is how Amazon generates cash from its Marketplace and retail operations. From 2014 to 2020, the amount it charges third party sellers grew from $11.75 billion to more than $80 billion. “Seller fees now account for 21% of Amazon’s total corporate revenue,” noted Racine, also pointing out that its profit margins for Marketplace sales by third party sellers are four times higher than its own retail sales…Now, if this were all that was happening, sellers and brands could just sell outside of Amazon, avoid the 35-45% commission, and charge a lower price to entice customers. “Buy Cheaper at Walmart.com!” should be in ads all over the web. But it’s not. And that’s where the main claim from Racine comes in. Amazon uses its Buy Box algorithm to make sure that sellers can’t sell through a different store or even through their own site with a lower price and access Amazon customers, even if they would be able to sell it more cheaply. If they do, they get cut off from the Buy Box, and thus, cut off de facto from being able to sell on Amazon.
The net effect is that prices everywhere, not just on Amazon, are higher than they ordinarily would be.
So that’s how the scheme worked, and Racine was the first law enforcer to act. But others followed; Bonta filed his more comprehensive lawsuit in 2022. In 2023, Federal Trade Commission Chair Lina Khan filed against Amazon on similar grounds, though with more details and additional wrinkles. The FTC found that Amazon was running something called “Project Nessie” in which it would use its algorithm to encourage other online retailers, perhaps Walmart.com or Target.com, to raise prices on similar products.
All of these cases, as well as other similar ones, have passed the necessary legal hurdle to go to trial, but an actual remedy is years away. And Amazon keeps growing through this alleged illicit behavior, inflating prices not just on its own site, but across the retail landscape.
According to Bonta, Amazon has three primary methods of inflating prices. In the first one, if Amazon and a competitor are engaged in a price war over a product, Amazon will tell its vendor that sells to its rival to increase the price directly. In the second one, if a competitor is discounting an item, Amazon will ask it to stop through a vendor. And in the third, a vendor will stop selling a product for a lower price outside of Amazon, and Amazon will then raise its price.
This kind of arrangement is known as a “hub-and-spoke” conspiracy, or “vertical price-fixing,” because it’s cooperating on price through common customers or vendors. Such a scheme distinguishes it from direct collaboration among rivals, which is a more standard “horizontal” conspiracy. The relief requested by Bonta is extensive, but amounts to barring the company from making agreements through vendors to set pricing for the online retail economy and prohibiting the company from communicating with vendors about prices and terms for non-Amazon retailers. He is also seeking a monitor to ensure Amazon stops the bad behavior.
What makes it a big deal is that it’s a request for a temporary injunction right now, meant to last until the trial process concludes or it’s otherwise lifted. Judges only grant such injunctions when they think that a party is likely going to lose, the immediate harm of the behavior is significant, and the public interest is served. While we can’t see most of the evidence because it’s redacted, Bonta must really believe he’s got the goods. And if he succeeds in this gambit, it almost certainly means Amazon has violated antitrust law on a major line of business. It also flips the incentives, because Amazon will have less of an incentive to delay a trial. Instead, it will be subject to this injunction until the trial concludes. So it may stop trying dilatory tactics.
There’s one last observation about the complaint. Again, it’s redacted, but Bonta is hinting at Amazon’s internal process to hide what it is doing.
And that wouldn’t be surprising, since the FTC has told the judge in its case that top Amazon officials, including Jeff Bezos, have been destroying evidence.
According to Law.com: “The FTC said in a heavily redacted brief on Friday that it’s missing both the ‘raw notes’ of important meetings and key messages from the Signal apps of Bezos and other senior executives, who, in some instances, set messages to automatically delete in ‘as short as ten seconds or one minute.’”
That kind of behavior is the digital equivalent of shredding documents while under a legal hold, and evidence of lawlessness. And there’s a reason for that. For as long as I’ve been writing BIG, and years before that, laws have not really applied to the rich and powerful. But our work is bearing fruit. And it’s not just Amazon. Today, the Antitrust Division won a big legal motion on its price-fixing case against a meat conspiracy led by Agri-Stats, and the Ninth Circuit had a terrific ruling on a Robinson-Patman Act price discrimination suit. As the people elect new populist politicians, enforcers and plaintiff lawyers are developing the law and the cases to match their frustration.
There’s also a change in public attitudes. In years past, a company like Amazon used to be considered innovative and consumer-friendly. Today, it is understood as bureaucratic and coercive, a result of an environment of lawlessness. Americans are increasingly angry about the situation, seeing the Epstein class and the high inflation environment as a direct threat to their welfare, a conspiracy to extract. Because it is. And at least some elected leaders see that, and are acting to stop it.
Thanks for reading! Your tips make this newsletter what it is, so please send me tips on weird monopolies, stories I’ve missed, or other thoughts. And if you liked this issue of BIG, you can sign up here for more issues, a newsletter on how to restore fair commerce, innovation, and democracy. Consider becoming a paying subscriber to support this work, or if you are a paying subscriber, giving a gift subscription to a friend, colleague, or family member. If you really liked it, read my book, Goliath: The 100-Year War Between Monopoly Power and Democracy.
...
Read the original on www.thebignewsletter.com »
Apple today announced a significant expansion of factory operations in Houston, bringing the future production of Mac mini to the U.S. for the first time. The company will also expand advanced AI server manufacturing at the factory and provide hands-on training at its new Advanced Manufacturing Center beginning later this year. Altogether, Apple's Houston operations will create thousands of jobs.
“Apple is deeply committed to the future of American manufacturing, and we’re proud to significantly expand our footprint in Houston with the production of Mac mini starting later this year,” said Tim Cook, Apple’s CEO. “We began shipping advanced AI servers from Houston ahead of schedule, and we’re excited to accelerate that work even further.”
In Houston, workers assemble advanced AI servers, including logic boards produced onsite, which are then used in Apple data centers in the U.S.
For more than two decades, users around the world have relied on the incredibly popular Mac mini for the tremendous power it packs into its ultra-compact design. With its next-level AI capabilities, it has become an essential tool for everyone from students and aspiring creatives to small business owners. Beginning later this year, Mac mini will be produced at a new factory on Apple’s Houston manufacturing site, doubling the campus’s footprint.
Apple began producing advanced AI servers in Houston in 2025 for the first time, and production is already ahead of schedule. Servers assembled in Houston — including logic boards produced onsite — are used in Apple data centers around the country.
Beyond production, Apple is investing in the workforce that will drive American manufacturing forward. Later this year, Apple’s 20,000-square-foot Advanced Manufacturing Center is scheduled to open its doors in Houston. Currently under construction, the dedicated facility will provide hands-on training in advanced manufacturing techniques to students, supplier employees, and American businesses of all sizes. Apple experts will teach participants the same innovative processes that are used to make Apple products, allowing American manufacturers to take their work to the next level.
Apple's 20,000-square-foot Advanced Manufacturing Center opens later this year, and will provide hands-on training to students, supplier employees, and U.S. businesses of all sizes.
Since announcing its $600 billion commitment to the U.S. last year, Apple and its American Manufacturing Program partners have already reached several milestones:
Apple exceeded its target and sourced more than 20 billion U.S.-made chips from 24 factories across 12 states, including those of partners like TSMC, Broadcom, and Texas Instruments.
GlobalWafers has begun production at its new $4 billion bare silicon wafer facility in Sherman, Texas. At Apple’s direction, wafers produced in Sherman will be used by Apple’s chip manufacturing partners in the U.S., including TSMC and Texas Instruments.
Supported by Apple’s investment, Amkor broke ground on its new $7 billion semiconductor advanced packaging and test facility in Peoria, Arizona, where Apple will be the first and largest customer.
Corning’s Harrodsburg, Kentucky, facility is now 100 percent dedicated to cover glass for iPhone and Apple Watch shipped globally, and by the end of this year, every new iPhone and Apple Watch will have cover glass made in the state.
In 2026, Apple is on track to purchase well over 100 million advanced chips produced by TSMC at its Arizona facility — a significant increase from 2025.
Apple opened its Apple Manufacturing Academy in Detroit, which is already supporting more than 130 small- and medium-sized American manufacturers with hands-on training in AI, automation, and smart manufacturing. The academy recently expanded with new virtual programming, giving businesses across the country on-demand access to the curriculum developed by Apple experts and Michigan State University faculty.
About Apple
Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple’s six software platforms — iOS, iPadOS, macOS, watchOS, visionOS, and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay, iCloud, and Apple TV. Apple’s more than 150,000 employees are dedicated to making the best products on earth and to leaving the world better than we found it.
...
Read the original on www.apple.com »
Denmark's tech modernization agency plans to replace Microsoft products with open-source software to reduce dependence on U.S. tech firms.
In an interview with the local newspaper Politiken, Danish Minister for Digitalisation Caroline Stage Olsen confirmed that over half of the ministry’s staff will switch from Microsoft Office to LibreOffice next month, with a full transition to open-source software by the end of the year.
“If everything goes as expected, all employees will be on an open-source solution during the autumn,” Politiken reported, quoting Stage. The move would also help the ministry avoid the expense of managing outdated Windows 10 systems, which will lose official support in October.
LibreOffice, developed by the Berlin-based non-profit organization The Document Foundation, is available for Windows, macOS, and is the default office suite on many Linux systems. The suite includes tools for word processing, spreadsheets, presentations, vector graphics, databases, and formula editing. Stage said that the ministry could revert to Microsoft products if the transition proves too complex.
Microsoft had not responded to Recorded Future News’ request for comment as of Friday morning, Eastern U.S. time.
The ministry’s decision follows similar moves by Denmark’s two largest municipalities, Copenhagen and Aarhus, which previously announced plans to abandon Microsoft software, citing financial concerns, market dominance and political tensions with Washington. Proponents refer to the process as moving toward “digital sovereignty.”
Henrik Appel Espersen, chair of Copenhagen’s audit committee, told Politiken the move was driven by cost concerns and Microsoft’s strong grip on the market. He also cited tensions between the U.S. and Denmark during Donald Trump’s presidency, which sparked debate about data protection and reducing reliance on foreign technology.
The shift comes amid a wider European trend toward digital independence. This week, the German state of Schleswig-Holstein said that local government agencies will abandon Microsoft Office tools such as Word and Excel in favor of LibreOffice, while Open-Xchange will replace Microsoft Outlook for email and calendar functions. The state plans to complete the shift by migrating to the Linux operating system in the coming years.
Schleswig-Holstein first announced its decision to abandon Microsoft last April, saying it would be “the first state to introduce a digitally sovereign IT workplace.” “Independent, sustainable, secure: Schleswig-Holstein will be a digital pioneer region,” the state’s Minister-President said at the time.
...
Read the original on therecord.media »
Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, is dropping the central pledge of its flagship safety policy, company officials tell TIME. In 2023, Anthropic committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate. For years, its leaders touted that promise—the central pillar of their Responsible Scaling Policy (RSP)—as evidence that they are a responsible company that would withstand market incentives to rush to develop a potentially dangerous technology.
But in recent months the company decided to radically overhaul the RSP. That decision included scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance. “We felt that it wouldn’t actually help anyone for us to stop training AI models,” Anthropic’s chief science officer Jared Kaplan told TIME in an exclusive interview. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.” The new version of the policy, which TIME reviewed, includes commitments to be more transparent about the safety risks of AI, including making additional disclosures about how Anthropic’s own models fare in safety testing. It commits to matching or surpassing the safety efforts of competitors. And it promises to “delay” Anthropic’s AI development if leaders both consider Anthropic to be the leader of the AI race and think the risks of catastrophe to be significant.
But overall, the change to the RSP leaves Anthropic far less constrained by its own safety policies, which previously categorically barred it from training models above a certain level if appropriate safety measures weren’t already in place. The change comes as Anthropic, previously considered to be behind OpenAI in the AI race, rides the high of a string of technological and commercial successes. Its Claude models, especially the software-writing tool Claude Code, have won legions of devoted fans. In February, Anthropic raised $30 billion in new investments, valuing it at some $380 billion, and reported that its annualized revenue was growing at a rate of 10x per year. The company’s core business model of selling direct to businesses is seen by many investors as more credible than OpenAI’s main strategy of monetizing a vast consumer user base.
When Anthropic introduced the RSP in 2023, Kaplan says, the company hoped it would encourage rivals to adopt similar measures. (No rivals made quite as overt a promise to pause AI development, but many published lengthy reports detailing their plans to mitigate risk, which Kaplan chalks up as Anthropic exerting a good influence on the industry.) Executives also hoped the approach might eventually serve as a blueprint for binding national regulations or even international treaties, Kaplan claims. But those regulations never materialized. Instead, the Trump Administration has endorsed a let-it-rip attitude to AI development, even going so far as to attempt to nullify state regulations. No federal AI law is on the horizon. And while a global governance framework may have seemed possible in 2023, three years later it has become clear that door has closed. Meanwhile, competition for AI supremacy—between companies but also between nations—has only intensified.
To make matters worse, the science of AI evaluations has proven more complicated than Anthropic expected when it first crafted the RSP. The arrival of powerful new models meant that, in 2025, Anthropic announced it could not rule out the possibility of these models facilitating a bio-terrorist attack. But while they couldn’t rule it out, they also lacked strong scientific evidence that models did pose that kind of danger, which made it difficult to convince governments and rivals of what they saw as the need to act carefully. What the company had previously imagined might look like a bright red line was instead coming into focus as a fuzzy gradient. For nearly a year, Anthropic executives discussed ways to reshape their flagship safety policy to match this new environment, Kaplan says. One point they kept coming back to was their founding premise: the idea that to do proper AI safety research, they had to build models at the frontier of capability—even though doing so might accelerate the arrival of the dangers they feared.
In February, according to Kaplan, Anthropic CEO Dario Amodei decided that keeping the company from training new models while competitors raced ahead would be helpful to nobody. “If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe,” the new version of the RSP, approved unanimously by Amodei and Anthropic’s board, states in its introduction. “The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research.”
Chris Painter, the director of policy at METR, a nonprofit focused on evaluating AI models for risky behavior, reviewed an early draft of the policy with Anthropic’s permission. He says the change is understandable — but also a bearish signal for the world’s ability to navigate potential AI catastrophes. The change to the RSP shows Anthropic “believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities,” Painter tells TIME. “This is more evidence that society is not prepared for the potential catastrophic risks posed by AI.”
Anthropic argues the retooled RSP is designed to keep the biggest benefits of the old one. For example, by constraining itself from releasing new models, Anthropic’s original RSP also incentivized it to quickly build safety mitigations. (Because otherwise the company would be unable to sell its AI to customers.) Anthropic says it believes it can maintain that incentive. The new policy commits the company to regularly release what it calls “Frontier Safety Roadmaps”: documents laying out a list of detailed goals for future safety measures it hopes to build. “We hope to create a forcing function for work that would otherwise be challenging to appropriately prioritize and resource, as it requires collaboration (and in some cases sacrifices) from multiple parts of the company and can be at cross-purposes with immediate competitive and commercial priorities,” the new RSP states. Anthropic says it will also commit to publishing so-called “Risk Reports” every three to six months. The reports, the company says, will “explain how capabilities, threat models (the specific ways that models might pose threats), and active risk mitigations fit together, and provide an assessment of the overall level of risk.” These documents will be more in-depth than the reports the company already publishes, a spokesperson tells TIME.
“I like the emphasis on transparent risk reporting and publicly verifiable safety roadmaps,” says Painter, the METR policy official. But he said he was “concerned” that moving away from binary thresholds under the previous RSP, by which the arrival of a certain capability could act as a tripwire to temporarily halt Anthropic’s AI development, might enable a “frog-boiling” effect, where danger slowly ramps up without a single moment that sets off alarms. Asked whether Anthropic was caving to market pressure, Kaplan argued that, in fact, Anthropic was making a renewed commitment to developing AI safely. “If all of our competitors are transparently doing the right thing when it comes to catastrophic risk, we are committed to doing as well or better,” he said. “But we don’t think it makes sense for us to stop engaging with AI research, AI safety, and most likely lose relevance as an innovator who understands the frontier of the technology, in a scenario where others are going ahead and we’re not actually contributing any additional risk to the ecosystem.”
...
Read the original on time.com »
There are many coding agents, but this one is mine.
Pi is a minimal terminal coding harness. Adapt pi to your workflows, not the other way around. Extend it with TypeScript extensions, skills, prompt templates, and themes. Bundle them as pi packages and share via npm or git.
Pi ships with powerful defaults but skips features like sub-agents and plan mode. Ask pi to build what you want, or install a package that does it your way.
Four modes: interactive, print/JSON, RPC, and SDK. See clawdbot for a real-world integration.
Anthropic, OpenAI, Google, Azure, Bedrock, Mistral, Groq, Cerebras, xAI, Hugging Face, Kimi For Coding, MiniMax, OpenRouter, Ollama, and more. Authenticate via API keys or OAuth.
Switch models mid-session with /model or Ctrl+L. Cycle through your favorites with Ctrl+P.
Add custom providers and models via models.json or extensions.
Sessions are stored as trees. Use /tree to navigate to any previous point and continue from there. All branches live in a single file. Filter by message type, label entries as bookmarks.
Export to HTML with /export, or upload to a GitHub gist with /share and get a shareable URL that renders it.
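As a rough illustration of the idea (this is not pi’s actual on-disk format), tree-shaped sessions can be kept as a flat list of entries with parent pointers — so every branch lives in one file — and any branch is reconstructed by walking those pointers back to the root:

```typescript
// Illustrative sketch only: the Entry shape is invented, not pi's real format.
// Each entry records its parent id; the whole tree is one flat array.
interface Entry {
  id: number;
  parent: number | null; // null marks the session root
  text: string;
}

// Reconstruct the path from the root to a given leaf by walking parents.
function branch(entries: Entry[], leafId: number): string[] {
  const byId = new Map(entries.map((e) => [e.id, e]));
  const path: string[] = [];
  let e = byId.get(leafId);
  while (e) {
    path.unshift(e.text);
    e = e.parent === null ? undefined : byId.get(e.parent);
  }
  return path;
}
```

Navigating with something like /tree then amounts to picking a leaf and replaying its branch.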
Pi’s minimal system prompt and extensibility let you do actual context engineering. Control what goes into the context window and how it’s managed.
AGENTS.md: Project instructions loaded at startup from ~/.pi/agent/, parent directories, and the current directory.
SYSTEM.md: Replace or append to the default system prompt per-project.
Compaction: Auto-summarizes older messages when approaching the context limit. Fully customizable via extensions: implement topic-based compaction, code-aware summaries, or use different summarization models.
Skills: Capability packages with instructions and tools, loaded on-demand. Progressive disclosure without busting the prompt cache. See skills.
Prompt templates: Reusable prompts as Markdown files. Type /name to expand. See prompt templates.
Dynamic context: Extensions can inject messages before each turn, filter the message history, implement RAG, or build long-term memory.
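A compaction strategy of the kind described above can be sketched generically. The Message shape, the 4-chars-per-token heuristic, and the summarize callback below are illustrative assumptions, not pi’s API:

```typescript
// Illustrative sketch of compaction: when history nears the context limit,
// fold the oldest messages into a single summary entry. The Message shape,
// token heuristic, and summarize callback are assumptions, not pi's API.
interface Message {
  role: "user" | "assistant" | "summary";
  text: string;
}

// Crude token estimate: roughly 4 characters per token.
const approxTokens = (m: Message): number => Math.ceil(m.text.length / 4);

function compact(
  history: Message[],
  limit: number,
  summarize: (older: Message[]) => string
): Message[] {
  const total = history.reduce((n, m) => n + approxTokens(m), 0);
  if (total <= limit) return history;
  // Keep the newest messages within half the budget; summarize the rest.
  let budget = limit / 2;
  let cut = history.length;
  for (let i = history.length - 1; i >= 0; i--) {
    const t = approxTokens(history[i]);
    if (t > budget) break;
    budget -= t;
    cut = i;
  }
  const older = history.slice(0, cut);
  if (older.length === 0) return history;
  return [{ role: "summary", text: summarize(older) }, ...history.slice(cut)];
}
```

Swapping in topic-based or code-aware compaction means replacing the summarize callback and the cut heuristic.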
Submit messages while the agent works. Enter sends a steering message (delivered after current tool, interrupts remaining tools). Alt+Enter sends a follow-up (waits until the agent finishes).
Features that other agents bake in, you can build yourself. Extensions are TypeScript modules with access to tools, commands, keyboard shortcuts, events, and the full TUI.
Don’t want to build it? Ask pi to build it for you. Or install a package that does it your way. See the 50+ examples.
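To make the idea concrete, here is a toy extension against an invented context interface — pi’s real TypeScript API will differ, so treat every name below as a made-up stand-in and consult the docs for the real thing:

```typescript
// Hypothetical shape of an extension module, invented for illustration only.
// The idea: an extension registers commands and event hooks on a context.
interface ExtensionContext {
  registerCommand(name: string, run: (args: string) => string): void;
  on(event: "turn-start", hook: (prompt: string) => string): void;
}

// A toy extension: a /shout command, plus a hook tagging every turn.
function activate(ctx: ExtensionContext): void {
  ctx.registerCommand("shout", (args) => args.toUpperCase());
  ctx.on("turn-start", (prompt) => `[toy-ext] ${prompt}`);
}

// Minimal in-memory harness so the sketch can be exercised without pi.
function makeTestContext() {
  const commands = new Map<string, (args: string) => string>();
  const hooks: Array<(p: string) => string> = [];
  const ctx: ExtensionContext = {
    registerCommand: (name, run) => void commands.set(name, run),
    on: (_event, hook) => void hooks.push(hook),
  };
  return {
    ctx,
    runCommand: (name: string, args: string) => commands.get(name)!(args),
    startTurn: (p: string) => hooks.reduce((acc, h) => h(acc), p),
  };
}
```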
Bundle extensions, skills, prompts, and themes as packages, and install them from npm or git.
Pin versions with @1.2.3 or @tag. Update all with pi update, list with pi list, configure with pi config.
Find packages on npm or Discord. Share yours with the pi-package keyword.
RPC: JSON protocol over stdin/stdout for non-Node integrations. See docs/rpc.md.
SDK: Embed pi in your apps. See clawdbot for a real-world example.
Pi is aggressively extensible so it doesn’t have to dictate your workflow. Features that other tools bake in can be built with extensions, skills, or installed from third-party pi packages. This keeps the core minimal while letting you shape pi to fit how you work.
No MCP. Build CLI tools with READMEs (see Skills), or build an extension that adds MCP support. Why?
No sub-agents. There are many ways to do this. Spawn pi instances via tmux, build your own with extensions, or install a package that does it your way.
No permission popups. Run in a container, or build your own confirmation flow with extensions inline with your environment and security requirements.
No plan mode. Write plans to files, or build it with extensions, or install a package.
No built-in to-dos. Use a TODO.md file, or build your own with extensions.
Read the blog post for the full rationale.
Docs: README and docs/ for everything else.
...
*This post was updated at 12:35 pm PT to fix a typo in the build time benchmarks.
Last week, one engineer and an AI model rebuilt the most popular front-end framework from scratch. The result, vinext (pronounced “vee-next”), is a drop-in replacement for Next.js, built on Vite, that deploys to Cloudflare Workers with a single command. In early benchmarks, it builds production apps up to 4x faster and produces client bundles up to 57% smaller. And we already have customers running it in production.
The whole thing cost about $1,100 in tokens.
Next.js is the most popular React framework. Millions of developers use it. It powers a huge chunk of the production web, and for good reason. The developer experience is top-notch.
But Next.js has a deployment problem when used in the broader serverless ecosystem. The tooling is entirely bespoke: Next.js has invested heavily in Turbopack but if you want to deploy it to Cloudflare, Netlify, or AWS Lambda, you have to take that build output and reshape it into something the target platform can actually run.
If you’re thinking: “Isn’t that what OpenNext does?”, you are correct.
That is indeed the problem OpenNext was built to solve. And a lot of engineering effort has gone into OpenNext from multiple providers, including us at Cloudflare. It works, but quickly runs into limitations and becomes a game of whack-a-mole.
Building on top of Next.js output as a foundation has proven difficult and fragile. Because OpenNext has to reverse-engineer Next.js’s build output, changes between versions are unpredictable and take significant work to correct.
Next.js has been working on a first-class adapters API, and we’ve been collaborating with them on it. It’s still an early effort but even with adapters, you’re still building on the bespoke Turbopack toolchain. And adapters only cover build and deploy. During development, next dev runs exclusively in Node.js with no way to plug in a different runtime. If your application uses platform-specific APIs like Durable Objects, KV, or AI bindings, you can’t test that code in dev without workarounds.
What if instead of adapting Next.js output, we reimplemented the Next.js API surface on Vite directly? Vite is the build tool used by most of the front-end ecosystem outside of Next.js, powering frameworks like Astro, SvelteKit, Nuxt, and Remix. A clean reimplementation, not merely a wrapper or adapter. We honestly didn’t think it would work. But it’s 2026, and the cost of building software has completely changed.
We got a lot further than we expected.
Replace next with vinext in your scripts and everything else stays the same. Your existing app/, pages/, and next.config.js work as-is.
vinext dev # Development server with HMR
vinext build # Production build
vinext deploy # Build and deploy to Cloudflare Workers
This is not a wrapper around Next.js and Turbopack output. It’s an alternative implementation of the API surface: routing, server rendering, React Server Components, server actions, caching, middleware. All of it built on top of Vite as a plugin. Most importantly, Vite output runs on any platform thanks to the Vite Environment API.
Early benchmarks are promising. We compared vinext against Next.js 16 using a shared 33-route App Router application.
Both frameworks are doing the same work: compiling, bundling, and preparing server-rendered routes. We disabled TypeScript type checking and ESLint in Next.js’s build (Vite doesn’t run these during builds), and used force-dynamic so Next.js doesn’t spend extra time pre-rendering static routes, which would unfairly slow down its numbers. The goal was to measure only bundler and compilation speed, nothing else. Benchmarks run on GitHub CI on every merge to main.
These benchmarks measure compilation and bundling speed, not production serving performance. The test fixture is a single 33-route app, not a representative sample of all production applications. We expect these numbers to evolve as all three projects continue to develop. The full methodology and historical results are public. Take them as directional, not definitive.
The direction is encouraging, though. Vite’s architecture, and especially Rolldown (the Rust-based bundler coming in Vite 8), has structural advantages for build performance that show up clearly here.
vinext is built with Cloudflare Workers as the first deployment target. A single command takes you from source code to a running Worker:
vinext deploy
This handles everything: builds the application, auto-generates the Worker configuration, and deploys. Both the App Router and Pages Router work on Workers, with full client-side hydration, interactive components, client-side navigation, React state.
For production caching, vinext includes a Cloudflare KV cache handler that gives you ISR (Incremental Static Regeneration) out of the box.
KV is a good default for most applications, but the caching layer is designed to be pluggable. That setCacheHandler call means you can swap in whatever backend makes sense. R2 might be a better fit for apps with large cached payloads or different access patterns. We’re also working on improvements to our Cache API that should provide a strong caching layer with less configuration. The goal is flexibility: pick the caching strategy that fits your app.
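A pluggable cache layer of this kind can be sketched as follows. Only the name setCacheHandler comes from the post; the CacheHandler interface, the in-memory backend (standing in for KV or R2), and cachedRender are assumptions about the general shape, not vinext’s implementation:

```typescript
// Sketch of a pluggable cache layer. Only the name setCacheHandler appears
// in the post; everything else here is an illustrative assumption.
interface CacheHandler {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

let handler: CacheHandler;

function setCacheHandler(h: CacheHandler): void {
  handler = h;
}

// In-memory backend; a KV- or R2-backed one would implement the same shape.
function memoryCacheHandler(): CacheHandler {
  const store = new Map<string, string>();
  return {
    async get(key) { return store.get(key); },
    async set(key, value) { store.set(key, value); },
  };
}

// ISR-flavored read path: serve from cache, render on miss, cache the result.
async function cachedRender(
  path: string,
  render: (p: string) => Promise<string>
): Promise<string> {
  const hit = await handler.get(path);
  if (hit !== undefined) return hit;
  const html = await render(path);
  await handler.set(path, html);
  return html;
}
```

Swapping KV for R2 would mean providing a different CacheHandler; the render path stays unchanged.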
We also have a live example of Cloudflare Agents running in a Next.js app, without the need for workarounds like getPlatformProxy, since the entire app now runs in workerd, during both dev and deploy phases. This means being able to use Durable Objects, AI bindings, and every other Cloudflare-specific service without compromise. Have a look here.
The current deployment target is Cloudflare Workers, but that’s a small part of the picture. Something like 95% of vinext is pure Vite. The routing, the module shims, the SSR pipeline, the RSC integration: none of it is Cloudflare-specific.
Cloudflare is looking to work with other hosting providers about adopting this toolchain for their customers (the lift is minimal — we got a proof-of-concept working on Vercel in less than 30 minutes!). This is an open-source project, and for its long term success, we believe it’s important we work with partners across the ecosystem to ensure ongoing investment. PRs from other platforms are welcome. If you’re interested in adding a deployment target, open an issue or reach out.
We want to be clear: vinext is experimental. It’s not even one week old, and it has not yet been battle-tested with any meaningful traffic at scale. If you’re evaluating it for a production application, proceed with appropriate caution.
That said, the test suite is extensive: over 1,700 Vitest tests and 380 Playwright E2E tests, including tests ported directly from the Next.js test suite and OpenNext’s Cloudflare conformance suite. We’ve verified it against the Next.js App Router Playground. Coverage sits at 94% of the Next.js 16 API surface.
Early results from real-world customers are encouraging. We’ve been working with National Design Studio, a team that’s aiming to modernize every government interface, on one of their beta sites, CIO.gov. They’re already running vinext in production, with meaningful improvements in build times and bundle sizes.
The README is honest about what’s not supported and won’t be, and about known limitations. We want to be upfront rather than overpromise.
vinext already supports Incremental Static Regeneration (ISR) out of the box. After the first request to any page, it’s cached and revalidated in the background, just like Next.js. That part works today.
vinext does not yet support static pre-rendering at build time. In Next.js, pages without dynamic data get rendered during next build and served as static HTML. If you have dynamic routes, you use generateStaticParams() to enumerate which pages to build ahead of time. vinext doesn’t do that… yet.
This was an intentional design decision for launch. It’s on the roadmap, but if your site is 100% prebuilt HTML with static content, you probably won’t see much benefit from vinext today. That said, if one engineer can spend $1,100 in tokens and rebuild Next.js, you can probably spend $10 and migrate to a Vite-based framework designed specifically for static content, like Astro (which also deploys to Cloudflare Workers).
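For reference, the generateStaticParams() mechanism mentioned above looks like this in a Next.js App Router project; the route path and slug list are made up for the example:

```typescript
// app/products/[slug]/page.tsx (Next.js) — generateStaticParams tells the
// build which dynamic routes to pre-render. The slugs here are illustrative.
export async function generateStaticParams(): Promise<{ slug: string }[]> {
  // In a real app this would query a CMS or database at build time.
  const slugs = ["red-mug", "blue-mug", "tote-bag"];
  return slugs.map((slug) => ({ slug }));
}
```

Each returned object becomes one statically rendered page at build time — which is exactly the build-vs-traffic coupling discussed below.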
For sites that aren’t purely static, though, we think we can do something better than pre-rendering everything at build time.
Next.js pre-renders every page listed in generateStaticParams() during the build. A site with 10,000 product pages means 10,000 renders at build time, even though 99% of those pages may never receive a request. Builds scale linearly with page count. This is why large Next.js sites end up with 30-minute builds.
So we built Traffic-aware Pre-Rendering (TPR). It’s experimental today, and we plan to make it the default once we have more real-world testing behind it.
The idea is simple. Cloudflare is already the reverse proxy for your site. We have your traffic data. We know which pages actually get visited. So instead of pre-rendering everything or pre-rendering nothing, vinext queries Cloudflare’s zone analytics at deploy time and pre-renders only the pages that matter.
vinext deploy --experimental-tpr
Building…
Build complete (4.2s)
TPR (experimental): Analyzing traffic for my-store.com (last 24h)
TPR: 12,847 unique paths — 184 pages cover 90% of traffic
TPR: Pre-rendering 184 pages…
TPR: Pre-rendered 184 pages in 8.3s → KV cache
Deploying to Cloudflare Workers…
For a site with 100,000 product pages, the power law means 90% of traffic usually goes to 50 to 200 pages. Those get pre-rendered in seconds. Everything else falls back to on-demand SSR and gets cached via ISR after the first request. Every new deploy refreshes the set based on current traffic patterns. Pages that go viral get picked up automatically. All of this works without generateStaticParams() and without coupling your build to your production database.
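The selection step — find the smallest set of pages covering a target share of traffic — reduces to a greedy cut over hit counts sorted in descending order. This is an illustrative sketch, not vinext’s actual implementation:

```typescript
// Given per-path hit counts, pick the smallest set of pages covering a
// target share of traffic. Illustrative only — not vinext's implementation.
function pagesToPrerender(
  hits: Record<string, number>,
  share = 0.9
): string[] {
  const entries = Object.entries(hits).sort((a, b) => b[1] - a[1]);
  const total = entries.reduce((n, [, count]) => n + count, 0);
  const chosen: string[] = [];
  let covered = 0;
  for (const [path, count] of entries) {
    if (covered >= total * share) break; // target share reached
    chosen.push(path);
    covered += count;
  }
  return chosen;
}
```

With a power-law distribution, the chosen set stays small even when the path universe is huge, which is why 184 pages can cover 90% of 12,847 paths.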
A project like this would normally take a team of engineers months, if not years. Several teams at various companies have attempted it, and the scope is just enormous. We tried once at Cloudflare! Two routers, 33+ module shims, server rendering pipelines, RSC streaming, file-system routing, middleware, caching, static export. There’s a reason nobody has pulled it off.
This time we did it in under a week. One engineer (technically, an engineering manager) directing AI.
The first commit landed on February 13. By the end of that same evening, both the Pages Router and App Router had basic SSR working, along with middleware, server actions, and streaming. By the next afternoon, App Router Playground was rendering 10 of 11 routes. By day three, vinext deploy was shipping apps to Cloudflare Workers with full client hydration. The rest of the week was hardening: fixing edge cases, expanding the test suite, bringing API coverage to 94%.
What changed from those earlier attempts? AI got better. Way better.
Not every project would go this way. This one did because a few things happened to line up at the right time.
Next.js is well-specified. It has extensive documentation, a massive user base, and years of Stack Overflow answers and tutorials. The API surface is all over the training data. When you ask Claude to implement getServerSideProps or explain how useRouter works, it doesn’t hallucinate. It knows how Next works.
Next.js has an elaborate test suite. The Next.js repo contains thousands of E2E tests covering every feature and edge case. We ported tests directly from their suite (you can see the attribution in the code). This gave us a specification we could verify against mechanically.
Vite is an excellent foundation. Vite handles the hard parts of front-end tooling: fast HMR, native ESM, a clean plugin API, production bundling. We didn’t have to build a bundler. We just had to teach it to speak Next.js. @vitejs/plugin-rsc is still early, but it gave us React Server Components support without having to build an RSC implementation from scratch.
The models caught up. We don’t think this would have been possible even a few months ago. Earlier models couldn’t sustain coherence across a codebase this size. New models can hold the full architecture in context, reason about how modules interact, and produce correct code often enough to keep momentum going. At times, I saw it go into Next, Vite, and React internals to figure out a bug. The state-of-the-art models are impressive, and they seem to keep getting better.
All of those things had to be true at the same time. Well-documented target API, comprehensive test suite, solid build tool underneath, and a model that could actually handle the complexity. Take any one of them away and this doesn’t work nearly as well.
Almost every line of code in vinext was written by AI. But here’s the thing that matters more: every line passes the same quality gates you’d expect from human-written code. The project has 1,700+ Vitest tests, 380 Playwright E2E tests, full TypeScript type checking via tsgo, and linting via oxlint. Continuous integration runs all of it on every pull request. Establishing a set of good guardrails is critical to making AI productive in a codebase.
The process started with a plan. I spent a couple of hours going back and forth with Claude in OpenCode to define the architecture: what to build, in what order, which abstractions to use. That plan became the north star. From there, the workflow was straightforward:
Let the AI write the implementation and tests. If tests pass, merge. If not, give the AI the error output and let it iterate.
We wired up AI agents for code review too. When a PR was opened, an agent reviewed it. When review comments came back, another agent addressed them. The feedback loop was mostly automated.
It didn’t work perfectly every time. There were PRs that were just wrong. The AI would confidently implement something that seemed right but didn’t match actual Next.js behavior. I had to course-correct regularly. Architecture decisions, prioritization, knowing when the AI was headed down a dead end: that was all me. When you give AI good direction, good context, and good guardrails, it can be very productive. But the human still has to steer.
For browser-level testing, I used agent-browser to verify actual rendered output, client-side navigation, and hydration behavior. Unit tests miss a lot of subtle browser issues. This caught them.
Over the course of the project, we ran over 800 sessions in OpenCode. Total cost: roughly $1,100 in Claude API tokens.
Why do we have so many layers in the stack? This project forced me to think deeply about this question. And to consider how AI impacts the answer.
Most abstractions in software exist because humans need help. We couldn’t hold the whole system in our heads, so we built layers to manage the complexity for us. Each layer made the next person’s job easier. That’s how you end up with frameworks on top of frameworks, wrapper libraries, thousands of lines of glue code.
AI doesn’t have the same limitation. It can hold the whole system in context and just write the code. It doesn’t need an intermediate framework to stay organized. It just needs a spec and a foundation to build on.
It’s not clear yet which abstractions are truly foundational and which ones were just crutches for human cognition. That line is going to shift a lot over the next few years. But vinext is a data point. We took an API contract, a build tool, and an AI model, and the AI wrote everything in between. No intermediate framework needed. We think this pattern will repeat across a lot of software. The layers we’ve built up over the years aren’t all going to make it.
Thanks to the Vite team. Vite is the foundation this whole thing stands on. @vitejs/plugin-rsc is still early days, but it gave me RSC support without having to build that from scratch, which would have been a dealbreaker. The Vite maintainers were responsive and helpful as I pushed the plugin into territory it hadn’t been tested in before.
We also want to acknowledge the Next.js team. They’ve spent years building a framework that raised the bar for what React development could look like. The fact that their API surface is so well-documented and their test suite so comprehensive is a big part of what made this project possible. vinext wouldn’t exist without the standard they set.
vinext includes an Agent Skill that handles migration for you. It works with Claude Code, OpenCode, Cursor, Codex, and dozens of other AI coding tools. Install it, then open your Next.js project in any supported tool and tell the AI to migrate.
The skill handles compatibility checking, dependency installation, config generation, and dev server startup. It knows what vinext supports and will flag anything that needs manual attention.
Or if you prefer doing it by hand:
npx vinext init # Migrate an existing Next.js project
npx vinext dev # Start the dev server
npx vinext deploy # Ship to Cloudflare Workers
The source is at github.com/cloudflare/vinext. Issues, PRs, and feedback are welcome.
...
Read the original on blog.cloudflare.com »
I’ve been a .com purist for over two decades of building. Once, I broke that rule and bought a .online TLD for a small project. This is the story of how it went up in flames.
Update: Within 40 minutes of posting this on HN, the site has been removed from Google’s Safe Browsing blacklist. Thank you, unknown Google hero! I’ve emailed Radix to remove the darn serverHold.
Update 2: The site is finally back online. Not linking here as I don’t want this to look like a marketing stunt. Link at the bottom if you’re curious. [4]
Earlier this year, Namecheap was running a promo that let you choose one free .online or .site per account. I was working on a small product and thought, “hey, why not?” The app was a small browser, and the .online TLD just made sense in my head.
After a tiny $0.20 to cover ICANN fees, and hooking it up to Cloudflare and GitHub, I was up and running. Or so I thought.
Poking around traffic data for an unrelated domain many weeks after the purchase, I noticed there were zero visitors to the site in the last 48 hours. Loading it up led to the dreaded, all red, full page “This is an unsafe site” notice on both Firefox and Chrome. The site had a link to the App Store, some screenshots (no gore or violence or anything of that sort), and a few lines of text about the app, nothing else that could possibly cause this. [1]
Clicking through the disclaimers to load the actual site to check if it had been defaced, I was greeted with a “site not found” error. Uh oh.
After checking that Cloudflare was still activated and the CF Worker was pointing to the domain, I went to the registrar first. Namecheap is not the picture of reliability, so it seemed like a good place to start. The domain showed up fine on my account with the right expiration date. The nameservers were correct and pointed to CF.
Maybe I had gotten it wrong, so I checked the WHOIS information online. Status: serverHold. Oh no…
At this point, I double checked to make sure I hadn’t received emails from the registry, registrar, host, or Google. Nada, nothing, zilch.
I emailed Namecheap to double check what was going on (even though it’s a serverHold [2], not a clientHold [3]). They responded in a few minutes with:
Cursing under my breath, as it confirms my worst fears, I promptly submitted a request to the abuse team at Radix, the registry in our case, who responded with:
Right, let’s get ourselves off the damned Safe Browsing blacklist, eh? How hard could it be?
Very hard, I've now come to learn. You need to verify the domain in Google Search Console before you can request a review and get the flag removed. But how do you get verified? By adding a DNS TXT or CNAME record. How will that work if the domain doesn't resolve? It won't.
As the situation stands, the registry won’t reactivate the domain unless Google removes the flag, and Google won’t remove the flag unless I verify that I own the domain, which I physically can’t.
I’ve tried reporting the false positive here, here and here, just in case it moves the needle.
I’ve also submitted a review request to the Safe Search team (totally different from Safe Browsing) in the hopes that it might trigger a re-review elsewhere. Instead I just get a "No valid pages were submitted" message from Google because nothing resolves on the domain.
As a last resort, I submitted a temporary release request to the registry so Google can review the site’s contents and, hopefully, remove the flag.
I’ve made a few mistakes here that I definitely won’t be making again.
* Buying a weird TLD. .com is the gold standard. I’m never buying anything else again. Once bitten and all that.
* Not adding the domain to Google Search Console immediately. I don’t need their analytics and wasn’t really planning on having any content on the domain, so I thought, why bother? Big, big mistake.
* Not adding any uptime observability. This was just a landing page, and I wanted as few moving parts as possible.
Both Radix (the registry) and Google deserve special mention for their hair-trigger bans and painful removal processes, with no notifications or grace period to fix the issue. I'm not sure whether it's the weird TLD that causes a shorter fuse or whether I was brigaded with reports earlier. I'll never know.
[1] A mirror can be found here to verify the site contents.
[2] serverHold is set by the registry and is a royal pain to deal with. Usually means things are FUBAR.
[3] clientHold is set by the registrar and is mostly payment or billing related.
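For what it's worth, the practical difference between [2] and [3] can be sketched as a check on a WHOIS status line. The function and sample line below are illustrative, not real tooling:

```shell
# hedged sketch: classify a domain's EPP hold status from a WHOIS status line.
# serverHold is set by the registry; clientHold by the registrar.
classify_hold() {
  case "$1" in
    *serverHold*) echo "registry suspension (serverHold): contact the registry" ;;
    *clientHold*) echo "registrar suspension (clientHold): contact the registrar" ;;
    *)            echo "no hold status found" ;;
  esac
}
classify_hold "Domain Status: serverHold https://icann.org/epp#serverHold"
```

Knowing which party set the hold tells you who can actually lift it, which is why emailing Namecheap about a serverHold was never going to help.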
...
Read the original on www.0xsid.com »
The app, called Nearby Glasses, has one sole purpose: Look for smart glasses nearby and warn you.
This app notifies you when smart glasses are nearby. It uses company identifiers in the Bluetooth data these devices broadcast, so false positives are likely (e.g. from VR headsets). Hence, please proceed with caution when approaching a nearby person wearing glasses. They might just be regular glasses, despite this app's warning.
The app's author, Yves Jeanrenaud, takes no liability whatsoever for this app or its functionality. Use at your own risk. By technical design, detecting Bluetooth LE devices might sometimes just not work as expected. I am not a trained developer; this is all written in my free time, with knowledge I taught myself.
False positives are likely. This means the app Nearby Glasses may notify you of smart glasses nearby when there is in fact a VR headset from the same manufacturer, or another product from that company's lineup. It may also miss smart glasses nearby. Again: I am not a professional developer.
However, this app is free and its source is available (though it's not considered FOSS due to the non-commercial restriction); you may review the code, change it, and re-use it (under the license).
The app Nearby Glasses does not store any details about you or collect any information about you or your phone. There is no telemetry, no ads, and no other nuisance. If you install the app via the Play Store, Google may know something about you and collect some stats, but the app itself does not.
If you choose to store (export) the logfile, where that data goes is entirely up to you, at your own liability. The logs are recorded only locally and not automatically shared with anyone. They contain little sensitive data; in fact, only the manufacturer ID codes of BLE devices encountered.
Use with extreme caution! As stated before: there is no guarantee that detected smart glasses are really nearby. It might be another device that looks technically similar (at the BLE advertising level) to smart glasses.
Please do not act rashly. Think before you act upon any messages (not only from this app).
* Because I consider smart glasses an intolerable, consent-neglecting, horrible piece of tech that is already being used to make tons of equally, truly disgusting ‘content’. 1, 2
* Some smart glasses feature a small LED signifying that a recording is going on. But this is easily disabled, while manufacturers claim to prevent that and take no responsibility at all (as tech has tended to do for decades now). 3
* Smart glasses have been used for instant facial recognition before 4 and reportedly will support it out of the box 5. This puts a lot of people in danger.
* I hope this app is useful for someone.
* It’s a simple, rather heuristic approach. Because BLE uses randomized MAC addresses, and neither the advertised identifiers nor the UUIDs of the service announcements are stable, you can’t just scan for the Bluetooth beacons. And to make things even more dire, some manufacturers, like Meta, use proprietary Bluetooth services whose UUIDs are not persistent, so we can only rely on the communicated device names for now.
* The currently most viable approach comes from the Bluetooth SIG assigned numbers repo. Following this, the manufacturer shows up as a numeric company ID in the advertising header (ADV) of BLE beacon packets.
* This is what BLE advertising frames look like (diagram in the original README).
* According to the Bluetooth SIG assigned numbers repo, we may use these company IDs:
0x01AB for Meta Platforms, Inc. (formerly Facebook)
0x0D53 for Luxottica Group S.p.A. (who manufactures the Meta Ray-Bans)
0x03C2 for Snap Inc., which makes the Snap Spectacles
These IDs are immutable and mandatory. Of course, Meta and the other manufacturers also have other products that come with Bluetooth and therefore carry their ID, e.g. VR headsets. So using these company ID codes for the app’s scanning process is prone to false positives. But if you can’t see someone wearing an Oculus Rift around you and there is nowhere one could hide, chances are good that it’s smart glasses instead.
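As a sketch of what matching on these company IDs involves: in a manufacturer-specific AD structure (type 0xFF), the 16-bit company ID is transmitted little-endian, so the two bytes have to be swapped before comparing against the list above. The function and example payload below are illustrative, not taken from the app:

```shell
# hedged sketch: extract the 16-bit company ID from one BLE AD structure.
# layout: [length] [AD type] [company ID LSB] [company ID MSB] [payload...]
extract_company_id() {
  set -- $1                      # split the space-separated hex bytes
  [ "$2" = "ff" ] || return 1    # 0xFF = manufacturer-specific data
  printf '0x%s%s\n' "$4" "$3" | tr 'a-f' 'A-F'   # swap LSB/MSB, uppercase
}
extract_company_id "05 ff ab 01 de ad"   # bytes ab 01, little-endian -> 0x01AB (Meta)
```

The app then only has to compare the extracted ID against its built-in (or user-overridden) list.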
* During pairing, smart glasses usually emit their product name, so we can scan for that, too. But it’s rare that we will see this in the field: people intending to use smart glasses in bars, pubs, on the street, and elsewhere usually pair them beforehand.
* When the app recognizes a Bluetooth Low Energy (BLE) device with sufficient signal strength (see RSSI below), it pushes an alert message. This should help you act accordingly.
* The app Nearby Glasses shows a notification when smart glasses are nearby (meaning a BLE device advertising one of the company IDs mentioned above)
* Nearby means the RSSI (signal strength) is at or above a given threshold: -75 dBm by default. This default value corresponds to a medium distance and an OK-ish signal. Let me explain:
RSSI depends mainly on distance, transmit power, and the environment (walls, bodies, interference).
-100 dBm ≈ 30 to 100+ m, or near signal loss.
Indoors, distances are often much shorter.
RSSI drops roughly according to
RSSI ≈ -10 * n * log10(distance) + constant
* Therefore, the default RSSI threshold of -75 dBm corresponds to about 10 to 15 meters in open space and 3 to 10 meters indoors or in crowded spaces. That gives you a good chance of spotting the person wearing smart glasses.
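The rough distance figures above follow from inverting the path-loss formula. A small sketch, with assumed calibration constants (the path-loss exponent n and the 1 m reference RSSI c are my guesses, not values from the app):

```shell
# hedged sketch: invert RSSI ≈ -10*n*log10(d) + c to estimate distance in meters.
# n=2.5 (indoor-ish path-loss exponent) and c=-59 dBm (RSSI at 1 m) are assumptions.
rssi_to_distance() {
  awk -v rssi="$1" -v n=2.5 -v c=-59 'BEGIN { printf "%.1f\n", 10^((c - rssi) / (10 * n)) }'
}
rssi_to_distance -75   # ≈ 4.4 m with these assumed constants
```

In practice the constants vary so much per device and environment that the threshold is best treated as "strong enough signal," not a precise distance.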
* Nearby Glasses shows an optional debug log that is exportable (as txt file) and features a copy&paste function. Those are for advanced users (nerds) and for further debugging.
* Under Settings, you may specify the log length and the debugging mode (display all scan items or only ADV frames).
* You may also enter your own company IDs as a string of hex values, e.g. "0x01AB,0x058E,0x0D53". This overrides the built-in detection, so notifications show up for the new value(s).
* For better persistence, it uses Android’s Foreground Service. You may disable this under Settings if you don’t need it.
* The Notification Cooldown under Settings specifies how much time must pass between two warnings. Default is 10000 ms, which is 10 s.
* It is now a bit more localised:
* See Releases for APK to download. Google Play Store entry may follow soon
Install the app (from Releases or from Google Play, for now) and open it
Grant permissions to activate Bluetooth (if not already enabled) and to access devices nearby. Some versions of Android also need you to grant permissions to access your location (before Version 13, mostly). Nearby Glasses does nothing with your location info. If you don’t believe me, please look at the code
If you don’t see the scan starting, you might need to enable the Foreground Service for this app in the Settings menu on your particular phone (see below)
You’re all set! When smart glasses are detected nearby, a notification will appear. It does so until you hit Stop Scanning or terminate the app for good
In the menu (top right, the cogwheel), you can adjust some settings:
Enable Foreground Service: This prevents Android from pausing the app, which would stop it from alerting you. I recommend leaving this enabled
RSSI threshold: This negative number specifies how far away a device may be to trigger an alert from Nearby Glasses. Technically, it refers to how strongly the signal is received: closer to zero means a better signal, hence a shorter distance between your phone and the smart glasses. See RSSI above for explanations and guidance. I recommend leaving it at -75
Enable Notifications: You would not want to disable that
Notification Cooldown: Here you specify how much time must pass between notifications about smart glasses found nearby. I chose 10 seconds (10000 ms) as the default value. This way, you won’t miss the notification, while at the same time you won’t be bothered by it too much or drain your battery too fast
Enable Log Display: Disabling this might spare you some battery
Debug: Is needed to see more than just the matching BLE frames in the log display frame. It’s useful to see if things are working
Max log lines: How long the log may get. 200 seems to be a good balance between battery life and usability of the log (for nerds like me)
BLE ADV only: This excludes other Bluetooth LE frames from the log for better readability
Override Company IDs: If you want, you can let Nearby Glasses alert you of devices other than those specified above. Useful for debugging, at least for me. Leave it empty if you don’t need it or don’t know what to do with it
Every setting is saved and effective immediately. To go back, use your back button or gesture
The export function enables you to share a text-file of the app’s log. For nerds like me
You may also copy&paste the log by tapping on the log display frame
* It’s now working in the wild! I managed to get some people testing it with verified smart glasses around them. Special thanks to Lena!
* See Releases for APK to download.
* I pushed Nearby Glasses to Google Play, too. However, I will always publish releases here on GitHub and elsewhere, for those who avoid Google Play.
* I am no BT or Android expert at all. From what I’ve learned, one could also dig deeper into the smart glasses’ communication by sniffing the BLE traffic. That way we would likely not need to rely on the device behaving according to the BT specifications, but could also use heuristics on the encrypted traffic transmissions without many false positives. But I haven’t looked into BT traffic packets for more than ten years. I’m glad I remembered ADV frames… So if anybody could help with this, that’d be greatly appreciated!
* Rework to canary mode. I am looking into a suggestion I got on Mastodon to steer away from warning about smart glasses, and instead have the app report that no smart glasses have been found so far. This means I must rework the scanner logic a bit, and the interface
* Add an option to set false positives to an ignore list. Maybe in the notification?
* Add more manufacturer IDs of smart glasses. Right now it’s Meta, Oakley, and Snap. A list of available smart glasses with cameras would help, too.
* An iOS app might be possible, too. I have the toolchain now, but I will need a Mac to submit it to the Apple App Store in the end. And I need to dig deeper into iOS development.
* The layout issue with Google Pixel devices seems to be fixed as of Version 1.0.3. If you still can’t reach the menu because it somehow overlaps the status bar, I will look into that asap. Meanwhile, try putting your screen into landscape mode by rotating clockwise (to the right).
App Icon: The icon is based on Eyeglass icons created by Freepik - Flaticon
License: This app Nearby Glasses is licensed under PolyForm Noncommercial License 1.0.0.
...
Read the original on github.com »
Hacking an old Kindle to display bus arrival times
This is how I turned an old Kindle (Kindle Touch 4th Generation/K5/KT) into a live bus feed that refreshes every minute with the option to exit out of dashboard mode by pressing the menu button. It’s basically TRMNL without the $140 price tag.
Run a server accessible over the internet (or locally) that serves the Kindle image
This will be your Kindle hacking bible for steps 1 - 3. You need to figure out what version of Kindle you have, its firmware version (shorthand FW in the Kindle forum guides + readmes), download the appropriate tar file and follow jailbreak instructions.
Once you’ve successfully jailbroken your Kindle, it’s time to install some things.
KUAL is a custom Kindle app launcher. MRPI allows us to install custom apps onto the Kindle (you may not need MRPI if you have a newer Kindle). This part was frustrating - reading through forum threads gives me a headache. The most helpful resource I found was the Kindle modding wiki. Maybe other people aren’t as oblivious as me but it took me half a day to realize that the “next step” in each guide can be accessed by clicking the “Next Step” button at the bottom of the page.
A gotcha for me was that I had to follow the Setting up a Hotfix guide before attempting to install KUAL & MRPI.
After successfully installing KUAL & MRPI, I also Disabled OTA Updates because why not. I didn’t follow any other guides in the Kindle Modding wiki after disabling OTA Updates because they didn’t seem relevant.
This can be done with a KUAL extension called USBNetwork (downloadable from the Kindle hacking bible) that will allow you to SSH onto your Kindle as if it were a regular server.
However, nowhere in the forums could I find any information about how to actually install a KUAL extension using MRPI. Finally, this helpful blogpost on setting up SSH for Kindle came to the rescue. I followed the steps that explained to how to install the extension and how to setup SSH via USB. I ignored the rest of the instructions on the page because I’m not concerned about adding a password to the Kindle or setting up SSH over wifi.
If you’ve setup SSH successfully, when the Kindle is plugged in, your computer’s network tab should have a new item in ‘Connected’ mode:
Here’s what my successfully connected Kindle looks like in the network settings tab:
Congratulations! Your Kindle is now ready to run custom code.
4. Running a server that generates an image for the Kindle
Displaying custom data on the Kindle works like this: we create a png that fits the Kindle's resolution, then draw the image onto the Kindle itself.
Since I live in New Jersey, I wanted to display NJTransit bus times on my Kindle. Luckily, NJTransit has a public GraphQL server that returns bus arrival times for any stop number.
After poking around in the network tab of the NJ Transit Bus Website, I found this GraphQL query that returns the bus number, arrival time, current capacity, destination, and departing time in minutes:
If you’re also a Jersey girl, you can run the following curl to get upcoming bus times (don’t forget to replace YOUR_STOP_NUMBER):
In the majority of the guides I read during this process (two most helpful being Matt Healy’s Kindle Dashboard guide and Hemant’s Kindle Dashboard guide) they use puppeteer to convert HTML to png. This does not work for me because I’m cheap and have a single $6 Digital Ocean droplet that I use for all side projects. Every time I ran puppeteer on it the entire server shits itself.
Instead I created an endpoint that formats the bus data into HTML, then the docker container that runs the server has a cron that runs the wkhtmltoimage command to generate a new png every 3 minutes using the HTML endpoint. The server then serves the generated png file at a separate endpoint.
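Concretely, the cron side of that pipeline might look like the fragment below. The URL, output path, and schedule are hypothetical; wkhtmltoimage's --width/--height flags size the capture to the Kindle's screen:

```shell
# crontab fragment (hypothetical paths/URL): render the HTML endpoint to a
# Kindle-sized PNG every 3 minutes; the server then serves screen.png as-is
*/3 * * * * wkhtmltoimage --width 600 --height 800 http://localhost:3000/dashboard.html /srv/kindle/screen.png
```

Because the rendering happens on a schedule rather than per-request, the image endpoint stays a cheap static file read, which is what makes this survivable on a $6 droplet.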
Here’s what the 2 relevant endpoints look like for my Kindle:
HTML endpoint used by wkhtmltoimage to generate an image
Endpoint used by the Kindle to retrieve the image
The entire server code - Dockerfile, scripts, the server itself - can be found in the server folder of my Kindle hax repo. It’s written in Node because I was originally using Puppeteer before discovering the performance issues, but it’d be a fun optimization exercise to rewrite in Go.
The most important thing is that the image needs to conform to your Kindle’s screen resolution. You can find yours by running eips -i when SSH-ed into the Kindle. eips is the command you’ll be using to display an image on your Kindle. I found this eips menu guide helpful.
You’ll see an output like this:
My Kindle expects a 600x800 image and the image must be rotated. Without passing a rotate command during the image generation process, I got skewed images like this:
However, after rotating, the bus times could only be viewed horizontally, and I wanted to mount my Kindle vertically. That meant I had to rotate the HTML itself. But when rotating an element and then taking a snapshot, the rotation happens around the center of the screen, so the snapshot made by wkhtmltoimage kept cutting off the bus times. Finally, a combination of rotate and translate gave me what I needed: a rotated image aligned to the top left of the screen:
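To make the rotate-and-translate trick concrete, here is a sketch of the kind of CSS wrapper that achieves it; the element id, dimensions, and file path are illustrative, not the author's actual markup. With transform-origin at the top left, rotate(90deg) alone would swing the content off-canvas, and the translate pulls it back so the snapshot starts at the top-left corner:

```shell
# hedged sketch: write an HTML wrapper whose content is rotated 90 degrees and
# re-anchored at the viewport's top left, so a 600x800 portrait capture
# contains the full 800x600 landscape layout
cat > /tmp/rotated.html <<'EOF'
<style>
  body { margin: 0; }
  #dash {
    width: 800px; height: 600px;
    /* transforms apply right-to-left: shift up by the height first,
       then rotate clockwise about the top-left corner */
    transform: rotate(90deg) translateY(-600px);
    transform-origin: top left;
  }
</style>
<div id="dash">bus times go here</div>
EOF
```

Tracing one corner confirms the alignment: the div's bottom-left (0, 600) is first translated to (0, 0), then rotated onto the viewport origin, so nothing gets cropped.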
Once you have a server with an endpoint that serves your image, you’re ready for the last step.
Going into this, I wanted two things - an easy way to exit dashboard mode and a relatively up to date bus schedule. All the guides I’ve seen thus far ran a cron on their Kindle that hit their endpoint at a specified interval. However I didn’t like this because I didn’t want the Kindle to always run the dashboard after restarts. I want to control when the dashboard is displayed and that meant creating a custom KUAL app.
bin/ # executable scripts here
menu.json # controls the menu items in the KUAL dashboard
config.xml # no clue wtf this is
While SSH-ed into your Kindle, place your custom extension folder inside /mnt/us/extensions/. If you used my custom dash code, after restarting and launching KUAL you’ll see your custom extension listed in KUAL, and after clicking into it, a single menu item titled ‘Start dashboard’:
When you press ‘Start dashboard’, you can see in the menu.json that bin/start.sh will execute. The start script has comments explaining what it does. Some interesting things I’ve never worked with before:
# ignore HUP since kual will exit after pressing start, and that might kill our long running script
trap '' HUP
# ignore term since stopping the framework/gui will send a TERM signal to our script since kual is probably related to the GUI
trap '' TERM
trap - TERM
trap! Here’s a helpful resource explaining the bash trap command. The TL;DR of it is that without ignoring certain signals, the script will always early exit.
Getting rtcwake to work was also annoying. For me, calling rtcwake on the default device (skipping the -d flag) never worked; I had to list the possible devices and choose a different one. The one that reacted to the rtcwake command was rtc1 for me.
The refresh_screen function is important. This is the whole reason we did all that server and image generation stuff earlier. It retrieves an image at an endpoint, clears the screen twice, draws the image from the server and positions it slightly lower on the screen to make room for the status bar up top. The last line displays the datetime, wifi status, and battery remaining.
refresh_screen() {
curl -k "$SCREEN_URL" -o "$DIR/screen.png"
eips -c
eips -c
eips -g "$DIR/screen.png" -x 0 -y 30 -w gc16
# Draw date/time and battery at top (eips can't print %, so we strip it from gasgauge-info -c)
eips 1 1 "$(TZ=EST5EDT date '+%Y-%m-%d %I:%M %p') - wifi $(cat /sys/class/net/wlan0/operstate 2>/dev/null || echo '?') - battery: $(gasgauge-info -c 2>/dev/null | sed 's/%//g' || echo '?')"
}
This part of the script listens for the user pressing the menu button.
evtest is the command that worked for me for listening for incoming events on a specified device on the kindle. In my case, any time I pressed the menu button, the evtest command outputs code 102 (Home), value 1.
When the user presses the menu button, the stop.sh script is called automatically, which will kill the dashboard, clear the screen, and restart the kindle UI so that the device can be used normally.
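The button-press detection can be sketched as a pipe that watches evtest's output for that Home key event. The device path and handler below are placeholders, and a canned evtest line is fed in so the matching logic can be seen in isolation:

```shell
# hedged sketch: scan lines of evtest output and fire when the menu (Home)
# button is pressed; evtest prints "code 102 (KEY_HOME), value 1" for that
detect_menu_press() {
  while read -r line; do
    case "$line" in
      *"code 102"*"value 1"*) echo "menu pressed"; return 0 ;;
    esac
  done
  return 1
}

# in the real script this would be something like:
#   evtest /dev/input/event0 | detect_menu_press && ./stop.sh
printf 'Event: time 1700000000.0, type 1 (EV_KEY), code 102 (KEY_HOME), value 1\n' | detect_menu_press
```

Matching on "value 1" (press) rather than "value 0" (release) means the dashboard tears down the moment the button goes down, not when it comes back up.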
Now that it’s been running for more than a month, 2 things I’m thinking about:
Even though I clear the screen twice before rendering a new image, the color bleed is still pretty noticeable after it’s been running for a couple days. I have a theory that if I flash the screen completely black and then white again when the kindle goes to sleep at night, it’d solve the problem but haven’t tried it out yet.
Right now it can go for ~5 days without being plugged in. I’d love for that number to be at the 2 week mark. Turning the device off for 10 hours at night extended the battery life by ~2 days, but 2 weeks is still a long ways off. I’ve debated increasing the gap between screen refreshes, since it refreshes every minute right now, but I like the (almost) live minute updates, so I’d rather sacrifice that last, if possible, in the quest for longer battery life.
Overall, this thing is sick! Probably one of the most fun projects I’ve built in recent memory. We use it every day before leaving the house, and it’s so much simpler than texting a stop number to an NJ Transit phone number. I can see serving up all sorts of interesting information on the e-ink screen - calendar, weather, daily tasks, sky’s the limit.
...
Read the original on mariannefeng.com »