10 interesting stories served every morning and every evening.




1 613 shares, 17 trendiness

how openai, the US government, and persona built an identity surveillance machine that files reports on you to the feds

C:\philes\the watchers: how openai, the US government, and persona built an identity surveillance machine that files reports on you to the feds

we are in direct written correspondence with persona's CEO, rick song. he has been responsive and engaged in good faith.

rick has committed to answering the 18 questions in 0x14 in writing. all correspondence will be published in full as part 2 of this series. the core findings, including openai-watchlistdb.withpersona.com and its 27 months of certificate transparency history, remain unaddressed.

no laws were broken. all findings come from passive recon using public sources - Shodan, CT logs, DNS, HTTP headers, and unauthenticated files served by the target's own web server. no systems were accessed, no credentials were used, no data was modified. retrieving publicly served files is not unauthorized access - see Van Buren v. United States (593 U.S. 374, 2021) and hiQ Labs v. LinkedIn (9th Cir. 2022).

this is protected journalism and security research under the First Amendment, ECHR Art. 10, CFAA safe harbor (DOJ Policy 2022), the California Shield Law, GDPR Art. 85, and Israeli Basic Law: Human Dignity and Liberty.

the authors are not affiliated with any government, intelligence service, or competitor of any entity named herein. no financial interest. no compensation. this research exists in the public interest and was distributed across multiple jurisdictions, dead drops, and third-party archives before publication.

any attempt to suppress or retaliate against this publication - legal threats, DMCA abuse, employment interference, physical intimidation, or extrajudicial action - will be treated as confirmation of its findings and will trigger additional distribution. killing the messenger does not kill the message.

for the record: all authors of this document are in good health, of sound mind, and have no plans to hurt themselves, disappear, or die unexpectedly. if that changes suddenly - it wasn't voluntary. this document, its evidence, and a list of names are held by multiple trusted third parties with instructions to publish everything in the event that anything happens to any of us. we mean anything.

to Persona and OpenAI's legal teams: actually audit your supposed "FedRAMP" compliance, and answer the questions in 0x14. that's the appropriate response. everything else is the wrong one.

from: the world

to: openai, persona, the US government, ICE, the open internet

date: 2026-02-16

sub­ject: the watch­ers

they told us the future would be convenient. sign up, verify your identity, talk to the machine. easy. frictionless. the brochure said "trust and safety." the source code said SelfieSuspiciousEntityDetection.

funny how that works. you hand over your passport to use a chatbot and somewhere in a datacenter in iowa, a facial recognition algorithm is checking whether you look like a politically exposed person. your selfie gets a similarity score. your name hits a watchlist. a cron job re-screens you every few weeks just to make sure you haven't become a terrorist since the last time you asked GPT to write a cover letter.

so what do you do? well, we looked. found source code on a government endpoint with the door wide open. facial recognition, watchlists, SAR filings, intelligence codenames, and much more.

oh, and we revealed the names of every single person responsible for this!!

following the works of eva and others on ID verification bypasses, we decided to start looking into persona, yet another KYC service that uses facial recognition to verify identities. the original goal was to add an age-verification bypass to eva's existing k-id platform.

after trying to write a few exploits, vmfunc decided to browse their infra on shodan. it all started with a single Shodan result: one IP, 34.49.93.177, sitting on Google Cloud in Kansas City. one open port. one SSL certificate. two hostnames that tell a story nobody was supposed to read:

openai-watchlistdb.withpersona.com

openai-watchlistdb-testing.withpersona.com

not "openai-verify", not "openai-kyc". watchlistdb. a database. (or is it?)

it was initially meant to be a passive recon investigation that quickly turned into a rabbit-hole deep dive into how commercial AI and federal government operations work together to violate our privacy every waking second. we didn't even have to write or perform a single exploit; the entire architecture was just on the doorstep!! 53 megabytes of unprotected source maps on a FedRAMP government endpoint, exposing the entire codebase of a platform that files Suspicious Activity Reports with FinCEN, compares your selfie to watchlist photos using facial recognition, screens you against 14 categories of adverse media from terrorism to espionage, and tags reports with codenames from active intelligence programs.

2,456 source files containing the full TypeScript codebase: every permission, every API endpoint, every compliance rule, every screening algorithm. sitting unauthenticated on the public internet. on a government platform, no less.

no systems were breached. no credentials were used. every finding in this document comes from publicly accessible sources: shodan, certificate transparency logs, DNS resolution, HTTP response headers, published API documentation, public web pages, and unauthenticated JavaScript source maps served by the target's own web server.

the infrastructure told its own story. we just listened. then we read the source code.

IP: 34.49.93.177

ASN: AS396982 (Google LLC)

provider: Google Cloud

region: global

city: Kansas City, US

open ports: 443/tcp

last seen: 2026-02-05

hostnames:

- 177.93.49.34.bc.googleusercontent.com

- openai-watchlistdb.withpersona.com

- openai-watchlistdb-testing.withpersona.com

SSL cert:

subject: CN=openai-watchlistdb.withpersona.com

issuer: C=US, O=Google Trust Services, CN=WR3

valid: Jan 24 01:24:11 2026 - Apr 24 02:20:06 2026

SANs: openai-watchlistdb.withpersona.com

openai-watchlistdb-testing.withpersona.com

serial: FDFFBF37ED89BBD710D9967B7CD92B52

HTTP response (all paths, all methods):

status: 404

body: "fault filter abort"

headers: via: 1.1 google

content-type: text/plain

Alt-Svc: h3=":443"

the "fault filter abort" response comes from an Envoy proxy fault-injection filter, standard in GCP/Istio service mesh deployments. the service only routes requests matching specific internal criteria (likely mTLS client certificates, specific source IPs, or API key headers). everything else just dies at the edge.
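that fingerprint is easy to check mechanically. a minimal sketch - our own heuristic, not any Envoy API - that classifies a response like the one above as an Envoy fault-injection abort:

```python
def looks_like_envoy_fault(body: str, headers: dict) -> bool:
    """Heuristic: Envoy's fault-injection filter aborts with this exact
    body, and GCP's edge adds a 'via: 1.1 google' hop header."""
    hdrs = {k.lower(): v for k, v in headers.items()}
    return (
        body.strip() == "fault filter abort"
        and "google" in hdrs.get("via", "")
    )

# the response observed on 34.49.93.177, for every path and method
observed_headers = {"Via": "1.1 google", "Content-Type": "text/plain"}
print(looks_like_envoy_fault("fault filter abort", observed_headers))  # True
```

the same check returns False for an ordinary 404 page, which is how you tell a mesh edge from a normal web server.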

though obviously this is not a misconfiguration. this is just a locked-down backend service that was never meant to have a public face. the only reason we even know it exists is because of certificate transparency logs and DNS.

Persona (withpersona.com) is a San Francisco-based identity verification company. their normal infrastructure runs behind Cloudflare:

withpersona.com -> 162.159.141.40, 172.66.1.36 (CF)

inquiry.withpersona.com -> 162.159.141.40, 172.66.1.36 (CF)

app.withpersona.com -> 162.159.141.40, 172.66.1.36 (CF)

api.withpersona.com -> 162.159.141.40, 172.66.1.36 (CF)

they also run a wildcard DNS record: *.withpersona.com points to Cloudflare (cloudflare.withpersona.com.cdn.cloudflare.net). we confirmed this by resolving completely fabricated subdomains:

totallynonexistent12345.withpersona.com -> 162.159.141.40 (CF)

asdflkjhasdf.withpersona.com -> 162.159.141.40 (CF)

HOWEVER, here's where it gets interesting. OpenAI's watchlist service breaks out of this wildcard:

openai-watchlistdb.withpersona.com -> 34.49.93.177 (GCP)

openai-watchlistdb-testing.withpersona.com -> 34.49.93.177 (GCP)

a dedicated Google Cloud instance, not behind Cloudflare, not on Persona's shared infrastructure. seemingly purpose-built and isolated.
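the wildcard-versus-breakout check is simple to automate. a sketch with a stubbed resolver reproducing the answers above (a live version would call socket.gethostbyname_ex instead; the random probe label stands in for our fabricated subdomains):

```python
import secrets

def breaks_wildcard(resolve, domain: str, host: str) -> bool:
    """A host 'breaks out' of a wildcard DNS record when its A records
    differ from those of a random, certainly-unregistered label."""
    probe = f"probe-{secrets.token_hex(6)}.{domain}"
    return set(resolve(host)) != set(resolve(probe))

# stub resolver reproducing the observed answers; any unknown name
# falls through to the Cloudflare wildcard
WILDCARD = ["162.159.141.40", "172.66.1.36"]
ANSWERS = {
    "openai-watchlistdb.withpersona.com": ["34.49.93.177"],
    "app.withpersona.com": WILDCARD,
}
resolve = lambda name: ANSWERS.get(name, WILDCARD)

print(breaks_wildcard(resolve, "withpersona.com", "openai-watchlistdb.withpersona.com"))  # True
print(breaks_wildcard(resolve, "withpersona.com", "app.withpersona.com"))                 # False
```

any name that returns True here is on dedicated infrastructure by deliberate choice, not by default.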

you would never do this for a simple "check this name against a list" API call. you do this when the data requires compartmentalization. when the compliance requirements for the data you're collecting demand that level of isolation. when the damage of a breach is bad enough to warrant dedicated infrastructure.

CT logs tell us exactly when this service went live and how it evolved.

november 2023. this service has been running for over two years.
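the timeline falls straight out of crt.sh's JSON output (a query like https://crt.sh/?q=%25.withpersona.com&output=json). a sketch of the reduction, run over a two-entry sample shaped like crt.sh rows - the 2023-11 month matches the first certificate in the logs, though the exact day here is illustrative:

```python
from datetime import datetime

def first_seen(entries):
    """Reduce crt.sh-style entries to the earliest not_before per name.
    crt.sh packs multiple SANs into 'name_value', newline-separated."""
    seen = {}
    for e in entries:
        when = datetime.fromisoformat(e["not_before"])
        for name in e["name_value"].splitlines():
            if name not in seen or when < seen[name]:
                seen[name] = when
    return seen

# shaped like crt.sh JSON rows (fields trimmed; day-of-month illustrative)
sample = [
    {"name_value": "openai-watchlistdb.withpersona.com",
     "not_before": "2026-01-24T01:24:11"},
    {"name_value": "openai-watchlistdb.withpersona.com\n"
                   "openai-watchlistdb-testing.withpersona.com",
     "not_before": "2023-11-01T00:00:00"},
]
first = first_seen(sample)["openai-watchlistdb.withpersona.com"]
print(first.strftime("%Y-%m"))  # 2023-11
```

certificates can't be backdated, which is why this one field is enough to pin the go-live month.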

OpenAI didn't announce "Verified Organization" requirements until mid-2025. they didn't publicly require ID verification for advanced model access until GPT-5. but the watchlist screening infrastructure was operational 18 months before any of that was disclosed.

we can pinpoint when they started considering going "public" with the collaboration.

https://withpersona.com/customers/openai has existed since September 17th, 2024. likewise, OpenAI's Privacy Policy began including the following passage with its November 4th, 2024 update.

"Other Information You Provide: We collect other information that you provide to us, such as when you participate in our events or surveys, or when you provide us or a vendor operating on our behalf with information to establish your identity or age (collectively, 'Other Information You Provide')."

the excuses used in the public post are classic, though instead of using children as the scapegoat for invading our privacy, this time it was "[…] To offer safe AGI, we need to make sure bad people aren't using our services […]".

only… they quickly used this opportunity to go from comparing users against a single federal watchlist to creating the watchlist of all users themselves.

in fact, this is nothing new; OpenAI Forum user OnceAndTwice had mentioned this already back in June last year.

Persona's API documentation (docs.withpersona.com) is public. when a customer like OpenAI runs a government ID verification, the API returns a complete identity dossier:

personal identity:

- full legal name (including native script)

- date of birth, place of birth

- nationality, sex, height

address:

- street, city, state, postal code, country

government document:

- document type and number

- issuing authority

- issue and expiration dates

- visa status

- vehicle class/endorsements/restrictions

media:

- FRONT PHOTO of ID document (URL)

- BACK PHOTO of ID document (URL)

- SELFIE PHOTO (URL + byte size)

- VIDEO of identity capture (URL)

metadata:

- entity confidence score

- all verification check results with pass/fail reasons

- capture method used

- timestamps (created, submitted, completed, redacted)
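to make the scope of that dossier concrete, here is a sketch that tallies which PII categories are populated in a verification payload. the payload shape and field names are a generic JSON:API-style guess modeled on the list above, not Persona's literal schema:

```python
def pii_categories(payload: dict) -> dict:
    """Count populated fields per dossier category (hypothetical schema)."""
    attrs = payload["data"]["attributes"]
    categories = {
        "identity": ["name", "birthdate", "nationality", "sex", "height"],
        "address": ["address-street", "address-city", "address-country"],
        "document": ["document-number", "issuing-authority", "expiration-date"],
        "media": ["front-photo-url", "back-photo-url", "selfie-photo-url", "video-url"],
    }
    return {cat: sum(attrs.get(field) is not None for field in fields)
            for cat, fields in categories.items()}

# a minimal fabricated payload for illustration
payload = {"data": {"attributes": {
    "name": "Jane Doe", "birthdate": "1990-01-01", "nationality": "US",
    "document-number": "X123",
    "selfie-photo-url": "https://files.example/selfie.jpg",
}}}
print(pii_categories(payload))  # {'identity': 3, 'address': 0, 'document': 1, 'media': 1}
```

the point is that a single API response spans every category at once: biographic, documentary, and biometric.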

Persona's own case study states that OpenAI screens "millions monthly" and "automatically screens over 99% of users behind the scenes in seconds."

behind the scenes. in seconds. millions. with customizable filters ranging from simple partial name matches to advanced facial recognition algorithms.

again, none of this is even a secret. it's "hidden" in plain sight.

...

Read the original on vmfunc.re »

2 602 shares, 33 trendiness

Amazon BUSTED for Widespread Scheme to Inflate Prices Across the Economy

Yesterday, California Attorney General Rob Bonta filed for an immediate halt to what he says is a widespread price-fixing scheme run by the largest online retailer in America, Amazon. "Amazon tells vendors what prices it wants to see to maintain its own profitability," Bonta alleged. "Amazon can do this because it is the world's largest, most powerful online retailer."

His claim is that Amazon has been forcing vendors who sell on and off the platform to raise prices, and cooperating with other major online retailers to do so.

"Vendors, cowed by Amazon's overwhelming bargaining leverage and fearing punishment, comply—agreeing to raise prices on competitors' websites (often with the awareness and cooperation of the competing retailer) or to remove products from competing websites altogether." Bonta argues this conduct should be immediately enjoined.

Amazon is scheduled for a series of trials in January of 2027, but Bonta's legal move is a big deal, because he's asking a court to bring Amazon to heel now, a year early. The only way a judge can do that is if he concludes Amazon is likely to lose, which means that Bonta believes his evidence is so strong it's basically a foregone conclusion Amazon will be held liable for fostering serious harm to consumers.

The scale of the scheme is almost unfathomable; according to its latest investor reports, Amazon earned $426 billion of revenue in its 2025 North America online shopping business, which is about $3,000 for every household in America. As Stacy Mitchell noted, prices for third party goods on the online platform, roughly 60% of its total sales, have been going up at 7% a year, more than twice the rate of inflation. And because this scheme impacts goods sold off of Amazon's website as well, there's a reasonable chance that it has had an impact on price levels overall in America. With a similar Pepsi-Walmart alleged conspiracy revealed earlier this year, it's becoming increasingly clear that consolidation and price-fixing are linked to inflation.

How exactly does the scheme work? Long-standing readers of BIG may remember a piece in 2021 titled "Amazon Prime is an Economy-Distorting Lie" in which I laid out what's happening. At the time, the D.C. Attorney General, a lawyer named Karl Racine, sued Amazon for prohibiting vendors that sold on its website from offering discounts outside of Amazon. Such anti-discounting provisions raise prices for consumers, and prevent new platforms from emerging to challenge Amazon.

The key leverage point for Amazon is the scale of its Prime program, which has 200 million members nationwide. As Scott Galloway noted a few years ago, more U.S. households belong to Prime than decorate a Christmas tree or go to church.

Prime members get "free shipping," which means they tend not to shop around. They just accept the price and vendor they are given on Amazon through what's called the "Buy Box."

So which vendor gets the "Buy Box" and thus the sale to the Prime member? Here's what I wrote in 2021.

Amazon awards the Buy Box to merchants based on a number of factors. One factor is whether a product is "Prime eligible," which is to say offered to Prime members with free shipping. In order to become Prime eligible, a seller often must use Amazon's warehousing and logistics service, Fulfillment by Amazon (FBA). In other words, Amazon ties the ability to access Prime customers to whether a seller pays Amazon for managing its inventory. This strategy has worked - Amazon now fulfills roughly two thirds of the products bought on its platform. The high prices of overall marketplace access fees, including FBA, are how Amazon generates cash from its Marketplace and retail operations. "From 2014 to 2020, the amount it charges third party sellers grew from $11.75 billion to more than $80 billion. Seller fees now account for 21% of Amazon's total corporate revenue," noted Racine, also pointing out that its profit margins for Marketplace sales by third party sellers are four times higher than its own retail sales…Now, if this were all that was happening, sellers and brands could just sell outside of Amazon, avoid the 35-45% commission, and charge a lower price to entice customers. "Buy Cheaper at Walmart.com!" should be in ads all over the web. But it's not. And that's where the main claim from Racine comes in. Amazon uses its Buy Box algorithm to make sure that sellers can't sell through a different store or even through their own site with a lower price and access Amazon customers, even if they would be able to sell it more cheaply. If they do, they get cut off from the Buy Box, and thus, cut off de facto from being able to sell on Amazon.

The net effect is that prices everywhere, not just on Amazon, are higher than they ordinarily would be.

So that's how the scheme worked, and Racine was the first law enforcer to act. But others followed; Bonta filed his more comprehensive lawsuit in 2022. In 2023, Federal Trade Commission Chair Lina Khan filed against Amazon on similar grounds, though with more details and additional wrinkles. The FTC found that Amazon was running something called "Project Nessie" in which it would use its algorithm to encourage other online retailers, perhaps Walmart.com or Target.com, to raise prices on similar products.

All of these cases, as well as other similar ones, have passed the necessary legal hurdle to go to trial, but an actual remedy is years away. And Amazon keeps growing through this alleged illicit behavior, inflating prices not just on its own site, but across the retail landscape.

According to Bonta, Amazon has three primary methods of inflating prices. In the first one, if Amazon and a competitor are engaged in a price war over a product, Amazon will tell its vendor that sells to its rival to increase the price directly. In the second one, if a competitor is discounting an item, Amazon will ask it to stop through a vendor. And in the third, a vendor will stop selling a product for a lower price outside of Amazon, and Amazon will then raise its price.

This kind of arrangement is known as a "hub-and-spoke" conspiracy, or "vertical price-fixing," because it's cooperating on price through common customers or vendors. Such a scheme distinguishes it from direct collaboration among rivals, which is a more standard "horizontal" conspiracy. The relief requested by Bonta is extensive, but amounts to barring the company from making agreements through vendors to set pricing for the online retail economy and prohibiting the company from communicating with vendors about prices and terms for non-Amazon retailers. He is also seeking a monitor to ensure Amazon stops the bad behavior.

What makes it a big deal is that it's a request for a temporary injunction right now, meant to last until the trial process concludes or it's otherwise lifted. Judges only grant such injunctions when they think that a party is likely going to lose, the immediate harm of the behavior is significant, and the public interest is served. While we can't see most of the evidence because it's redacted, Bonta must really believe he's got the goods. And if he succeeds in this gambit, it almost certainly means Amazon has violated antitrust law on a major line of business. It also flips the incentives, because Amazon will have less of an incentive to delay a trial. Instead, it will be subject to this injunction until the trial concludes. So it may stop trying dilatory tactics.

There's one last observation about the complaint. Again, it's redacted, but Bonta is hinting at Amazon's internal process to hide what it is doing.

And that wouldn't be surprising, since the FTC has told the judge in its case that top Amazon officials, including Jeff Bezos, have been destroying evidence.

According to Law.com: "The FTC said in a heavily redacted brief on Friday that it's missing both the 'raw notes' of important meetings and key messages from the Signal apps of Bezos and other senior executives, who, in some instances, set messages to automatically delete 'in as short as ten seconds or one minute.'"

That kind of behavior is the digital equivalent of shredding documents while under a legal hold, and evidence of lawlessness. And there's a reason for that. For as long as I've been writing BIG, and years before that, laws have not really applied to the rich and powerful. But our work is bearing fruit. And it's not just Amazon. Today, the Antitrust Division won a big legal motion on its price-fixing case against a meat conspiracy led by Agri-Stats, and the Ninth Circuit had a terrific ruling on a Robinson-Patman Act price discrimination suit. As the people elect new populist politicians, enforcers and plaintiff lawyers are developing the law and the cases to match their frustration.

There's also a change in public attitudes. In years past, a company like Amazon used to be considered innovative and consumer-friendly. Today, it is understood as bureaucratic and coercive, a result of an environment of lawlessness. Americans are increasingly angry about the situation, seeing the Epstein class and the high inflation environment as a direct threat to their welfare, a conspiracy to extract. Because it is. And at least some elected leaders see that, and are acting to stop it.

Thanks for reading! Your tips make this newsletter what it is, so please send me tips on weird monopolies, stories I've missed, or other thoughts. And if you liked this issue of BIG, you can sign up here for more issues, a newsletter on how to restore fair commerce, innovation, and democracy. Consider becoming a paying subscriber to support this work, or if you are a paying subscriber, giving a gift subscription to a friend, colleague, or family member. If you really liked it, read my book, Goliath: The 100-Year War Between Monopoly Power and Democracy.

...

Read the original on www.thebignewsletter.com »

3 599 shares, 22 trendiness

Apple accelerates U.S. manufacturing with Mac mini production

Apple today announced a significant expansion of factory operations in Houston, bringing the future production of Mac mini to the U.S. for the first time. The company will also expand advanced AI server manufacturing at the factory and provide hands-on training at its new Advanced Manufacturing Center beginning later this year. Altogether, Apple's Houston operations will create thousands of jobs.

"Apple is deeply committed to the future of American manufacturing, and we're proud to significantly expand our footprint in Houston with the production of Mac mini starting later this year," said Tim Cook, Apple's CEO. "We began shipping advanced AI servers from Houston ahead of schedule, and we're excited to accelerate that work even further."


In Houston, workers assemble advanced AI servers, including logic boards produced onsite, which are then used in Apple data centers in the U.S.


For more than two decades, users around the world have relied on the incredibly popular Mac mini for the tremendous power it packs into its ultra-compact design. With its next-level AI capabilities, it has become an essential tool for everyone from students and aspiring creatives to small business owners. Beginning later this year, Mac mini will be produced at a new factory on Apple's Houston manufacturing site, doubling the campus's footprint.

Apple began producing advanced AI servers in Houston in 2025 for the first time, and production is already ahead of schedule. Servers assembled in Houston — including logic boards produced onsite — are used in Apple data centers around the country.

Beyond production, Apple is investing in the workforce that will drive American manufacturing forward. Later this year, Apple's 20,000-square-foot Advanced Manufacturing Center is scheduled to open its doors in Houston. Currently under construction, the dedicated facility will provide hands-on training in advanced manufacturing techniques to students, supplier employees, and American businesses of all sizes. Apple experts will teach participants the same innovative processes that are used to make Apple products, allowing American manufacturers to take their work to the next level.


Apple's 20,000-square-foot Advanced Manufacturing Center opens later this year, and will provide hands-on training to students, supplier employees, and U.S. businesses of all sizes.


Since announcing its $600 billion commitment to the U.S. last year, Apple and its American Manufacturing Program partners have already reached several milestones:

Apple exceeded its target and sourced more than 20 billion U.S.-made chips from 24 factories across 12 states, including those of partners like TSMC, Broadcom, and Texas Instruments.

GlobalWafers has begun production at its new $4 billion bare silicon wafer facility in Sherman, Texas. At Apple's direction, wafers produced in Sherman will be used by Apple's chip manufacturing partners in the U.S., including TSMC and Texas Instruments.

Supported by Apple's investment, Amkor broke ground on its new $7 billion semiconductor advanced packaging and test facility in Peoria, Arizona, where Apple will be the first and largest customer.

Corning's Harrodsburg, Kentucky, facility is now 100 percent dedicated to cover glass for iPhone and Apple Watch shipped globally, and by the end of this year, every new iPhone and Apple Watch will have cover glass made in the state.

In 2026, Apple is on track to purchase well over 100 million advanced chips produced by TSMC at its Arizona facility — a significant increase from 2025.

Apple opened its Apple Manufacturing Academy in Detroit, which is already supporting more than 130 small- and medium-sized American manufacturers with hands-on training in AI, automation, and smart manufacturing. The academy recently expanded with new virtual programming, giving businesses across the country on-demand access to the curriculum developed by Apple experts and Michigan State University faculty.

Mac mini will be made at a new fa­cil­ity in Houston, and a soon-to-be-launched train­ing cen­ter will sup­port ad­vanced man­u­fac­tur­ing skills de­vel­op­ment

CUPERTINO, CALIFORNIA Apple to­day an­nounced a sig­nif­i­cant ex­pan­sion of fac­tory op­er­a­tions in Houston, bring­ing the fu­ture pro­duc­tion of Mac mini to the U.S. for the first time. The com­pany will also ex­pand ad­vanced AI server man­u­fac­tur­ing at the fac­tory and pro­vide hands-on train­ing at its new Advanced Manufacturing Center be­gin­ning later this year. Altogether, Apple’s Houston op­er­a­tions will cre­ate thou­sands of jobs.

Apple is deeply com­mit­ted to the fu­ture of American man­u­fac­tur­ing, and we’re proud to sig­nif­i­cantly ex­pand our foot­print in Houston with the pro­duc­tion of Mac mini start­ing later this year,” said Tim Cook, Apple’s CEO. We be­gan ship­ping ad­vanced AI servers from Houston ahead of sched­ule, and we’re ex­cited to ac­cel­er­ate that work even fur­ther.”

For more than two decades, users around the world have re­lied on the in­cred­i­bly pop­u­lar Mac mini for the tremen­dous power it packs into its ul­tra-com­pact de­sign. With its next-level AI ca­pa­bil­i­ties, it has be­come an es­sen­tial tool for every­one from stu­dents and as­pir­ing cre­atives to small busi­ness own­ers. Beginning later this year, Mac mini will be pro­duced at a new fac­tory on Apple’s Houston man­u­fac­tur­ing site, dou­bling the cam­pus’s foot­print.

Apple be­gan pro­duc­ing ad­vanced AI servers in Houston in 2025 for the first time, and pro­duc­tion is al­ready ahead of sched­ule. Servers as­sem­bled in Houston — in­clud­ing logic boards pro­duced on­site — are used in Apple data cen­ters around the coun­try.

Beyond pro­duc­tion, Apple is in­vest­ing in the work­force that will drive American man­u­fac­tur­ing for­ward. Later this year, Apple’s 20,000-square-foot Advanced Manufacturing Center is sched­uled to open its doors in Houston. Currently un­der con­struc­tion, the ded­i­cated fa­cil­ity will pro­vide hands-on train­ing in ad­vanced man­u­fac­tur­ing tech­niques to stu­dents, sup­plier em­ploy­ees, and American busi­nesses of all sizes. Apple ex­perts will teach par­tic­i­pants the same in­no­v­a­tive processes that are used to make Apple prod­ucts, al­low­ing American man­u­fac­tur­ers to take their work to the next level.

Since an­nounc­ing its $600 bil­lion com­mit­ment to the U.S. last year, Apple and its American Manufacturing Program part­ners have al­ready reached sev­eral mile­stones:

Apple ex­ceeded its tar­get and sourced more than 20 bil­lion U.S.-made chips from 24 fac­to­ries across 12 states, in­clud­ing those of part­ners like TSMC, Broadcom, and Texas Instruments.

GlobalWafers has be­gun pro­duc­tion at its new $4 bil­lion bare sil­i­con wafer fa­cil­ity in Sherman, Texas. At Apple’s di­rec­tion, wafers pro­duced in Sherman will be used by Apple’s chip man­u­fac­tur­ing part­ners in the U.S., in­clud­ing TSMC and Texas Instruments.

Supported by Apple’s in­vest­ment, Amkor broke ground on its new $7 bil­lion semi­con­duc­tor ad­vanced pack­ag­ing and test fa­cil­ity in Peoria, Arizona, where Apple will be the first and largest cus­tomer.

Corning’s Harrodsburg, Kentucky, fa­cil­ity is now 100 per­cent ded­i­cated to cover glass for iPhone and Apple Watch shipped glob­ally, and by the end of this year, every new iPhone and Apple Watch will have cover glass made in the state.

In 2026, Apple is on track to pur­chase well over 100 mil­lion ad­vanced chips pro­duced by TSMC at its Arizona fa­cil­ity — a sig­nif­i­cant in­crease from 2025.

Apple opened its Apple Manufacturing Academy in Detroit, which is al­ready sup­port­ing more than 130 small- and medium-sized American man­u­fac­tur­ers with hands-on train­ing in AI, au­toma­tion, and smart man­u­fac­tur­ing. The acad­emy re­cently ex­panded with new vir­tual pro­gram­ming, giv­ing busi­nesses across the coun­try on-de­mand ac­cess to the cur­ricu­lum de­vel­oped by Apple ex­perts and Michigan State University fac­ulty.

About Apple

Apple rev­o­lu­tion­ized per­sonal tech­nol­ogy with the in­tro­duc­tion of the Macintosh in 1984. Today, Apple leads the world in in­no­va­tion with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple’s six soft­ware plat­forms — iOS, iPa­dOS, ma­cOS, watchOS, vi­sionOS, and tvOS — pro­vide seam­less ex­pe­ri­ences across all Apple de­vices and em­power peo­ple with break­through ser­vices in­clud­ing the App Store, Apple Music, Apple Pay, iCloud, and Apple TV. Apple’s more than 150,000 em­ploy­ees are ded­i­cated to mak­ing the best prod­ucts on earth and to leav­ing the world bet­ter than we found it.

...

Read the original on www.apple.com »

4 550 shares, 84 trendiness

Danish government agency to ditch Microsoft software in push for digital independence

Denmark’s tech modernization agency plans to replace Microsoft products with open-source software to reduce dependence on U.S. tech firms.

In an in­ter­view with the lo­cal news­pa­per Politiken, Danish Minister for Digitalisation Caroline Stage Olsen con­firmed that over half of the min­istry’s staff will switch from Microsoft Office to LibreOffice next month, with a full tran­si­tion to open-source soft­ware by the end of the year.

“If everything goes as expected, all employees will be on an open-source solution during the autumn,” Politiken reported, quoting Stage. The move would also help the ministry avoid the expense of managing outdated Windows 10 systems, which will lose official support in October.

LibreOffice, de­vel­oped by the Berlin-based non-profit or­ga­ni­za­tion The Document Foundation, is avail­able for Windows, ma­cOS, and is the de­fault of­fice suite on many Linux sys­tems. The suite in­cludes tools for word pro­cess­ing, spread­sheets, pre­sen­ta­tions, vec­tor graph­ics, data­bases, and for­mula edit­ing. Stage said that the min­istry could re­vert to Microsoft prod­ucts if the tran­si­tion proves too com­plex.

Microsoft had not responded to Recorded Future News’ request for comment as of Friday morning, Eastern U.S. time.

The ministry’s decision follows similar moves by Denmark’s two largest municipalities, Copenhagen and Aarhus, which previously announced plans to abandon Microsoft software, citing financial concerns, market dominance, and political tensions with Washington. Proponents refer to the process as moving toward “digital sovereignty.”

Henrik Appel Espersen, chair of Copenhagen’s audit committee, told Politiken the move was driven by cost concerns and Microsoft’s strong grip on the market. He also cited tensions between the U.S. and Denmark during Donald Trump’s presidency, which sparked debate about data protection and reducing reliance on foreign technology.

The shift comes amid a wider European trend to­ward dig­i­tal in­de­pen­dence. This week, the German state of Schleswig-Holstein said that lo­cal gov­ern­ment agen­cies will aban­don Microsoft Office tools such as Word and Excel in fa­vor of LibreOffice, while Open-Xchange will re­place Microsoft Outlook for email and cal­en­dar func­tions. The state plans to com­plete the shift by mi­grat­ing to the Linux op­er­at­ing sys­tem in the com­ing years.

Schleswig-Holstein first announced its decision to abandon Microsoft last April, saying it would be the first state to introduce a “digitally sovereign IT workplace.” “Independent, sustainable, secure: Schleswig-Holstein will be a digital pioneer region,” the state’s Minister-President said at the time.

...

Read the original on therecord.media »

5 536 shares, 34 trendiness

Anthropic Drops Flagship Safety Pledge

Anthropic, the wildly suc­cess­ful AI com­pany that has cast it­self as the most safety-con­scious of the top re­search labs, is drop­ping the cen­tral pledge of its flag­ship safety pol­icy, com­pany of­fi­cials tell TIME. In 2023, Anthropic com­mit­ted to never train an AI sys­tem un­less it could guar­an­tee in ad­vance that the com­pa­ny’s safety mea­sures were ad­e­quate. For years, its lead­ers touted that promise—the cen­tral pil­lar of their Responsible Scaling Policy (RSP)—as ev­i­dence that they are a re­spon­si­ble com­pany that would with­stand mar­ket in­cen­tives to rush to de­velop a po­ten­tially dan­ger­ous tech­nol­ogy.

But in recent months the company decided to radically overhaul the RSP. That decision included scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance. “We felt that it wouldn’t actually help anyone for us to stop training AI models,” Anthropic’s chief science officer Jared Kaplan told TIME in an exclusive interview. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

The new version of the policy, which TIME reviewed, includes commitments to be more transparent about the safety risks of AI, including making additional disclosures about how Anthropic’s own models fare in safety testing. It commits to matching or surpassing the safety efforts of competitors. And it promises to “delay” Anthropic’s AI development if leaders both consider Anthropic to be the leader of the AI race and think the risks of catastrophe to be significant.

But over­all, the change to the RSP leaves Anthropic far less con­strained by its own safety poli­cies, which pre­vi­ously cat­e­gor­i­cally barred it from train­ing mod­els above a cer­tain level if ap­pro­pri­ate safety mea­sures weren’t al­ready in place. The change comes as Anthropic, pre­vi­ously con­sid­ered to be be­hind OpenAI in the AI race, rides the high of a string of tech­no­log­i­cal and com­mer­cial suc­cesses. Its Claude mod­els, es­pe­cially the soft­ware-writ­ing tool Claude Code, have won le­gions of de­voted fans. In February, Anthropic raised $30 bil­lion in new in­vest­ments, valu­ing it at some $380 bil­lion, and re­ported that its an­nu­al­ized rev­enue was grow­ing at a rate of 10x per year. The com­pa­ny’s core busi­ness model of sell­ing di­rect to busi­nesses is seen by many in­vestors as more cred­i­ble than OpenAI’s main strat­egy of mon­e­tiz­ing a vast con­sumer user base.

When Anthropic in­tro­duced the RSP in 2023, Kaplan says, the com­pany hoped it would en­cour­age ri­vals to adopt sim­i­lar mea­sures. (No ri­vals made quite as overt a promise to pause AI de­vel­op­ment, but many pub­lished lengthy re­ports de­tail­ing their plans to mit­i­gate risk, which Kaplan chalks up as Anthropic ex­ert­ing a good in­flu­ence on the in­dus­try.) Executives also hoped the ap­proach might even­tu­ally serve as a blue­print for bind­ing na­tional reg­u­la­tions or even in­ter­na­tional treaties, Kaplan claims. But those reg­u­la­tions never ma­te­ri­al­ized. Instead, the Trump Administration has en­dorsed a let-it-rip at­ti­tude to AI de­vel­op­ment, even go­ing so far as to at­tempt to nul­lify state reg­u­la­tions. No fed­eral AI law is on the hori­zon. And while a global gov­er­nance frame­work may have seemed pos­si­ble in 2023, three years later it has be­come clear that door has closed. Meanwhile, com­pe­ti­tion for AI su­premacy—be­tween com­pa­nies but also be­tween na­tions—has only in­ten­si­fied.

To make mat­ters worse, the sci­ence of AI eval­u­a­tions has proven more com­pli­cated than Anthropic ex­pected when it first crafted the RSP. The ar­rival of pow­er­ful new mod­els meant that, in 2025, Anthropic an­nounced it could not rule out the pos­si­bil­ity of these mod­els fa­cil­i­tat­ing a bio-ter­ror­ist at­tack. But while they could­n’t rule it out, they also lacked strong sci­en­tific ev­i­dence that mod­els did pose that kind of dan­ger, which made it dif­fi­cult to con­vince gov­ern­ments and ri­vals of what they saw as the need to act care­fully. What the com­pany had pre­vi­ously imag­ined might look like a bright red line was in­stead com­ing into fo­cus as a fuzzy gra­di­ent. For nearly a year, Anthropic ex­ec­u­tives dis­cussed ways to re­shape their flag­ship safety pol­icy to match this new en­vi­ron­ment, Kaplan says. One point they kept com­ing back to was their found­ing premise: the idea that to do proper AI safety re­search, they had to build mod­els at the fron­tier of ca­pa­bil­ity—even though do­ing so might ac­cel­er­ate the ar­rival of the dan­gers they feared.

In February, according to Kaplan, Anthropic CEO Dario Amodei decided that keeping the company from training new models while competitors raced ahead would be helpful to nobody. “If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe,” the new version of the RSP, approved unanimously by Amodei and Anthropic’s board, states in its introduction. “The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research.”

Chris Painter, the director of policy at METR, a nonprofit focused on evaluating AI models for risky behavior, reviewed an early draft of the policy with Anthropic’s permission. He says the change is understandable, but also a bearish signal for the world’s ability to navigate potential AI catastrophes. “The change to the RSP shows Anthropic believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities,” Painter tells TIME. “This is more evidence that society is not prepared for the potential catastrophic risks posed by AI.”

Anthropic argues the retooled RSP is designed to keep the biggest benefits of the old one. For example, by constraining itself from releasing new models, Anthropic’s original RSP also incentivized it to quickly build safety mitigations. (Because otherwise the company would be unable to sell its AI to customers.) Anthropic says it believes it can maintain that incentive. The new policy commits the company to regularly release what it calls “Frontier Safety Roadmaps”: documents laying out a list of detailed goals for future safety measures it hopes to build. “We hope to create a forcing function for work that would otherwise be challenging to appropriately prioritize and resource, as it requires collaboration (and in some cases sacrifices) from multiple parts of the company and can be at cross-purposes with immediate competitive and commercial priorities,” the new RSP states. Anthropic says it will also commit to publishing so-called “Risk Reports” every three to six months. The reports, the company says, will “explain how capabilities, threat models (the specific ways that models might pose threats), and active risk mitigations fit together, and provide an assessment of the overall level of risk.” These documents will be more in-depth than the reports the company already publishes, a spokesperson tells TIME.

“I like the emphasis on transparent risk reporting and publicly verifiable safety roadmaps,” says Painter, the METR policy official. But he said he was “concerned” that moving away from binary thresholds under the previous RSP, by which the arrival of a certain capability could act as a tripwire to temporarily halt Anthropic’s AI development, might enable a “frog-boiling” effect, where danger slowly ramps up without a single moment that sets off alarms. Asked whether Anthropic was caving to market pressure, Kaplan argued that, in fact, Anthropic was making a renewed commitment to developing AI safely. “If all of our competitors are transparently doing the right thing when it comes to catastrophic risk, we are committed to doing as well or better,” he said. “But we don’t think it makes sense for us to stop engaging with AI research, AI safety, and most likely lose relevance as an innovator who understands the frontier of the technology, in a scenario where others are going ahead and we’re not actually contributing any additional risk to the ecosystem.”

...

Read the original on time.com »

6 523 shares, 27 trendiness

pi.dev

There are many cod­ing agents, but this one is mine.

Pi is a min­i­mal ter­mi­nal cod­ing har­ness. Adapt pi to your work­flows, not the other way around. Extend it with TypeScript ex­ten­sions, skills, prompt tem­plates, and themes. Bundle them as pi pack­ages and share via npm or git.

Pi ships with pow­er­ful de­faults but skips fea­tures like sub-agents and plan mode. Ask pi to build what you want, or in­stall a pack­age that does it your way.

Four modes: in­ter­ac­tive, print/​JSON, RPC, and SDK. See clawd­bot for a real-world in­te­gra­tion.

Anthropic, OpenAI, Google, Azure, Bedrock, Mistral, Groq, Cerebras, xAI, Hugging Face, Kimi For Coding, MiniMax, OpenRouter, Ollama, and more. Authenticate via API keys or OAuth.

Switch mod­els mid-ses­sion with /model or Ctrl+L. Cycle through your fa­vorites with Ctrl+P.

Add cus­tom providers and mod­els via mod­els.json or ex­ten­sions.

Sessions are stored as trees. Use /tree to nav­i­gate to any pre­vi­ous point and con­tinue from there. All branches live in a sin­gle file. Filter by mes­sage type, la­bel en­tries as book­marks.
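The tree layout is what makes /tree cheap: every entry carries a parent pointer, all branches share one flat store, and branching from an earlier point is just appending a node under an older entry. A minimal sketch of the idea (the names and shapes here are illustrative, not pi's actual session format):

```typescript
// Minimal tree-structured session store: all branches live in one array, and
// "continuing from an earlier point" is a new node whose parent is that entry.
type Entry = { id: number; parent: number | null; role: string; text: string };

class SessionTree {
  entries: Entry[] = [];

  append(parent: number | null, role: string, text: string): number {
    const id = this.entries.length;
    this.entries.push({ id, parent, role, text });
    return id;
  }

  // Reconstruct the conversation leading to a node by walking parent links.
  pathTo(id: number): Entry[] {
    const path: Entry[] = [];
    for (let cur: number | null = id; cur !== null; ) {
      const e: Entry = this.entries[cur];
      path.unshift(e);
      cur = e.parent;
    }
    return path;
  }
}

const s = new SessionTree();
const root = s.append(null, "user", "write a parser");
s.append(root, "assistant", "attempt 1");
// Branch: go back to the root and try again; the old branch stays in the file.
const b = s.append(root, "assistant", "attempt 2");
console.log(s.pathTo(b).map((e) => e.text).join(" -> "));
// → write a parser -> attempt 2
```

Because every node is reachable from the flat array, serializing the whole tree to a single file is trivial, and no branch is ever thrown away.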

Export to HTML with /export, or up­load to a GitHub gist with /share and get a share­able URL that ren­ders it.

Pi’s min­i­mal sys­tem prompt and ex­ten­si­bil­ity let you do ac­tual con­text en­gi­neer­ing. Control what goes into the con­text win­dow and how it’s man­aged.

AGENTS.md: Project in­struc­tions loaded at startup from ~/.pi/agent/, par­ent di­rec­to­ries, and the cur­rent di­rec­tory.

SYSTEM.md: Replace or ap­pend to the de­fault sys­tem prompt per-pro­ject.

Compaction: Auto-summarizes older mes­sages when ap­proach­ing the con­text limit. Fully cus­tomiz­able via ex­ten­sions: im­ple­ment topic-based com­paction, code-aware sum­maries, or use dif­fer­ent sum­ma­riza­tion mod­els.

Skills: Capability pack­ages with in­struc­tions and tools, loaded on-de­mand. Progressive dis­clo­sure with­out bust­ing the prompt cache. See skills.

Prompt tem­plates: Reusable prompts as Markdown files. Type /name to ex­pand. See prompt tem­plates.

Dynamic con­text: Extensions can in­ject mes­sages be­fore each turn, fil­ter the mes­sage his­tory, im­ple­ment RAG, or build long-term mem­ory.
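Of the items above, compaction is the most algorithmic: when the estimated token count nears the limit, fold the older messages into a summary and keep the recent tail verbatim. A rough sketch of that trigger-and-fold shape (the types and the summarize callback are illustrative, not pi's extension API):

```typescript
// Threshold-triggered compaction sketch: when history approaches the context
// limit, replace older messages with one summary entry and keep the recent
// tail verbatim. summarize() stands in for a call to a summarization model.
type Msg = { role: string; text: string };

const estimateTokens = (msgs: Msg[]) =>
  msgs.reduce((n, m) => n + Math.ceil(m.text.length / 4), 0); // ~4 chars/token

function compact(
  history: Msg[],
  limit: number,
  keepTail: number,
  summarize: (msgs: Msg[]) => string,
): Msg[] {
  if (estimateTokens(history) < limit * 0.8 || history.length <= keepTail) {
    return history; // still comfortably under the limit: no-op
  }
  const older = history.slice(0, history.length - keepTail);
  const tail = history.slice(history.length - keepTail);
  return [{ role: "system", text: `Summary: ${summarize(older)}` }, ...tail];
}

const history: Msg[] = Array.from({ length: 10 }, (_, i) => ({
  role: i % 2 ? "assistant" : "user",
  text: `message ${i} `.repeat(20),
}));
const compacted = compact(history, 200, 2, (m) => `${m.length} older messages`);
console.log(compacted.length); // summary entry + the 2 kept messages
```

Topic-based or code-aware variants would change only how `older` is partitioned and summarized; the trigger logic stays the same.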

Submit mes­sages while the agent works. Enter sends a steer­ing mes­sage (delivered af­ter cur­rent tool, in­ter­rupts re­main­ing tools). Alt+Enter sends a fol­low-up (waits un­til the agent fin­ishes).
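Those two keybindings amount to two queues with different delivery points: steering preempts the tools that have not started yet, while follow-ups wait for the turn to end. A toy model of that scheduling rule (not pi's internals):

```typescript
// Toy model of the two delivery rules: a steering message is delivered right
// after the tool that is currently running (the remaining tools are dropped),
// while a follow-up waits until the whole turn finishes.
class TurnScheduler {
  private steering: string[] = [];
  private followUps: string[] = [];
  log: string[] = [];

  steer(msg: string) { this.steering.push(msg); }
  followUp(msg: string) { this.followUps.push(msg); }

  runTurn(tools: string[]) {
    for (const tool of tools) {
      this.log.push(`ran ${tool}`);
      if (this.steering.length) {
        // Deliver after the current tool; remaining tools are interrupted.
        this.log.push(...this.steering.map((m) => `steer: ${m}`));
        this.steering = [];
        break;
      }
    }
    // Follow-ups are only delivered once the agent finishes the turn.
    this.log.push(...this.followUps.map((m) => `follow-up: ${m}`));
    this.followUps = [];
  }
}

const sched = new TurnScheduler();
sched.steer("use tabs, not spaces");
sched.followUp("also update the README");
sched.runTurn(["read_file", "edit_file", "run_tests"]);
console.log(sched.log);
// → ["ran read_file", "steer: use tabs, not spaces", "follow-up: also update the README"]
```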

Features that other agents bake in, you can build your­self. Extensions are TypeScript mod­ules with ac­cess to tools, com­mands, key­board short­cuts, events, and the full TUI.

Don’t want to build it? Ask pi to build it for you. Or in­stall a pack­age that does it your way. See the 50+ ex­am­ples.

Bundle extensions, skills, prompts, and themes as packages, and install them from npm or git.

Pin ver­sions with @1.2.3 or @tag. Update all with pi up­date, list with pi list, con­fig­ure with pi con­fig.

Find pack­ages on npm or Discord. Share yours with the pi-pack­age key­word.

RPC: JSON pro­to­col over stdin/​std­out for non-Node in­te­gra­tions. See docs/​rpc.md.

SDK: Embed pi in your apps. See clawd­bot for a real-world ex­am­ple.

Pi is ag­gres­sively ex­ten­si­ble so it does­n’t have to dic­tate your work­flow. Features that other tools bake in can be built with ex­ten­sions, skills, or in­stalled from third-party pi pack­ages. This keeps the core min­i­mal while let­ting you shape pi to fit how you work.

No MCP. Build CLI tools with READMEs (see Skills), or build an ex­ten­sion that adds MCP sup­port. Why?

No sub-agents. There’s many ways to do this. Spawn pi in­stances via tmux, or build your own with ex­ten­sions, or in­stall a pack­age that does it your way.

No per­mis­sion pop­ups. Run in a con­tainer, or build your own con­fir­ma­tion flow with ex­ten­sions in­line with your en­vi­ron­ment and se­cu­rity re­quire­ments.

No plan mode. Write plans to files, or build it with ex­ten­sions, or in­stall a pack­age.

No built-in to-dos. Use a TODO.md file, or build your own with ex­ten­sions.

Read the blog post for the full ra­tio­nale.

Docs: README and docs/ for every­thing else.

...

Read the original on pi.dev »

7 485 shares, 15 trendiness

How we rebuilt Next.js with AI in one week

*This post was up­dated at 12:35 pm PT to fix a typo in the build time bench­marks.

Last week, one engineer and an AI model rebuilt the most popular front-end framework from scratch. The result, vinext (pronounced “vee-next”), is a drop-in replacement for Next.js, built on Vite, that deploys to Cloudflare Workers with a single command. In early benchmarks, it builds production apps up to 4x faster and produces client bundles up to 57% smaller. And we already have customers running it in production.

The whole thing cost about $1,100 in to­kens.

Next.js is the most pop­u­lar React frame­work. Millions of de­vel­op­ers use it. It pow­ers a huge chunk of the pro­duc­tion web, and for good rea­son. The de­vel­oper ex­pe­ri­ence is top-notch.

But Next.js has a deployment problem when used in the broader serverless ecosystem. The tooling is entirely bespoke: Next.js has invested heavily in Turbopack, but if you want to deploy it to Cloudflare, Netlify, or AWS Lambda, you have to take that build output and reshape it into something the target platform can actually run.

If you’re thinking, “Isn’t that what OpenNext does?”, you are correct.

That is in­deed the prob­lem OpenNext was built to solve. And a lot of en­gi­neer­ing ef­fort has gone into OpenNext from mul­ti­ple providers, in­clud­ing us at Cloudflare. It works, but quickly runs into lim­i­ta­tions and be­comes a game of whack-a-mole.

Building on top of Next.js out­put as a foun­da­tion has proven to be a dif­fi­cult and frag­ile ap­proach. Because OpenNext has to re­verse-en­gi­neer Next.js’s build out­put, this re­sults in un­pre­dictable changes be­tween ver­sions that take a lot of work to cor­rect.

Next.js has been work­ing on a first-class adapters API, and we’ve been col­lab­o­rat­ing with them on it. It’s still an early ef­fort but even with adapters, you’re still build­ing on the be­spoke Turbopack tool­chain. And adapters only cover build and de­ploy. During de­vel­op­ment, next dev runs ex­clu­sively in Node.js with no way to plug in a dif­fer­ent run­time. If your ap­pli­ca­tion uses plat­form-spe­cific APIs like Durable Objects, KV, or AI bind­ings, you can’t test that code in dev with­out workarounds.

What if in­stead of adapt­ing Next.js out­put, we reim­ple­mented the Next.js API sur­face on Vite di­rectly? Vite is the build tool used by most of the front-end ecosys­tem out­side of Next.js, pow­er­ing frame­works like Astro, SvelteKit, Nuxt, and Remix. A clean reim­ple­men­ta­tion, not merely a wrap­per or adapter. We hon­estly did­n’t think it would work. But it’s 2026, and the cost of build­ing soft­ware has com­pletely changed.

We got a lot fur­ther than we ex­pected.

Replace next with vinext in your scripts and every­thing else stays the same. Your ex­ist­ing app/, pages/, and next.con­fig.js work as-is.

vinext dev      # Development server with HMR
vinext build    # Production build
vinext deploy   # Build and deploy to Cloudflare Workers

This is not a wrap­per around Next.js and Turbopack out­put. It’s an al­ter­na­tive im­ple­men­ta­tion of the API sur­face: rout­ing, server ren­der­ing, React Server Components, server ac­tions, caching, mid­dle­ware. All of it built on top of Vite as a plu­gin. Most im­por­tantly Vite out­put runs on any plat­form thanks to the Vite Environment API.

Early bench­marks are promis­ing. We com­pared vinext against Next.js 16 us­ing a shared 33-route App Router ap­pli­ca­tion.

Both frame­works are do­ing the same work: com­pil­ing, bundling, and prepar­ing server-ren­dered routes. We dis­abled TypeScript type check­ing and ESLint in Next.js’s build (Vite does­n’t run these dur­ing builds), and used force-dy­namic so Next.js does­n’t spend ex­tra time pre-ren­der­ing sta­tic routes, which would un­fairly slow down its num­bers. The goal was to mea­sure only bundler and com­pi­la­tion speed, noth­ing else. Benchmarks run on GitHub CI on every merge to main.

These benchmarks measure compilation and bundling speed, not production serving performance. The test fixture is a single 33-route app, not a representative sample of all production applications. We expect these numbers to evolve as the three projects continue to develop. The full methodology and historical results are public. Take them as directional, not definitive.

The di­rec­tion is en­cour­ag­ing, though. Vite’s ar­chi­tec­ture, and es­pe­cially Rolldown (the Rust-based bundler com­ing in Vite 8), has struc­tural ad­van­tages for build per­for­mance that show up clearly here.

vinext is built with Cloudflare Workers as the first deployment target. A single command, vinext deploy, takes you from source code to a running Worker.

This han­dles every­thing: builds the ap­pli­ca­tion, auto-gen­er­ates the Worker con­fig­u­ra­tion, and de­ploys. Both the App Router and Pages Router work on Workers, with full client-side hy­dra­tion, in­ter­ac­tive com­po­nents, client-side nav­i­ga­tion, React state.

For production caching, vinext includes a Cloudflare KV cache handler that gives you ISR (Incremental Static Regeneration) out of the box.
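A pluggable cache layer of this kind reduces to a small get/set interface plus a registration call. The sketch below is a guess at the shape: only the setCacheHandler name comes from the post; the CacheHandler interface is an assumption, and an in-memory handler stands in for the KV-backed one:

```typescript
// Illustrative pluggable cache layer. The interface shape is assumed for
// illustration; only the setCacheHandler name appears in the post. An
// in-memory handler stands in for KV- or R2-backed implementations.
interface CacheHandler {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

let handler: CacheHandler | null = null;
function setCacheHandler(h: CacheHandler) { handler = h; }

// In-memory stand-in for a KV namespace, enough to show the contract.
class MemoryCache implements CacheHandler {
  private store = new Map<string, { value: string; expires: number }>();
  async get(key: string) {
    const hit = this.store.get(key);
    if (!hit || hit.expires < Date.now()) return null;
    return hit.value;
  }
  async set(key: string, value: string, ttlSeconds: number) {
    this.store.set(key, { value, expires: Date.now() + ttlSeconds * 1000 });
  }
}

setCacheHandler(new MemoryCache());

// ISR-style read-through: serve cached HTML if present, render otherwise.
async function renderWithISR(path: string, render: () => string) {
  const cached = await handler!.get(path);
  if (cached !== null) return cached;
  const html = render();
  await handler!.set(path, html, 60);
  return html;
}
```

Swapping in a KV- or R2-backed class is then a one-line change at the setCacheHandler call site, which is presumably the flexibility the post describes.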

KV is a good de­fault for most ap­pli­ca­tions, but the caching layer is de­signed to be plug­gable. That set­CacheHan­dler call means you can swap in what­ever back­end makes sense. R2 might be a bet­ter fit for apps with large cached pay­loads or dif­fer­ent ac­cess pat­terns. We’re also work­ing on im­prove­ments to our Cache API that should pro­vide a strong caching layer with less con­fig­u­ra­tion. The goal is flex­i­bil­ity: pick the caching strat­egy that fits your app.

We also have a live ex­am­ple of Cloudflare Agents run­ning in a Next.js app, with­out the need for workarounds like get­Plat­form­Proxy, since the en­tire app now runs in work­erd, dur­ing both dev and de­ploy phases. This means be­ing able to use Durable Objects, AI bind­ings, and every other Cloudflare-specific ser­vice with­out com­pro­mise. Have a look here.

The cur­rent de­ploy­ment tar­get is Cloudflare Workers, but that’s a small part of the pic­ture. Something like 95% of vinext is pure Vite. The rout­ing, the mod­ule shims, the SSR pipeline, the RSC in­te­gra­tion: none of it is Cloudflare-specific.

Cloudflare is looking to work with other hosting providers on adopting this toolchain for their customers (the lift is minimal: we got a proof-of-concept working on Vercel in less than 30 minutes!). This is an open-source project, and for its long-term success, we believe it’s important we work with partners across the ecosystem to ensure ongoing investment. PRs from other platforms are welcome. If you’re interested in adding a deployment target, open an issue or reach out.

We want to be clear: vinext is ex­per­i­men­tal. It’s not even one week old, and it has not yet been bat­tle-tested with any mean­ing­ful traf­fic at scale. If you’re eval­u­at­ing it for a pro­duc­tion ap­pli­ca­tion, pro­ceed with ap­pro­pri­ate cau­tion.

That said, the test suite is ex­ten­sive: over 1,700 Vitest tests and 380 Playwright E2E tests, in­clud­ing tests ported di­rectly from the Next.js test suite and OpenNext’s Cloudflare con­for­mance suite. We’ve ver­i­fied it against the Next.js App Router Playground. Coverage sits at 94% of the Next.js 16 API sur­face.

Early re­sults from real-world cus­tomers are en­cour­ag­ing. We’ve been work­ing with National Design Studio, a team that’s aim­ing to mod­ern­ize every gov­ern­ment in­ter­face, on one of their beta sites, CIO.gov. They’re al­ready run­ning vinext in pro­duc­tion, with mean­ing­ful im­prove­ments in build times and bun­dle sizes.

The README is hon­est about what’s not sup­ported and won’t be, and about known lim­i­ta­tions. We want to be up­front rather than over­promise.

vinext al­ready sup­ports Incremental Static Regeneration (ISR) out of the box. After the first re­quest to any page, it’s cached and reval­i­dated in the back­ground, just like Next.js. That part works to­day.

vinext does not yet sup­port sta­tic pre-ren­der­ing at build time. In Next.js, pages with­out dy­namic data get ren­dered dur­ing next build and served as sta­tic HTML. If you have dy­namic routes, you use gen­er­at­eSta­t­ic­Params() to enu­mer­ate which pages to build ahead of time. vinext does­n’t do that… yet.

This was an in­ten­tional de­sign de­ci­sion for launch. It’s on the roadmap, but if your site is 100% pre­built HTML with sta­tic con­tent, you prob­a­bly won’t see much ben­e­fit from vinext to­day. That said, if one en­gi­neer can spend $1,100 in to­kens and re­build Next.js, you can prob­a­bly spend $10 and mi­grate to a Vite-based frame­work de­signed specif­i­cally for sta­tic con­tent, like Astro (which also de­ploys to Cloudflare Workers).

For sites that aren’t purely sta­tic, though, we think we can do some­thing bet­ter than pre-ren­der­ing every­thing at build time.

Next.js pre-ren­ders every page listed in gen­er­at­eSta­t­ic­Params() dur­ing the build. A site with 10,000 prod­uct pages means 10,000 ren­ders at build time, even though 99% of those pages may never re­ceive a re­quest. Builds scale lin­early with page count. This is why large Next.js sites end up with 30-minute builds.

So we built Traffic-aware Pre-Rendering (TPR). It’s ex­per­i­men­tal to­day, and we plan to make it the de­fault once we have more real-world test­ing be­hind it.

The idea is sim­ple. Cloudflare is al­ready the re­verse proxy for your site. We have your traf­fic data. We know which pages ac­tu­ally get vis­ited. So in­stead of pre-ren­der­ing every­thing or pre-ren­der­ing noth­ing, vinext queries Cloudflare’s zone an­a­lyt­ics at de­ploy time and pre-ren­ders only the pages that mat­ter.

vinext deploy --experimental-tpr
Building…
Build complete (4.2s)
TPR (experimental): Analyzing traffic for my-store.com (last 24h)
TPR: 12,847 unique paths — 184 pages cover 90% of traffic
TPR: Pre-rendering 184 pages…
TPR: Pre-rendered 184 pages in 8.3s → KV cache
Deploying to Cloudflare Workers…

For a site with 100,000 prod­uct pages, the power law means 90% of traf­fic usu­ally goes to 50 to 200 pages. Those get pre-ren­dered in sec­onds. Everything else falls back to on-de­mand SSR and gets cached via ISR af­ter the first re­quest. Every new de­ploy re­freshes the set based on cur­rent traf­fic pat­terns. Pages that go vi­ral get picked up au­to­mat­i­cally. All of this works with­out gen­er­at­eSta­t­ic­Params() and with­out cou­pling your build to your pro­duc­tion data­base.
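At its core the selection step is a coverage cut over the traffic histogram: rank paths by request count and keep the smallest prefix that covers roughly 90% of traffic. A sketch of that step in isolation (the analytics query and the rendering are elided, and this is our reading of the idea, not vinext's implementation):

```typescript
// Selection step behind traffic-aware pre-rendering: sort paths by request
// count and keep the smallest prefix covering the target share of traffic.
// Everything outside the prefix falls back to on-demand SSR + ISR.
function selectPagesToPrerender(
  hits: Record<string, number>,
  coverage = 0.9,
): string[] {
  const ranked = Object.entries(hits).sort((a, b) => b[1] - a[1]);
  const total = ranked.reduce((n, [, count]) => n + count, 0);
  const chosen: string[] = [];
  let covered = 0;
  for (const [path, count] of ranked) {
    if (covered >= total * coverage) break;
    chosen.push(path);
    covered += count;
  }
  return chosen;
}

// Power-law-ish traffic: a few hot pages dominate a long tail.
const traffic: Record<string, number> = { "/": 5000, "/pricing": 3000, "/blog/hit-post": 1500 };
for (let i = 0; i < 1000; i++) traffic[`/product/${i}`] = 1; // long tail
console.log(selectPagesToPrerender(traffic).length); // a handful despite 1003 paths
```

Because the cut is recomputed at each deploy, a page that goes viral simply climbs the ranking and enters the pre-rendered set next time.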

A pro­ject like this would nor­mally take a team of en­gi­neers months, if not years. Several teams at var­i­ous com­pa­nies have at­tempted it, and the scope is just enor­mous. We tried once at Cloudflare! Two routers, 33+ mod­ule shims, server ren­der­ing pipelines, RSC stream­ing, file-sys­tem rout­ing, mid­dle­ware, caching, sta­tic ex­port. There’s a rea­son no­body has pulled it off.

This time we did it in under a week. One engineer (technically, an engineering manager) directing AI.

The first com­mit landed on February 13. By the end of that same evening, both the Pages Router and App Router had ba­sic SSR work­ing, along with mid­dle­ware, server ac­tions, and stream­ing. By the next af­ter­noon, App Router Playground was ren­der­ing 10 of 11 routes. By day three, vinext de­ploy was ship­ping apps to Cloudflare Workers with full client hy­dra­tion. The rest of the week was hard­en­ing: fix­ing edge cases, ex­pand­ing the test suite, bring­ing API cov­er­age to 94%.

What changed from those ear­lier at­tempts? AI got bet­ter. Way bet­ter.

Not every pro­ject would go this way. This one did be­cause a few things hap­pened to line up at the right time.

Next.js is well-spec­i­fied. It has ex­ten­sive doc­u­men­ta­tion, a mas­sive user base, and years of Stack Overflow an­swers and tu­to­ri­als. The API sur­face is all over the train­ing data. When you ask Claude to im­ple­ment get­Server­Side­Props or ex­plain how useRouter works, it does­n’t hal­lu­ci­nate. It knows how Next works.

Next.js has an elab­o­rate test suite. The Next.js repo con­tains thou­sands of E2E tests cov­er­ing every fea­ture and edge case. We ported tests di­rectly from their suite (you can see the at­tri­bu­tion in the code). This gave us a spec­i­fi­ca­tion we could ver­ify against me­chan­i­cally.

Vite is an ex­cel­lent foun­da­tion. Vite han­dles the hard parts of front-end tool­ing: fast HMR, na­tive ESM, a clean plu­gin API, pro­duc­tion bundling. We did­n’t have to build a bundler. We just had to teach it to speak Next.js. @vitejs/plugin-rsc is still early, but it gave us React Server Components sup­port with­out hav­ing to build an RSC im­ple­men­ta­tion from scratch.

The mod­els caught up. We don’t think this would have been pos­si­ble even a few months ago. Earlier mod­els could­n’t sus­tain co­her­ence across a code­base this size. New mod­els can hold the full ar­chi­tec­ture in con­text, rea­son about how mod­ules in­ter­act, and pro­duce cor­rect code of­ten enough to keep mo­men­tum go­ing. At times, I saw it go into Next, Vite, and React in­ter­nals to fig­ure out a bug. The state-of-the-art mod­els are im­pres­sive, and they seem to keep get­ting bet­ter.

All of those things had to be true at the same time. Well-documented tar­get API, com­pre­hen­sive test suite, solid build tool un­der­neath, and a model that could ac­tu­ally han­dle the com­plex­ity. Take any one of them away and this does­n’t work nearly as well.

Almost every line of code in vinext was writ­ten by AI. But here’s the thing that mat­ters more: every line passes the same qual­ity gates you’d ex­pect from hu­man-writ­ten code. The pro­ject has 1,700+ Vitest tests, 380 Playwright E2E tests, full TypeScript type check­ing via tsgo, and lint­ing via oxlint. Continuous in­te­gra­tion runs all of it on every pull re­quest. Establishing a set of good guardrails is crit­i­cal to mak­ing AI pro­duc­tive in a code­base.

The process started with a plan. I spent a cou­ple of hours go­ing back and forth with Claude in OpenCode to de­fine the ar­chi­tec­ture: what to build, in what or­der, which ab­strac­tions to use. That plan be­came the north star. From there, the work­flow was straight­for­ward:

Let the AI write the im­ple­men­ta­tion and tests. If tests pass, merge. If not, give the AI the er­ror out­put and let it it­er­ate.
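That loop is simple enough to sketch. A schematic version, with illustrative function names standing in for the actual OpenCode tooling described in the post:

```python
# A hedged sketch of the write/test/iterate loop; `generate` and
# `run_tests` are illustrative stand-ins, not the authors' actual tools.
def iterate_until_green(generate, run_tests, max_rounds=5):
    """Ask the model for a patch, run the tests, feed failures back."""
    feedback = None
    for _ in range(max_rounds):
        patch = generate(feedback)      # AI writes implementation + tests
        ok, output = run_tests(patch)   # run the suite
        if ok:
            return patch                # green: ready to merge
        feedback = output               # red: iterate on the error output
    raise RuntimeError("no green build within the round budget")
```

The human's job is everything this loop can't do: deciding what `generate` should be asked for next, and noticing when the iterations are circling a dead end.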

We wired up AI agents for code re­view too. When a PR was opened, an agent re­viewed it. When re­view com­ments came back, an­other agent ad­dressed them. The feed­back loop was mostly au­to­mated.

It did­n’t work per­fectly every time. There were PRs that were just wrong. The AI would con­fi­dently im­ple­ment some­thing that seemed right but did­n’t match ac­tual Next.js be­hav­ior. I had to course-cor­rect reg­u­larly. Architecture de­ci­sions, pri­or­i­ti­za­tion, know­ing when the AI was headed down a dead end: that was all me. When you give AI good di­rec­tion, good con­text, and good guardrails, it can be very pro­duc­tive. But the hu­man still has to steer.

For browser-level test­ing, I used agent-browser to ver­ify ac­tual ren­dered out­put, client-side nav­i­ga­tion, and hy­dra­tion be­hav­ior. Unit tests miss a lot of sub­tle browser is­sues. This caught them.

Over the course of the pro­ject, we ran over 800 ses­sions in OpenCode. Total cost: roughly $1,100 in Claude API to­kens.

Why do we have so many lay­ers in the stack? This pro­ject forced me to think deeply about this ques­tion. And to con­sider how AI im­pacts the an­swer.

Most ab­strac­tions in soft­ware ex­ist be­cause hu­mans need help. We could­n’t hold the whole sys­tem in our heads, so we built lay­ers to man­age the com­plex­ity for us. Each layer made the next per­son’s job eas­ier. That’s how you end up with frame­works on top of frame­works, wrap­per li­braries, thou­sands of lines of glue code.

AI does­n’t have the same lim­i­ta­tion. It can hold the whole sys­tem in con­text and just write the code. It does­n’t need an in­ter­me­di­ate frame­work to stay or­ga­nized. It just needs a spec and a foun­da­tion to build on.

It’s not clear yet which ab­strac­tions are truly foun­da­tional and which ones were just crutches for hu­man cog­ni­tion. That line is go­ing to shift a lot over the next few years. But vinext is a data point. We took an API con­tract, a build tool, and an AI model, and the AI wrote every­thing in be­tween. No in­ter­me­di­ate frame­work needed. We think this pat­tern will re­peat across a lot of soft­ware. The lay­ers we’ve built up over the years aren’t all go­ing to make it.

Thanks to the Vite team. Vite is the foun­da­tion this whole thing stands on. @vitejs/plugin-rsc is still early days, but it gave me RSC sup­port with­out hav­ing to build that from scratch, which would have been a deal­breaker. The Vite main­tain­ers were re­spon­sive and help­ful as I pushed the plu­gin into ter­ri­tory it had­n’t been tested in be­fore.

We also want to ac­knowl­edge the Next.js team. They’ve spent years build­ing a frame­work that raised the bar for what React de­vel­op­ment could look like. The fact that their API sur­face is so well-doc­u­mented and their test suite so com­pre­hen­sive is a big part of what made this pro­ject pos­si­ble. vinext would­n’t ex­ist with­out the stan­dard they set.

vinext in­cludes an Agent Skill that han­dles mi­gra­tion for you. It works with Claude Code, OpenCode, Cursor, Codex, and dozens of other AI cod­ing tools. Install it, open your Next.js pro­ject, and tell the AI to mi­grate:

Then open your Next.js pro­ject in any sup­ported tool and say:

The skill han­dles com­pat­i­bil­ity check­ing, de­pen­dency in­stal­la­tion, con­fig gen­er­a­tion, and dev server startup. It knows what vinext sup­ports and will flag any­thing that needs man­ual at­ten­tion.

Or if you pre­fer do­ing it by hand:

npx vinext init # Migrate an ex­ist­ing Next.js pro­ject

npx vinext dev # Start the dev server

npx vinext de­ploy # Ship to Cloudflare Workers

The source is at github.com/​cloud­flare/​vinext. Issues, PRs, and feed­back are wel­come.

...

Read the original on blog.cloudflare.com »

8 440 shares, 84 trendiness

Never Buy A .online Domain

I’ve been a .com purist for over two decades of build­ing. Once, I broke that rule and bought a .online TLD for a small pro­ject. This is the story of how it went up in flames.

Update: Within 40 minutes of posting this on HN, the site has been removed from Google's Safe Browsing blacklist. Thank you, unknown Google hero! I've emailed Radix to remove the darn serverHold.

Update 2: The site is fi­nally back on­line. Not link­ing here as I don’t want this to look like a mar­ket­ing stunt. Link at the bot­tom if you’re cu­ri­ous. [4]

Earlier this year, Namecheap was running a promo that let you choose one free .online or .site per account. I was working on a small product and thought, “hey, why not?” The app was a small browser, and the .online TLD just made sense in my head.

After a tiny $0.20 to cover ICANN fees, and hook­ing it up to Cloudflare and GitHub, I was up and run­ning. Or so I thought.

Poking around traffic data for an unrelated domain many weeks after the purchase, I noticed there were zero visitors to the site in the last 48 hours. Loading it up led to the dreaded, all-red, full-page “This is an unsafe site” notice on both Firefox and Chrome. The site had a link to the App Store, some screenshots (no gore or violence or anything of that sort), and a few lines of text about the app, nothing else that could possibly cause this. [1]

Clicking through the disclaimers to load the actual site to check if it had been defaced, I was greeted with a “site not found” error. Uh oh.

After check­ing that Cloudflare was still ac­ti­vated and the CF Worker was point­ing to the do­main, I went to the reg­is­trar first. Namecheap is not the pic­ture of re­li­a­bil­ity, so it seemed like a good place to start. The do­main showed up fine on my ac­count with the right ex­pi­ra­tion date. The name­servers were cor­rect and pointed to CF.

Maybe I had got­ten it wrong, so I checked the WHOIS in­for­ma­tion on­line. Status: server­Hold. Oh no…

At this point, I dou­ble checked to make sure I had­n’t re­ceived emails from the reg­istry, reg­is­trar, host, or Google. Nada, noth­ing, zilch.

I emailed Namecheap to dou­ble check what was go­ing on (even though it’s a server­Hold [2], not a clien­tHold [3]). They re­sponded in a few min­utes with:

Cursing un­der my breath, as it con­firms my worst fears, I promptly sub­mit­ted a re­quest to the abuse team at Radix, the reg­istry in our case, who re­sponded with:

Right, let’s get our­selves off the damned Safe Browsing black­list, eh? How hard could it be?

Very hard, I've now come to learn. You need to verify the domain in Google Search Console to then ask for a review and get the flag removed. But how do you get verified? Add a DNS TXT or a CNAME record. How will that work if the domain won't resolve? It won't.

As the sit­u­a­tion stands, the reg­istry won’t re­ac­ti­vate the do­main un­less Google re­moves the flag, and Google won’t re­move the flag un­less I ver­ify that I own the do­main, which I phys­i­cally can’t.
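The deadlock is easy to state in code. This is a schematic, not Google's actual implementation: Search Console's DNS-based ownership check reduces to finding a token in the domain's TXT records, and a serverHold domain returns no records at all.

```python
# Schematic only - not Google's implementation. Ownership is proven by a
# TXT record containing the verification token; a domain on serverHold
# does not resolve, so the record set is empty and verification can
# never succeed.
def can_verify(txt_records, token):
    """True if any TXT record carries the verification token."""
    return any(token in record for record in txt_records)

token = "google-site-verification=abc123"   # illustrative token
live_records = ["v=spf1 -all", token]

can_verify(live_records, token)   # True  - a resolving domain passes
can_verify([], token)             # False - a serverHold domain cannot
```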

I’ve tried re­port­ing the false pos­i­tive here, here and here, just in case it moves the nee­dle.

I've also submitted a review request to the Safe Search team (totally different from Safe Browsing) in the hopes that it might trigger a re-review elsewhere. Instead, I just get a “No valid pages were submitted” message from Google, because nothing resolves on the domain.

As a last re­sort, I sub­mit­ted a tem­po­rary re­lease re­quest to the reg­istry so Google can re­view the site’s con­tents and, hope­fully, re­move the flag.

I’ve made a few mis­takes here that I def­i­nitely won’t be mak­ing again.

* Buying a weird TLD. .com is the gold stan­dard. I’m never buy­ing any­thing else again. Once bit­ten and all that.

* Not adding the do­main to Google Search Console im­me­di­ately. I don’t need their an­a­lyt­ics and was­n’t re­ally plan­ning on hav­ing any con­tent on the do­main, so I thought, why bother? Big, big mis­take.

* Not adding any up­time ob­serv­abil­ity. This was just a land­ing page, and I wanted as few mov­ing parts as pos­si­ble.
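For a landing page, that missing observability can be a handful of lines. A minimal sketch of an uptime probe, under the assumption that any network failure or non-2xx status counts as "down" (the scheduling and alerting around it are left out):

```python
# Minimal uptime probe sketch: poll the page and treat DNS failures,
# refused connections, timeouts, and HTTP errors all as "down".
import urllib.request

def is_up(url, timeout=10):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:   # URLError/HTTPError both subclass OSError
        return False
```

A serverHold would have surfaced as a hard DNS failure on the first poll, weeks before the author noticed the traffic drop.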

Both Radix, the reg­istry, and Google de­serve spe­cial men­tion for their hair-trig­ger bans and painful re­moval processes, with no no­ti­fi­ca­tions or grace time to fix the is­sue. I’m not sure whether it’s the weird TLD that’s caus­ing a po­ten­tially short fuse or whether I was brigaded ear­lier with re­ports. I’ll never know.

[1] A mir­ror can be found here to ver­ify the site con­tents.

[2] server­Hold is set by the reg­istry and is a royal pain to deal with. Usually means things are FUBAR.

[3] clien­tHold is set by the reg­is­trar and is mostly pay­ment or billing re­lated.

...

Read the original on www.0xsid.com »

9 402 shares, 17 trendiness

yjeanrenaud/yj_nearbyglasses: attempting to detect smart glasses nearby and warn you

The app, called Nearby Glasses, has one sole pur­pose: Look for smart glasses nearby and warn you.

This app notifies you when smart glasses are nearby. It uses the company identifiers in the Bluetooth data these devices send out. Therefore, there will likely be false positives (e.g. from VR headsets). Hence, please proceed with caution when approaching a nearby person wearing glasses. They might just be regular glasses, despite this app's warning.

The app's author, Yves Jeanrenaud, takes no liability whatsoever for this app or its functionality. Use at your own risk. By technical design, detecting Bluetooth LE devices might sometimes just not work as expected. I am not a trained developer; this was all written in my free time, with knowledge I taught myself.

False positives are likely. This means the app Nearby Glasses may notify you of smart glasses nearby when there is in fact a VR headset from the same manufacturer, or another of that company's products, around. It may also miss smart glasses nearby. Again: I am not a professional developer.

However, this app is free and its source is available (though it's not considered FOSS due to the non-commercial restriction); you may review the code, change it, and re-use it (under the license).

The app Nearby Glasses does not store any details about you or collect any information about you or your phone. There is no telemetry, no ads, and no other nuisance. If you install the app via the Play Store, Google may know something about you and collect some stats, but the app itself does not.

If you choose to store (export) the logfile, it is completely up to you, and your liability, where that data goes. The logs are recorded only locally and are not automatically shared with anyone. They contain little sensitive data; in fact, only the manufacturer ID codes of the BLE devices encountered.

Use with ex­treme cau­tion! As stated be­fore: There is no guar­an­tee that de­tected smart glasses are re­ally nearby. It might be an­other de­vice look­ing tech­ni­cally (on the BLE adv level) sim­i­lar to smart glasses.

Please do not act rashly. Think be­fore you act upon any mes­sages (not only from this app).

* Because I consider smart glasses an intolerable intrusion: a consent-neglecting, horrible piece of tech that is already being used to make tons of equally and truly disgusting “content”. 1, 2

* Some smart glasses feature a small LED signifying that a recording is going on. But this is easily disabled, whilst manufacturers claim to prevent that and take no responsibility at all (tech has tended to do that for decades now). 3

* Smart glasses have been used for instant facial recognition before 4 and reportedly will support it out of the box 5. This puts a lot of people in danger.

* I hope this app is useful for someone.

* It's a simple, rather heuristic approach. Because BLE uses randomised MACs, and neither the OSSID nor the UUIDs of the service announcements are stable, you can't just scan for the Bluetooth beacons. And to make things even more dire, some manufacturers, Meta for instance, use proprietary Bluetooth services whose UUIDs are not persistent, so we can only rely on the communicated device names for now.

* The currently most viable approach comes from the Bluetooth SIG assigned-numbers repo. Following this, the manufacturer's company name shows up as a number code in the advertising header (ADV) of BLE beacons.

* This is what BLE advertising frames look like (the frame-layout diagram is not reproduced here).

* According to the Bluetooth SIG as­signed num­bers repo, we may use these com­pany IDs:

0x01AB for Meta Platforms, Inc. (formerly Facebook)

0x0D53 for Luxottica Group S.p.A (who manufactures the Meta Ray-Bans)

0x03C2 for Snapchat, Inc., that makes SNAP Spectacles

They are immutable and mandatory. Of course, Meta and other manufacturers also have other products that come with Bluetooth and therefore carry their ID, e.g. VR headsets. Therefore, using these company ID codes for the app's scanning process is prone to false positives. But if you can't see someone wearing an Oculus Rift around you and there are no buildings where they could hide, chances are good that it's smart glasses instead.
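A minimal sketch (not the app's actual code) of how those company IDs come out of an advertising payload: in an AD structure of type 0xFF (Manufacturer Specific Data), the first two data bytes are the SIG-assigned company ID, little-endian.

```python
# Sketch only - not the app's source. Walk the length/type/value AD
# structures of a BLE advertising payload and collect company IDs from
# Manufacturer Specific Data (AD type 0xFF).
WATCHED = {0x01AB: "Meta Platforms", 0x0D53: "Luxottica", 0x03C2: "Snapchat"}

def company_ids(adv: bytes):
    found, i = [], 0
    while i + 1 < len(adv):
        length = adv[i]
        if length == 0 or i + 1 + length > len(adv):
            break                              # malformed / end of payload
        if adv[i + 1] == 0xFF and length >= 3:
            # company ID is little-endian in the first two data bytes
            found.append(adv[i + 2] | (adv[i + 3] << 8))
        i += 1 + length
    return found

# Example frame: a flags structure, then manufacturer data for 0x01AB.
frame = bytes([0x02, 0x01, 0x06, 0x05, 0xFF, 0xAB, 0x01, 0x00, 0x00])
[WATCHED.get(c) for c in company_ids(frame)]   # ['Meta Platforms']
```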

* During pairing, the smart glasses usually emit their product name, so we can scan for that, too. But it's rare that we'll see that in the field: people intending to use smart glasses in bars, pubs, on the street, and elsewhere usually prepare for that beforehand.

* When the app recognises a Bluetooth Low Energy (BLE) device with sufficient signal strength (see RSSI below), it will push an alert message. This shall help you to act accordingly.

* The app Nearby Glasses shows a no­ti­fi­ca­tion when smart glasses are nearby (that means, a BLE de­vice of one of those com­pany IDs men­tioned above)

* Nearby means the RSSI (signal strength) is greater than or equal to a given threshold: -75 dBm by default. This default value corresponds to a medium distance and an ok-ish signal. Let me explain:

RSSI depends mainly on transmit power, distance, and any obstacles in between.

-100 dBm ~ 30 — 100+ m, or near signal loss.

Indoors, distances are often much shorter.

RSSI drops roughly according to

RSSI ≈ -10 * n * log10(distance) + constant

* Therefore, the default RSSI threshold of -75 dBm corresponds to about 10 to 15 metres in open space and 3 to 10 metres indoors or in crowded spaces. You've got a good chance of spotting that smart-glasses-wearing person at such distances.
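Inverting the path-loss formula above gives a rough feel for these numbers. The 1 m reference RSSI (-59 dBm) and path-loss exponent (n = 2, free space) below are assumed textbook values, not measurements from the app:

```python
# Rough illustration only: invert RSSI ≈ -10 * n * log10(d) + constant
# for distance d. rssi_at_1m and n are assumed values; real environments
# vary a lot (walls, bodies, antenna orientation).
def estimate_distance(rssi, rssi_at_1m=-59.0, n=2.0):
    return 10 ** ((rssi_at_1m - rssi) / (10 * n))

round(estimate_distance(-75), 1)   # ~6 m with these assumptions; a larger
                                   # n or weaker reference pushes this
                                   # toward the 10-15 m quoted above
round(estimate_distance(-100))     # ~112 m - the "near signal loss" row
```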

* Nearby Glasses shows an op­tional de­bug log that is ex­portable (as txt file) and fea­tures a copy&paste func­tion. Those are for ad­vanced users (nerds) and for fur­ther de­bug­ging.

* Under Settings, you may spec­ify the log length, the de­bug­ging (display all scan items or only ADV frames).

* You may also en­ter your­self some com­pany IDs as string of hex val­ues, e.g. 0x01AB,0x058E,0x0D53. This over­rides the built-in de­tec­tion, so your no­ti­fi­ca­tion shows up for the new value(s).

* For bet­ter per­sis­tence, it uses Android’s Foreground Service. You may dis­able this un­der Settings if you don’t need it.

* The Notification Cooldown un­der Settings spec­i­fies how much time must pass be­tween two warn­ings. Default is 10000 ms, which is 10 s.
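The cooldown logic itself is tiny. A sketch (not the app's actual source) of the default 10 000 ms suppression window:

```python
# Sketch of the notification cooldown: suppress repeat alerts that
# arrive within the configured window (default 10 000 ms).
import time

class Cooldown:
    def __init__(self, ms=10_000):
        self.ms, self.last = ms, None

    def should_notify(self, now_ms=None):
        if now_ms is None:
            now_ms = time.monotonic() * 1000
        if self.last is None or now_ms - self.last >= self.ms:
            self.last = now_ms
            return True
        return False

cd = Cooldown()
cd.should_notify(0)       # True  - first detection alerts
cd.should_notify(4000)    # False - still inside the 10 s window
cd.should_notify(12000)   # True  - window elapsed, alert again
```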

* It is now a bit more lo­calised:

* See Releases for APK to down­load. Google Play Store en­try may fol­low soon

Install the app (from Releases or from Google Play, for now) and open it

Grant per­mis­sions to ac­ti­vate Bluetooth (if not al­ready en­abled) and to ac­cess de­vices nearby. Some ver­sions of Android also need you to grant per­mis­sions to ac­cess your lo­ca­tion (before Version 13, mostly). Nearby Glasses does noth­ing with your lo­ca­tion info. If you don’t be­lieve me, please look at the code

If you don't see the scan starting, you might need to enable the Foreground Service for your particular phone in the Settings menu (see below)

You're all set! When smart glasses are detected nearby, a notification will appear, and keeps appearing until you hit Stop Scanning or terminate the app for good

In the menu (top right, the cog­wheel), you may make some Settings:

Enable Foreground Service: This prevents Android from pausing the app (which would stop it from alerting you). I recommend leaving this enabled

RSSI threshold: This negative number specifies how far away a device may be to trigger an alert by Nearby Glasses. Technically, it refers to how strongly the signal is received. Closer to zero means a better signal, hence less distance between your phone and the smart glasses. See RSSI above for explanation and guidance. I recommend leaving it at -75

Enable Notifications: You would not want to dis­able that

Notification Cooldown: Here you specify how much time must pass between two notifications about smart glasses found nearby. I chose 10 seconds (10000 ms) as the default value. Like this, you won't miss a notification, while at the same time you won't be bothered by it too much or drain your battery too fast

Enable Log Display: Disabling this might spare you some bat­tery

Debug: Is needed to see more than just the match­ing BLE frames in the log dis­play frame. It’s use­ful to see if things are work­ing

Max log lines: How long the log may get. 200 seems to be a good bal­ance be­tween bat­tery life and us­abil­ity of the log (for nerds like me)

BLE ADV only: This ex­cludes other Bluetooth LE frames from the log for bet­ter read­abil­ity

Override Company IDs: If you want, you can let Nearby Glasses alert you of other de­vices than spec­i­fied above. Useful for de­bug­ging, at least for me. Leave it empty if you don’t need it or don’t know what to do with it

Every set­ting is saved and ef­fec­tive im­me­di­ately. To go back, use your back but­ton or ges­ture

The ex­port func­tion en­ables you to share a text-file of the ap­p’s log. For nerds like me

You may also copy&paste the log by tap­ping on the log dis­play frame

* It’s now work­ing in the wild! I man­aged to get some peo­ple test­ing it with ver­i­fied smart glasses around them. Special thanks to Lena!

* See Releases for APK to down­load.

* I pushed Nearby Glasses to Google Play, too. However, I will always publish releases here on GitHub and elsewhere, for those who avoid Google Play.

* I am no BT or Android expert at all. From what I've learned, one could also dig deeper into the communication of the smart glasses by sniffing the BLE traffic. By doing so, we would likely not need to rely on the device behaving according to the BT specifications, but could also use heuristics on the encrypted traffic transmissions without many false positives. But I haven't looked into BT traffic packets for more than ten years. I'm glad I remembered ADV frames… So if anybody could help with this, that'd be greatly appreciated!

* Rework to canary mode. I am looking into the suggestion I got on Mastodon to steer away from warning about smart glasses and instead have the app report that no smart glasses have been found so far. This means I must rework the scanner logic a bit, and the interface

* Add an op­tion to set false pos­i­tives to an ig­nore list. Maybe in the no­ti­fi­ca­tion?

* Add more manufacturer IDs of smart glasses. Right now, it's Meta, Oakley and Snap. A list of available smart glasses with cameras would help, too.

* An iOS app might be possible, too. I have the toolchain now, but I will need a Mac to submit it to the Apple App Store in the end. And I need to dig deeper into iOS development.

* The layout issue with Google Pixel devices seems to be fixed as of Version 1.0.3. If you still can't reach the menu because it's somehow mixed with the status bar, try putting your screen into landscape mode and rotating clockwise (to the right); I will look into that ASAP.

App Icon: The icon is based on Eyeglass icons cre­ated by Freepik - Flaticon

License: This app Nearby Glasses is li­censed un­der PolyForm Noncommercial License 1.0.0.

...

Read the original on github.com »

10 316 shares, 14 trendiness

Repurpose your old Kindle

Hacking an old Kindle to dis­play bus ar­rival times

This is how I turned an old Kindle (Kindle Touch 4th Generation/K5/KT) into a live bus feed that re­freshes every minute with the op­tion to exit out of dash­board mode by press­ing the menu but­ton. It’s ba­si­cally TRMNL with­out the $140 price tag.

Run a server ac­ces­si­ble over the in­ter­net (or lo­cally) that serves the Kindle im­age

This will be your Kindle hack­ing bible for steps 1 - 3. You need to fig­ure out what ver­sion of Kindle you have, its firmware ver­sion (shorthand FW in the Kindle fo­rum guides + readmes), down­load the ap­pro­pri­ate tar file and fol­low jail­break in­struc­tions.

Once you’ve suc­cess­fully jail­bro­ken your Kindle, it’s time to in­stall some things.

KUAL is a custom Kindle app launcher. MRPI allows us to install custom apps onto the Kindle (you may not need MRPI if you have a newer Kindle). This part was frustrating - reading through forum threads gives me a headache. The most helpful resource I found was the Kindle modding wiki. Maybe other people aren't as oblivious as me, but it took me half a day to realize that the “next step” in each guide can be accessed by clicking the “Next Step” button at the bottom of the page.

A gotcha for me was that I had to fol­low the Setting up a Hotfix guide be­fore at­tempt­ing to in­stall KUAL & MRPI.

After suc­cess­fully in­stalling KUAL & MRPI, I also Disabled OTA Updates be­cause why not. I did­n’t fol­low any other guides in the Kindle Modding wiki af­ter dis­abling OTA Updates be­cause they did­n’t seem rel­e­vant.

This can be done with a KUAL ex­ten­sion called USBNetwork (downloadable from the Kindle hack­ing bible) that will al­low you to SSH onto your Kindle as if it were a reg­u­lar server.

However, nowhere in the fo­rums could I find any in­for­ma­tion about how to ac­tu­ally in­stall a KUAL ex­ten­sion us­ing MRPI. Finally, this help­ful blog­post on set­ting up SSH for Kindle came to the res­cue. I fol­lowed the steps that ex­plained to how to in­stall the ex­ten­sion and how to setup SSH via USB. I ig­nored the rest of the in­struc­tions on the page be­cause I’m not con­cerned about adding a pass­word to the Kindle or set­ting up SSH over wifi.

If you've set up SSH successfully, when the Kindle is plugged in, your computer's network tab should have a new item in “Connected” mode:

Here’s what my suc­cess­fully con­nected Kindle looks like in the net­work set­tings tab:

Congratulations! Your Kindle is now ready to run cus­tom code.

4. Running a server that gen­er­ates an im­age for the Kindle

Displaying custom data on the Kindle works like this: we create a PNG that fits the Kindle's resolution, then draw the image onto the Kindle itself.

Since I live in New Jersey, I wanted to dis­play NJTransit bus times on my Kindle. Luckily, NJTransit has a pub­lic GraphQL server that re­turns bus ar­rival times for any stop num­ber.

After pok­ing around in the net­work tab of the NJ Transit Bus Website, I found this GraphQL query that re­turns the bus num­ber, ar­rival time, cur­rent ca­pac­ity, des­ti­na­tion, and de­part­ing time in min­utes:

If you’re also a Jersey girl, you can run the fol­low­ing curl to get up­com­ing bus times (don’t for­get to re­place YOUR_STOP_NUMBER):

In the majority of the guides I read during this process (the two most helpful being Matt Healy's Kindle Dashboard guide and Hemant's Kindle Dashboard guide), they use Puppeteer to convert HTML to PNG. This did not work for me because I'm cheap and have a single $6 Digital Ocean droplet that I use for all side projects. Every time I ran Puppeteer on it, the entire server shat itself.

Instead, I created an endpoint that formats the bus data into HTML; the Docker container that runs the server has a cron job that runs the wkhtmltoimage command against the HTML endpoint to generate a new PNG every 3 minutes. The server then serves the generated PNG file at a separate endpoint.

Here’s what the 2 rel­e­vant end­points look like for my Kindle:

HTML end­point used by wkhtml­toim­age to gen­er­ate an im­age

Endpoint used by the Kindle to re­trieve the im­age
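Schematically, the render side of that cron job looks like this (the URLs and paths are illustrative, and wkhtmltoimage is assumed to be on PATH inside the container, as in the author's setup):

```python
# Sketch of the cron-driven render step: wkhtmltoimage turns the HTML
# endpoint into a PNG, and the image endpoint just serves the latest
# file. URLs/paths below are illustrative, not the author's.
import subprocess

def wkhtmltoimage_cmd(html_url, out_png, width=600, height=800):
    """Build the render command; this Kindle expects a 600x800 image."""
    return ["wkhtmltoimage", "--width", str(width), "--height", str(height),
            html_url, out_png]

def render_dashboard(html_url="http://localhost:3000/bus.html",
                     out_png="/srv/dashboard/screen.png"):
    subprocess.run(wkhtmltoimage_cmd(html_url, out_png), check=True)
```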

The en­tire server code - Dockerfile, scripts, the server it­self - can be found in the server folder of my Kindle hax repo. It’s writ­ten in Node be­cause I was orig­i­nally us­ing Puppeteer be­fore dis­cov­er­ing the per­for­mance is­sues, but it’d be a fun op­ti­miza­tion ex­er­cise to rewrite in Go.

The most im­por­tant thing is that the im­age needs to con­form to your Kindle’s screen res­o­lu­tion. You can find what yours is by run­ning eips -i when SSH-ed into the Kindle. eips is the com­mand you’ll be us­ing to dis­play an im­age on your Kindle. I found this eips menu guide help­ful

You’ll see an out­put like this:

My Kindle ex­pects a 600x800 im­age and the im­age must be ro­tated. Without pass­ing a ro­tate com­mand dur­ing the im­age gen­er­a­tion process, I got skewed im­ages like this:

However, after rotating, the bus times could only be viewed horizontally, and I wanted to mount my Kindle vertically. That meant I had to rotate the HTML itself. But when rotating an image and then taking a snapshot, the rotation is around the center of the screen, so the snapshot made by wkhtmltoimage kept cutting off the bus times. Finally, a combination of rotate and translate gave me what I needed: a rotated image aligned to the top left of the screen:

Once you have a server with an end­point that serves your im­age, you’re ready for the last step.

Going into this, I wanted two things - an easy way to exit dash­board mode and a rel­a­tively up to date bus sched­ule. All the guides I’ve seen thus far ran a cron on their Kindle that hit their end­point at a spec­i­fied in­ter­val. However I did­n’t like this be­cause I did­n’t want the Kindle to al­ways run the dash­board af­ter restarts. I want to con­trol when the dash­board is dis­played and that meant cre­at­ing a cus­tom KUAL app.

bin/ # ex­e­cutable scripts here

menu.json # con­trols the menu items in the KUAL dash­board

con­fig.xml # no clue wtf this is

While SSH-ed into your Kindle, place your custom extension folder inside /mnt/us/extensions/. If you used my custom dash code, after restarting and launching KUAL, you'll see your custom extension listed in KUAL, and after clicking into it, a single menu item titled “Start dashboard”:

When you press “Start dashboard”, you can see in the menu.json that bin/start.sh will execute. The start script has comments explaining what it does. Some interesting things I'd never worked with before:

# ignore HUP since kual will exit after pressing start, and that might kill our long running script

trap '' HUP

# ignore term since stopping the framework/gui will send a TERM signal to our script since kual is probably related to the GUI

trap '' TERM

trap - TERM

trap! Here's a helpful resource explaining the bash trap command. The TL;DR of it is that without ignoring certain signals, the script will always exit early.

Getting rtcwake to work was also annoying. For me, calling rtcwake on the default device (skipping the -d flag) never worked; I had to list the possible devices and choose a different one. The one that reacted to the rtcwake command was rtc1 for me.

The re­fresh_screen func­tion is im­por­tant. This is the whole rea­son we did all that server and im­age gen­er­a­tion stuff ear­lier. It re­trieves an im­age at an end­point, clears the screen twice, draws the im­age from the server and po­si­tions it slightly lower on the screen to make room for the sta­tus bar up top. The last line dis­plays the date­time, wifi sta­tus, and bat­tery re­main­ing.

refresh_screen() {

curl -k "$SCREEN_URL" -o "$DIR/screen.png"

eips -c

eips -c

eips -g "$DIR/screen.png" -x 0 -y 30 -w gc16

# Draw date/time and battery at top (eips can't print %, so we strip it from gasgauge-info -c)

eips 1 1 "$(TZ=EST5EDT date +'%Y-%m-%d %I:%M %p') - wifi $(cat /sys/class/net/wlan0/operstate 2>/dev/null || echo '?') - battery: $(gasgauge-info -c 2>/dev/null | sed 's/%//g' || echo '?')"

}

This part of the script lis­tens for the user press­ing the menu but­ton.

evtest is the command that worked for me for listening to incoming events on a specified device on the Kindle. In my case, any time I pressed the menu button, the evtest command output code 102 (Home), value 1.

When the user presses the menu button, the stop.sh script is called automatically, which kills the dashboard, clears the screen, and restarts the Kindle UI so that the device can be used normally.

Now that it’s been run­ning for more than a month, 2 things I’m think­ing about:

Even though I clear the screen twice be­fore ren­der­ing a new im­age, the color bleed is still pretty no­tice­able af­ter it’s been run­ning for a cou­ple days. I have a the­ory that if I flash the screen com­pletely black and then white again when the kin­dle goes to sleep at night, it’d solve the prob­lem but haven’t tried it out yet.

Right now it can go for ~5 days without being plugged in. I'd love for that number to be at the 2-week mark. Turning the device off for 10 hours at night extended the battery life by ~2 days, but 2 weeks is still a long way off. I've debated increasing the gap between screen refreshes, since it refreshes every minute right now, but I like the (almost) live minute updates, so I'd rather sacrifice those last, if possible, in the quest for longer battery life.

Overall, this thing is sick! Probably one of the most fun pro­jects I’ve built in re­cent mem­ory. We use it every day be­fore leav­ing the house, and it’s so much sim­pler than tex­ting a stop num­ber to an NJ Transit phone num­ber. I can see serv­ing up all sorts of in­ter­est­ing in­for­ma­tion on the e-ink screen - cal­en­dar, weather, daily tasks, sky’s the limit.

...

Read the original on mariannefeng.com »
