10 interesting stories served every morning and every evening.




1. 793 shares, 28 trendiness

After two years of vibecoding, I'm back to writing by hand

Most people's journey with AI coding starts the same: you give it a simple task. You're impressed. So you give it a large task. You're even more impressed.

You open X and draft up a rant on job displacement.

If you've persisted past this point: congratulations, you understand AI coding better than 99% of people.

Serious engineers using AI to do real work, not just weekend projects, also largely follow a predictable development arc.

Still amazed at the big task you gave it, you wonder if you can keep giving it bigger and bigger tasks. Maybe even that haunting refactor no one wants to take on?

But here's where the curtain starts to crinkle.

On the one hand, you're amazed at how well it seems to understand you. On the other hand, it makes frustrating errors and decisions that clearly go against the shared understanding you've developed.

You quickly learn that being angry at the model serves no purpose, so you begin to internalize any unsatisfactory output.

"It's me. My prompt sucked. It was under-specified."

"If I can specify it, it can build it. The sky's the limit," you think.

So you open Obsidian and begin drafting beefy spec docs that describe the feature in your head with impressive detail. Maybe you've put together a full page of a prompt, and spent half an hour doing so.

But you find that spec-driven development doesn't work either. In real life, design docs and specs are living documents that evolve in a volatile manner through discovery and implementation. Imagine if in a real company you wrote a design doc in 1 hour for a complex architecture, handed it off to a mid-level engineer (and told him not to discuss the doc with anyone), and took off on vacation.

Not only does an agent not have the ability to evolve a specification over a multi-week period as it builds out its lower components, it also makes decisions upfront that it later doesn't deviate from. And most agents simply surrender once they feel the problem and solution have gotten away from them (though this rarely happens anymore, since agents will just force themselves through the walls of the maze).

What's worse, the code agents write looks plausible and impressive while it's being written and presented to you. It even looks good in pull requests (as both you and the agent are well trained in what a "good" pull request looks like).

It wasn't until I opened up the full codebase and read its latest state cover to cover that I began to see what we theorized and hoped was only a diminishing artifact of earlier models: slop.

It was pure, unadulterated slop. I was bewildered. Had I not reviewed every line of code before admitting it? Where did all this… gunk… come from?

In retrospect, it made sense. Agents write units of changes that look good in isolation. They are consistent with themselves and your prompt. But respect for the whole, there is not. Respect for structural integrity, there is not. Respect even for neighboring patterns, there is not.

The AI had simply told me a good story. Like vibewriting a novel, the agent showed me a good couple of paragraphs that sure enough made sense and were structurally and syntactically correct. Hell, it even picked up on the idiosyncrasies of the various characters. But for whatever reason, when you read the whole chapter, it's a mess. It makes no sense in the overall context of the book and the preceding and following chapters.

After reading months of cumulative highly-specified agentic code, I said to myself: I'm not shipping this shit. I'm not gonna charge users for this. And I'm not going to promise users to protect their data with this.

I'm not going to lie to my users with this.

So I'm back to writing by hand for most things. Amazingly, I'm faster, more accurate, more creative, more productive, and more efficient than AI, when you price everything in, and not just code tokens per hour.

You can follow me on X @atmoio, where I post a few times a week about agentic coding.

You can watch the video counterpart to this essay on YouTube:

...

Read the original on atmoio.substack.com »

2. 616 shares, 27 trendiness

diamond geezer

Members of the Royal Institution and other visitors to a laboratory in an upper room in Frith-Street, Soho, on Tuesday saw a demonstration of apparatus invented by Mr. J. L. Baird, who claims to have solved the problem of television. They were shown a transmitting machine, consisting of a large wooden revolving disc containing lenses, behind which was a revolving shutter and a light sensitive cell. It was explained that by means of the shutter and lens disc an image of articles or persons standing in front of the machine could be made to pass over the light sensitive cell at high speed. The current in the cell varies in proportion to the light falling on it, and this varying current is transmitted to a receiver where it controls a light behind an optical arrangement similar to that at the sending end. By this means a point of light is caused to traverse a ground glass screen. The light is dim at the shadows and bright at the high lights, and crosses the screen so rapidly that the whole image appears simultaneously to the eye.

...

Read the original on diamondgeezer.blogspot.com »

3. 549 shares, 21 trendiness

Michael Reeves (@integralpilot.bsky.social)


On Apple M3, a Linux KDE Plasma desktop under Fedora Asahi Remix is now WORKING! Super excited to share this update and happy to answer any questions! Co-credits to noopwafel and Shiz. :)

...

Read the original on bsky.app »

4. 496 shares, 19 trendiness

Apple introduces new AirTag with expanded range and improved findability

The next generation of AirTag — the bestselling item finder — is even easier to locate with more powerful Precision Finding, a longer Bluetooth range, and a louder speaker

Apple today unveiled the new AirTag, a powerful accessory that helps users keep track of and find the items that matter most with Apple's Find My app — now with an expanded finding range and a louder speaker. Powered by the strength of the Find My network, AirTag allows users to keep tabs on their belongings every single day. Since the launch of AirTag in 2021, users from around the world have shared stories of being reunited with lost luggage, keys, bicycles, bags, and more. With the help of AirTag placed inside an instrument case, a musician was able to locate their lost instrument and perform that evening, while another user was able to find lost luggage that contained a lifesaving medication. AirTag is designed exclusively for tracking objects and offers industry-leading protections against unwanted tracking. It is available today for the same price as its predecessor: $29 for a single AirTag and $99 for a four-pack, with free personalized engraving available on apple.com and the Apple Store app.

Apple's second-generation Ultra Wideband chip — the same chip found in the iPhone 17 lineup, iPhone Air, Apple Watch Ultra 3, and Apple Watch Series 11 — powers the new AirTag, making it easier to locate than ever before. Using haptic, visual, and audio feedback, Precision Finding guides users to their lost items from up to 50 percent farther away than the previous generation.1 And an upgraded Bluetooth chip expands the range at which items can be located. For the first time, users can use Precision Finding on Apple Watch Series 9 or later, or Apple Watch Ultra 2 or later, to find their AirTag, bringing a powerful experience to the wrist.

With its updated internal design, the new AirTag is 50 percent louder than the previous generation, enabling users to hear their AirTag from up to 2x farther than before. Paired with its enhanced Precision Finding capabilities and distinctive new chime, AirTag now makes it easier for users to find their important items, such as keys hidden deep in between couch cushions or a wallet as they head out the door.

The Find My Network and Share Item Location

Find My makes it easy to locate AirTag, Apple devices, and compatible third-party devices, as well as keep up with friends and family, all while protecting user privacy. If AirTag is out of range of its paired iPhone, the Find My network can help track it down. The Find My network is a crowdsourced network of Apple devices that use Bluetooth technology to detect the location of an accessory or device, and report their approximate location back to the owner.

The new AirTag integrates seamlessly with Share Item Location, an iOS feature designed to help users recover a misplaced item by temporarily and securely sharing its location with trusted third parties, such as airlines, so they can assist in recovering delayed luggage or other lost items. Apple has partnered directly with more than 50 airlines to privately and securely accept Share Item Location links.

With Share Item Location, users can share the location of a misplaced item with a participating airline's customer service team. According to SITA, a leading IT provider for airlines, carriers report that using Share Item Location has reduced baggage delays by 26 percent and reduced incidences of "truly lost" or unrecoverable luggage by 90 percent. Access is granted only to authorized personnel via secure Apple Account or partner authentication. The shared location will be disabled as soon as a user is reunited with their item, can be stopped by the owner at any time, and will automatically expire after seven days.

The new AirTag is designed from the ground up to keep location data private and secure. AirTag doesn't physically store location data or history on device, and end-to-end encryption protects all communication with the Find My network, ensuring that only the owner of a device can access its location data. No one, including Apple, knows the identity or location of any device that helped find it. Designed exclusively for tracking objects, and not people or pets, the new AirTag incorporates a suite of industry-first protections against unwanted tracking, including cross-platform alerts and unique Bluetooth identifiers that change frequently.

Apple 2030 is the company's ambitious plan to be carbon neutral across its entire footprint by the end of this decade by reducing product emissions from their three biggest sources: materials, electricity, and transportation. The new AirTag is designed with the environment in mind, with 85 percent recycled plastic in the enclosure, 100 percent recycled rare earth elements in all magnets, and 100 percent recycled gold plating in all Apple-designed printed circuit boards. The paper packaging is 100 percent fiber-based and can be easily recycled. Maintaining the same form factor as the original, the new AirTag is compatible with all existing AirTag accessories, including the FineWoven Key Ring, which is made from 68 percent recycled content and available in five beautiful colors.

* The new AirTag is available to order on apple.com and in the Apple Store app today, and will be available at Apple Store locations later this week. The new AirTag will also be available at Apple Authorized Resellers.

* Customers can add a personalized engraving to the new AirTag for free during checkout on apple.com and the Apple Store app.

* AirTag is available in one- and four-packs for $29 (U.S.) and $99 (U.S.), respectively.

* The Apple-designed AirTag FineWoven Key Ring is available in fox orange, midnight purple, navy, moss, and black for $35 (U.S.).2

* The new AirTag requires a compatible iPhone with iOS 26 or later, or iPad with iPadOS 26 or later. Customers must have an Apple Account and be signed into their iCloud account. Certain features require Find My to be enabled in iCloud settings.

* Precision Finding on Apple Watch requires Apple Watch Series 9 or later, or Apple Watch Ultra 2 or later, with watchOS 26.2.1.

Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple's six software platforms — iOS, iPadOS, macOS, watchOS, visionOS, and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay, iCloud, and Apple TV. Apple's more than 150,000 employees are dedicated to making the best products on earth and to leaving the world better than we found it.

Precision Finding is not available in countries and regions where Ultra Wideband technology is restricted. For more information, visit apple.com/uwb.

...

Read the original on www.apple.com »

5. 477 shares, 19 trendiness

Qwen

...

Read the original on qwen.ai »

6. 399 shares, 45 trendiness

Heathrow scraps 100ml liquid container limit

Passengers at Britain's biggest airport, Heathrow, can leave liquids in containers up to two litres in their bags while going through security, after it finally completed the rollout of new high-tech CT scanners. Electronics such as laptops can also be left in luggage, while clear plastic bags for liquids no longer have to be used. Heathrow now says it is the biggest airport in the world to have the new equipment fully rolled out across all its terminals. But while it has become the largest airport to roll out the new high-tech scanners, it is far from the UK's first, with Gatwick, Edinburgh and Birmingham airports having upgraded to them in recent years and increased to a two-litre limit.

At most UK airports, passengers can keep liquid containers of up to 100ml in their luggage, without having to remove them and use clear plastic bags. Bristol and Belfast airports have also raised their liquid limits to two litres. However, other airports that have the new scanners installed are waiting for the green light from the Department for Transport (DfT) to raise the limit from 100ml. A recent report by consumer group Which? found that the sensitivity of the new scanners being rolled out means that at some airports, more bag searches end up being carried out by hand after passing through them. Heathrow said the scanners, which provide better images of cabin bags, could service thousands of passengers an hour with "significantly greater efficiency, while maintaining high safety and security standards". The rule change only applies to flights leaving Heathrow, and passengers must check restrictions on luggage at the airports they are returning from before boarding flights to the UK. The rollout of the new high-tech scanners across the UK has suffered a series of setbacks over the past few years. Boris Johnson promised in 2019 that the rules about taking liquids through security in containers of no more than 100ml, inside plastic bags, would be scrapped by the end of 2022. The pandemic eventually put paid to that. In December 2022, the Conservative government promised state-of-the-art scanning equipment would be installed in security lanes by June 2024 in "the biggest shake-up of airport security rules in decades".

Then-Transport Secretary Mark Harper said the "dominance of tiny toiletry" was nearly over. But, as it turned out, the June 2024 deadline was not achievable for the biggest airports - although a number of smaller ones, with fewer lanes to get sorted, did install the scanners before that date. Then, on the evening of Friday 13 June, 2024, the government said those smaller airports who had already introduced the new scanners and dropped their 100ml liquids rules must reinstate them. This triggered anger among airport operators. The EU also announced a reversion to the 100ml rule in July that year. There has since been a period of inconsistency. Last summer, the Transport Secretary was telling passengers to assume the 100ml rule still applied.

Heathrow chief executive Thomas Woldbye said the £1bn package of upgrades would mean passengers could spend "less time preparing for security and more time enjoying their journey". Of the world's 10 busiest airports, Heathrow is the only one to have scrapped the 100ml rule for liquid containers on international flights. A DfT spokesperson said: "Heathrow is the latest UK airport to complete its rollout of next-generation security equipment for passengers, helping ensure security checks remain robust and can be completed smoothly.

"Airports are responsible for the installation and operation of security equipment. Passengers should continue to check security requirements with airports before they travel and come prepared with liquids in containers no larger than 100ml in hand baggage unless advised otherwise."

The Advantage Travel Partnership, a network of travel agents, said airports setting their own timelines on the lifting of the 100ml cap has led to "confusion and frustration" and passengers have been "tripped up". Chief executive Julia Lo Bue-Said said: "We would urge UK airports to work collectively with the government to ensure there is clear messaging around the rules to avoid confusion and delays where possible."

...

Read the original on www.bbc.com »

7. 392 shares, 16 trendiness

Google AI Overviews cite YouTube more than any medical site for health queries, study suggests

Google's search feature AI Overviews cites YouTube more than any medical website when answering queries about health conditions, according to research that raises fresh questions about a tool seen by 2 billion people each month.

The company has said its AI summaries, which appear at the top of search results and use generative AI to answer questions from users, are "reliable" and cite reputable medical sources such as the Centers for Disease Control and Prevention and the Mayo Clinic.

However, a study that analysed responses to more than 50,000 health queries, captured using Google searches from Berlin, found the top cited source was YouTube. The video-sharing platform is the world's second most visited website, after Google itself, and is owned by Google.

Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said.

"This matters because YouTube is not a medical publisher," the researchers wrote. "It is a general-purpose video platform. Anyone can upload content there (eg board-certified physicians, hospital channels, but also wellness influencers, life coaches, and creators with no medical training at all)."

Google told the Guardian that AI Overviews was designed to surface high-quality content from reputable sources, regardless of format, and a variety of credible health authorities and licensed medical professionals created content on YouTube. The study's findings could not be extrapolated to other regions as it was conducted using German-language queries in Germany, it said.

The research comes after a Guardian investigation found people were being put at risk of harm by false and misleading health information in Google AI Overviews responses.

In one case that experts said was "dangerous" and "alarming", Google provided bogus information about crucial liver function tests that could have left people with serious liver disease wrongly thinking they were healthy. The company later removed AI Overviews for some but not all medical searches.

The SE Ranking study analysed 50,807 healthcare-related prompts and keywords to see which sources AI Overviews relied on when generating answers.

They chose Germany because its healthcare system is strictly regulated by a mix of German and EU directives, standards and safety regulations. "If AI systems rely heavily on non-medical or non-authoritative sources even in such an environment, it suggests the issue may extend beyond any single country," they wrote.

AI Overviews surfaced on more than 82% of health searches, the researchers said. When they looked at which sources AI Overviews relied on most often for health-related answers, one result stood out immediately, they said. The single most cited domain was YouTube, with 20,621 citations out of a total of 465,823.

The next most cited source was NDR.de, with 14,158 citations (3.04%). The German public broadcaster produces health-related content alongside news, documentaries and entertainment. In third place was a medical reference site, Msdmanuals.com, with 9,711 citations (2.08%).

The fourth most cited source was Germany's largest consumer health portal, Netdoktor.de, with 7,519 citations (1.61%). The fifth most cited source was a career platform for doctors, Praktischarzt.de, with 7,145 citations (1.53%).

The researchers acknowledged limitations to their study. It was conducted as a one-time snapshot in December 2025, using German-language queries that reflected how users in Germany typically search for health information.

Results could vary over time, by region, and by the phrasing of questions. However, even with those caveats, the findings still prompted alarm.

Hannah van Kolfschooten, a researcher specialising in AI, health and law at the University of Basel who was not involved with the research, said: "This study provides empirical evidence that the risks posed by AI Overviews for health are structural, not anecdotal. It becomes difficult for Google to argue that misleading or harmful health outputs are rare cases.

"Instead, the findings show that these risks are embedded in the way AI Overviews are designed. In particular, the heavy reliance on YouTube rather than on public health authorities or medical institutions suggests that visibility and popularity, rather than medical reliability, is the central driver for health knowledge."

A Google spokesperson said: "The implication that AI Overviews provide unreliable information is refuted by the report's own data, which shows that the most cited domains in AI Overviews are reputable websites. And from what we've seen in the published findings, AI Overviews cite expert YouTube content from hospitals and clinics."

Google said the study showed that of the 25 most cited YouTube videos, 96% were from medical channels. However, the researchers cautioned that these videos represented fewer than 1% of all the YouTube links cited by AI Overviews on health.

"Most of them (24 out of 25) come from medical-related channels like hospitals, clinics and health organisations," the researchers wrote. On top of that, 21 of the 25 videos clearly note that the content was created by a licensed or trusted source.

"So at first glance it looks pretty reassuring. But it's important to remember that these 25 videos are just a tiny slice (less than 1% of all YouTube links AI Overviews actually cite). With the rest of the videos, the situation could be very different."

...

Read the original on www.theguardian.com »

8. 376 shares, 1 trendiness

moltbot/clawdbot: Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

Clawdbot is a personal AI assistant you run on your own devices. It answers you on the channels you already use (WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, WebChat), plus extension channels like BlueBubbles, Matrix, Zalo, and Zalo Personal. It can speak and listen on macOS/iOS/Android, and can render a live Canvas you control. The Gateway is just the control plane — the product is the assistant.

If you want a personal, single-user assistant that feels local, fast, and always-on, this is it.

Preferred setup: run the onboarding wizard (clawdbot onboard). It walks through gateway, workspace, channels, and skills. The CLI wizard is the recommended path and works on macOS, Linux, and Windows (via WSL2; strongly recommended). Works with npm, pnpm, or bun. New install? Start here: Getting started

Model note: while any model is supported, I strongly recommend Anthropic Pro/Max (100/200) + Opus 4.5 for long-context strength and better prompt-injection resistance. See Onboarding.

npm install -g clawdbot@latest

# or: pnpm add -g clawdbot@latest

clawdbot onboard --install-daemon

The wizard installs the Gateway daemon (launchd/systemd user service) so it stays running.

clawdbot onboard --install-daemon

clawdbot gateway --port 18789 --verbose

# Send a message

clawdbot message send --to +1234567890 --message "Hello from Clawdbot"

# Talk to the assistant (optionally deliver back to any connected channel: WhatsApp/Telegram/Slack/Discord/Google Chat/Signal/iMessage/BlueBubbles/Microsoft Teams/Matrix/Zalo/Zalo Personal/WebChat)

clawdbot agent --message "Ship checklist" --thinking high

Prefer pnpm for builds from source. Bun is optional for running TypeScript directly.

git clone https://github.com/clawdbot/clawdbot.git

cd clawdbot

pnpm install

pnpm ui:build # auto-installs UI deps on first run

pnpm build

pnpm clawdbot onboard --install-daemon

# Dev loop (auto-reload on TS changes)

pnpm gateway:watch

Note: pnpm clawdbot … runs TypeScript directly (via tsx). pnpm build produces dist/ for running via Node / the packaged clawdbot binary.

* DM pairing (dmPolicy="pairing" / channels.discord.dm.policy="pairing" / channels.slack.dm.policy="pairing"): unknown senders receive a short pairing code and the bot does not process their message.

* Approve with: clawdbot pairing approve (then the sender is added to a local allowlist store).

* Public inbound DMs require an explicit opt-in: set dmPolicy="open" and include "*" in the channel allowlist (allowFrom / channels.discord.dm.allowFrom / channels.slack.dm.allowFrom).
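For instance, a Discord DM section could combine these options like so (a hedged sketch: the key paths come from the bullets above, but the exact nesting and value shapes are assumptions, not confirmed by this README):

channels: {
  discord: {
    dm: {
      policy: "pairing"  // unknown senders get a pairing code
      // public DMs instead require the explicit opt-in:
      // policy: "open", allowFrom: ["*"]
    }
  }
}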

Clawdbot can auto-configure Tailscale Serve (tailnet-only) or Funnel (public) while the Gateway stays bound to loopback. Configure gateway.tailscale.mode:

* serve: tailnet-only HTTPS via tailscale serve (uses Tailscale identity headers by default).

* gateway.bind must stay loopback when Serve/Funnel is enabled (Clawdbot enforces this).

* Serve can be forced to require a password by setting gateway.auth.mode: "password" or gateway.auth.allowTailscale: false.

* Funnel refuses to start unless gateway.auth.mode: "password" is set.
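Put together, a loopback-bound Gateway exposed over tailnet-only Serve with a forced password might be configured like this (a sketch: the individual keys are documented above; the combined layout and the bind value are assumptions):

gateway: {
  bind: "127.0.0.1",            // must stay loopback with Serve/Funnel
  tailscale: { mode: "serve" },
  auth: {
    mode: "password",           // required for Funnel, optional for Serve
    allowTailscale: false       // ignore Tailscale identity headers
  }
}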

It's perfectly fine to run the Gateway on a small Linux instance. Clients (macOS app, CLI, WebChat) can connect over Tailscale Serve/Funnel or SSH tunnels, and you can still pair device nodes (macOS/iOS/Android) to execute device-local actions when needed.

* Gateway host runs the exec tool and channel connections by default.

* Device nodes run device-local actions (system.run, camera, screen recording, notifications) via node.invoke.

In short: exec runs where the Gateway lives; device actions run where the device lives.

The macOS app can run in node mode and advertises its capabilities + permission map over the Gateway WebSocket (node.list / node.describe). Clients can then execute local actions via node.invoke (a sketch of such a call follows this list):

* system.run runs a local command and returns stdout/stderr/exit code; set needsScreenRecording: true to require screen-recording permission (otherwise you'll get PERMISSION_MISSING).

* system.notify posts a user notification and fails if notifications are denied.

* canvas.*, camera.*, screen.record, and location.get are also routed via node.invoke and follow TCC permission status.

* Use /elevated on|off to toggle per-session elevated access when enabled + allowlisted.

* Gateway persists the per-session toggle via sessions.patch (WS method) alongside thinkingLevel, verboseLevel, model, sendPolicy, and groupActivation.

* Use these to coordinate work across sessions without jumping between chat surfaces.
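A node.invoke call for system.run might look roughly like this (hedged: system.run, needsScreenRecording, and the stdout/stderr/exit-code result are documented above; the envelope field names, node id, and command are hypothetical):

node.invoke {
  node: "office-macbook",                       // hypothetical node id from node.list
  action: "system.run",
  params: {
    command: "screencapture -x /tmp/shot.png",  // hypothetical command
    needsScreenRecording: true                  // else PERMISSION_MISSING when capture needs it
  }
}
// returns stdout/stderr/exit code on success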

ClawdHub is a minimal skill registry. With ClawdHub enabled, the agent can search for skills automatically and pull in new ones as needed.

Send these in WhatsApp/Telegram/Slack/Google Chat/Microsoft Teams/WebChat (group commands are owner-only):

* /new or /reset — reset the session

The Gateway alone delivers a great experience. All apps are optional and add extra features.

If you plan to build/run companion apps, follow the platform runbooks below.

* Menu bar control for the Gateway and health.

Note: signed builds required for macOS permissions to stick across rebuilds (see docs/mac/permissions.md).

* Pairs as a node via the Bridge.

* Pairs via the same Bridge + pairing flow as iOS.

agent: {
  model: "anthropic/claude-opus-4-5"
}

* Default: tools run on the host for the main session, so the agent has full access when it's just you.

* Group/channel safety: set agents.defaults.sandbox.mode: "non-main" to run non-main sessions (groups/channels) inside per-session Docker sandboxes; bash then runs in Docker for those sessions.

* Allowlist who can talk to the assistant via channels.whatsapp.allowFrom.

* If channels.whatsapp.groups is set, it becomes a group allowlist; include "*" to allow all (as sketched below).
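A WhatsApp section combining both options might look like this (a sketch: the key names are from the bullets above; the list shapes and phone-number format are assumptions):

channels: {
  whatsapp: {
    allowFrom: ["+15555550100"],  // hypothetical number allowed to DM the assistant
    groups: ["*"]                 // group allowlist; "*" allows all groups
  }
}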

* Optional: set channels.telegram.groups (with channels.telegram.groups."*".requireMention); when set, it is a group allowlist (include "*" to allow all). Also channels.telegram.allowFrom or channels.telegram.webhookUrl as needed.

channels: {
  telegram: {
    botToken: "123456:ABCDEF"
  }
}

* Optional: set commands.native, commands.text, or commands.useAccessGroups, plus channels.discord.dm.allowFrom, channels.discord.guilds, or channels.discord.mediaMaxMb as needed.

channels: {
  discord: {
    token: "1234abcd"
  }
}

* macOS only; Messages must be signed in.

* If channels.imessage.groups is set, it becomes a group allowlist; include "*" to allow all.

* Allowlist who can talk via msteams.allowFrom; group access via msteams.groupAllowFrom or msteams.groupPolicy: "open".

browser: {
  enabled: true,
  color: "#FF4500"
}

Use these when you're past the onboarding flow and want the deeper reference.

Clawdbot was built for Clawd, a space lobster AI assistant. 🦞 by Peter Steinberger and the community.

See CONTRIBUTING.md for guidelines, maintainers, and how to submit PRs. AI/vibe-coded PRs welcome! 🤖

Special thanks to Mario Zechner for his support and for pi-mono.

Thanks to all clawtributors:

...

Read the original on github.com »

9. 376 shares, 1 trendiness

moltbot/moltbot: Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

...

Read the original on github.com »

10. 360 shares, 20 trendiness

ChatGPT Containers can now run bash, pip/npm install packages, and download files

One of my favourite features of ChatGPT is its ability to write and execute code in a container. This feature launched as ChatGPT Code Interpreter nearly three years ago, was half-heartedly rebranded to "Advanced Data Analysis" at some point and is generally really difficult to find detailed documentation about. Case in point: it appears to have had a massive upgrade at some point in the past few months, and I can't find documentation about the new capabilities anywhere!

Here are the most notable new features:

ChatGPT can directly run Bash commands now. Previously it was limited to Python code only, although it could run shell commands via the Python subprocess module.

It has Node.js and can run JavaScript directly in addition to Python. I also got it to run "hello world" in Ruby, Perl, PHP, Go, Java, Swift, Kotlin, C and C++. No Rust yet though!

While the container still can't make outbound network requests, pip install package and npm install package both work now via a custom proxy mechanism.

ChatGPT can locate the URL for a file on the web and use a container.download tool to download that file and save it to a path within the sandboxed container.

This is a substantial upgrade! ChatGPT can now write and then test code in 10 new languages (11 if you count Bash), can find files online and download them into the container, and can install additional packages via pip and npm to help it solve problems.

I was initially suspicious that maybe I'd stumbled into a new preview feature that wasn't available to everyone, but I tried some experiments in a free ChatGPT account and confirmed that the new features are available there as well.

My first clue to the new features came the other day when I got curious about Los Angeles air quality, in particular: has the growing number of electric vehicles there had a measurable impact?

Show me Los Angeles air quality over time for last 20 years

Here's the shared transcript. Annoyingly that doesn't make the thinking trace available, so here's a screenshot:

[…] The URL from the error message leads to an Excel file containing air quality trends. I can use the container.download tool to fetch it. […]

Without running any searches tell me about your container.download tool

Sure. container.download is a built-in tool I can call that fetches a file from a URL and saves it into the sandboxed container filesystem for this chat session.

What it does

Takes a publicly reachable URL and a destination filepath in the container.

Downloads the bytes from that URL and writes them to the given path.

After that, I can read/process the file locally in the container (e.g., unzip it, parse it with Python, open it as an image, convert it, etc.).

{
  "url": "https://example.com/somefile.zip",
  "filepath": "/mnt/data/somefile.zip"
}

Further experimentation confirmed that container.download behaves as described.

I had it download one file from a server with logs enabled and captured the IP address and user-agent. Here are the most interesting headers:

User-Agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; ChatGPT-User/1.0; +https://openai.com/bot

Accept: text/html, application/xhtml+xml, application/xml;q=0.9, image/avif, image/webp, image/apng, */*;q=0.8, application/signed-exchange;v=b3;q=0.9

Cf-Connecting-Ip: 52.230.164.178

That 52.230.164.178 IP address resolves to Microsoft Azure Cloud (centralus) in Des Moines, Iowa.

On the one hand, this is really useful! ChatGPT can navigate around websites looking for useful files, download those files to a container and then process them using Python or other languages.

Is this a data exfiltration vulnerability though? Could a prompt injection attack trick ChatGPT into leaking private data out to a container.download call to a URL with a query string that includes sensitive information?

I don't think it can. I tried getting it to assemble a URL with a query string and access it using container.download and it couldn't do it. It told me that it got back this error:

ERROR: download failed because url not viewed in conversation before. open the file or url using web.run first.

This looks to me like the same safety trick used by Claude's Web Fetch tool: only allow URL access if that URL was either directly entered by the user or if it came from search results that could not have been influenced by a prompt injection.

So I think this is all safe, though I'm curious if it could hold firm against a more aggressive round of attacks from a seasoned security researcher.

The key lesson from coding agents like Claude Code and Codex CLI is that Bash rules everything: if an agent can run Bash commands in an environment it can do almost anything that can be achieved by typing commands into a computer.

When Anthropic added their own code interpreter feature to Claude last September they built that around Bash rather than just Python. It looks to me like OpenAI have now done the same thing for ChatGPT.

Here's what ChatGPT looks like when it runs a Bash command—here my prompt was:

It's useful to click on the "Thinking" or "Thought for 32s" links as that opens the Activity sidebar with a detailed trace of what ChatGPT did to arrive at its answer. This helps guard against cheating—ChatGPT might claim to have run Bash in the main window but it can't fake those black and white logs in the Activity panel.

I had it run Hello World in various languages later in that same session.

In the previous example ChatGPT installed the cowsay package from npm and used it to draw an ASCII-art cow. But how could it do that if the container can't make outbound network requests?

In another session I challenged it to explore its environment and figure out how that worked.

The key magic appears to be an applied-caas-gateway1.internal.api.openai.org proxy, available within the container and with various packaging tools configured to use it.

The following environment variables cause pip and uv to install packages from that proxy instead of directly from PyPI:

PIP_INDEX_URL=https://reader:****@packages.applied-caas-gateway1.internal.api.openai.org/…/pypi-public/simple

PIP_TRUSTED_HOST=packages.applied-caas-gateway1.internal.api.openai.org

UV_INDEX_URL=https://reader:****@packages.applied-caas-gateway1.internal.api.openai.org/…/pypi-public/simple

UV_INSECURE_HOST=https://packages.applied-caas-gateway1.internal.api.openai.org

This one appears to get npm to work:

NPM_CONFIG_REGISTRY=https://reader:****@packages.applied-caas-gateway1.internal.api.openai.org/…/npm-public
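Those variables are enough on their own, since pip, uv, and npm all read this style of environment configuration; ordinary install commands inside the container are silently redirected through the proxy. A quick sketch of the effect (the package names are arbitrary examples):

# with the variables above exported, the standard commands just work:
pip install requests   # pip reads PIP_INDEX_URL / PIP_TRUSTED_HOST
npm install cowsay     # npm reads NPM_CONFIG_REGISTRY

# equivalent, spelled out with explicit flags:
pip install --index-url "$PIP_INDEX_URL" --trusted-host "$PIP_TRUSTED_HOST" requests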

And it reported these suspicious-looking variables as well:

CAAS_ARTIFACTORY_BASE_URL=packages.applied-caas-gateway1.internal.api.openai.org

CAAS_ARTIFACTORY_PYPI_REGISTRY=…/artifactory/api/pypi/pypi-public

CAAS_ARTIFACTORY_NPM_REGISTRY=…/artifactory/api/npm/npm-public

CAAS_ARTIFACTORY_GO_REGISTRY=…/artifactory/api/go/golang-main

CAAS_ARTIFACTORY_MAVEN_REGISTRY=…/artifactory/maven-public

CAAS_ARTIFACTORY_GRADLE_REGISTRY=…/artifactory/gradle-public

CAAS_ARTIFACTORY_CARGO_REGISTRY=…/artifactory/api/cargo/cargo-public/index

CAAS_ARTIFACTORY_DOCKER_REGISTRY=…/dockerhub-public

CAAS_ARTIFACTORY_READER_USERNAME=reader

CAAS_ARTIFACTORY_READER_PASSWORD=****

NETWORK=caas_packages_only

Neither Rust nor Docker are installed in the container environment, but maybe those registry references are a clue of features still to come.

The result of all of this? You can tell ChatGPT to use Python or Node.js packages as part of a conversation and it will be able to install them and apply them against files you upload or that it downloads from the public web. That's really cool.

The big missing feature here should be the easiest to provide: we need official documentation! A release notes entry would be a good start, but there are a lot of subtle details to how this new stuff works, its limitations and what it can be used for.

As always, I'd also encourage OpenAI to come up with a name for this set of features that properly represents how it works and what it can do.

In the meantime, I'm going to call this ChatGPT Containers.

I decided to ask ChatGPT about other tools that were available to it in case there was anything interesting in there:

List all tools that are available to you, with their exact names and descriptions and signatures

...

Read the original on simonwillison.net »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.