10 interesting stories served every morning and every evening.




1 779 shares, 33 trendiness

Getting a Gemini API key is an exercise in frustration — Ankur Sethi's Internet Website

Last week, I started work­ing on a new side-pro­ject. It’s a stan­dard React app partly made up of run-of-the-mill CRUD views—a per­fect fit for LLM-assisted pro­gram­ming. I rea­soned that if I could get an LLM to quickly write the bor­ing code for me, I’d have more time to fo­cus on the in­ter­est­ing prob­lems I wanted to solve.

I’ve pretty much set­tled on Claude Code as my cod­ing as­sis­tant of choice, but I’d been hear­ing great things about Google’s Gemini 3 Pro. Despite my aver­sion to Google prod­ucts, I de­cided to try it out on my new code­base.

I al­ready had Gemini CLI in­stalled, but that only gave me ac­cess to Gemini 2.5 with rate lim­its. I wanted to try out Gemini 3 Pro, and I wanted to avoid be­ing rate lim­ited. I had some spare cash to burn on this ex­per­i­ment, so I went look­ing for ways to pay for a Gemini Pro plan, if such a thing ex­isted.

Thus be­gan my grand ad­ven­ture in try­ing to give Google my money.

The name "Gemini" is so overloaded that it barely means anything. Based on the context, Gemini could refer to:

To make things even more con­fus­ing, Google has at least three dif­fer­ent prod­ucts just for agen­tic cod­ing: Gemini Code Assist (Gemini CLI is a part of this suite of prod­ucts), Jules, and Antigravity.

And then there’s a bunch of other GenAI stuff that is pow­ered by Gemini but does­n’t have the word Gemini in the name: Vertex AI Platform, Google AI Studio, NotebookLM, and who knows what else.

I just wanted to plug my credit card information into a form and get access to a coding assistant. Instead, I was dunked into an alphabet soup of products that all seemed to do similar things and, crucially, didn't have any giant "Buy Now!" buttons for me to click.

In contrast, both Anthropic and OpenAI have two primary ways you can access their products: via their consumer offerings at claude.ai and chatgpt.com respectively, or via API credits that you can buy through their respective developer consoles. In each case, there is a form field where you can plug in your credit card details, and a big, friendly "Buy Now!" button to click.

After half an hour of search­ing the web, I did the ob­vi­ous thing and asked the free ver­sion of Gemini (the chat­bot, not one of those other Geminis) what to do:

How do I pay for the pro ver­sion of Gemini so i can use it in the ter­mi­nal for writ­ing code? I specif­i­cally want to use the Gemini 3 Pro model.

It thought for a suspiciously long time and told me that Gemini 3 Pro required a developer API key to use. Since the new model is still in preview, it's not yet available on any of the consumer plans. When I asked follow-up questions about pricing, it told me that "Something went wrong". Which translates to: we broke something, but we won't tell you how to fix it.

So I asked Claude for help. Between the two LLMs, I was able to fig­ure out how to cre­ate an API key for the Gemini I wanted.

Google AI Studio is sup­posed to be the all-in-one dash­board for Google’s gen­er­a­tive AI mod­els. This is where you can ex­per­i­ment with model pa­ra­me­ters, man­age API keys, view logs, and man­age billing for your pro­jects.

I logged into Google AI Studio and cre­ated a new API key. This part was pretty straight­for­ward: I fol­lowed the on-screen in­struc­tions and had a fresh new key housed un­der a pro­ject in a few sec­onds. I then ver­i­fied that my key was work­ing with Gemini CLI.

It worked! Now all that was left to do was to purchase some API credits. Back in Google AI Studio, I saw a link titled "Set up billing" next to my key. It looked promising, so I clicked it.

That’s where the fun re­ally be­gan.

The "Set up billing" link kicked me out of Google AI Studio and into Google Cloud Console, and my heart sank. Every time I've logged into Google Cloud Console or AWS, I've wasted hours upon hours reading outdated documentation, gazing in despair at graphs that make no sense, going around in circles from dashboard to dashboard, and feeling a strong desire to attain freedom from this mortal coil.

Turns out I can’t just put $100 into my Gemini ac­count. Instead, I must first cre­ate a Billing Account. After I’ve done that, I must as­so­ci­ate it with a pro­ject. Then I’m al­lowed to add a pay­ment method to the Billing Account. And then, if I’m lucky, my API key will turn into a paid API key with Gemini Pro priv­i­leges.

So I did the thing. The whole song and dance. Including the manda­tory two-fac­tor OTP ver­i­fi­ca­tion that every Indian credit card re­quires. At the end of the process, I was greeted with a popup telling me I had to ver­ify my pay­ment method be­fore I’d be al­lowed to use it.

Wait. Didn’t I just ver­ify my pay­ment method? When I en­tered the OTP from my bank?

Nope, turns out Google hungers for more data. Who’d have thunk it?

To ver­ify my pay­ment method for re­als, I had to send Google a pic­ture of my gov­ern­ment-is­sued ID and the credit card I’d just as­so­ci­ated with my Billing Account. I had to en­sure all the num­bers on my credit card were redacted by man­u­ally plac­ing black bars on top of them in an im­age ed­i­tor, leav­ing only my name and the last four dig­its of the credit card num­ber vis­i­ble.

This felt un­nec­es­sar­ily in­tru­sive. But by this point, I was too deep in the process to quit. I was in­vested. I needed my Gemini 3 Pro, and I was will­ing to pay any price.

The up­load form for the gov­ern­ment ID re­jected my up­load twice be­fore it fi­nally ac­cepted it. It was the same ex­act ID every sin­gle time, just in dif­fer­ent file for­mats. It wanted a PNG file. Not a JPG file, nor a PDF file, but a PNG file. Did the up­load form men­tion that in the in­struc­tions? Of course not.

After jumping through all these hoops, I received an email from Google telling me that my verification would be completed in a few days.

A few days? Nothing to do but wait, I sup­pose.

At this point, I closed all my open Cloud Console tabs and went back to work. But when I was fif­teen min­utes into writ­ing some code by hand like a Neanderthal, I re­ceived a sec­ond email from Google telling me that my ver­i­fi­ca­tion was com­plete.

So for the tenth time that day, I navigated to AI Studio. For the tenth time I clicked "Set up billing" on the page listing my API keys. For the tenth time I was told that my project wasn't associated with a billing account. For the tenth time I associated the project with my new billing account. And finally, after doing all of this, the "Quota tier" column on the page listing my API keys said "Tier 1" instead of "Set up billing".

Wait, Tier 1? Did that mean there were other tiers? What were tiers, any­way? Was I al­ready on the best tier? Or maybe I was on the worst one? Not im­por­tant. The im­por­tant part was that I had my API key and I’d man­aged to con­vince Google to charge me for it.

I went back to the Gemini CLI, ran the /settings command, and turned on the "Enable experimental features" option. I ran the /models command, which told me that Gemini 3 Pro was now available.

When I tried send­ing a mes­sage to the LLM, it failed with this 403 er­ror:

```
"error": {
  "message": "{\n  \"error\": {\n    \"code\": 403,\n    \"message\": \"The caller does not have permission\",\n    \"status\": \"PERMISSION_DENIED\"\n  }\n}\n",
  "code": 403,
  "status": "Forbidden"
}
```

Is that JSON in­side a string in­side JSON? Yes. Yes it is.

To fig­ure out if my key was even work­ing, I tried call­ing the Gemini API from JavaScript, re­pro­duc­ing the ba­sic ex­am­ple from Google’s own doc­u­men­ta­tion.
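For reference, the shape of that call is roughly the following. This is a sketch, assuming the @google/genai JavaScript SDK, an API key in GEMINI_API_KEY, and a placeholder preview model id; it is a paraphrase, not Google's exact documentation snippet.

```ts
import { GoogleGenAI } from "@google/genai";

// Assumptions: the @google/genai SDK and a preview model id for Gemini 3 Pro.
// Either may differ from whatever Google's docs say this week.
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: "gemini-3-pro-preview",
  contents: "Say hello in five words.",
});

console.log(response.text);
```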

No dice. I ran into the ex­act same er­ror.

I then tried talking to Gemini 3 Pro using the Playground inside Google AI Studio. It showed me a toast message saying "Failed to generate content. Please try again." The chat transcript said "An internal error has occurred."

At this point I gave up and walked away from my com­puter. It was al­ready 8pm. I’d been try­ing to get things to work since 5pm. I needed to eat din­ner, play Clair Obscur, and go to bed. I had no more time to waste and no more fucks to give.

Just as I was get­ting into bed, I re­ceived an email from Google with this sub­ject line:

Your Google Cloud and APIs billing ac­count XXXXXX-XXXXXX-XXXXXX is in good stand­ing at this time.

With the mes­sage in­side say­ing:

Based on the in­for­ma­tion you pro­vided and fur­ther analy­sis by Google, we have re­in­stated your billing ac­count XXXXXX-XXXXXX-XXXXXX. Your ac­count is in good stand­ing, and you should now have full ac­cess to your ac­count and re­lated Project(s) and Service(s).

I have no idea what any of this means, but Gemini 3 Pro started work­ing cor­rectly af­ter I re­ceived this email. It worked in the Playground, di­rectly by call­ing the API from JavaScript, and with Gemini CLI.

Problem solved, I guess. Until Google mys­te­ri­ously de­cides that my ac­count is no longer in good stand­ing.

This was such a frus­trat­ing ex­pe­ri­ence that I still haven’t tried us­ing Gemini with my new code­base, nearly a week af­ter I made all those sac­ri­fices to the Gods of Billing Account.

I understand why the process for getting a Gemini API key is so convoluted. It's designed for large organizations, not individual developers trying to get work done; it serves the bureaucracy, not the people doing the work; it's designed for maximum compliance with government regulations, not for efficiency or productivity.

Google does­n’t want my money un­less I’m an or­ga­ni­za­tion that em­ploys ten thou­sand peo­ple.

In con­trast to Google, Anthropic and OpenAI are much smaller and much more nim­ble. They’re able to make the process of set­ting up a de­vel­oper ac­count quick and easy for those of us who just want to get things done. Unlike Google, they haven’t yet be­come com­pla­cent. They need to com­pete for de­vel­oper mind­share if they are to sur­vive a decade into the fu­ture. Maybe they’ll add the same level of bu­reau­cracy to their processes as they be­come larger, but for now they’re fairly easy to deal with.

I’m still go­ing to try us­ing Gemini 3 Pro with Gemini CLI as my cod­ing as­sis­tant, but I’ll prob­a­bly cap the ex­per­i­ment to a month. Unless Gemini 3 Pro is a mas­sive im­prove­ment over its com­peti­tors, I’ll stick to us­ing tools built by or­ga­ni­za­tions that want me as a cus­tomer.

...

Read the original on ankursethi.com »

2 526 shares, 29 trendiness

Patterns.dev

Interested in our next book? Learn more about Building Large-scale JavaScript Web Apps with React

Patterns.dev is a free on­line re­source on de­sign, ren­der­ing, and per­for­mance pat­terns for build­ing pow­er­ful web apps with vanilla JavaScript or mod­ern frame­works.

Share prop­er­ties among many ob­jects of the same type

Use ob­serv­ables to no­tify sub­scribers when an event oc­curs

Split up your code into smaller, reusable pieces

Add func­tion­al­ity to ob­jects or classes with­out in­her­i­tance

Use a cen­tral me­di­a­tor ob­ject to han­dle com­mu­ni­ca­tion be­tween com­po­nents

Use a fac­tory func­tion in or­der to cre­ate ob­jects

An in­tro­duc­tion to an­i­mat­ing page tran­si­tions us­ing the View Transitions API and li­braries

Learn how to op­ti­mize your load­ing se­quence to im­prove how quickly your app is us­able

Import code that has been ex­ported by an­other mod­ule

Import parts of your code on de­mand

Load non-crit­i­cal com­po­nents when they are vis­i­ble in the view­port

Load non-crit­i­cal re­sources when a user in­ter­acts with UI re­quir­ing it

Inform the browser of crit­i­cal re­sources be­fore they are dis­cov­ered

Fetch and cache re­sources that may be re­quested some time soon

Reduce the per­for­mance im­pact third-party scripts have on your site.

Reduce the time needed to trans­fer scripts over the net­work.

Enforce sep­a­ra­tion of con­cerns by sep­a­rat­ing the view from the ap­pli­ca­tion logic

Pass reusable logic down as props to com­po­nents through­out your ap­pli­ca­tion

Use func­tions to reuse state­ful logic among mul­ti­ple com­po­nents through­out the app

Create mul­ti­ple com­po­nents that work to­gether to per­form a sin­gle task

Render your ap­pli­ca­tion’s UI on the client

Generate HTML to be ren­dered on the server in re­sponse to a user re­quest

Deliver pre-ren­dered HTML con­tent that was gen­er­ated when the site was built

Update sta­tic con­tent af­ter you have built your site

Delay load­ing JavaScript for less im­por­tant parts of the page

Generate HTML to be ren­dered on the server in re­sponse to a user re­quest

Server Components complement SSR, rendering to an intermediate abstraction without needing to add to the JavaScript bundle

With Next.js, there are sev­eral com­po­nents that can help im­prove Core Web Vitals met­rics

A com­pre­hen­sive guide to build­ing React apps in 2025/2026, cov­er­ing frame­works, build tools, rout­ing, state man­age­ment, and AI in­te­gra­tion

Self-contained mod­ules that cou­ple markup (HTML), logic (JS), and styles (CSS) within them

Functions to en­cap­su­late and reuse state­ful logic among mul­ti­ple com­po­nents

Enforce sep­a­ra­tion of con­cerns by sep­a­rat­ing the view from the ap­pli­ca­tion logic

Dynamically switch between components with the special component element

Have nested com­po­nents ac­cess data with­out us­ing props

Components that don’t ren­der their own markup

Compile-time syn­tac­tic sugar for us­ing the Composition API

We publish patterns, tips and tricks for improving how you architect apps for free. Keep in mind, design patterns are descriptive, not prescriptive. They can guide you when facing a problem other developers have encountered many times before, but are not a blunt tool for jamming into every scenario. Patterns.dev aims to be a catalog of patterns (for increasing awareness) rather than a checklist (what you must do).

Design patterns are a fundamental part of software development, as they provide typical solutions to commonly recurring problems in software design. Patterns.dev covers:

* The implementation, benefits and pitfalls of common design patterns using ES2017+
* React-specific design patterns and their possible modification and implementation using React Hooks
* ...and many more Web Performance patterns and optimizations that can help improve your modern web app!

A com­mon cri­tique of de­sign pat­terns is that they need­lessly add com­plex­ity.

Our perspective is that patterns are valuable for solving specific problems, often helping to communicate commonalities in code problems for humans. If a project doesn't have those problems, there isn't a need to apply them. Patterns can also be very language- or framework-specific (e.g. React), which can often mean thinking beyond the scope of just the original GoF design patterns.

We help you scale your we­bapps for per­for­mance

Learn about web per­for­mance pat­terns for load­ing your code more ef­fi­ciently. Unsure how to think about mod­ern ap­proaches to load­ing or ren­der­ing user-ex­pe­ri­ences? We’ve got you cov­ered.

...

Read the original on www.patterns.dev »

3 454 shares, 0 trendiness

In New York City, Congestion Pricing Leads to Marked Drop in Pollution

A new toll ap­plied to cars dri­ving in parts of New York City has led to a mea­sur­able drop in traf­fic, and with it, a 22 per­cent de­cline in par­tic­u­late pol­lu­tion, ac­cord­ing to a new study.

Congestion pric­ing came into ef­fect in January, with cars pay­ing $9 to drive through busy parts of Manhattan dur­ing peak hours. In the first six months of the pro­gram, traf­fic in the con­ges­tion zone dropped by 11 per­cent, ac­ci­dents by 14 per­cent, and com­plaints of ex­ces­sive honk­ing or other noise by 45 per­cent, of­fi­cials said.

A new study from Cornell has now tal­lied the im­pact on par­tic­u­late pol­lu­tion. Particulates is­sued from tailpipes can ag­gra­vate asthma and heart dis­ease and in­crease the risk of lung can­cer and heart at­tack. Globally, they are a lead­ing risk fac­tor for pre­ma­ture death.

Analyzing data on air qual­ity, traf­fic, and weather con­di­tions, re­searchers de­ter­mined that in the first half of this year, par­tic­u­late pol­lu­tion was down 22 per­cent in parts of Manhattan af­fected by con­ges­tion pric­ing.

The de­cline seen in New York was greater than in other cities with con­ges­tion pric­ing, such as Stockholm and London, re­searchers note. And the ef­fect ex­tended be­yond Lower Manhattan. Pricing led to a drop in pol­lu­tion across the greater met­ro­pol­i­tan area, ac­cord­ing to the study, pub­lished in the jour­nal npj Clean Air.

"It's really exciting to me that air quality improved throughout the entire metro area," said lead author Timothy Fraser, of Cornell University. "This tells us that congestion pricing didn't simply relocate air pollution to the suburbs by rerouting traffic. Instead, folks are likely choosing cleaner transportation options altogether, like riding public transportation or scheduling deliveries at night. This thins traffic and limits how smog compounds when many cars are on the road."

...

Read the original on e360.yale.edu »

4 371 shares, 14 trendiness

How Google Maps quietly allocates survival across London’s restaurants

I needed a restau­rant rec­om­men­da­tion, so I did what every nor­mal per­son would do: I scraped every sin­gle restau­rant in Greater London and built a ma­chine-learn­ing model.

It started as a very rea­son­able prob­lem. I was tired of doom-scrolling Google Maps, try­ing to dis­en­tan­gle gen­uinely good food from what­ever the al­go­rithm had de­cided to push at me that day. Somewhere along the way, the pro­ject stopped be­ing about din­ner and be­came about some­thing slightly more un­hinged: how dig­i­tal plat­forms qui­etly re­dis­trib­ute eco­nomic sur­vival across cities.

Because once you start look­ing at London’s restau­rant scene through data, you stop see­ing all those cute in­de­pen­dents and hot new open­ings. You start see­ing an al­go­rith­mic mar­ket - one where vis­i­bil­ity com­pounds, de­mand snow­balls, and who gets to sur­vive is in­creas­ingly de­cided by code.

The public story of Google Maps is that it "passively reflects what people like." More stars, more reviews, better food. But that framing obscures how the platform actually operates. Google Maps is not just indexing demand - it is actively organising it through a ranking system built on a small number of core signals that Google itself has publicly acknowledged: relevance, distance, and prominence.

"Relevance" is inferred from text matching between your search query and business metadata. "Distance" is purely spatial. But "prominence" is where the political economy begins. Google defines prominence using signals such as review volume, review velocity, average rating, brand recognition, and broader web visibility. In other words, it is not just what people think of a place - it is how often people interact with it, talk about it, and already recognise it.

Visibility on these ranked lists determines foot traffic. Foot traffic determines how quickly reviews accumulate. Review accumulation then feeds directly back into the prominence signal. The system compounds. Early discovery generates demand. Demand generates data. Data generates future discovery. This creates a cumulative-advantage dynamic that looks remarkably similar to the way capital compounds in financial markets. This is essentially Robert Merton's Matthew Effect applied to kebab shops - 'unto every one that hath shall be given.'
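To make that loop concrete, here is a toy simulation (my illustration, not anything from the article's data): venues of identical quality compete, but only the top few by review count get surfaced, and only surfaced venues earn new reviews.

```ts
// Toy cumulative-advantage loop: rank by review count, show only the top k,
// and let visibility convert into new reviews each round.
function simulateRanking(reviewCounts: number[], rounds = 26, topK = 3): number[] {
  const counts = [...reviewCounts];
  for (let round = 0; round < rounds; round++) {
    const visible = [...counts.keys()]
      .sort((a, b) => counts[b] - counts[a]) // "prominence" = review count
      .slice(0, topK);                       // only the top k get surfaced
    for (const i of visible) counts[i] += 10; // visibility becomes ~10 reviews per round
  }
  return counts;
}

// Five venues with nearly identical starts: the early leaders pull away, the rest stall.
console.log(simulateRanking([5, 4, 4, 3, 3])); // e.g. [265, 264, 264, 3, 3]
```

The specific numbers are arbitrary; the point is that a small early lead, fed back through the ranking, turns into a durable gap while the rest never get discovered.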

This dis­pro­por­tion­ately re­wards chains and al­ready-cen­tral venues. Chains ben­e­fit from cross-lo­ca­tion brand recog­ni­tion. High-footfall ar­eas gen­er­ate re­views faster, mean­ing venues in those zones climb the promi­nence rank­ing more quickly even at iden­ti­cal un­der­ly­ing qual­ity. By con­trast, new in­de­pen­dents face a clas­sic cold-start prob­lem: with­out re­views they are hard to find, and with­out be­ing found they strug­gle to ac­cu­mu­late re­views at all. What looks like neu­tral con­sumer choice is there­fore bet­ter un­der­stood as al­go­rith­mi­cally me­di­ated mar­ket de­sign.

In eco­nom­ics, this dy­namic closely re­sem­bles the logic of a mar­ket maker: an in­ter­me­di­ary that does not merely re­flect un­der­ly­ing sup­ply and de­mand, but ac­tively shapes liq­uid­ity, match­ing, and price dis­cov­ery. Platforms like Google Maps per­form an anal­o­gous func­tion for lo­cal ser­vices by con­trol­ling vis­i­bil­ity rather than prices di­rectly. In the lan­guage of dig­i­tal eco­nom­ics, rank­ing al­go­rithms act as at­ten­tion al­lo­ca­tors, steer­ing de­mand to­ward some firms and away from oth­ers.

If Google Maps now acts as a kind of mar­ket maker for ur­ban de­mand, the ob­vi­ous next ques­tion is: what would the city look like with­out that am­pli­fi­ca­tion layer? In other words, how do you sep­a­rate a restau­ran­t’s in­trin­sic per­for­mance from the vis­i­bil­ity ef­fects of the plat­form it­self?

To get at that, I built a ma­chine-learn­ing model - a gra­di­ent-boosted de­ci­sion tree (for the ML crowd: HistGradientBoostingRegressor from scikit-learn) - to pre­dict what a restau­ran­t’s Google rat­ing should be, given only its struc­tural char­ac­ter­is­tics. This class of model is de­signed for large, messy, mixed-type tab­u­lar data and is par­tic­u­larly good at cap­tur­ing in­ter­ac­tion ef­fects, with­out me hav­ing to spec­ify those by hand. Features in­clude how many re­views it has (log-transformed to re­flect di­min­ish­ing re­turns to at­ten­tion), what cui­sine it serves, whether it is part of a chain or an in­de­pen­dent, its price level, broad venue types (restaurant, café, take­away, bar), and where it sits in the city via a spa­tial grid.

Quick aside: for a sub­set of places I also scraped re­view text, lan­guages, and pho­tos. But for this first full-city run I stayed within the Google Maps API free tier - partly for re­pro­ducibil­ity, partly be­cause pre­vi­ous grid-scrap­ing ad­ven­tures taught me that cloud bills com­pound faster than re­view counts. So, for fu­ture ver­sions, more fea­tures will only im­prove things. In par­tic­u­lar, who is do­ing the re­view­ing mat­ters. A five-star re­view of an Indian restau­rant writ­ten in Hindi prob­a­bly car­ries a dif­fer­ent sig­nal than one writ­ten by some­one or­der­ing chips with every­thing. (No judg­ment of white British peo­ple ofc…)

One practical problem I ran into early on is that Google Maps is surprisingly bad at categorising cuisines. A huge share of restaurants are labelled vaguely ("restaurant", "cafe", "meal takeaway"), inconsistently, or just incorrectly. So I ended up building a separate cuisine-classification model that predicts cuisine from restaurant names, menu language, and review text where available. In other words, the cuisine filters in the dashboard are not just Google's tags - they're machine-learned. This matters more than it might sound: if you misclassify cuisines, you misread diversity, clustering, and who actually competes with whom on the high street. Btw, I briefly considered classifying Pret A Manger as French, just to see if it would make any French people angrier at me than they already are. I didn't. But I thought about it.

Before any mod­el­ling hap­pens, all fea­tures go through a stan­dard pre­pro­cess­ing pipeline - im­pu­ta­tion, en­cod­ing, the usual. Crucially, the model is trained only to learn the map­ping be­tween ob­serv­able plat­form-vis­i­ble fea­tures and rat­ings. This al­lows me to gen­er­ate a coun­ter­fac­tual ex­pected rat­ing for each restau­rant - what the plat­form would typ­i­cally as­sign un­der those struc­tural con­di­tions. The dif­fer­ence be­tween a restau­ran­t’s real rat­ing and this pre­dicted rat­ing is what I call the rat­ing resid­ual. A pos­i­tive resid­ual means the restau­rant per­forms ma­te­ri­ally bet­ter than its plat­form base­line would sug­gest. A neg­a­tive resid­ual means it un­der­per­forms rel­a­tive to what the al­go­rithm nor­mally re­wards. This is not a per­fect mea­sure of food qual­ity - but it is a pow­er­ful mea­sure of al­go­rith­mic mis­pric­ing: where so­cial or culi­nary value di­verges from what the plat­form struc­turally am­pli­fies.
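In code terms the residual is just actual minus expected. A minimal sketch, assuming the model's expected ratings have already been computed elsewhere (the field names and the 0.3 threshold are mine, not the dashboard's):

```ts
interface Restaurant {
  name: string;
  rating: number;          // actual Google rating
  expectedRating: number;  // counterfactual rating predicted from structural features
}

// Positive residual: the place outperforms what the platform's baseline would suggest.
const residual = (r: Restaurant) => r.rating - r.expectedRating;

// "Underrated gems" are the restaurants with the largest positive residuals.
function underratedGems(restaurants: Restaurant[], minResidual = 0.3): Restaurant[] {
  return restaurants
    .filter((r) => residual(r) >= minResidual)
    .sort((a, b) => residual(b) - residual(a));
}
```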

One caveat: some restau­rants pay for pro­moted pins or lo­cal-search ads. Because paid vis­i­bil­ity is­n’t pub­licly dis­closed, I can’t es­ti­mate how many - which is it­self a sign of how opaque plat­form in­flu­ence has be­come. My resid­u­als may partly re­flect ad spend I can’t ob­serve.

To sum­marise this, I built the London food dash­board. The dash­board cur­rently al­lows users to search by name and fil­ter by un­der­rated gems (identified by my ma­chine learn­ing al­go­rithm), cui­sine, bor­ough, price level, min rat­ing, and re­view vol­ume. It is still very much a ver­sion-one pro­to­type - but it is al­ready a work­ing mi­cro­scope into London’s al­go­rith­mic food econ­omy.

If you want to explore it yourself, you can find it on my personal website at: laurenleek.eu/food-map.

Naturally, I immediately stress-tested it on my favourite part of London: Islington (maybe all this promo - also in my previous Substack on UK segregation - makes me qualify for a council tax rebate? - I'm looking at you Jeremy Corbyn…). I switched on my "underrated gems" filter - that's the ML residual at work - set a minimum rating and review count, excluded the eye-wateringly expensive options, and let the bubbles guide me. Bigger, darker bubbles mean places my model thinks the algorithm is undervaluing.

And just like that, I had din­ner plans. Do try it your­self.

Btw, this is very much still a beta ver­sion - which means bugs, blind spots, and lots of room to grow. If some­thing looks odd, miss­ing, or wrong, please leave fea­ture ideas and sug­ges­tions in the com­ments or via the com­ments on my web­site. Unlike the VS Code GitHub tracker and its 13.8k open is­sues, I re­ally do read them.

But restau­rants don’t fail alone - they fail in ecosys­tems. I also wanted to un­der­stand what hap­pens when plat­form dy­nam­ics scale up from restau­rants to en­tire neigh­bour­hood food ecosys­tems. So I added a sec­ond mod­el­ling layer.

First, I aggregate restaurants into small spatial cells (the hexagons you see on the maps - because squares are for people who haven't thought hard enough about edge effects) and compute summary features for each area: restaurant density, mean rating, mean residual, total reviews, chain share, cuisine entropy, and price level. I then standardise these and run principal component analysis (PCA) to compress them into a single continuous hub score that captures overall "restaurant ecosystem strength" in one dimension. Finally, I apply K-means clustering to the same feature space to classify areas into four structural types: elite, strong, everyday, and weak hubs.

At first glance, the pat­terns look com­fort­ingly fa­mil­iar. Central London dom­i­nates. Of course it does. But what mat­ters is not just where the hubs are - it’s what kind of hubs they are. Using the full hub score rather than raw rat­ings alone, I iden­tify the five most struc­turally pow­er­ful restau­rant hubs in London. They are the places where den­sity, al­go­rith­mic at­ten­tion, in­de­pen­dent sur­vival, and con­sumer spend­ing power all line up at once. They are la­beled on the maps. I am de­lib­er­ately re­fus­ing to rank them loudly in prose in or­der to avoid start­ing neigh­bour­hood wars at scale (and to not dis­ap­point Islington) - but the vi­sual story is al­ready ex­tremely clear.

Overlaying this with the cui­sine den­sity pan­els re­veals some­thing even sharper. London’s culi­nary di­ver­sity is not evenly dis­trib­uted across its plat­form econ­omy. Migrant cuisines clus­ter strongly in parts of the city where al­go­rith­mic vis­i­bil­ity is struc­turally weaker. Italian, Indian, Turkish, Chinese, Thai, British, Japanese, French, American, and fish-and-chips all trace dis­tinct set­tle­ment his­to­ries, labour net­works, re­tail for­mats, and re­la­tion­ships to cap­i­tal and rent. Some cuisines form long, con­tigu­ous cor­ri­dors. Others ap­pear as punc­tu­ated clus­ters tied to spe­cific high streets or in­come brack­ets.

Cuisine di­ver­sity, in other words, is not just about taste. It is about where fam­i­lies set­tled, which high streets re­mained af­ford­able long enough for a sec­ond gen­er­a­tion to open busi­nesses, and which parts of the city ex­pe­ri­enced dis­place­ment be­fore culi­nary ecosys­tems could ma­ture. (If this part es­pe­cially speaks to you, I go much deeper into it in Food for thought: lo­cal restau­rant di­ver­sity meets mi­gra­tion).

The Take-Away and Some Unwanted Policy Advice

This project started as a search problem and ended as something more. The most important result isn't which neighbourhood tops the rankings - it's the realisation that platforms now quietly structure survival in everyday urban markets. London's restaurant scene is no longer organised by taste alone. It is organised by visibility that compounds, rent that rises when discovery arrives, and algorithms that allocate attention long before consumers ever show up. What looks like "choice" is increasingly the downstream effect of ranking systems.

For pol­icy, that shifts the frame. If dis­cov­ery now shapes small-busi­ness sur­vival, then com­pe­ti­tion, fair­ness, and ur­ban re­gen­er­a­tion can no longer ig­nore plat­form rank­ing sys­tems. Councils can re­build streets and lib­er­alise li­cens­ing all they like - but al­go­rith­mic in­vis­i­bil­ity can still leave places eco­nom­i­cally stranded. Platform trans­parency and au­ditabil­ity are no longer niche tech de­bates; they are qui­etly be­com­ing tools of lo­cal eco­nomic pol­icy. At min­i­mum, rank­ing al­go­rithms with this much eco­nomic con­se­quence should be au­ditable. We au­dit fi­nan­cial mar­kets. We should au­dit at­ten­tion mar­kets too.

For a navigation app, Google Maps has a remarkable amount of power.

Just say­ing.

I’m also work­ing on other maps (including a map of the best cy­cling and run­ning routes with ex­cel­lent cafés along the way, be­cause I have needs). More broadly, I’m in­vest­ing more and more time into build­ing higher-qual­ity pub­lic data pro­jects. If you have an idea you’d like to see built - pitch it to me. And if you en­joy this kind of work, you can al­ways Buy Me A Coffee or sub­scribe to help fund the next round of over-en­gi­neered maps.

...

Read the original on laurenleek.substack.com »

5 341 shares, 10 trendiness

Building a High-End AI Desktop

Running large language models locally has always been a game of compromise. You either spend $10,000+ on consumer GPUs that can barely handle 70B-parameter models, or you dream about enterprise hardware you'll never afford. The Grace-Hopper platform—Nvidia's unified CPU-GPU superchip architecture—represents the kind of dream-rig AI infrastructure LocalLlama drools over, with systems typically costing well over $100,000 and exclusively available to data centers and research institutions.

So when I stumbled across a Grace-Hopper system being sold for 10K euro on Reddit, my first thought was "obviously fake." My second thought was "I wonder if he'll take 7.5K euro?".

This is the story of how I bought enterprise-grade AI hardware designed for liquid-cooled server racks that had been converted to air cooling, re-converted it back to water cooling, survived multiple near-disasters (including GPUs reporting temperatures of 16 million degrees), and ended up with a desktop that can run 235B parameter models at home. It's a tale of questionable decisions, creative problem-solving, and what happens when you try to turn datacenter equipment into a daily driver.

If you’ve ever won­dered what it takes to run truly large mod­els lo­cally, or if you’re just here to watch some­one dis­as­sem­ble $80,000 worth of hard­ware with noth­ing but hope and iso­propanol, you’re in the right place.

Early this year, while browsing r/LocalLLaMA/new, I came across a ridiculously good deal. How good? These were the specs for the server offered for 10K euro, and a serious upgrade to my 4x RTX 4090 rig:

UPDATE: Since I bought this, DDR5 RAM prices have become insane. 960GB of fast DDR5 now costs more than what I paid for the whole Grace-Hopper system 🤯

H100s cost about 30-40,000 euro each, and this system has two of them. Grace-Hopper NVL2 systems are basically not for sale to consumers anyway!

The Reddit thread ex­plained the rea­son the sys­tem was be­ing sold cheap:

The main rea­son why is that it is a Frankensystem con­verted from liq­uid-cooled to air­cooled. Also it is not very pretty and not rack­able, be­cause it has a 48V power sup­ply at­tached. It is orig­i­nally di­rectly from Nvidia.

I im­me­di­ately of­fered to buy it, be­cause why not? If it was a scam, I could al­ways back out, but I wanted to be first in line!

It turns out I live near the seller, and he runs an online shop that sells modified Nvidia server equipment as desktops. It still seemed pretty risky, so I did some research and found a video review of one of his desktops on YouTube. With the deal now seeming at least plausible, and the seller only a two-hour drive away and agreeing to take cash, it was time to take a Bavarian road trip.

I ar­rived at a farm­house in a small for­est, and met Bernhard the pro­pri­etor of GPTshop.ai. He showed me a nice work­shop (plasma cut­ters, an elec­tron­ics lab, etc.) from which he fab­ri­cates cus­tom cases for the high-end H100 desk­tops he builds. These desk­tops seem pretty damn nice, so it’s un­for­tu­nate that his web­shop gives off shady vibes; the busi­ness reg­is­tra­tion in the Cayman Islands def­i­nitely does­n’t help. What I can say though is that this item was heav­ily dis­counted, and not the fancy ul­tra high-end desk­tops that he usu­ally sells.

Disclaimer: I have zero af­fil­i­a­tion with GPTshop.ai be­yond hand­ing them a stack of cash and re­ceiv­ing a dust-cov­ered server in re­turn. If this were a spon­sored post, they prob­a­bly would­n’t let me men­tion the 16 mil­lion de­gree GPU tem­per­a­tures or the part where I had to free-sol­der com­po­nents while pray­ing to the elec­tron­ics gods.

The server it­self was not in great con­di­tion. These things run ex­tremely loud and high-through­put fans, and these had sucked in a lot of dust, coat­ing the main­board so heav­ily I could­n’t tell the color of the PCB. However, it booted up and ran OK, so I handed over a wad of cash, strapped it into the back­seat of my car with the seat­belt (it weighed ~20 kg), and drove it home.

Did I men­tion it’s loud? Firing up the sys­tem is phys­i­cally painful. There are 8x Sunon dual-fan mod­ules, and each is as loud as a pow­er­ful vac­uum cleaner, but with a much higher and more an­noy­ing pitch. With all 8 run­ning at full power, hear­ing pro­tec­tion is nec­es­sary - I could hear the sys­tem run­ning in my base­ment with the win­dows closed from 50 me­ters away! My wife im­me­di­ately (and quite fairly), banned its use at home. We both work home-of­fice and it was sim­ply too loud for on­line meet­ings. But I had other plans any­way…

First things first, I of course quickly de­cided and then pro­ceeded to strip down the server, af­ter first photo-doc­u­ment­ing the var­i­ous con­nec­tors be­tween the var­i­ous PCBs, mod­ules and main­board.

The ma­jor­ity of the dust was vac­u­umed off dur­ing dis­as­sem­bly, but there was clearly a lot more un­der the Grace-Hopper mod­ules. After re­mov­ing those as well, I de­cided to go with a full wash­down of the main­board.

I pur­chased a few litres of Isopropanol, and with a soft brush I went over the whole board a few times to get the re­main­ing fine dust from in­side con­nec­tors and be­tween SMD-component pins.

I sus­pected there might also be dust in­side the Grace-Hopper mod­ules, but ac­tu­ally, I re­ally just wanted to pop them open to poke around.

The main­board went on my heated floor to dry for a week, while I moved on to re­plac­ing the cool­ing sys­tem.

I had looked into build­ing a cus­tom wa­ter-cool­ing block, but I was wor­ried about leaks, when I found cheap all-in-one wa­ter cool­ing sys­tems for ~40 euro each on sale. Two per GH200 mod­ule would be suf­fi­cient, so I care­fully mea­sured the di­men­sions of the GPU die and CPU, as well as screw lo­ca­tions, and threw those into Fusion 360 to model up an adapter block.

I have a Bambu X1, which came in very handy for pro­to­typ­ing the adapter blocks. The tol­er­ances have to be very tight, so I printed sev­eral cut-away ver­sions to make sure there was solid con­tact to the bare GPU die, and a safe mar­gin from con­tact to frag­ile parts.

The parts were then sent for CNC milling, and were de­liv­ered as the main­board was fin­ished dry­ing. After us­ing yet more iso­propanol to clean off the ma­chin­ing oil, they were mounted with­out much fuss.

My go-to ma­te­r­ial for this kind of pro­ject is ProfilAlu from eBay. It’s cheap, stiff, and de­liv­ered pre-cut for as­sem­bly. I put to­gether a de­sign in Fusion 360, and had the parts in a few days. The var­i­ous mounts how­ever were much more work. I needed to de­sign a few dozen cus­tom mounts for the var­i­ous PCBs and air-fil­ter fix­ings; this used up a few ki­los of fil­a­ment to get things just right.

I have discovered that one of the most terrifying sounds you will ever hear is a 'pop' followed by a 'fizzle' coming from the $80,000 mainboard you just worked on. The smell of magic smoke moments later generates more of a sense of dread.

The idea was simple enough: I have 8 powerful fans that each must draw a huge amount of current and run at 12V. At the same time, I have four water cooling systems that also run at 12V. Simple right? I swap the regular consumer 3-pin fan connector from the Arctic cooler with the weird server fan connector, and I can run them off the mainboard with fan speed control!

Problem 1. What the heck were these main­board fan con­nec­tors? They looked like tiny Molex, but I did­n’t rec­og­nize them. I think I fi­nally found some for sale, but they were ~20 euro each and I have prin­ci­ples! So, I mapped out the wiring and with some snip­ping and sol­der­ing, I had adapters made and the sys­tem re­built. Then came the pop and fiz­zle… My es­ti­ma­tions on the cur­rent draw must have been a bit off.

Problem 2. After dis­as­sem­bling the fancy adapter I just made and rewiring the fans, I found out that sev­eral fans did­n’t work any more. Hmmmm. Swapping the var­i­ous fans around made it clear: some of the main­board fan head­ers weren’t work­ing. I grabbed my DIY ther­mal cam­era (a topic for an­other blog post), and looked all over the board, be­fore spot­ting what looked like a warm MOSFET (basically a switch). I googled the mark­ings, but no re­sults.

Problem 3. I needed a new way to power the 12V AIO Water Coolers. The main power supply provides 48V at 62.5 Amps, which seemed a bit high, and I wasn't ready to run them in series after the last small 'incident'. I picked up a cheapy 12V-5A power supply from Amazon, because 'next day delivery', and it was under 10 euro. When this arrived, my cooling system was operational again!

The sys­tem did­n’t start to boot any­more. Checking the logs, I saw 6 crit­i­cal er­rors, one for each dead fan dri­ver among the 8 pairs:

With the fans re­moved, the BMC (Baseboard Management Controller) im­me­di­ately pan­icked, and shut down the main­board to pre­vent ther­mal dam­age, even with the wa­ter cool­ers in place. So, I dis­abled the fan-check sub­sys­tem.

Great! I could start the boot process, and even reach lo­gin! But only about 1 time in 4… Not op­ti­mal. And even logged in, the server would crash within 2 min­utes.

Looking into the BMC logs, I saw:

* System pow­ers off within ~30 sec­onds of suc­cess­ful boot

But why?!!? I had shut down the hard­ware mon­i­tor­ing.

Warning: Your GPU should not reach 16,777,214 Celsius dur­ing boot. Imagine what would hap­pen un­der load!

This took some time to de­bug, as I was quite sure the sen­sors could not phys­i­cally han­dle read­ing tem­per­a­tures over 16 mil­lion Celsius… But then I no­ticed some­thing in­ter­est­ing about that spe­cific num­ber:

This is 2²⁴ - 2, which is suspiciously close to the maximum value of a 24-bit unsigned integer. In the hardware world, this is the equivalent of a sensor throwing up its hands and screaming "I have no idea what's happening!" When hardware can't read a value properly—whether due to a loose connection, damaged circuit, or initialization failure—it often returns the maximum (or near-maximum) representable value. It's like the digital version of a shrug.

The logs con­firmed this the­ory: see­ing 1.67772e+07 (16,777,214) was­n’t ev­i­dence that my GPU had achieved nu­clear fu­sion tem­per­a­tures 🔥—it was ev­i­dence that the tem­per­a­ture sen­sor had sim­ply stopped work­ing. And if a sen­sor er­ror is in­ter­mit­tent, the most likely cul­prit is a loose con­nec­tion or phys­i­cal dam­age.
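A quick sanity check of the number itself, using nothing but arithmetic (none of this is read from the BMC):

```ts
const raw = 16_777_214;                    // the "temperature" reported in the logs
console.log(raw === 2 ** 24 - 2);          // true: one below the 24-bit maximum of 16,777,215
const looksLikeSensorFault = raw >= 2 ** 24 - 2;
console.log(looksLikeSensorFault);         // treat near-all-ones readings as "no data", not heat
```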

After spend­ing way too long pur­su­ing soft­ware so­lu­tions (because who wants to dis­as­sem­ble every­thing again?), I fi­nally ac­cepted the in­evitable and broke out the screw­drivers.

I happened to have bought a new microscope earlier this year, and it turned out to be the perfect tool for diagnosing and fixing the issue. Near one of the modules, I found some damaged surface mount components. The damage must have happened after cleaning, probably during the reassembly of the modules with the copper adapters. They weigh over 2 kg, so a slight bump would have easily caused this damage. Amazingly, the tiny components were still attached to the traces, and so I could measure them easily: a 100 nF capacitor, and a 4.7k resistor (both of which I had on hand, as they are standard values for decoupling circuits). The bad news? I had huge "0805" sized parts (2mm long); these were tiny "0402" (1mm long). And one of the traces was just gone.

With some very fiddly soldering, and scratching off the solder mask on the PCB to expose more trace, I was able to 'free solder' the parts into a wonderful 3D sculpture which was then liberally coated in UV-curing mask resin, set, and then held in place with sticky tape. Very professional. After reassembly, the system booted smoothly.

* Cool-looking mesh to pro­tect the wa­ter-cool­ing ra­di­a­tors and dust fil­ters

Getting the ac­tual GPU work­ing was also painful, so I’ll leave the de­tails here for fu­ture ad­ven­tur­ers:

That’s what you’re here for, maybe? I have only just started, but af­ter com­pil­ing the lat­est Llama.cpp ver­sion us­ing 144 cores in 90 sec­onds, here’s some bench­marks on larger LLMs:

This is pretty un­op­ti­mized, but it’s look­ing promis­ing so far! During the LLM tests I hit around 300W per GPU, far from the 900W max.

Here’s what the en­tire build ac­tu­ally cost me, from the ini­tial pur­chase to the fi­nal touches:

Not included: hearing protection (absolutely necessary), the microscope I already owned (but proved essential), several failed 3D prints, and the emotional cost of seeing "16,777,214°C" in system logs.

So, was it worth it? I now have a desk­top that can run 235B pa­ra­me­ter mod­els at home for less than the cost of a sin­gle H100. It re­quired dis­as­sem­bling $80,000 worth of en­ter­prise hard­ware, de­bug­ging sen­sors that re­ported tem­per­a­tures ap­proach­ing the sur­face of the sun, and free-sol­der­ing com­po­nents un­der a mi­cro­scope. Your mileage may vary. Literally: I had to drive two hours to pick this thing up.

...

Read the original on dnhkng.github.io »

6 325 shares, 36 trendiness

Meta shuts down global accounts linked to abortion advice and queer content

Meta has removed or restricted dozens of accounts belonging to abortion access providers, queer groups and reproductive health organisations in the past weeks in what campaigners call one of the "biggest waves of censorship" on its platforms in years.

The take­downs and re­stric­tions be­gan in October and tar­geted the Facebook, Instagram and WhatsApp ac­counts of more than 50 or­gan­i­sa­tions world­wide, some serv­ing tens of thou­sands of peo­ple — in what ap­pears to be a grow­ing push by Meta to limit re­pro­duc­tive health and queer con­tent across its plat­forms. Many of these were from Europe and the UK, how­ever the bans also af­fected groups serv­ing women in Asia, Latin America and the Middle East.

Repro Uncensored, an NGO track­ing dig­i­tal cen­sor­ship against move­ments fo­cused on gen­der, health and jus­tice, said that it had tracked 210 in­ci­dents of ac­count re­movals and se­vere re­stric­tions af­fect­ing these groups this year, com­pared with 81 last year.

Meta denied an escalating trend of censorship. "Every organisation and individual on our platforms is subject to the same set of rules, and any claims of enforcement based on group affiliation or advocacy are baseless," it said in a statement, adding that its policies on abortion-related content had not changed.

Campaigners say the actions indicate that Meta is taking its Trump-era approach to women's health and LGBTQ+ issues global. Earlier this year, it appeared to "shadow-ban" or remove the accounts of organisations on Instagram or Facebook helping Americans to find abortion pills. Shadow-banning is when a social media platform severely restricts the visibility of a user's content without telling the user.

In this lat­est purge, it blocked abor­tion hot­lines in coun­tries where abor­tion is le­gal, banned queer and sex-pos­i­tive ac­counts in Europe, and re­moved posts with even non-ex­plicit, car­toon de­pic­tions of nu­dity.

"Within this last year, especially since the new US presidency, we have seen a definite increase in accounts being taken down — not only in the US, but also worldwide as a ripple effect," said Martha Dimitratou, executive director of Repro Uncensored.

"This has been, to my knowledge, at least one of the biggest waves of censorship we are seeing," she said.

Campaigners have ac­cused Meta of be­ing con­de­scend­ing and un­re­spon­sive, with the com­pany of­fer­ing only vague rea­sons why cer­tain ac­counts were taken down — and ap­pear­ing un­will­ing to en­gage.

In one email shared with the Guardian, a Meta consultant appears to invite a number of reproductive health organisations to a closed-door online briefing about "the challenges that you are facing with Meta's content moderation policies".

The email says the meeting will "not be an opportunity to raise critiques of Meta's practices or to offer recommendations for policy changes".

Dimitratou said such closed-door meetings had happened before, saying they "reinforce the power imbalance that allows big tech to decide whose voices are amplified and whose are silenced".

In an­other in­stance, a Meta em­ployee coun­selled an af­fected or­gan­i­sa­tion in a per­sonal mes­sage to sim­ply move away from the plat­form en­tirely and start a mail­ing list, say­ing that bans were likely to con­tinue. Meta said it did not send this mes­sage.

Meta’s re­cent take­downs are part of a broader pat­tern of the com­pany purg­ing ac­counts, and then — at times — ap­pear­ing to back­track af­ter pub­lic pres­sure, said Carolina Are, a fel­low at Northumbria University’s Centre for Digital Citizens.

"It wouldn't be as much of a problem if platforms' appeals actually worked, but they don't. And appeals are the basis of any democratic justice system," she added.

Meta said that it aimed to re­duce en­force­ment mis­takes against ac­counts on its plat­form, but added that the ap­peals process for banned ac­counts had be­come frus­trat­ingly slow.

Organisations af­fected by the bans in­clude Netherlands-registered Women Help Women, a non­profit of­fer­ing in­for­ma­tion about abor­tion to women world­wide, in­clud­ing in Brazil, the Philippines and Poland. It fields about 150,000 emails from women each year, said its ex­ec­u­tive di­rec­tor, Kinga Jelinska.

Women Help Women has been on Facebook for 11 years, said Jelinska, and while its account had been suspended before, this was the first time it was banned outright. The ban could be "life-threatening", she said, pushing some women towards dangerous, less reliable information sources. Little explanation was given for the ban.

A message from Meta to the group dated 13 November said its page "does not follow our Community Standards on prescription drugs", adding: "We know this is disappointing, but we want to keep Facebook safe and welcoming for everyone."

"It's a very laconic explanation, a feeling of opacity," Jelinska said. "They just removed it. That's it. We don't even know which post it was about."

Meta said more than half of the accounts flagged by Repro Uncensored have been reinstated, including Women Help Women which it said was taken down in error. "The disabled accounts were correctly removed for violating a variety of our policies including our Human Exploitation policy," it added.

Jacarandas was founded by a group of young fem­i­nists when abor­tion was de­crim­i­nalised in Colombia in 2022, to ad­vise women and girls on how to get a free, le­gal abor­tion. The group’s ex­ec­u­tive di­rec­tor, Viviana Monsalve, said its WhatsApp helpline had been blocked then re­in­stated three times since October. The WhatsApp ac­count is cur­rently banned and Monsalve said they had re­ceived lit­tle in­for­ma­tion from Meta about whether this would con­tinue.

"We wrote [Meta] an email and said, 'hey, we are a feminist organisation. We work in abortion. Abortion is allowed in Colombia up to 24 weeks. It's allowed to give information about it,'" said Monsalve.

Without Meta's cooperation, Monsalve said it was difficult to plan for the future. "You are not sure if [a ban] will happen tomorrow or after tomorrow, because they didn't answer anything."

Meta said: "Our policies and enforcement regarding abortion medication-related content have not changed: we allow posts and ads promoting healthcare services like abortion, as well as discussion and debate around them, as long as they follow our policies."

While groups such as Jacarandas and Women Help Women had their ac­counts re­moved out­right, other groups said that they in­creas­ingly faced Meta re­strict­ing their posts and shadow-ban­ning their con­tent.

Fatma Ibrahim, the director of the Sex Talk Arabic, a UK-based platform which offers Arabic-language content on sexual and reproductive health, said that the organisation had received a message almost every week from Meta over the past year saying that its page "didn't follow the rules" and would not be suggested to other people, based on posts related to sexuality and sexual health.

Two weeks ago, these mes­sages es­ca­lated to a warn­ing, in which Meta noted its new poli­cies on nu­dity and re­moved a post from the Sex Talk Arabic’s page. The of­fend­ing post was an artis­tic de­pic­tion of a naked cou­ple, ob­scured by hearts.

Ibrahim said the warning was "condescending", and that Meta's moderation was US-centric and lacked context.

"Despite the profits they make from our region, they don't invest enough to understand the social issues women fight against and why we use social media platforms for such fights," she said.

...

Read the original on www.theguardian.com »

7 318 shares, 46 trendiness

The highest quality codebase

Have you seen one of the ex­per­i­ments where peo­ple have been re-feed­ing the same im­age to the AI agent a bunch of times?

Or Marques Brownlee’s youtube videos where the video is re­u­ploaded a 1000 times?

Over the Thanksgiving weekend I had some time on my hands and tasked Claude to write an app that guesstimates macronutrients in some foods based on description + photo. There's some interesting setup in getting it right, but that's boring. It has created a great, functional app for me, but then I forced it to do a small, evil experiment for me.

I’ve writ­ten a quick script that looped over my code­base and ran this com­mand.

```bash
#!/usr/bin/env bash
set -euo pipefail

PROMPT="Ultrathink. You're a principal engineer. Do not ask me any questions. We need to improve the quality of this codebase. Implement improvements to codebase quality."
MAX_ITERS="200"

for i in $(seq 1 "$MAX_ITERS"); do
  claude --dangerously-skip-permissions -p "$PROMPT"
  git add -A
  if git diff --cached --quiet; then
    echo "No changes this round, skipping commit."
  else
    git commit --no-verify -m "yolo run #$i: $PROMPT"
  fi
done
```

…and havoc it wreaked. Over 200 times of unmitigated madness. I have tweaked the prompt here and there when I've been seeing it overindexing on a single thing, but with enough iterations it started covering a lot of ground… from full code coverage and more tests than functional code, to Rust-style Result types, to… estimating entropy of hashing functions (???).

This was run­ning for around 36 hours and took me some time to grok through, but let’s see what it did. The en­tire repo is here btw. The branch you’re look­ing for is high­est-qual­ity.

This app is around 4-5 screens. Take a photo, add de­scrip­tion, get AI re­sponse. Simple as that.

The version pre "improving quality" was already pretty large. We are talking around 20k lines of TS, of which around 9.7k is in various __tests__ directories. This was slightly intentional - when working with Claude Code, having a good self-validation harness greatly improves the quality of results.

```
cloc . --exclude-dir=node_modules,dist,build,.expo,.husky,.maestro,Pods
     132 text files.
     127 unique files.
      11 files ignored.

github.com/AlDanial/cloc v 2.04  T=0.11 s (1167.4 files/s, 487085.6 lines/s)
-------------------------------------------------------------------
Language           files      blank    comment       code
-------------------------------------------------------------------
JSON                   4          0          0      23733
TypeScript            99       3019       1541      20160
Markdown              11       1004          0       2700
JavaScript             9         26         51        269
Bourne Shell           2         34         41        213
YAML                   2         35          2        162
-------------------------------------------------------------------
SUM:                 127       4118       1635      47237
-------------------------------------------------------------------
```

But in the aftermath - 84 thousand! We went 20k -> 84k on "improvements" to the quality of the codebase.

```
cloc . --exclude-dir=node_modules,dist,build,.expo,.husky,.maestro,Pods
     285 text files.
     281 unique files.
      10 files ignored.

github.com/AlDanial/cloc v 2.04  T=0.60 s (468.1 files/s, 268654.5 lines/s)
-------------------------------------------------------------------
Language           files      blank    comment       code
-------------------------------------------------------------------
TypeScript           247      17587      18749      84185
JSON                   5          0          0      24863
Markdown              14       4151          0      10391
JavaScript             9         41        140        598
Bourne Shell           3         41         41        228
YAML                   3         50          3        215
-------------------------------------------------------------------
SUM:                 281      21870      18933     120480
-------------------------------------------------------------------
```

Tests alone went from 10k to 60k LOC!

```
cloc . \
  --exclude-dir=node_modules,dist,build,.expo,.husky,.maestro,Pods \
  --match-d='__tests__'
     138 text files.
     138 unique files.
       1 file ignored.

github.com/AlDanial/cloc v 2.04  T=0.23 s (612.9 files/s, 346313.3 lines/s)
-------------------------------------------------------------------
Language           files      blank    comment       code
-------------------------------------------------------------------
TypeScript           138      13919       3685      60366
-------------------------------------------------------------------
SUM:                 138      13919       3685      60366
-------------------------------------------------------------------
```

We went from around 700 to a whopping 5369 tests. In the original project I had e2e tests using an actual simulator - they are pretty important to make sure that the coding agent has a closed feedback loop, but in the process of improving the quality they seemed to have been forgotten ¯\_(ツ)_/¯.

Btw, we went from ~1500 lines of comments to 18.7k.

OK, but what did it actually do? I have the full log of what Claude Code was outputting in the summary after every run. You can check it here.

Claude Code re­ally did­n’t like us­ing 3rd party li­braries and cre­ated a ton of ran­dom util­i­ties.

I can sort of respect that the dependency list is pretty small, but it comes at the cost of 20k+ lines of very unmaintainable utilities. I guess it really wanted to avoid supply-chain attacks.

Some of them are really unnecessary and could be replaced with an off-the-shelf solution:

* A full-on hierarchical logger with built-in performance tracking (lib/logger.ts) instead of something simple off the shelf.

* React hooks. Some of them are specific to our use case, but a bunch of them really didn't have to be reinvented (or invented in the first place).

Some are just in­sane - here are my fa­vorites!

* The Result type implementation, lib/result.ts - "This module provides a Result type (similar to Rust's Result)". A sketch of the pattern follows this list.

I like Rust's result-handling system, but I don't think it works very well if you try to bring it to an entire ecosystem that is already standardized on throwing errors. At my previous job we experimented with doing that in Python. It never quite clicked with people and using it felt pretty forced. I'd stay away from it.

This made me gig­gle be­cause of course AI started bring­ing pat­terns from Rust. There’s lib/​op­tion.ts too.

* Functional programming utilities lib/functional.ts - Type-safe composition, currying, overloads for 20+ params, this has it all.
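To make the Result pattern concrete, here is a minimal sketch of what such a type usually looks like. This is my own illustration of the general idea, not the actual contents of lib/result.ts:

// Minimal Result sketch (illustrative only; the generated lib/result.ts is its own beast).
type Ok<T> = { kind: "ok"; value: T };
type Err<E> = { kind: "err"; error: E };
type Result<T, E = Error> = Ok<T> | Err<E>;

function ok<T>(value: T): Ok<T> {
  return { kind: "ok", value };
}

function err<E>(error: E): Err<E> {
  return { kind: "err", error };
}

// Callers branch on the variant instead of relying on try/catch.
function parsePort(raw: string): Result<number> {
  const port = Number(raw);
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    return err(new Error(`invalid port: ${raw}`));
  }
  return ok(port);
}

const res = parsePort("8080");
if (res.kind === "ok") {
  console.log("port:", res.value);
} else {
  console.error(res.error.message);
}

The appeal is that failures become values the type checker forces you to handle; the problem, as noted above, is that the rest of the TypeScript ecosystem keeps throwing anyway.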

In some iterations, the coding agent put on its security-engineer hat. For instance, it created a hasMinimalEntropy function meant to "detect obviously fake keys with low character variety". I don't know why.
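I can only guess at what it does, but a check like that presumably boils down to counting character variety, something in this spirit (my reconstruction, not the actual code):

// Hypothetical reconstruction of a "minimal entropy" key check; the real hasMinimalEntropy may differ.
function hasMinimalEntropy(key: string, minUniqueChars = 8): boolean {
  // Distinct-character count as a crude proxy for variety.
  const uniqueChars = new Set(key.split("")).size;
  return uniqueChars >= minUniqueChars;
}

// hasMinimalEntropy("aaaaaaaaaaaa") is false; a real-looking API key passes.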

To ensure we have proper scalability, it implemented circuit breaking and jittered exponential backoff. The only API we are talking to is OpenAI/Anthropic. You're welcome.
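For reference, jittered exponential backoff is the standard retry pattern: the delay cap doubles after each failure and the actual wait is randomized under that cap so clients don't retry in lockstep. A minimal sketch (mine, not the project's implementation):

// Minimal "full jitter" exponential backoff (illustrative only).
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt + 1 >= maxAttempts) throw error;
      // Cap grows exponentially; the actual wait is a random value below the cap.
      const cap = baseDelayMs * 2 ** attempt;
      const delay = Math.random() * cap;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

Useful when thousands of clients are hammering a flaky API; arguably overkill for a single-user app calling one model provider.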

The positive: a lot of time was spent on making sure we have strict type checking and don't overly cast (as any as T), and, hey, I respect that.
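For anyone who hasn't run into the idiom, "as any as T" is the double cast that silences the compiler entirely; the stricter alternative is to keep the value as unknown and narrow it. A tiny illustration of the difference (my example, not code from the repo):

interface User {
  id: string;
  name: string;
}

const raw = '{"id":"1","name":"Ada"}';

// The escape hatch: compiles regardless of what the JSON actually contains.
const unchecked = JSON.parse(raw) as any as User;

// The stricter alternative: treat parsed data as unknown and narrow it with a type guard.
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    "id" in value &&
    "name" in value
  );
}

const parsed: unknown = JSON.parse(raw);
if (isUser(parsed)) {
  console.log(parsed.name); // safely typed as User here
}
console.log(unchecked.id);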

The prompt, in all its versions, always focused on improving codebase quality. It was disappointing to see how that metric is perceived by the AI agent. The leading principle was to define a few vanity metrics and push for "more is better".

In the message log, the agent often boasts about the number of tests added, or that code coverage (ugh) is over some arbitrary percentage. We end up with an absolute moloch of unmaintainable code in the name of quality. But hey, the number is going up.

All in all, the project has more code to maintain, most of it largely useless. Tons of tests got added, but the tests that mattered most (Maestro e2e tests that validated the app still works) were forgotten. It had some moments of "goodness", like making sure typechecks are of high quality.

To truly resemble the "redraw this image 1000 times" / "reupload this video 1000 times" test, I think the loop would have to be two-step:

* Implement a fresh project based off of this description

* Write a fresh description based off of the resulting project

This was obviously done in jest; I didn't expect it to improve the quality of the codebase in the ways that I think truly matter. I prompted Claude Code into failure here, and it definitely produced some funny results.

I still use coding agents for my day-to-day development. If anything, it feels like the time spent reviewing AI code was not wasted.

…oh, and the app still works, there are no new features, and just a few new bugs.

...

Read the original on gricha.dev »

8 316 shares, 45 trendiness

Disney making $1 billion investment in OpenAI, will allow characters on Sora AI video generator

The Walt Disney Co. on Thursday an­nounced it will make a $1 bil­lion eq­uity in­vest­ment in OpenAI and will al­low users to make videos with its copy­righted char­ac­ters on its Sora app.

OpenAI launched Sora in September, and it al­lows users to cre­ate short videos by sim­ply typ­ing in a prompt.

As part of the startup's new three-year licensing agreement with Disney, Sora users will be able to make content with more than 200 characters across Disney, Marvel, Pixar and Star Wars starting next year.

"The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works," Disney CEO Bob Iger said in a statement.

As part of the agree­ment, Disney said it will re­ceive war­rants to pur­chase ad­di­tional eq­uity and will be­come a ma­jor OpenAI cus­tomer.

Disney is de­ploy­ing OpenAI’s chat­bot, ChatGPT, to its em­ploy­ees and will work with its tech­nol­ogy to build new tools and ex­pe­ri­ences, ac­cord­ing to a re­lease.

When Sora launched this fall, the app rock­eted to the top of Apple’s App Store and gen­er­ated a storm of con­tro­versy as users flooded the plat­form with videos of pop­u­lar brands and char­ac­ters.

The Motion Picture Association said in October that OpenAI needed to take "immediate and decisive action" to prevent copyright infringement on Sora.

OpenAI CEO Sam Altman said "more granular control" over character generation was coming, according to a blog post following the launch.

...

Read the original on www.cnbc.com »

9 301 shares, 49 trendiness

- YouTube

...

Read the original on www.youtube.com »

10 245 shares, 16 trendiness

The Linux kernel is just a program

Part 1 of 1 in Linux Inside Out

Most books and courses in­tro­duce Linux through shell com­mands, leav­ing the ker­nel as a mys­te­ri­ous black box do­ing magic be­hind the scenes.

In this post, we will run some ex­per­i­ments to de­mys­tify it: the Linux ker­nel is just a bi­nary that you can build and run.

The experiments are designed so you can follow along if you have a Linux PC. But this is completely optional; the goal is to build a mental model of how Linux works and to see how the components of the system fit together.

But first let’s talk about what a ker­nel is.

Computers are built from CPUs, mem­ory, and other de­vices, like video cards, net­work cards, key­boards, dis­plays, and a lot of other stuff.

These de­vices can be man­u­fac­tured by dif­fer­ent com­pa­nies, have dif­fer­ent ca­pa­bil­i­ties, and can be pro­grammed dif­fer­ently.

An operating system kernel provides an abstraction for using these devices and resources conveniently and securely. Without one, writing programs would be much more difficult: we would need to write the low-level code for every device our program needs, and it likely wouldn't work on other computers. In short, the kernel:

* gives us APIs to in­ter­act with the hard­ware over a uni­fied in­ter­face

* man­ages how pro­grams can use the com­put­er’s CPU, mem­ory and other re­sources

* provides access control over what resources a program can access

* pro­vides ad­di­tional fea­tures like fire­walls, file sys­tems, mech­a­nisms for pro­grams to com­mu­ni­cate, etc.

The clos­est anal­ogy from the soft­ware de­vel­op­ment world is that the ker­nel is a run­time for our com­puter.

On most Linux dis­tri­b­u­tions we will find the ker­nel un­der the /boot di­rec­tory. Let’s en­ter the di­rec­tory and list its con­tents:

~$ cd /boot

/boot$ ls -1

System.map-6.12.43+deb13-amd64

System.map-6.12.48+deb13-amd64

con­fig-6.12.43+de­b13-amd64

con­fig-6.12.48+de­b13-amd64

efi

grub

ini­trd.img-6.12.43+de­b13-amd64

ini­trd.img-6.12.48+de­b13-amd64

vm­linuz-6.12.43+de­b13-amd64

vm­linuz-6.12.48+de­b13-amd64

We see a few files here, but the one we are look­ing for is vm­linuz-6.12.48+de­b13-amd64. This sin­gle file is the ker­nel.

If you ever won­dered what this name means:

* 6.12.48+deb13: this is the ker­nel ver­sion, and the dis­tri­b­u­tion (Debian 13)

* amd64: this is the ar­chi­tec­ture of our sys­tem

Note: Different dis­tri­b­u­tions may use slightly dif­fer­ent nam­ing con­ven­tions. vm­linuz is com­monly the bootable com­pressed ker­nel im­age.

In our first ex­per­i­ment we will copy this ker­nel into an­other di­rec­tory and run it.

First, let’s cre­ate a di­rec­tory and copy the ker­nel there.

Note: Your ker­nel ver­sion might dif­fer, re­mem­ber to check it be­fore the cp com­mand.

/boot$ cd

~$ mkdir linux-in­side-out

~$ cd linux-in­side-out/

~/linux-inside-out$ cp /boot/vmlinuz-6.12.48+deb13-amd64 .

~/linux-inside-out$ ls -lh

to­tal 12M

-rw-r--r-- 1 zsoltkacsandi zsoltkacsandi 12M Dec 1 09:44 vmlinuz-6.12.48+deb13-amd64

Then in­stall some tools that are needed for this ex­per­i­ment.

We will use QEMU, a vir­tual ma­chine em­u­la­tor, be­cause our ker­nel needs some­thing that works like a com­puter, and be­cause we do not want to mess up our orig­i­nal op­er­at­ing sys­tem.

~$ sudo apt up­date

~$ sudo apt in­stall -y qemu-sys­tem-x86 qemu-utils

Then start a vir­tual ma­chine with our ker­nel:

~/linux-inside-out$ qemu-sys­tem-x86_64 \

-m 256M \

-kernel vm­linuz-6.12.48+de­b13-amd64 \

-append "console=ttyS0" \

-nographic

The out­put should be some­thing like this:

SeaBIOS (version 1.16.3-debian-1.16.3-2)

iPXE (https://​ipxe.org) 00:03.0 CA00 PCI2.10 PnP PMM+06FC6D30+06F06D30 CA00

Booting from ROM

Probing EDD (edd=off to dis­able)… o

[ 0.000000] Linux ver­sion 6.12.48+deb13-amd64 ([email protected]) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, )

[ 0.000000] Command line: con­sole=ttyS0

[ 2.055627] RAS: Correctable Errors col­lec­tor ini­tial­ized.

[ 2.161843] clk: Disabling un­used clocks

[ 2.162218] PM: genpd: Disabling un­used power do­mains

[ 2.179652] /dev/root: Can’t open block­dev

[ 2.180871] VFS: Cannot open root device "" or unknown-block(0,0): error -6

[ 2.181038] Please append a correct "root=" boot option; here are the available partitions:

[ 2.181368] List of all bdev filesys­tems:

[ 2.181477] fuse­blk

[ 2.181516]

[ 2.181875] Kernel panic - not sync­ing: VFS: Unable to mount root fs on un­known-block(0,0)

[ 2.182495] CPU: 0 UID: 0 PID: 1 Comm: swap­per/​0 Not tainted 6.12.48+deb13-amd64 #1 Debian 6.12.48-1

[ 2.182802] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014

[ 2.186426] Kernel Offset: 0x30e00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)

[ 2.186949] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---

You can exit by pressing Ctrl+A, then X.

So, we’ve just started the same ker­nel that is run­ning on our com­puter. It took 2 sec­onds, it printed a lot of log mes­sages, then pan­icked.

This panic is not a bug - it is actually expected. Once our kernel initializes itself, it tries to mount the root filesystem and hand over control to a program called init.

So let’s give one to it.

We will write a sim­ple pro­gram that we will use as an init pro­gram.

We will use Golang for two rea­sons:

* it has an easy-to-learn syntax that readers coming from different backgrounds can pick up and understand quickly

* it can build a sta­t­i­cally-linked bi­nary with no C de­pen­den­cies, mak­ing it portable and per­fect for our min­i­mal ex­per­i­ment

First let’s in­stall Golang, and cre­ate a new pro­ject called init:

~/linux-inside-out$ sudo apt -y in­stall golang

~/linux-inside-out$ mkdir init

~/linux-inside-out$ cd init

~/linux-inside-out/init$

~/linux-inside-out/init$ go mod init init

go: cre­at­ing new go.mod: mod­ule init

go: to add mod­ule re­quire­ments and sums:

go mod tidy

Then create the Go source file for our init program:

package main

import (
    "fmt"
    "os"
    "time"
)

func main() {
    fmt.Println("Hello from Go init!")
    fmt.Println("PID:", os.Getpid()) // printing the PID (process ID)

    for i := 0; ; i++ { // every two seconds printing the text "tick {tick number}"
        fmt.Println("tick", i)
        time.Sleep(2 * time.Second)
    }
}

...

Read the original on serversfor.dev »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.