10 interesting stories served every morning and every evening.
Last week, I started working on a new side-project. It’s a standard React app partly made up of run-of-the-mill CRUD views—a perfect fit for LLM-assisted programming. I reasoned that if I could get an LLM to quickly write the boring code for me, I’d have more time to focus on the interesting problems I wanted to solve.
I’ve pretty much settled on Claude Code as my coding assistant of choice, but I’d been hearing great things about Google’s Gemini 3 Pro. Despite my aversion to Google products, I decided to try it out on my new codebase.
I already had Gemini CLI installed, but that only gave me access to Gemini 2.5 with rate limits. I wanted to try out Gemini 3 Pro, and I wanted to avoid being rate limited. I had some spare cash to burn on this experiment, so I went looking for ways to pay for a Gemini Pro plan, if such a thing existed.
Thus began my grand adventure in trying to give Google my money.
The name “Gemini” is so overloaded that it barely means anything. Based on the context, Gemini could refer to:
To make things even more confusing, Google has at least three different products just for agentic coding: Gemini Code Assist (Gemini CLI is a part of this suite of products), Jules, and Antigravity.
And then there’s a bunch of other GenAI stuff that is powered by Gemini but doesn’t have the word Gemini in the name: Vertex AI Platform, Google AI Studio, NotebookLM, and who knows what else.
I just wanted to plug my credit card information into a form and get access to a coding assistant. Instead, I was dunked into an alphabet soup of products that all seemed to do similar things and, crucially, didn’t have any giant “Buy Now!” buttons for me to click.
In contrast, both Anthropic and OpenAI have two primary ways you can access their products: via their consumer offerings at claude.ai and chatgpt.com respectively, or via API credits that you can buy through their respective developer consoles. In each case, there is a form field where you can plug in your credit card details, and a big, friendly “Buy Now!” button to click.
After half an hour of searching the web, I did the obvious thing and asked the free version of Gemini (the chatbot, not one of those other Geminis) what to do:
How do I pay for the pro version of Gemini so i can use it in the terminal for writing code? I specifically want to use the Gemini 3 Pro model.
It thought for a suspiciously long time and told me that Gemini 3 Pro required a developer API key to use. Since the new model is still in preview, it’s not yet available on any of the consumer plans. When I asked follow-up questions about pricing, it told me that “Something went wrong”. Which translates to: we broke something, but we won’t tell you how to fix it.
So I asked Claude for help. Between the two LLMs, I was able to figure out how to create an API key for the Gemini I wanted.
Google AI Studio is supposed to be the all-in-one dashboard for Google’s generative AI models. This is where you can experiment with model parameters, manage API keys, view logs, and manage billing for your projects.
I logged into Google AI Studio and created a new API key. This part was pretty straightforward: I followed the on-screen instructions and had a fresh new key housed under a project in a few seconds. I then verified that my key was working with Gemini CLI.
It worked! Now all that was left to do was to purchase some API credits. Back in Google AI Studio, I saw a link titled “Set up billing” next to my key. It looked promising, so I clicked it.
That’s where the fun really began.
The “Set up billing” link kicked me out of Google AI Studio and into Google Cloud Console, and my heart sank. Every time I’ve logged into Google Cloud Console or AWS, I’ve wasted hours upon hours reading outdated documentation, gazing in despair at graphs that make no sense, going around in circles from dashboard to dashboard, and feeling a strong desire to attain freedom from this mortal coil.
Turns out I can’t just put $100 into my Gemini account. Instead, I must first create a Billing Account. After I’ve done that, I must associate it with a project. Then I’m allowed to add a payment method to the Billing Account. And then, if I’m lucky, my API key will turn into a paid API key with Gemini Pro privileges.
So I did the thing. The whole song and dance. Including the mandatory two-factor OTP verification that every Indian credit card requires. At the end of the process, I was greeted with a popup telling me I had to verify my payment method before I’d be allowed to use it.
Wait. Didn’t I just verify my payment method? When I entered the OTP from my bank?
Nope, turns out Google hungers for more data. Who’d have thunk it?
To verify my payment method for reals, I had to send Google a picture of my government-issued ID and the credit card I’d just associated with my Billing Account. I had to ensure all the numbers on my credit card were redacted by manually placing black bars on top of them in an image editor, leaving only my name and the last four digits of the credit card number visible.
This felt unnecessarily intrusive. But by this point, I was too deep in the process to quit. I was invested. I needed my Gemini 3 Pro, and I was willing to pay any price.
The upload form for the government ID rejected my upload twice before it finally accepted it. It was the same exact ID every single time, just in different file formats. It wanted a PNG file. Not a JPG file, nor a PDF file, but a PNG file. Did the upload form mention that in the instructions? Of course not.
After jumping through all these hoops, I received an email from Google telling me that my verification would be completed in a few days.
A few days? Nothing to do but wait, I suppose.
At this point, I closed all my open Cloud Console tabs and went back to work. But when I was fifteen minutes into writing some code by hand like a Neanderthal, I received a second email from Google telling me that my verification was complete.
So for the tenth time that day, I navigated to AI Studio. For the tenth time I clicked “Set up billing” on the page listing my API keys. For the tenth time I was told that my project wasn’t associated with a billing account. For the tenth time I associated the project with my new billing account. And finally, after doing all of this, the “Quota tier” column on the page listing my API keys said “Tier 1” instead of “Set up billing”.
Wait, Tier 1? Did that mean there were other tiers? What were tiers, anyway? Was I already on the best tier? Or maybe I was on the worst one? Not important. The important part was that I had my API key and I’d managed to convince Google to charge me for it.
I went back to the Gemini CLI, ran the /settings command, and turned on the “Enable experimental features” option. I ran the /models command, which told me that Gemini 3 Pro was now available.
When I tried sending a message to the LLM, it failed with this 403 error:
{
  "error": {
    "message": "{\n  \"error\": {\n    \"code\": 403,\n    \"message\": \"The caller does not have permission\",\n    \"status\": \"PERMISSION_DENIED\"\n  }\n}\n",
    "code": 403,
    "status": "Forbidden"
  }
}
Is that JSON inside a string inside JSON? Yes. Yes it is.
To figure out if my key was even working, I tried calling the Gemini API from JavaScript, reproducing the basic example from Google’s own documentation.
No dice. I ran into the exact same error.
I then tried talking to Gemini 3 Pro using the Playground inside Google AI Studio. It showed me a toast message saying “Failed to generate content. Please try again.” The chat transcript said “An internal error has occurred.”
At this point I gave up and walked away from my computer. It was already 8pm. I’d been trying to get things to work since 5pm. I needed to eat dinner, play Clair Obscur, and go to bed. I had no more time to waste and no more fucks to give.
Just as I was getting into bed, I received an email from Google with this subject line:
Your Google Cloud and APIs billing account XXXXXX-XXXXXX-XXXXXX is in good standing at this time.
With the message inside saying:
Based on the information you provided and further analysis by Google, we have reinstated your billing account XXXXXX-XXXXXX-XXXXXX. Your account is in good standing, and you should now have full access to your account and related Project(s) and Service(s).
I have no idea what any of this means, but Gemini 3 Pro started working correctly after I received this email. It worked in the Playground, directly by calling the API from JavaScript, and with Gemini CLI.
Problem solved, I guess. Until Google mysteriously decides that my account is no longer in good standing.
This was such a frustrating experience that I still haven’t tried using Gemini with my new codebase, nearly a week after I made all those sacrifices to the Gods of Billing Account.
I understand why the process for getting a Gemini API key is so convoluted. It’s designed for large organizations, not individual developers trying to get work done; it serves the bureaucracy, not the people doing the work; it’s designed for maximum compliance with government regulations, not for efficiency or productivity.
Google doesn’t want my money unless I’m an organization that employs ten thousand people.
In contrast to Google, Anthropic and OpenAI are much smaller and much more nimble. They’re able to make the process of setting up a developer account quick and easy for those of us who just want to get things done. Unlike Google, they haven’t yet become complacent. They need to compete for developer mindshare if they are to survive a decade into the future. Maybe they’ll add the same level of bureaucracy to their processes as they become larger, but for now they’re fairly easy to deal with.
I’m still going to try using Gemini 3 Pro with Gemini CLI as my coding assistant, but I’ll probably cap the experiment to a month. Unless Gemini 3 Pro is a massive improvement over its competitors, I’ll stick to using tools built by organizations that want me as a customer.
...
Read the original on ankursethi.com »
Patterns.dev is a free online resource on design, rendering, and performance patterns for building powerful web apps with vanilla JavaScript or modern frameworks.
Share properties among many objects of the same type
Use observables to notify subscribers when an event occurs
Split up your code into smaller, reusable pieces
Add functionality to objects or classes without inheritance
Use a central mediator object to handle communication between components
Use a factory function in order to create objects
An introduction to animating page transitions using the View Transitions API and libraries
Learn how to optimize your loading sequence to improve how quickly your app is usable
Import code that has been exported by another module
Import parts of your code on demand
Load non-critical components when they are visible in the viewport
Load non-critical resources when a user interacts with UI requiring it
Inform the browser of critical resources before they are discovered
Fetch and cache resources that may be requested some time soon
Reduce the performance impact third-party scripts have on your site.
Reduce the time needed to transfer scripts over the network.
Enforce separation of concerns by separating the view from the application logic
Pass reusable logic down as props to components throughout your application
Use functions to reuse stateful logic among multiple components throughout the app
Create multiple components that work together to perform a single task
Render your application’s UI on the client
Generate HTML to be rendered on the server in response to a user request
Deliver pre-rendered HTML content that was generated when the site was built
Update static content after you have built your site
Delay loading JavaScript for less important parts of the page
Generate HTML to be rendered on the server in response to a user request
Server Components complement SSR, rendering to an intermediate abstraction without needing to add to the JavaScript bundle
With Next.js, there are several components that can help improve Core Web Vitals metrics
A comprehensive guide to building React apps in 2025/2026, covering frameworks, build tools, routing, state management, and AI integration
Self-contained modules that couple markup (HTML), logic (JS), and styles (CSS) within them
Functions to encapsulate and reuse stateful logic among multiple components
Enforce separation of concerns by separating the view from the application logic
Dynamically switch between components with the special component element
Have nested components access data without using props
Components that don’t render their own markup
Compile-time syntactic sugar for using the Composition API
We publish patterns, tips and tricks for improving how you architect apps for free. Keep in mind, design patterns are descriptive, not prescriptive. They can guide you when facing a problem other developers have encountered many times before, but are not a blunt tool for jamming into every scenario. Patterns.dev aims to be a catalog of patterns (for increasing awareness) rather than a checklist (what you must do).
Design patterns are a fundamental part of software development, as they provide typical solutions to commonly recurring problems in software design. The site covers the implementation, benefits and pitfalls of common design patterns using ES2017+; React-specific design patterns and their possible modification and implementation using React Hooks; and many more web performance patterns and optimizations that can help improve your modern web app!
A common critique of design patterns is that they needlessly add complexity.
Our perspective is that patterns are valuable for solving specific problems, often helping to communicate commonalities in code problems to humans. If a project doesn’t have those problems, there isn’t a need to apply them. Patterns can also be very language or framework-specific (e.g. React), which can often mean thinking beyond the scope of just the original GoF design patterns.
We help you scale your webapps for performance
Learn about web performance patterns for loading your code more efficiently. Unsure how to think about modern approaches to loading or rendering user-experiences? We’ve got you covered.
...
Read the original on www.patterns.dev »
A new toll applied to cars driving in parts of New York City has led to a measurable drop in traffic, and with it, a 22 percent decline in particulate pollution, according to a new study.
Congestion pricing came into effect in January, with cars paying $9 to drive through busy parts of Manhattan during peak hours. In the first six months of the program, traffic in the congestion zone dropped by 11 percent, accidents by 14 percent, and complaints of excessive honking or other noise by 45 percent, officials said.
A new study from Cornell has now tallied the impact on particulate pollution. Particulates issued from tailpipes can aggravate asthma and heart disease and increase the risk of lung cancer and heart attack. Globally, they are a leading risk factor for premature death.
Analyzing data on air quality, traffic, and weather conditions, researchers determined that in the first half of this year, particulate pollution was down 22 percent in parts of Manhattan affected by congestion pricing.
The decline seen in New York was greater than in other cities with congestion pricing, such as Stockholm and London, researchers note. And the effect extended beyond Lower Manhattan. Pricing led to a drop in pollution across the greater metropolitan area, according to the study, published in the journal npj Clean Air.
“It’s really exciting to me that air quality improved throughout the entire metro area,” said lead author Timothy Fraser, of Cornell University. “This tells us that congestion pricing didn’t simply relocate air pollution to the suburbs by rerouting traffic. Instead, folks are likely choosing cleaner transportation options altogether, like riding public transportation or scheduling deliveries at night. This thins traffic and limits how smog compounds when many cars are on the road.”
...
Read the original on e360.yale.edu »
I needed a restaurant recommendation, so I did what every normal person would do: I scraped every single restaurant in Greater London and built a machine-learning model.
It started as a very reasonable problem. I was tired of doom-scrolling Google Maps, trying to disentangle genuinely good food from whatever the algorithm had decided to push at me that day. Somewhere along the way, the project stopped being about dinner and became about something slightly more unhinged: how digital platforms quietly redistribute economic survival across cities.
Because once you start looking at London’s restaurant scene through data, you stop seeing all those cute independents and hot new openings. You start seeing an algorithmic market - one where visibility compounds, demand snowballs, and who gets to survive is increasingly decided by code.
The public story of Google Maps is that it passively reflects “what people like.” More stars, more reviews, better food. But that framing obscures how the platform actually operates. Google Maps is not just indexing demand - it is actively organising it through a ranking system built on a small number of core signals that Google itself has publicly acknowledged: relevance, distance, and prominence.
“Relevance” is inferred from text matching between your search query and business metadata. “Distance” is purely spatial. But “prominence” is where the political economy begins. Google defines prominence using signals such as review volume, review velocity, average rating, brand recognition, and broader web visibility. In other words, it is not just what people think of a place - it is how often people interact with it, talk about it, and already recognise it.
Visibility on these ranked lists determines foot traffic. Foot traffic determines how quickly reviews accumulate. Review accumulation then feeds directly back into the prominence signal. The system compounds. Early discovery generates demand. Demand generates data. Data generates future discovery. This creates a cumulative-advantage dynamic that looks remarkably similar to the way capital compounds in financial markets. This is essentially Robert Merton’s Matthew Effect applied to kebab shops - ‘unto every one that hath shall be given.’
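To make the compounding concrete, here is a toy simulation (mine, not from the post; the visibility and review-rate rules are invented purely for illustration). Two venues of identical quality start with a tiny difference in review counts, and because visibility is allocated in proportion to existing reviews, the early head start keeps growing:

import random

def simulate(days=365, seed=7):
    # Toy cumulative-advantage loop: visibility ~ share of existing reviews,
    # and new reviews arrive in proportion to visibility.
    random.seed(seed)
    reviews = {"venue_a": 12, "venue_b": 10}  # near-identical starting points
    daily_searchers = 100
    review_rate = 0.05                        # chance a visit becomes a review
    for _ in range(days):
        total = sum(reviews.values())
        shares = {v: c / total for v, c in reviews.items()}  # prominence-style ranking
        for venue, share in shares.items():
            visits = int(daily_searchers * share)
            reviews[venue] += sum(random.random() < review_rate for _ in range(visits))
    return reviews

print(simulate())  # the 12-vs-10 head start compounds into a much larger absolute lead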
This disproportionately rewards chains and already-central venues. Chains benefit from cross-location brand recognition. High-footfall areas generate reviews faster, meaning venues in those zones climb the prominence ranking more quickly even at identical underlying quality. By contrast, new independents face a classic cold-start problem: without reviews they are hard to find, and without being found they struggle to accumulate reviews at all. What looks like neutral consumer choice is therefore better understood as algorithmically mediated market design.
In economics, this dynamic closely resembles the logic of a market maker: an intermediary that does not merely reflect underlying supply and demand, but actively shapes liquidity, matching, and price discovery. Platforms like Google Maps perform an analogous function for local services by controlling visibility rather than prices directly. In the language of digital economics, ranking algorithms act as attention allocators, steering demand toward some firms and away from others.
If Google Maps now acts as a kind of market maker for urban demand, the obvious next question is: what would the city look like without that amplification layer? In other words, how do you separate a restaurant’s intrinsic performance from the visibility effects of the platform itself?
To get at that, I built a machine-learning model - a gradient-boosted decision tree (for the ML crowd: HistGradientBoostingRegressor from scikit-learn) - to predict what a restaurant’s Google rating should be, given only its structural characteristics. This class of model is designed for large, messy, mixed-type tabular data and is particularly good at capturing interaction effects, without me having to specify those by hand. Features include how many reviews it has (log-transformed to reflect diminishing returns to attention), what cuisine it serves, whether it is part of a chain or an independent, its price level, broad venue types (restaurant, café, takeaway, bar), and where it sits in the city via a spatial grid.
Quick aside: for a subset of places I also scraped review text, languages, and photos. But for this first full-city run I stayed within the Google Maps API free tier - partly for reproducibility, partly because previous grid-scraping adventures taught me that cloud bills compound faster than review counts. So, for future versions, more features will only improve things. In particular, who is doing the reviewing matters. A five-star review of an Indian restaurant written in Hindi probably carries a different signal than one written by someone ordering chips with everything. (No judgment of white British people ofc…)
One practical problem I ran into early on is that Google Maps is surprisingly bad at categorising cuisines. A huge share of restaurants are labelled vaguely (“restaurant”, “cafe”, “meal takeaway”), inconsistently, or just incorrectly. So I ended up building a separate cuisine-classification model that predicts cuisine from restaurant names, menu language, and review text where available. In other words, the cuisine filters in the dashboard are not just Google’s tags - they’re machine-learned. This matters more than it might sound: if you misclassify cuisines, you misread diversity, clustering, and who actually competes with whom on the high street. Btw, I briefly considered classifying Pret A Manger as French, just to see if it would make any French people angrier at me than they already are. I didn’t. But I thought about it.
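The post doesn’t describe the cuisine classifier’s internals, so treat this as a minimal baseline sketch of the idea (character n-grams over names and whatever text is available, with made-up training rows and labels), not the author’s actual model:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training rows: venues whose cuisine label we trust (e.g. hand-checked),
# with text = name + menu snippets + review text where available.
texts = [
    "Bella Napoli wood-fired pizza and fresh pasta",
    "Lahore Karahi grill, naan and biryani",
    "Le Petit Bistro steak frites and vin rouge",
]
labels = ["italian", "south_asian", "french"]

cuisine_clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # robust to short, messy names
    LogisticRegression(max_iter=1000),
)
cuisine_clf.fit(texts, labels)

# Toy data, so the prediction is purely illustrative.
print(cuisine_clf.predict(["Pret A Manger coffee and sandwiches"]))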
Before any modelling happens, all features go through a standard preprocessing pipeline - imputation, encoding, the usual. Crucially, the model is trained only to learn the mapping between observable platform-visible features and ratings. This allows me to generate a counterfactual expected rating for each restaurant - what the platform would typically assign under those structural conditions. The difference between a restaurant’s real rating and this predicted rating is what I call the rating residual. A positive residual means the restaurant performs materially better than its platform baseline would suggest. A negative residual means it underperforms relative to what the algorithm normally rewards. This is not a perfect measure of food quality - but it is a powerful measure of algorithmic mispricing: where social or culinary value diverges from what the platform structurally amplifies.
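As a sketch of that setup (the file and column names here are mine, not from the post, and the real feature set and tuning are richer than this):

import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder

df = pd.read_csv("london_restaurants.csv")        # hypothetical scrape output

df["log_reviews"] = np.log1p(df["review_count"])  # diminishing returns to attention
categorical = ["cuisine", "venue_type", "grid_cell"]
numeric = ["log_reviews", "is_chain", "price_level"]

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1),
          categorical)],
        remainder="passthrough",
    )),
    # Gradient-boosted trees pick up interactions and tolerate missing values
    # without much hand-holding.
    ("gbdt", HistGradientBoostingRegressor(random_state=0)),
])

X, y = df[categorical + numeric], df["rating"]
model.fit(X, y)

# Counterfactual "platform baseline" rating, and the residual used to flag
# venues that out- or under-perform what the algorithm structurally rewards.
df["expected_rating"] = model.predict(X)
df["rating_residual"] = df["rating"] - df["expected_rating"]
print(df.nlargest(10, "rating_residual")[["name", "rating", "rating_residual"]])

In practice you would want out-of-fold predictions (e.g. cross_val_predict) so that each venue’s expected rating is not influenced by its own observed rating.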
One caveat: some restaurants pay for promoted pins or local-search ads. Because paid visibility isn’t publicly disclosed, I can’t estimate how many - which is itself a sign of how opaque platform influence has become. My residuals may partly reflect ad spend I can’t observe.
To summarise this, I built the London food dashboard. The dashboard currently allows users to search by name and filter by underrated gems (identified by my machine learning algorithm), cuisine, borough, price level, min rating, and review volume. It is still very much a version-one prototype - but it is already a working microscope into London’s algorithmic food economy.
If you want to explore it yourself, you can find it on my personal website at: laurenleek.eu/food-map .
Naturally, I immediately stress-tested it on my favourite part of London: Islington (maybe all this promo - also in my previous Substack on UK segregation - makes me qualify for a council tax rebate? - I’m looking at you Jeremy Corbyn…). I switched on my “underrated gems” filter - that’s the ML residual at work - set a minimum rating and review count, excluded the eye-wateringly expensive options, and let the bubbles guide me. Bigger, darker bubbles mean places my model thinks the algorithm is undervaluing.
And just like that, I had dinner plans. Do try it yourself.
Btw, this is very much still a beta version - which means bugs, blind spots, and lots of room to grow. If something looks odd, missing, or wrong, please leave feature ideas and suggestions in the comments or via the comments on my website. Unlike the VS Code GitHub tracker and its 13.8k open issues, I really do read them.
But restaurants don’t fail alone - they fail in ecosystems. I also wanted to understand what happens when platform dynamics scale up from restaurants to entire neighbourhood food ecosystems. So I added a second modelling layer.
First, I aggregate restaurants into small spatial cells (the hexagons you see on the maps - because squares are for people who haven’t thought hard enough about edge effects) and compute summary features for each area: restaurant density, mean rating, mean residual, total reviews, chain share, cuisine entropy, and price level. I then standardise these and run principal component analysis (PCA) to compress them into a single continuous hub score that captures overall “restaurant ecosystem strength” in one dimension. Finally, I apply K-means clustering to the same feature space to classify areas into four structural types: elite, strong, everyday, and weak hubs.
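In code, that second layer is compact (hex_features stands in for the per-hexagon summary table described above; this is a sketch of the described steps, not the author’s exact script):

from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

cols = ["restaurant_density", "mean_rating", "mean_residual", "total_reviews",
        "chain_share", "cuisine_entropy", "price_level"]

X = StandardScaler().fit_transform(hex_features[cols])

# One continuous "ecosystem strength" axis (the sign of the first principal
# component is arbitrary, so flip it if higher should mean stronger).
hex_features["hub_score"] = PCA(n_components=1).fit_transform(X).ravel()

# Four discrete structural types, labelled after inspecting the centroids
# (elite / strong / everyday / weak).
hex_features["hub_type"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)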
At first glance, the patterns look comfortingly familiar. Central London dominates. Of course it does. But what matters is not just where the hubs are - it’s what kind of hubs they are. Using the full hub score rather than raw ratings alone, I identify the five most structurally powerful restaurant hubs in London. They are the places where density, algorithmic attention, independent survival, and consumer spending power all line up at once. They are labeled on the maps. I am deliberately refusing to rank them loudly in prose in order to avoid starting neighbourhood wars at scale (and to not disappoint Islington) - but the visual story is already extremely clear.
Overlaying this with the cuisine density panels reveals something even sharper. London’s culinary diversity is not evenly distributed across its platform economy. Migrant cuisines cluster strongly in parts of the city where algorithmic visibility is structurally weaker. Italian, Indian, Turkish, Chinese, Thai, British, Japanese, French, American, and fish-and-chips all trace distinct settlement histories, labour networks, retail formats, and relationships to capital and rent. Some cuisines form long, contiguous corridors. Others appear as punctuated clusters tied to specific high streets or income brackets.
Cuisine diversity, in other words, is not just about taste. It is about where families settled, which high streets remained affordable long enough for a second generation to open businesses, and which parts of the city experienced displacement before culinary ecosystems could mature. (If this part especially speaks to you, I go much deeper into it in Food for thought: local restaurant diversity meets migration).
The Take-Away and Some Unwanted Policy Advice
This project started as a search problem and ended as something more. The most important result isn’t which neighbourhood tops the rankings - it’s the realisation that platforms now quietly structure survival in everyday urban markets. London’s restaurant scene is no longer organised by taste alone. It is organised by visibility that compounds, rent that rises when discovery arrives, and algorithms that allocate attention long before consumers ever show up. What looks like “choice” is increasingly the downstream effect of ranking systems.
For policy, that shifts the frame. If discovery now shapes small-business survival, then competition, fairness, and urban regeneration can no longer ignore platform ranking systems. Councils can rebuild streets and liberalise licensing all they like - but algorithmic invisibility can still leave places economically stranded. Platform transparency and auditability are no longer niche tech debates; they are quietly becoming tools of local economic policy. At minimum, ranking algorithms with this much economic consequence should be auditable. We audit financial markets. We should audit attention markets too.
For a navigation app, Google maps has a remarkable amount of power.
Just saying.
I’m also working on other maps (including a map of the best cycling and running routes with excellent cafés along the way, because I have needs). More broadly, I’m investing more and more time into building higher-quality public data projects. If you have an idea you’d like to see built - pitch it to me. And if you enjoy this kind of work, you can always Buy Me A Coffee or subscribe to help fund the next round of over-engineered maps.
...
Read the original on laurenleek.substack.com »
Running large language models locally has always been a game of compromise. You either spend $10,000+ on consumer GPUs that can barely handle 70B-parameter models, or you dream about enterprise hardware you’ll never afford. The Grace-Hopper platform—Nvidia’s unified CPU-GPU superchip architecture—represents the kind of dream-rig AI infrastructure LocalLlama drools over, with systems typically costing well over $100,000 and exclusively available to data centers and research institutions.
So when I stumbled across a Grace-Hopper system being sold for 10K euro on Reddit, my first thought was “obviously fake.” My second thought was “I wonder if he’ll take 7.5K euro?”.
This is the story of how I bought enterprise-grade AI hardware designed for liquid-cooled server racks that was converted to air cooling, re-converted it back to water cooling, survived multiple near-disasters (including GPUs reporting temperatures of 16 million degrees), and ended up with a desktop that can run 235B parameter models at home. It’s a tale of questionable decisions, creative problem-solving, and what happens when you try to turn datacenter equipment into a daily driver.
If you’ve ever wondered what it takes to run truly large models locally, or if you’re just here to watch someone disassemble $80,000 worth of hardware with nothing but hope and isopropanol, you’re in the right place.
Early this year, while browsing r/LocalLLaMA/new, I came across a ridiculously good deal. How good? These were the specs for the server offered for 10K euro, and a serious upgrade to my 4x RTX 4090 rig:
UPDATE: Since I bought this, DDR5 RAM prices have become insane. 960GB of fast DDR5 now costs more than what I paid for the whole Grace-Hopper system 🤯
* H100s cost about 30-40,000 euro each, and this system has two of them
* Grace-Hopper NVL2 systems are basically not for sale for consumers anyway!
The Reddit thread explained the reason the system was being sold cheap:
The main reason why is that it is a Frankensystem converted from liquid-cooled to aircooled. Also it is not very pretty and not rackable, because it has a 48V power supply attached. It is originally directly from Nvidia.
I immediately offered to buy it, because why not? If it was a scam, I could always back out, but I wanted to be first in line!
It turns out I live near the seller, and he runs an online shop that sells modified Nvidia server equipment as desktops. It still seemed pretty risky, so I did some research and found a video review of one of his Desktops on Youtube. With the deal now seeming at least plausible, and the seller only a two-hour drive away and agreeing to take cash, it was time to take a Bavarian road trip.
I arrived at a farmhouse in a small forest, and met Bernhard the proprietor of GPTshop.ai. He showed me a nice workshop (plasma cutters, an electronics lab, etc.) from which he fabricates custom cases for the high-end H100 desktops he builds. These desktops seem pretty damn nice, so it’s unfortunate that his webshop gives off shady vibes; the business registration in the Cayman Islands definitely doesn’t help. What I can say though is that this item was heavily discounted, and not the fancy ultra high-end desktops that he usually sells.
Disclaimer: I have zero affiliation with GPTshop.ai beyond handing them a stack of cash and receiving a dust-covered server in return. If this were a sponsored post, they probably wouldn’t let me mention the 16 million degree GPU temperatures or the part where I had to free-solder components while praying to the electronics gods.
The server itself was not in great condition. These things run extremely loud and high-throughput fans, and these had sucked in a lot of dust, coating the mainboard so heavily I couldn’t tell the color of the PCB. However, it booted up and ran OK, so I handed over a wad of cash, strapped it into the backseat of my car with the seatbelt (it weighed ~20 kg), and drove it home.
Did I mention it’s loud? Firing up the system is physically painful. There are 8x Sunon dual-fan modules, and each is as loud as a powerful vacuum cleaner, but with a much higher and more annoying pitch. With all 8 running at full power, hearing protection is necessary - I could hear the system running in my basement with the windows closed from 50 meters away! My wife immediately (and quite fairly) banned its use at home. We both work from home and it was simply too loud for online meetings. But I had other plans anyway…
First things first, I of course quickly decided and then proceeded to strip down the server, after first photo-documenting the various connectors between the various PCBs, modules and mainboard.
The majority of the dust was vacuumed off during disassembly, but there was clearly a lot more under the Grace-Hopper modules. After removing those as well, I decided to go with a full washdown of the mainboard.
I purchased a few litres of Isopropanol, and with a soft brush I went over the whole board a few times to get the remaining fine dust from inside connectors and between SMD-component pins.
I suspected there might also be dust inside the Grace-Hopper modules, but actually, I really just wanted to pop them open to poke around.
The mainboard went on my heated floor to dry for a week, while I moved on to replacing the cooling system.
I had looked into building a custom water-cooling block, but I was worried about leaks. Then I found cheap all-in-one water-cooling systems on sale for ~40 euro each. Two per GH200 module would be sufficient, so I carefully measured the dimensions of the GPU die and CPU, as well as screw locations, and threw those into Fusion 360 to model up an adapter block.
I have a Bambu X1, which came in very handy for prototyping the adapter blocks. The tolerances have to be very tight, so I printed several cut-away versions to make sure there was solid contact to the bare GPU die, and a safe margin from contact to fragile parts.
The parts were then sent for CNC milling, and were delivered as the mainboard was finished drying. After using yet more isopropanol to clean off the machining oil, they were mounted without much fuss.
My go-to material for this kind of project is ProfilAlu from eBay. It’s cheap, stiff, and delivered pre-cut for assembly. I put together a design in Fusion 360, and had the parts in a few days. The various mounts however were much more work. I needed to design a few dozen custom mounts for the various PCBs and air-filter fixings; this used up a few kilos of filament to get things just right.
I have discovered that one of the most terrifying sounds you will ever hear is a ‘pop’ followed by a ‘fizzle’ coming from the $80,000 mainboard you just worked on. The smell of magic smoke moments later generates more of a sense of dread.
The idea was simple enough: I have 8 powerful fans that each must draw a huge amount of current and run at 12V. At the same time, I have four water cooling systems that also run at 12V. Simple, right? I swap the regular consumer 3-pin fan connector from the Arctic cooler with the weird server fan connector, and I can run them off the mainboard with fan speed control!
Problem 1. What the heck were these mainboard fan connectors? They looked like tiny Molex, but I didn’t recognize them. I think I finally found some for sale, but they were ~20 euro each and I have principles! So, I mapped out the wiring and with some snipping and soldering, I had adapters made and the system rebuilt. Then came the pop and fizzle… My estimations on the current draw must have been a bit off.
Problem 2. After disassembling the fancy adapter I just made and rewiring the fans, I found out that several fans didn’t work any more. Hmmmm. Swapping the various fans around made it clear: some of the mainboard fan headers weren’t working. I grabbed my DIY thermal camera (a topic for another blog post), and looked all over the board, before spotting what looked like a warm MOSFET (basically a switch). I googled the markings, but no results.
Problem 3. I needed a new way to power the 12V AIO Water Coolers. The main power supply provides 48V at 62.5 Amps, which seemed a bit high, and I wasn’t ready to run them in series after the last small ‘incident’. I picked up a cheapy 12V-5A power supply from Amazon, because ‘next day delivery’, and it was under 10 euro. When this arrived, my cooling system was operational again!
The system didn’t start to boot anymore. Checking the logs, I saw 6 critical errors, one for each dead fan driver among the 8 pairs:
With the fans removed, the BMC (Baseboard Management Controller) immediately panicked, and shut down the mainboard to prevent thermal damage, even with the water coolers in place. So, I disabled the fan-check subsystem.
Great! I could start the boot process, and even reach login! But only about 1 time in 4… Not optimal. And even logged in, the server would crash within 2 minutes.
Looking into the BMC logs, I saw:
* System powers off within ~30 seconds of successful boot
But why?!!? I had shut down the hardware monitoring.
Warning: Your GPU should not reach 16,777,214 Celsius during boot. Imagine what would happen under load!
This took some time to debug, as I was quite sure the sensors could not physically handle reading temperatures over 16 million Celsius… But then I noticed something interesting about that specific number:
This is 2²⁴ - 2, which is suspiciously close to the maximum value of a 24-bit unsigned integer. In the hardware world, this is the equivalent of a sensor throwing up its hands and screaming “I have no idea what’s happening!” When hardware can’t read a value properly—whether due to a loose connection, damaged circuit, or initialization failure—it often returns the maximum (or near-maximum) representable value. It’s like the digital version of a shrug.
The logs confirmed this theory: seeing 1.67772e+07 (16,777,214) wasn’t evidence that my GPU had achieved nuclear fusion temperatures 🔥—it was evidence that the temperature sensor had simply stopped working. And if a sensor error is intermittent, the most likely culprit is a loose connection or physical damage.
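The arithmetic behind that hunch is easy to check (the cutoff below is my own, purely illustrative):

SENSOR_MAX_24BIT = 2**24 - 1      # 16_777_215: all ones in a 24-bit register
reading = 16_777_214              # the "temperature" from the logs, i.e. 2**24 - 2

def looks_like_sensor_fault(value, slack=16):
    # Readings parked at the very top of the 24-bit range are almost certainly
    # "sensor not readable" sentinels, not physical temperatures.
    return value >= SENSOR_MAX_24BIT - slack

print(reading == 2**24 - 2, looks_like_sensor_fault(reading))  # True True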
After spending way too long pursuing software solutions (because who wants to disassemble everything again?), I finally accepted the inevitable and broke out the screwdrivers.
I happened to have bought a new microscope earlier this year, and it turned out to be the perfect tool for diagnosing and fixing the issue. Near one of the modules, I found some damaged surface mount components. The damage must have happened after cleaning, probably during the reassembly of the modules with the copper adapters. They weigh over 2 kg, so a slight bump would have easily caused this damage. Amazingly, the tiny components were still attached to the traces, and so I could measure them easily: a 100 nF capacitor, and 4.7k resistor (both of which I had on-hand, as they are standard values for decoupling circuits). The bad news? I had huge “0805” sized parts (2mm long), these were tiny “0402” (1mm long). And one of the traces was just gone.
With some very fiddly soldering, and scratching off the solder mask on the PCB to expose more trace, I was able to ‘free solder’ the parts into a wonderful 3D sculpture which was then liberally coated in UV-curing mask resin, set, and then held in place with sticky tape. Very professional. After reassembly, the system booted smoothly.
* Cool-looking mesh to protect the water-cooling radiators and dust filters
Getting the actual GPU working was also painful, so I’ll leave the details here for future adventurers:
That’s what you’re here for, maybe? I have only just started, but after compiling the latest Llama.cpp version using 144 cores in 90 seconds, here are some benchmarks on larger LLMs:
This is pretty unoptimized, but it’s looking promising so far! During the LLM tests I hit around 300W per GPU, far from the 900W max.
Here’s what the entire build actually cost me, from the initial purchase to the final touches:
Not included: hearing protection (absolutely necessary), the microscope I already owned (but proved essential), several failed 3D prints, and the emotional cost of seeing “16,777,214°C” in system logs.
So, was it worth it? I now have a desktop that can run 235B parameter models at home for less than the cost of a single H100. It required disassembling $80,000 worth of enterprise hardware, debugging sensors that reported temperatures approaching the surface of the sun, and free-soldering components under a microscope. Your mileage may vary. Literally: I had to drive two hours to pick this thing up.
...
Read the original on dnhkng.github.io »
Meta has removed or restricted dozens of accounts belonging to abortion access providers, queer groups and reproductive health organisations in the past weeks in what campaigners call one of the “biggest waves of censorship” on its platforms in years.
The takedowns and restrictions began in October and targeted the Facebook, Instagram and WhatsApp accounts of more than 50 organisations worldwide, some serving tens of thousands of people — in what appears to be a growing push by Meta to limit reproductive health and queer content across its platforms. Many of these were from Europe and the UK; however, the bans also affected groups serving women in Asia, Latin America and the Middle East.
Repro Uncensored, an NGO tracking digital censorship against movements focused on gender, health and justice, said that it had tracked 210 incidents of account removals and severe restrictions affecting these groups this year, compared with 81 last year.
Meta denied an escalating trend of censorship. “Every organisation and individual on our platforms is subject to the same set of rules, and any claims of enforcement based on group affiliation or advocacy are baseless,” it said in a statement, adding that its policies on abortion-related content had not changed.
Campaigners say the actions indicate that Meta is taking its Trump-era approach to women’s health and LGBTQ+ issues global. Earlier this year, it appeared to “shadow-ban” or remove the accounts of organisations on Instagram or Facebook helping Americans to find abortion pills. Shadow-banning is when a social media platform severely restricts the visibility of a user’s content without telling the user.
In this latest purge, it blocked abortion hotlines in countries where abortion is legal, banned queer and sex-positive accounts in Europe, and removed posts with even non-explicit, cartoon depictions of nudity.
“Within this last year, especially since the new US presidency, we have seen a definite increase in accounts being taken down — not only in the US, but also worldwide as a ripple effect,” said Martha Dimitratou, executive director of Repro Uncensored.
“This has been, to my knowledge, at least one of the biggest waves of censorship we are seeing,” she said.
Campaigners have accused Meta of being condescending and unresponsive, with the company offering only vague reasons why certain accounts were taken down — and appearing unwilling to engage.
In one email shared with the Guardian, a Meta consultant appears to invite a number of reproductive health organisations to a closed-door online briefing about “the challenges that you are facing with Meta’s content moderation policies”.
The email says the meeting “will not be an opportunity to raise critiques of Meta’s practices or to offer recommendations for policy changes”.
Dimitratou said such closed-door meetings had happened before, saying they “reinforce the power imbalance that allows big tech to decide whose voices are amplified and whose are silenced”.
In another instance, a Meta employee counselled an affected organisation in a personal message to simply move away from the platform entirely and start a mailing list, saying that bans were likely to continue. Meta said it did not send this message.
Meta’s recent takedowns are part of a broader pattern of the company purging accounts, and then — at times — appearing to backtrack after public pressure, said Carolina Are, a fellow at Northumbria University’s Centre for Digital Citizens.
“It wouldn’t be as much of a problem if platforms’ appeals actually worked, but they don’t. And appeals are the basis of any democratic justice system,” she added.
Meta said that it aimed to reduce enforcement mistakes against accounts on its platform, but added that the appeals process for banned accounts had become frustratingly slow.
Organisations affected by the bans include Netherlands-registered Women Help Women, a nonprofit offering information about abortion to women worldwide, including in Brazil, the Philippines and Poland. It fields about 150,000 emails from women each year, said its executive director, Kinga Jelinska.
Women Help Women has been on Facebook for 11 years, said Jelinska, and while its account had been suspended before, this was the first time it was banned outright. The ban could be “life-threatening”, she said, pushing some women towards dangerous, less reliable information sources. Little explanation was given for the ban.
A message from Meta to the group dated 13 November said its page “does not follow our Community Standards on prescription drugs”, adding: “We know this is disappointing, but we want to keep Facebook safe and welcoming for everyone.”
“It’s a very laconic explanation, a feeling of opacity,” Jelinska said. “They just removed it. That’s it. We don’t even know which post it was about.”
Meta said more than half of the accounts flagged by Repro Uncensored have been reinstated, including Women Help Women which it said was taken down in error. “The disabled accounts were correctly removed for violating a variety of our policies including our Human Exploitation policy,” it added.
Jacarandas was founded by a group of young feminists when abortion was decriminalised in Colombia in 2022, to advise women and girls on how to get a free, legal abortion. The group’s executive director, Viviana Monsalve, said its WhatsApp helpline had been blocked then reinstated three times since October. The WhatsApp account is currently banned and Monsalve said they had received little information from Meta about whether this would continue.
“We wrote [Meta] an email and said, ‘hey, we are a feminist organisation. We work in abortion. Abortion is allowed in Colombia up to 24 weeks. It’s allowed to give information about it,’” said Monsalve.
Without Meta’s cooperation, Monsalve said it was difficult to plan for the future. “You are not sure if [a ban] will happen tomorrow or after tomorrow, because they didn’t answer anything.”
Meta said: “Our policies and enforcement regarding abortion medication-related content have not changed: we allow posts and ads promoting healthcare services like abortion, as well as discussion and debate around them, as long as they follow our policies.”
While groups such as Jacarandas and Women Help Women had their accounts removed outright, other groups said that they increasingly faced Meta restricting their posts and shadow-banning their content.
Fatma Ibrahim, the director of the Sex Talk Arabic, a UK-based platform which offers Arabic-language content on sexual and reproductive health, said that the organisation had received a message almost every week from Meta over the past year saying that its page “didn’t follow the rules” and would not be suggested to other people, based on posts related to sexuality and sexual health.
Two weeks ago, these messages escalated to a warning, in which Meta noted its new policies on nudity and removed a post from the Sex Talk Arabic’s page. The offending post was an artistic depiction of a naked couple, obscured by hearts.
Ibrahim said the warning was “condescending”, and that Meta’s moderation was US-centric and lacked context.
“Despite the profits they make from our region, they don’t invest enough to understand the social issues women fight against and why we use social media platforms for such fights,” she said.
...
Read the original on www.theguardian.com »
Have you seen one of the experiments where people have been re-feeding the same image to the AI agent a bunch of times?
Or Marques Brownlee’s YouTube videos where the video is reuploaded 1,000 times?
Over the Thanksgiving weekend I had some time on my hands and tasked Claude to write an app that guesstimates macronutrients in some foods based on a description + photo. There’s some interesting setup in getting it right, but that’s boring. It created a great, functional app, but then I forced it to do a small, evil experiment for me.
I’ve written a quick script that looped over my codebase and ran this command.
#!/usr/bin/env bash
set -euo pipefail

PROMPT="Ultrathink. You're a principal engineer. Do not ask me any questions. We need to improve the quality of this codebase. Implement improvements to codebase quality."
MAX_ITERS="200"

for i in $(seq 1 "$MAX_ITERS"); do
  claude --dangerously-skip-permissions -p "$PROMPT"
  git add -A
  if git diff --cached --quiet; then
    echo "No changes this round, skipping commit."
  else
    git commit --no-verify -m "yolo run #$i: $PROMPT"
  fi
done
…and havoc it wreaked. Over 200 rounds of unmitigated madness. I tweaked the prompt here and there when I saw it overindexing on a single thing, but with enough iterations it started covering a lot of ground... from full code coverage and more tests than functional code, to Rust-style Result types, to... estimating the entropy of a hashing function (???).
This was running for around 36 hours and took me some time to grok through, but let’s see what it did. The entire repo is here btw. The branch you’re looking for is highest-quality.
This app is around 4-5 screens. Take a photo, add description, get AI response. Simple as that.
The version “pre improving quality” was already pretty large. We are talking around 20k lines of TS, around 9.7k is in various __tests__ directories. This was slightly intentional - when working with Claude Code, having good self-validation harness greatly improves the quality of results.
cloc . --exclude-dir=node_modules,dist,build,.expo,.husky,.maestro,Pods
132 text files.
127 unique files.
11 files ignored.
github.com/AlDanial/cloc v 2.04 T=0.11 s (1167.4 files/s, 487085.6 lines/s)
Language files blank comment code
JSON 4 0 0 23733
TypeScript 99 3019 1541 20160
Markdown 11 1004 0 2700
JavaScript 9 26 51 269
Bourne Shell 2 34 41 213
YAML 2 35 2 162
SUM: 127 4118 1635 47237
But in the aftermath - 84 thousand! We went 20k -> 84k on “improvements” to the quality of the codebase.
cloc . --exclude-dir=node_modules,dist,build,.expo,.husky,.maestro,Pods
285 text files.
281 unique files.
10 files ignored.
github.com/AlDanial/cloc v 2.04 T=0.60 s (468.1 files/s, 268654.5 lines/s)
Language files blank comment code
TypeScript 247 17587 18749 84185
JSON 5 0 0 24863
Markdown 14 4151 0 10391
JavaScript 9 41 140 598
Bourne Shell 3 41 41 228
YAML 3 50 3 215
SUM: 281 21870 18933 120480
Tests alone went from 10k to 60k LOC!
cloc . \
  --exclude-dir=node_modules,dist,build,.expo,.husky,.maestro,Pods \
  --match-d='__tests__'
138 text files.
138 unique files.
1 file ignored.
github.com/AlDanial/cloc v 2.04 T=0.23 s (612.9 files/s, 346313.3 lines/s)
Language files blank comment code
TypeScript 138 13919 3685 60366
SUM: 138 13919 3685 60366
We went from around 700 to a whopping 5,369 tests. In the original project I had e2e tests using an actual simulator - they are pretty important to make sure that the coding agent has a closed feedback loop, but in the process of improving the quality they seem to have been forgotten ¯\_(ツ)_/¯.
Btw, we went from ~1,500 lines of comments to 18.7k.
OK, but what did it actually do? I have the full log of what Claude Code output in the summary after every run. You can check it here
Claude Code really didn’t like using 3rd party libraries and created a ton of random utilities.
I can sort of respect that the dependency list is pretty small, but at the cost of very unmaintainable 20k+ lines of utilities. I guess it really wanted to avoid supply-chain attacks.
Some of them are really unnecessary and could be replaced with off the shelf solution:
* A full-on hierarchical logger with built-in performance tracking (lib/logger.ts) instead of using something simple off the shelf
* React Hooks. Some of them are specific to our use-case, but a bunch of them really didn’t have to be reinvented (or invented in the first place).
Some are just insane - here are my favorites!
* The Result type implementation (lib/result.ts) - This module provides a Result type similar to Rust’s Result.
I like Rust’s result-handling system, but I don’t think it works very well if you try to bring it to an ecosystem that is already standardized on throwing errors. In my previous job we experimented with doing that in Python. It wasn’t clicking very well with people and using it felt pretty forced. I’d stay away from it.
This made me giggle because of course AI started bringing patterns from Rust. There’s lib/option.ts too.
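For readers who haven’t met the pattern, here is roughly what a Rust-style Result looks like when ported to Python (a minimal sketch of the general idea, not the generated lib/result.ts; the parse_calories example is made up): every fallible function returns a value the caller has to unpack, instead of raising an exception.

from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")
E = TypeVar("E")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err(Generic[E]):
    error: E

Result = Union[Ok[T], Err[E]]

def parse_calories(raw: str) -> Result[int, str]:
    try:
        return Ok(int(raw))
    except ValueError:
        return Err(f"not a number: {raw!r}")

# Python 3.10+ pattern matching makes the unpacking explicit at every call site,
# which is exactly the friction described above.
match parse_calories("450"):
    case Ok(value):
        print("calories:", value)
    case Err(error):
        print("failed:", error)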
* Functional programming utilities lib/functional.ts - Type-safe composition, currying, overloads for 20+ params, this has it all.
In some iterations, coding agent put on a hat of security engineer. For instance - it created a hasMinimalEntropy function meant to “detect obviously fake keys with low character variety”. I don’t know why.
To ensure we have proper scalability it has implemented circuit breaking and jittering exponential backoff. The only API we are talking to is OpenAI/Anthropic. You’re welcome.
The positive - there’s been a lot of time spent on making sure that we have strict type checking, we don’t overly cast (as any as T) and, hey, I respect that.
The prompt, in all its versions, always focused on improving the codebase quality. It was disappointing to see how that metric is perceived by the AI agent. The leading principle was to define a few vanity metrics and push for “more is better”.
In the message log, the agent often boasts about the number of tests added, or that code coverage (ugh) is over some arbitrary percentage. We end up with an absolute moloch of unmaintainable code in the name of quality. But hey, the number is going up.
All in all, the project has more code to maintain, most of it largely useless. Tons of tests got added, but some tests that mattered the most (maestro e2e tests that validated the app still works) were forgotten. It had some moments of “goodness”, like making sure typechecks are of high quality.
To truly resemble the test of “redraw this image 1000 times”/“reupload this video 1000 times”, I think the loop would have to be two-step:
* Implement a fresh project based off of this description
This was obviously done in jest; I didn’t expect that this would improve the quality of the codebase in the ways that I think truly matter. I’ve prompted Claude Code to failure here and it definitely produced some funny results.
I still use coding agents for my day to day development. If anything, the time spent reviewing AI code does not feel wasted.
…oh and the app still works, there are no new features, and just a few new bugs.
...
Read the original on gricha.dev »
The Walt Disney Co. on Thursday announced it will make a $1 billion equity investment in OpenAI and will allow users to make videos with its copyrighted characters on its Sora app.
OpenAI launched Sora in September, and it allows users to create short videos by simply typing in a prompt.
As part of the startup’s new three-year licensing agreement with Disney, Sora users will be able to make content with more than 200 characters across Disney, Marvel, Pixar and Star Wars starting next year.
“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” Disney CEO Bob Iger said in a statement.
As part of the agreement, Disney said it will receive warrants to purchase additional equity and will become a major OpenAI customer.
Disney is deploying OpenAI’s chatbot, ChatGPT, to its employees and will work with its technology to build new tools and experiences, according to a release.
When Sora launched this fall, the app rocketed to the top of Apple’s App Store and generated a storm of controversy as users flooded the platform with videos of popular brands and characters.
The Motion Picture Association said in October that OpenAI needed to take “immediate and decisive action” to prevent copyright infringement on Sora.
OpenAI CEO Sam Altman said more “granular control” over character generation was coming, according to a blog post following the launch.
...
Read the original on www.cnbc.com »
Part 1 of 1 in Linux Inside Out
Most books and courses introduce Linux through shell commands, leaving the kernel as a mysterious black box doing magic behind the scenes.
In this post, we will run some experiments to demystify it: the Linux kernel is just a binary that you can build and run.
The experiments are designed so you can follow along if you have a Linux PC. But this is completely optional; the goal is to build a mental model of how Linux works and to see how the components of the system fit together.
But first let’s talk about what a kernel is.
Computers are built from CPUs, memory, and other devices, like video cards, network cards, keyboards, displays, and a lot of other stuff.
These devices can be manufactured by different companies, have different capabilities, and can be programmed differently.
An operating system kernel provides an abstraction that lets us use these devices and resources conveniently and securely. Without one, writing programs would be much more difficult: we would need to write low-level code for every device our program uses, and it would likely not work on other computers. Concretely, the kernel:
* gives us APIs to interact with the hardware over a unified interface
* manages how programs can use the computer’s CPU, memory and other resources
* provides access control over which resources a program can access
* provides additional features like firewalls, file systems, mechanisms for programs to communicate, etc.
The closest analogy from the software development world is that the kernel is a runtime for our computer.
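To make the “unified interface” point concrete, here is a tiny, Linux-only Go program (my illustration, not part of the experiments below) that skips the usual library conveniences and asks the kernel directly to put bytes on the terminal via the write system call:

package main

import (
    "fmt"
    "syscall"
)

// Even printing text goes through the kernel: here we invoke the write
// system call directly on file descriptor 1 (stdout), and the kernel does
// the actual work of getting the bytes onto the terminal.
func main() {
    msg := []byte("hello from a system call\n")
    n, err := syscall.Write(1, msg)
    if err != nil {
        fmt.Println("write failed:", err)
        return
    }
    fmt.Println("the kernel wrote", n, "bytes for us")
}

Whether you call fmt.Println or write yourself, the same system call interface is what ultimately touches the hardware.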
On most Linux distributions we will find the kernel under the /boot directory. Let’s enter the directory and list its contents:
~$ cd /boot
/boot$ ls -1
System.map-6.12.43+deb13-amd64
System.map-6.12.48+deb13-amd64
config-6.12.43+deb13-amd64
config-6.12.48+deb13-amd64
efi
grub
initrd.img-6.12.43+deb13-amd64
initrd.img-6.12.48+deb13-amd64
vmlinuz-6.12.43+deb13-amd64
vmlinuz-6.12.48+deb13-amd64
We see a few files here, but the one we are looking for is vmlinuz-6.12.48+deb13-amd64. This single file is the kernel.
If you ever wondered what this name means:
* 6.12.48+deb13: this is the kernel version, and the distribution (Debian 13)
* amd64: this is the architecture of our system
Note: Different distributions may use slightly different naming conventions. vmlinuz is commonly the bootable compressed kernel image.
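As a small aside (not part of the original steps): if you want to confirm which kernel you are currently running, so you later copy the matching vmlinuz file, uname -r prints the release string. The same value can also be read from /proc/sys/kernel/osrelease, for example with a few lines of Go:

package main

import (
    "fmt"
    "os"
    "strings"
)

// Prints the release string of the currently running kernel by reading
// /proc/sys/kernel/osrelease - the same value `uname -r` shows.
func main() {
    data, err := os.ReadFile("/proc/sys/kernel/osrelease")
    if err != nil {
        fmt.Println("could not read kernel release:", err)
        return
    }
    fmt.Println("running kernel:", strings.TrimSpace(string(data)))
}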
In our first experiment we will copy this kernel into another directory and run it.
First, let’s create a directory and copy the kernel there.
Note: Your kernel version might differ, remember to check it before the cp command.
/boot$ cd
~$ mkdir linux-inside-out
~$ cd linux-inside-out/
~/linux-inside-out$ cp /boot/vmlinuz-6.12.48+deb13-amd64 .
~/linux-inside-out$ ls -lh
total 12M
-rw-r--r-- 1 zsoltkacsandi zsoltkacsandi 12M Dec 1 09:44 vmlinuz-6.12.48+deb13-amd64
Then install some tools that are needed for this experiment.
We will use QEMU, a virtual machine emulator, because our kernel needs something that works like a computer, and because we do not want to mess up our original operating system.
~$ sudo apt update
~$ sudo apt install -y qemu-system-x86 qemu-utils
Then start a virtual machine with our kernel:
~/linux-inside-out$ qemu-system-x86_64 \
-m 256M \
-kernel vmlinuz-6.12.48+deb13-amd64 \
-append "console=ttyS0" \
-nographic
The output should be something like this:
SeaBIOS (version 1.16.3-debian-1.16.3-2)
iPXE (https://ipxe.org) 00:03.0 CA00 PCI2.10 PnP PMM+06FC6D30+06F06D30 CA00
Booting from ROM…
Probing EDD (edd=off to disable)… o
[ 0.000000] Linux version 6.12.48+deb13-amd64 ([email protected]) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, )
[ 0.000000] Command line: console=ttyS0
[ 2.055627] RAS: Correctable Errors collector initialized.
[ 2.161843] clk: Disabling unused clocks
[ 2.162218] PM: genpd: Disabling unused power domains
[ 2.179652] /dev/root: Can't open blockdev
[ 2.180871] VFS: Cannot open root device "" or unknown-block(0,0): error -6
[ 2.181038] Please append a correct "root=" boot option; here are the available partitions:
[ 2.181368] List of all bdev filesystems:
[ 2.181477] fuseblk
[ 2.181516]
[ 2.181875] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[ 2.182495] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.12.48+deb13-amd64 #1 Debian 6.12.48-1
[ 2.182802] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 2.186426] Kernel Offset: 0x30e00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[ 2.186949] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---
You can exit QEMU by pressing Ctrl+A, then X.
So, we’ve just started the same kernel that is running on our computer. It took about 2 seconds, printed a lot of log messages, and then panicked.
This panic is not a bug, it is actually expected: once the kernel has initialized itself, it tries to mount the root filesystem and hand over control to a program called init.
So let’s give one to it.
We will write a simple program that we will use as an init program.
We will use Golang for two reasons:
* it has an easy-to-learn syntax that readers coming from different backgrounds can pick up and understand quickly
* it can build a statically-linked binary with no C dependencies, making it portable and perfect for our minimal experiment
First let’s install Golang, and create a new project called init:
~/linux-inside-out$ sudo apt -y install golang
~/linux-inside-out$ mkdir init
~/linux-inside-out$ cd init
~/linux-inside-out/init$
~/linux-inside-out/init$ go mod init init
go: creating new go.mod: module init
go: to add module requirements and sums:
go mod tidy
Then create a main.go file with the following content:
package main

import (
    "fmt"
    "os"
    "time"
)

func main() {
    fmt.Println("Hello from Go init!")
    fmt.Println("PID:", os.Getpid()) // printing the PID (process ID)
    for i := 0; ; i++ { // every two seconds printing the text "tick {tick number}"
        fmt.Println("tick", i)
...
Read the original on serversfor.dev »