10 interesting stories served every morning and every evening.
i was at bombay airport. some dude was watching reels on full volume and laughing loudly. asking nicely doesn’t work anymore. me being me, didn’t have the courage to speak up.
so i built a tiny app that plays back the same audio it hears, delayed by ~2 seconds. asked claude, it spat out a working version in one prompt. surprisingly WORKS.
something something auditory feedback loop something something cognitive dissonance. idk i’m not a neuroscientist. all i know is it makes people shut up and that’s good enough for me.
straight up honest - originally called this “make-it-stop” but then saw @TimDarcet also built similar and named it STFU. wayyyyy better name. so stole it. sorry not sorry.
made with spite and web audio api. do whatever you want with it.
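for the curious, the core trick fits in a few lines of web audio api. this is just a sketch of the idea, not the actual app's code:

// sketch: route the microphone back out the speakers, delayed by ~2s
async function makeItStop() {
  const ctx = new AudioContext();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const mic = ctx.createMediaStreamSource(stream);
  // DelayNode buffers the audio before passing it along
  const delay = new DelayNode(ctx, { delayTime: 2, maxDelayTime: 2 });
  mic.connect(delay).connect(ctx.destination);
}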
yo, meanwhile if you are new here, you might find my other side projects kinda funny.
...
Read the original on github.com »
On January 14th 2026, Cursor published a blog post titled “Scaling long-running autonomous coding” (https://cursor.com/blog/scaling-agents). In the blog post, they talk about their experiments with running “coding agents autonomously for weeks” with the explicit goal of
understand[ing] how far we can push the frontier of agentic coding for projects that typically take human teams months to complete
They talk about some approaches they tried, why they think those failed, and how to address the difficulties.
Finally they arrived at a point where something “solved most of our coordination problems and let us scale to very large projects without any single agent”, which then led to this:
To test this system, we pointed it at an ambitious goal: building a web browser from scratch. The agents ran for close to a week, writing over 1 million lines of code across 1,000 files. You can explore the source code on GitHub (https://github.com/wilsonzlin/fastrender)
This is where things get a bit murky. They claim “Despite the codebase size, new agents can still understand it and make meaningful progress” and “Hundreds of workers run concurrently, pushing to the same branch with minimal conflicts”, but they never actually say whether the experiment succeeded. Is it actually working? Can you run this browser yourself? We don’t know, and they never say explicitly.
After this, they embed the following video:
And below it, they say: “While it might seem like a simple screenshot, building a browser from scratch is extremely difficult.”
error: could not compile ‘fastrender’ (lib) due to 34 previous errors; 94 warnings emitted
And if you try to compile it yourself, you’ll see that it’s very far from being a functional browser at all; seemingly, it was never actually able to build.
Multiple recent GitHub Actions runs on main show failures (including workflow-file errors), independent build attempts report dozens of compiler errors, recent PRs were all merged with failing CI, and going back through the Git history from the most recent commit back 100 commits, I couldn’t find a single commit that compiled cleanly (https://gist.github.com/embedding-shapes/f5d096dd10be44ff82b6e5ccdaf00b29).
I’m not sure what the “agents” they unleashed on this codebase actually did, but they seemingly never ran “cargo build”, let alone “cargo check”, because both of those commands surface dozens of errors (a number that would surely balloon if you started fixing them) and about 100 warnings. There is an open GitHub issue in their repository about this right now: https://github.com/wilsonzlin/fastrender/issues/98
And diving into the codebase, if the compilation errors didn’t make that clear already, makes it very clear to any software developer that none of this is actually engineered code. It is what is typically known as “AI slop”: low-quality something that surely represents something, but it has no intention behind it, and it doesn’t even compile at this point.
They later start to talk about what’s next, but there’s not a single word about how to run it, what to expect, or how it’s working. Cursor’s blog post provides no reproducible demo and no known-good revision (tag/release/commit) to verify the screenshots, beyond linking the repo.
Regardless of intent, Cursor’s blog post creates the impression of a functioning prototype while leaving out the basic reproducibility markers one would expect from such a claim. They never explicitly claim it’s actually working, so at least no one can say they lied.
They finish off the article saying:
But the core question, can we scale autonomous coding by throwing more agents at a problem, has a more optimistic answer than we expected.
Which seems like a really strange conclusion to arrive at, when all they’ve proved so far is that agents can output millions of tokens and still not end up with something that actually works.
A “browser experiment” doesn’t need to rival Chrome. A reasonable minimum bar is: it compiles on a supported toolchain and can render a trivial HTML file. Cursor’s post doesn’t demonstrate that bar, and current public build attempts fail at this too.
Cursor never says “this browser is production-ready”, but they do frame it as “building a web browser from scratch” and “meaningful progress”, and then use a screenshot and “extremely difficult” language, giving the impression that this experiment actually was a success.
The closest they get to implying that this was a success is this part:
Hundreds of agents can work together on a single codebase for weeks, making real progress on ambitious projects.
But this extraordinary claim isn’t backed up by any evidence. In the blog post, they never provide a working commit, build instructions, or even a demo that can be reproduced.
I don’t think anyone expects this browser to be the next Chrome, but if you claim you’ve built a browser, it should at the very least demonstrate that it can be compiled and can load a basic HTML file.
...
Read the original on embedding-shapes.github.io »
In a massive shift in North American trade policy, Prime Minister Mark Carney announced today a new “strategic partnership” with China that effectively reopens the Canadian border to Chinese electric vehicles.
The move marks a significant departure from the United States’ hardline protectionist stance and could bring affordable EV options like the BYD Seagull to Canadian roads as early as this year.
For the last two years, Canada has largely walked in lockstep with the US regarding Chinese EV tariffs. Following the Biden administration’s move to impose 100% tariffs on Chinese EVs, Canada implemented similar surtaxes, effectively freezing companies like BYD, Nio, and Zeekr out of the market.
As part of a broader trade agreement secured by Prime Minister Carney in Beijing this week, Canada has agreed to allow an annual quota of 49,000 Chinese electric vehicles into the country at the tariff rate of just 6.1%.
According to the Prime Minister’s office, this volume represents less than 3% of the Canadian new vehicle market. However, the deal explicitly targets the low end of the market, with the government anticipating that within five years, “more than 50% of these vehicles will be affordable EVs with an import price of less than $35,000.”
In exchange for opening the EV floodgates (or at least starting to break the dam), China has agreed to lower tariffs on Canadian canola seed from roughly 85% to 15% and to lift restrictions on Canadian lobster and crab.
The Canadian government claims this isn’t just about imports. The text of the agreement states that the deal is expected to “drive considerable new Chinese joint-venture investment in Canada” to build out the domestic EV supply chain.
While 49,000 vehicles might sound like a small number compared to the total market, it’s a specific, targeted wedge that changes the entire dynamic of the North American EV market.
For years, we at Electrek have argued that protectionism, while perhaps protecting legacy automaker jobs in the short term, ultimately hurts consumers and slows down the transition to sustainable transport.
Meanwhile, protecting domestic automakers from Chinese competition in their home market makes them less competitive on the global stage, effectively ceding the global market to China.
The reality is that Chinese automakers are currently building some of the best, most affordable EVs in the world. Keeping them out entirely not only hurts consumers but also hurts innovation.
Of course, this is going to make Washington furious. The US has been trying to build a “Fortress North America” against Chinese EVs. By letting 49,000 units in tariff-free (or near tariff-free), Canada is effectively saying it values affordable climate solutions (and canola exports) more than complete alignment with US industrial policy, which is understandable since the US was the one to go hostile on trade with Canada.
The interesting detail here is the “Joint Venture” language. It looks like Carney is taking a page out of China’s own playbook. Canada seems to be using this quota as a carrot to get companies like BYD or CATL to set up shop in Canada and maybe help Canadian companies learn from those giants.
...
Read the original on electrek.co »
Recently, I’ve been spending my time building an image-to-ASCII renderer. Below is the result — try dragging it around, the demo is interactive!
One thing I spent a lot of effort on is getting edges looking sharp. Take a look at this rotating cube example:
Try opening the “split” view. Notice how well the characters follow the contour of the square.
This renderer works well for animated scenes, like the ones above, but we can also use it to render static images:
The image of Saturn was generated with ChatGPT.
Then, to get better separation between different colored regions, I also implemented a cel shading-like effect to enhance contrast between edges. Try dragging the contrast slider below:
The contrast enhancement makes the separation between different colored regions far clearer. That was key to making the 3D scene above look as good as it does.
I put so much focus on sharp edges because they’re an aspect of ASCII rendering that is often overlooked when programmatically rendering images as ASCII. Consider this animated 3D scene from Cognition’s landing page that is rendered via ASCII characters:
It’s a cool effect, especially while in motion, but take a look at those blurry edges! The characters follow the cube contours very poorly, and as a result, the edges look blurry and jagged in places:
This blurriness happens because the ASCII characters are being treated like pixels — their shape is ignored. It’s disappointing to see because ASCII art looks so much better when shape is utilized. I don’t believe I’ve ever seen shape utilized in generated ASCII art, and I think that’s because it’s not really obvious how to consider shape when building an ASCII renderer.
I started building my ASCII renderer to prove to myself that it’s possible to utilize shape in ASCII rendering. In this post, I’ll cover the techniques and ideas I used to capture shape and build this ASCII renderer in detail.
We’ll start with the basics of image-to-ASCII conversion and see where the common issue of blurry edges comes from. After that, I’ll show you the approach I used to fix that and achieve sharp, high-quality ASCII rendering. At the end, we’ll improve on that by implementing the contrast enhancement effect I showed above.
Let’s get to it!
ASCII contains 95 printable characters that we can use. Let’s start off by rendering the following image containing a white circle using those ASCII characters:
ASCII art is (almost) always rendered using a monospace font. Since every character in a monospace font is equally wide and tall, we can split the image into a grid. Each grid cell will contain a single ASCII character.
The image with the circle is pixels. For the ASCII grid, I’ll pick a row height of pixels and a column width of pixels. That splits the canvas into rows and columns — an grid:
Monospace characters are typically taller than they are wide, so I made each grid cell a bit taller than it is wide.
Our task is now to pick which character to place in each cell. The simplest approach is to calculate a lightness value for each cell and pick a character based on that.
We can get a lightness value for each cell by sampling the lightness of the pixel at the cell’s center:
We want each pixel’s lightness as a numeric value between 0 and 1, but our image data consists of pixels with RGB color values.
We can use the following formula to convert an RGB color (with component values between 0 and 255) to a lightness value:
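A standard choice for this conversion (assumed here; the article’s exact coefficients may differ) is the Rec. 709 relative luminance, with each component first scaled to the 0–1 range:

L = 0.2126 × R + 0.7152 × G + 0.0722 × B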
Now that we have a lightness value for each cell, we want to use those values to pick ASCII characters. As mentioned before, ASCII has 95 printable characters, but let’s start simple with just these characters:
We can sort them in approximate density order like so, with lower-density characters to the left, and high-density characters to the right:
I added space as the first (least dense) character.
We can then map lightness values between 0 and 1 to one of those characters like so:
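As a minimal sketch (the ten-character ramp here is a classic example set, not necessarily the article’s exact characters):

// Ramp sorted from least dense (space) to most dense.
const CHARS = " .:-=+*#%@";

function characterForLightness(lightness: number): string {
  // Map a lightness in [0, 1] to an index into the ramp.
  const index = Math.round(lightness * (CHARS.length - 1));
  return CHARS[index];
}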
This maps low lightness values to low-density characters and high lightness values to high-density characters.
Rendering the circle from above with this method gives us:
That works… but the result is pretty ugly. We seem to always get @ for cells that fall within the circle and a space for cells that fall outside.
That is happening because we’ve pretty much just implemented nearest-neighbor downsampling. Let’s see what that means.
Downsampling, in the context of image processing, is taking a larger image (in our case, the image with the circle) and using that image’s data to construct a lower resolution image (in our case, the ASCII grid). The pixel values of the lower resolution image are calculated by sampling values from the higher resolution image.
The simplest and fastest method of sampling is nearest-neighbor interpolation, where, for each cell (pixel), we only take a single sample from the higher resolution image.
Consider the circle example again. Using nearest-neighbor interpolation, every sample either falls inside or outside of the shape, resulting in a lightness of either 1 or 0:
If, instead of picking an ASCII character for each grid cell, we color each grid cell (pixel) according to the sampled value, we get the following pixelated rendering:
This pixelated rendering is pretty much equivalent to the ASCII rendering from before. The only difference is that instead of @s we have white pixels, and instead of spaces we have black pixels.
These square, jagged looking edges are aliasing artifacts, commonly called jaggies. They’re a common result of using nearest-neighbor interpolation.
To get rid of jaggies, we can collect more samples for each cell. Consider this line:
The line’s slope on the axis is . When we pixelate it with nearest-neighbor interpolation, we get the following:
Let’s try to get rid of the jagginess by taking multiple samples within each cell and using the average sampled lightness value as the cell’s lightness. The example below lets you vary the number of samples using the slider:
With multiple samples, cells that lie on the edge of a shape will have some of their samples fall within the shape, and some outside of it. Averaging those, we get gray in-between colors that smooth the downsampled image. Below is the same example, but with an overlay showing where the samples are taken:
This method of collecting multiple samples from the larger image is called supersampling. It’s a common method of spatial anti-aliasing (avoiding jaggies at edges). Here’s what the rotating square looks like with supersampling (using multiple samples for each cell):
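Expressed in code, that per-cell averaging might look something like this sketch (sampleLightness is a hypothetical helper that reads the image’s lightness at a pixel position):

// Average an n × n grid of sample points within one cell.
function supersampledCellLightness(
  image: ImageData,
  cellX: number, cellY: number, // cell indices in the ASCII grid
  cellW: number, cellH: number, // cell size in pixels
  n: number,                    // samples per axis
): number {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      // Place samples at the centers of n × n sub-regions of the cell.
      const x = (cellX + (i + 0.5) / n) * cellW;
      const y = (cellY + (j + 0.5) / n) * cellH;
      sum += sampleLightness(image, x, y); // hypothetical helper
    }
  }
  return sum / (n * n);
}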
Let’s look at what supersampling does for the circle example from earlier. Try dragging the sample quality slider:
The circle becomes less jagged, but the edges feel blurry. Why’s that?
Well, they feel blurry because we’re pretty much just rendering a low-resolution, pixelated image of a circle. Take a look at the pixelated view:
The ASCII and pixelated views mirror each other. Both are just low-resolution versions of the original high-resolution image, scaled up to the original’s size — it’s no wonder they both look blurry.
Increasing the number of samples is insufficient. No matter how many samples we take per cell, the samples will be averaged into a single lightness value, used to render a single pixel.
And that’s the core problem: treating each grid cell as a pixel in an image. It’s an obvious and simple method, but it disregards that ASCII characters have shape.
We can make our ASCII renderings far more crisp by picking characters based on their shape. Here’s the circle rendered that way:
The characters follow the contour of the circle very well. By picking characters based on shape, we get a far higher effective resolution. The result is also more visually interesting.
Let’s see how we can implement this.
So what do I mean by shape? Well, consider the characters T, L, and O placed within grid cells:
The character T is top-heavy. Its visual density in the upper half of the grid cell is higher than in the lower half. The opposite can be said for L — it’s bottom-heavy. O is pretty much equally dense in the upper and lower halves of the cell.
We might also compare characters like L and J. The character L is heavier within the left half of the cell, while J is heavier in the right half:
We also have more “extreme” characters, such as _ and ^, that only occupy the lower or upper portion of the cell, respectively:
This is, roughly, what I mean by “shape” in the context of ASCII rendering. Shape refers to which regions of a cell a given character visually occupies.
To pick characters based on their shape, we’ll somehow need to quantify (put numbers to) the shape of each character.
Let’s start by only considering how much characters occupy the upper and lower regions of our cell. To do that, we’ll define two “sampling circles” for each grid cell — one placed in the upper half and one in the lower half:
It may seem odd or arbitrary to use circles instead of just splitting the cell into two rectangles, but using circles will give us more flexibility later on.
A character placed within a cell will overlap each of the cell’s sampling circles to some extent.
One can compute that overlap by taking a bunch of samples within the circle (for example, at every pixel). The fraction of samples that land inside the character gives us the overlap as a numeric value between 0 and 1:
For T, we get a high overlap for the upper circle and a much lower one for the lower. Those overlap values form a 2-dimensional vector:
We can generate such a 2-dimensional vector for each character within the ASCII alphabet. These vectors quantify the shape of each ASCII character along these two dimensions (upper and lower). I’ll call these vectors shape vectors.
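As a sketch of that overlap computation, assuming a hypothetical isInk(x, y) helper that reports whether the rasterized character covers a given pixel:

interface Circle {
  x: number; // center
  y: number;
  radius: number;
}

// Fraction of sample points inside the circle that land on the character.
function circleOverlap(
  isInk: (x: number, y: number) => boolean,
  c: Circle,
): number {
  let inside = 0;
  let total = 0;
  for (let y = Math.floor(c.y - c.radius); y <= c.y + c.radius; y++) {
    for (let x = Math.floor(c.x - c.radius); x <= c.x + c.radius; x++) {
      const dx = x - c.x;
      const dy = y - c.y;
      if (dx * dx + dy * dy > c.radius * c.radius) continue; // outside circle
      total++;
      if (isInk(x, y)) inside++;
    }
  }
  return inside / total; // overlap in [0, 1]
}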
Below are some ASCII characters and their shape vectors. I’m coloring the sampling circles using the component values of the shape vectors:
We can use the shape vectors as 2D coordinates — here’s every ASCII character on a 2D plot:
Let’s say that we have our ASCII characters and their associated shape vectors in a CHARACTERS array:
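Something like this, where the vector values shown are made-up placeholders:

interface Character {
  char: string;
  vector: number[]; // the character's shape vector
}

const CHARACTERS: Character[] = [
  { char: "T", vector: [0.31, 0.08] }, // placeholder values
  { char: "L", vector: [0.12, 0.27] },
  // ...one entry per printable ASCII character
];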
We can then perform a nearest neighbor search like so:
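A brute-force nearest-neighbor search over that array might look like:

function findBestCharacter(lookup: number[]): string {
  let best = CHARACTERS[0];
  let bestDistance = Infinity;
  for (const candidate of CHARACTERS) {
    // getDistance returns the Euclidean distance (shown later).
    const distance = getDistance(lookup, candidate.vector);
    if (distance < bestDistance) {
      bestDistance = distance;
      best = candidate;
    }
  }
  return best.char;
}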
The findBestCharacter function gives us the ASCII character whose shape best matches the input lookup vector.
Note: this brute force search is not very performant. This becomes a bottleneck when we start rendering thousands of ASCII characters every frame. I’ll talk more about this later.
To make use of this in our ASCII renderer, we’ll calculate a lookup vector for each cell in the ASCII grid and pass it to findBestCharacter to determine the character to display.
Let’s try it out. Consider the following zoomed-in circle as an example. It is split into three grid cells:
Overlaying our sampling circles, we see varying degrees of overlap:
When calculating the shape vector of each ASCII character, we took a huge number of samples. We could afford to do that because we only need to calculate those shape vectors once up front. After they’re calculated, we can use them again and again.
However, if we’re converting an animated image (e.g. canvas or video) to ASCII, we need to be mindful of performance when calculating the lookup vectors. An ASCII rendering might have hundreds or thousands of cells. Multiplying that by tens or hundreds of samples would be incredibly costly in terms of performance.
With that being said, let’s pick a sampling quality of 3 samples per circle, placed like so:
For the top sampling circle of the leftmost cell, we get one white sample and two black, giving us an average lightness of 1/3. Doing the same calculation for all of the sampling circles, we get the following 2D vectors:
From now on, instead of using the term “lookup vectors”, I’ll call these vectors, sampled from the image that we’re rendering as ASCII, sampling vectors. One sampling vector is calculated for each cell in the grid.
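Expressed in code, computing the sampling vector for one cell might look something like this sketch (samplePoints and sampleLightness are hypothetical helpers that return a few points spread over a circle and read the image’s lightness at a pixel, respectively):

// The average image lightness under each of the cell's sampling
// circles, using just a few samples per circle.
function computeSamplingVector(
  image: ImageData,
  circles: Circle[], // the cell's sampling circles, in image coordinates
  samplesPerCircle: number,
): number[] {
  return circles.map((circle) => {
    let sum = 0;
    for (const { x, y } of samplePoints(circle, samplesPerCircle)) {
      sum += sampleLightness(image, x, y);
    }
    return sum / samplesPerCircle;
  });
}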
Anyway, we can use these sampling vectors to find the best-matching ASCII character. Let’s see what that looks like on our 2D plot — I’ll label the sampling vectors (from left to right) C0, C1, and C2:
Hmm… this is not what we want. Since none of the ASCII shape vector components come anywhere near 1, they’re all clustered towards the bottom-left region of our plot. This makes our sampling vectors map to a few characters on the edge of the cluster.
We can fix this by normalizing the shape vectors. We’ll do that by taking the maximum value of each component across all shape vectors, and dividing the components of each shape vector by the maximum. Expressed in code, that looks like so:
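A sketch of that normalization, reusing the CHARACTERS array sketched earlier:

// Find the maximum value of each component across all shape vectors.
const numComponents = CHARACTERS[0].vector.length;
const maxima = new Array(numComponents).fill(0);
for (const { vector } of CHARACTERS) {
  for (let i = 0; i < numComponents; i++) {
    maxima[i] = Math.max(maxima[i], vector[i]);
  }
}

// Divide every component by that component's maximum, so the largest
// value of each component across the alphabet becomes 1.
for (const { vector } of CHARACTERS) {
  for (let i = 0; i < numComponents; i++) {
    vector[i] /= maxima[i];
  }
}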
Here’s what the plot looks like with the shape vectors normalized:
If we now map the sampling vectors to their nearest neighbors, we get a much more sensible result:
We get ’, M and $. Let’s see how well those characters match the circle:
Nice! They match very well.
Let’s try rendering the full circle from before with the same method:
Much better than before! The picked characters follow the contour of the circle very well.
Using two sampling circles — one upper and one lower — produces a much better result than the 1-dimensional (pixelated) approach. However, it still falls short when trying to capture other aspects of a character’s shape.
For example, two circles don’t capture the shape of characters that fall in the middle of the cell. Consider -:
For -, we get a shape vector with two small, roughly equal components. That doesn’t represent the character very well at all.
The two upper-lower sampling circles also don’t capture left-right differences, such as the difference between p and q:
We could use such differences to get better character picks, but our two sampling circles don’t capture them. Let’s add more dimensions to our shape to fix that.
Since cells are taller than they are wide (at least with the monospace font I’m using), we can use six sampling circles to cover the area of each cell quite well:
Six sampling circles capture left-right differences, such as between p and q, while also capturing differences across the top, bottom, and middle regions of the cell, differentiating ^, -, and _. They also capture the shape of “diagonal” characters like / to a reasonable degree.
One problem with this grid-like configuration for the sampling circles is that there are gaps. For example, . falls between the sampling circles:
To compensate for this, we can stagger the sampling circles vertically (e.g. lowering the left sampling circles and raising the right ones) and make them a bit larger. This causes the cell to be almost fully covered while not causing excessive overlap across the sampling circles:
We can use the same procedure as before to generate character vectors using these sampling circles, this time yielding a 6-dimensional vector. Consider the character L:
For L, we get the vector:
I’m presenting 6-dimensional shape vectors in a matrix form because it’s easier to grok geometrically, but the actual vector is a flat list of numbers.
The lightness values certainly look L-shaped! The 6D shape vector captures L’s shape very well.
Now we have a 6D shape vector for every ASCII character. Does that affect character lookups (how we find the best matching character)?
Earlier, in the findBestCharacter function, I referenced a getDistance function. That function returns the Euclidean distance between the input points. Given two 2D points (x1, y1) and (x2, y2), the formula to calculate their Euclidean distance looks like so:

d = √((x2 − x1)² + (y2 − y1)²)
Put into code, this looks like so:
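A sketch, written to handle vectors of any length so the same function works for both the 2D and 6D shape vectors:

// Euclidean distance between two equal-length vectors.
function getDistance(a: number[], b: number[]): number {
  let sumOfSquares = 0;
  for (let i = 0; i < a.length; i++) {
    const diff = a[i] - b[i];
    sumOfSquares += diff * diff;
  }
  return Math.sqrt(sumOfSquares);
}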
...
Read the original on alexharri.com »
LLMs mostly produce syntactically valid outputs when generating JSON, XML, code, etc., but they can occasionally fail due to their probabilistic nature. This is a problem for developers, since we use LLMs programmatically for tasks like data extraction, code generation, and tool calling.
There are many deterministic ways to ensure structured LLM outputs. If you are a developer, this handbook covers everything you need.
Structured generation is moving too fast. Most resources you find today are already outdated. You have to dig through multiple academic papers, blogs, GitHub repos, and other resources.
This handbook brings it all together in a living document that updates regularly.
You can read it start-to-finish, or treat it like a lookup table.
We’re the maintainers of Nanonets-OCR models (VLMs to convert documents into clean, structured Markdown) and docstrange (open-source document processing library).
...
Read the original on nanonets.com »
Slop is Everywhere For Those With Eyes to See
The size of your plate can influence how much food you eat. The absence of a clock on a casino wall can keep you gambling through the early morning. On social media, our For You Pages give us the illusion of infinite content. How our environments are designed influences how we consume. And wouldn’t you know it, everything around us is designed for maximum consumption.
Open TikTok, and you can easily burn through a hundred videos or more before you glance at the time. It doesn’t help that the For You Page hides the time on our phones.
We are over consuming content on the FYP. The sudden surge of low-quality, AI-generated content, i.e. “AI slop,” is a byproduct of that overconsumption. We don’t see it because, well, we’re conditioned not to, but slop always arrives on time. Slop is inevitable. Slop is quintessential. Slop is everywhere for those with eyes to see.
Olive oil, wasabi, saffron, vanilla, Wagyu, honey, champagne, truffle… even reality TV: all hold examples of what happens when demand exceeds supply. Companies fill the gap with slop. The free market loves a good filler. So, why should the digital realm be any different?
The For You page is designed to keep us playing the dopamine slot machine for as long as possible. The Average Time on Site metric is still the goose that lays the golden eggs, and both TikTok and Meta are reporting that their egg baskets have never been fuller.
But, there’s a problem. On any given platform, only 1-3% of users publish content. It’s called the 90-9-1 rule, and platforms that rely on free user-generated content have been trying to solve this problem since the beginning of the commercialized web. The introduction of the For You Page, and the illusion of endless content, has only exacerbated the inequity.
Curation used to be part of our media consumption process. We would hop from website to website looking for a laugh. We used to click on hyperlinks for Christ’s sake. Now, all we must do is sit at the trough and let daddy Zuck feed us.
In a recent essay, Joan Westenberg makes a complementary argument that the algorithm has “flattened” curiosity by eliminating the need to “hunt” for our content. They go on to say:
There’s a concept in behavioral science called the “effort heuristic.” It’s the idea that we tend to value information more if we worked for it. The more effort something requires, the more meaning we assign to the result. When all knowledge is made effortless, it’s treated as disposable. There’s no awe, no investment, no delight in the unexpected—only consumption.
This type of mindless consumption is not only harming our curiosity, it’s helping to cheapen creativity for the people who produce what we consume.
Creativity isn’t scalable. Content creation has a hard productivity ceiling. Every human-created video on our feeds requires some level of writing, production, and editing. Yet the For You Page has made content consumption so efficient that demand has, perhaps, exceeded supply.
If you’re a product manager for a social media platform, you can reduce the friction of publishing content to the app, or ship better editing tools, but you can’t optimize creative spark. You can’t treat humans like content-generating machines (as much as they have tried). Despite the illusion of infinite scrolling thanks to the FYP, art remains a finite resource bound to the whims of human creativity.
You see their problem.
Mark Zuckerberg wants us on his platforms, flicking our thumbs, for as long as possible. But the more we open Instagram, the more creators he needs posting multiple times each day. Mark has very little control over this variable. Creators could suddenly post less, or simply stop posting all together, and there’s nothing he could do about it. What’s worse, creators could demand Meta pay them for their art.
Actually, yes. And it turns out, you could rather effectively kill a platform if you got a small group of top creators organized and angry.
In the fall of 2015, twenty social media personalities took down one of the largest mobile video apps on the internet. They wanted money for their labor. The executives at Vine said no. The gang of twenty, who were the highest performing creators on the app, walked away. They stopped posting entertaining content to Vine, and instead repeatedly implored their followers to find them on competing apps.
Vine shut down for good just months later.
From Inside the secret meeting of Vine stars that ushered in the app’s demise:
Vine’s spectacular rise and fall showed the power of online creators. Its demise offers crucial lessons for platforms trying to engage with power users — and a deeper understanding of who ultimately controls a social product.
Vine creators exposed and exploited a weakness in Vine’s conventional approach to social media. Follower count had power. Old-style discovery algorithms could be easily manipulated. Vine creators used that power to take over the app, and convinced users to migrate to other platforms.
You see why follower counts are less important today, and why black-box algorithms have full control over who goes viral and who gets “shadow banned.” TikTok saw the mistakes of its predecessor, and made it so content creators could never exercise collective influence again.
Because virality now feels more like gambling, I suspect people post more content today than a decade ago. But it’s not enough. Our insatiable appetite for content is pushing corporations to meet that demand with slop.
If it were up to TikTok and Meta, our feeds would be exclusively robot-made. Humans are a variable they cannot control, and I think they despise us for it.
Anyway, I have good news. Outside of our FYPs you’ll find a surplus of art, essays, articles, and videos just waiting to be discovered. And best of all, these artists and writers are making things on their own terms. We, too, can enjoy the products of their labor on our terms, while not giving a dime of our attention to big tech.
This is the open web. Or the social web. Or the open social web. Or the– you get the point. To find it, you must reacquaint yourself with the lost art of surfing the web.
Surfing the web is very different from scrolling the FYP. You don’t often hear the words “mindful” and “internet” together, but surfing the web was an art of mindful consumption that doesn’t much exist today. Not to get all old-man-yells-at-cloud on you, but maybe we should bring it back?
Up next: The Lost Art of Surfing The Web (coming soon)
If you have ever found my writing valuable and you want to help me continue avoiding doing my laundry, you can buy me a coffee. It would mean a lot.
Sometimes, I identify who I’m writing for as a way to provide context. It’s like saying, “I’m using a lot of technical terms because I wrote this post for frontend developers,” or “Sorry if I’m getting too symmetrical, this one’s for my Wes Anderson fans.” But all are welcome, always. If you’re not in this article’s intended audience, but you find this article interesting, wonderful! Please stick around, read the post, and feel free to ask me questions.
I encourage you to leave me a comment on this post by emailing me. I’d love to hear from you!
I do not receive commission for anything I share, endorse, or discuss, anywhere on From Jason. I have no sponsorships, or advertiser agreements. If that ever changes, I will let you know.
...
Read the original on fromjason.xyz »
Thanks to everyone who sent in condolences on my recent death from prostate cancer at age 68, but that was Scott Adams. I (Scott Alexander) am still alive. Still, the condolences are appreciated. Scott Adams was a surprisingly big part of my life. I may be the only person to have read every Dilbert book before graduating elementary school. For some reason, 10-year-old-Scott found Adams’ stories of time-wasting meetings and pointy-haired bosses hilarious. No doubt some of the attraction came from a more-than-passing resemblance between Dilbert’s nameless corporation and the California public school system. We’re all inmates in prisons with different names. But it would be insufficiently ambitious to stop there. Adams’ comics were about the nerd experience. About being cleverer than everyone else, not just in the sense of being high IQ, but in the sense of being the only sane man in a crazy world where everyone else spends their days listening to overpaid consultants drone on about mission statements instead of doing anything useful. There’s an arc in Dilbert where the boss disappears for a few weeks and the engineers get to manage their own time. Productivity shoots up. Morale soars. They invent warp drives and time machines. Then the boss returns, and they’re back to being chronically behind schedule and over budget. This is the nerd outlook in a nutshell: if I ran the circus, there’d be some changes around here.

Yet the other half of the nerd experience is: for some reason this never works. Dilbert and his brilliant co-workers are stuck watching from their cubicles while their idiot boss rakes in bonuses and accolades. If humor, like religion, is an opiate of the masses, then Adams is masterfully unsubtle about what type of wound his art is trying to numb. This is the basic engine of Dilbert: everyone is rewarded in exact inverse proportion to their virtue. Dilbert and Alice are brilliant and hard-working, so they get crumbs. Wally is brilliant but lazy, so he at least enjoys a fool’s paradise of endless coffee and donuts while his co-workers clean up his messes. The P.H.B. is neither smart nor industrious, so he is forever on top, reaping the rewards of everyone else’s toil. Dogbert, an inveterate scammer with a passing resemblance to various trickster deities, makes out best of all. The repressed object at the bottom of the nerd subconscious, the thing too scary to view except through humor, is that you’re smarter than everyone else, but for some reason it isn’t working. Somehow all that stuff about small talk and sportsball and drinking makes them stronger than you. No equation can tell you why. Your best-laid plans turn to dust at a single glint of Chad’s perfectly-white teeth. Lesser lights may distance themselves from their art, but Adams radiated contempt for such surrender. He lived his whole life as a series of Dilbert strips. Gather them into one of his signature compendia, and the title would be Dilbert Achieves Self Awareness And Realizes That If He’s So Smart Then He Ought To Be Able To Become The Pointy-Haired Boss, Devotes His Whole Life To This Effort, Achieves About 50% Success, Ends Up In An Uncanny Valley Where He Has Neither The Virtues Of The Honest Engineer Nor Truly Those Of The Slick Consultant, Then Dies Of Cancer Right When His Character Arc Starts To Get Interesting.

If your reaction is “I would absolutely buy that book”, then keep reading, but expect some detours.

The niche that became Dilbert opened when Garfield first said “I hate Mondays”.
The quote became a popular sensation, inspiring t-shirts, coffee mugs, and even a hit single. But (as I’m hardly the first to point out) why should Garfield hate Mondays? He’s a cat! He doesn’t have to work!

In the 80s and 90s, saying that you hated your job was considered the height of humor. Drew Carey: “Oh, you hate your job? There’s a support group for that. It’s called everybody, and they meet at the bar.” This was merely the career subregion of the supercontinent of Boomer self-deprecating jokes, whose other prominences included “I overeat”, “My marriage is on the rocks”, “I have an alcohol problem”, and “My mental health is poor”.

Arguably this had something to do with the Bohemian turn, the reaction against the forced cheer of the 1950s middle-class establishment of company men who gave their all to faceless corporations and then dropped dead of heart attacks at 60. You could be that guy, proudly boasting to your date about how you traded your second-to-last patent artery to complete a spreadsheet that raised shareholder value 14%. Or you could be the guy who says “Oh yeah, I have a day job working for the Man, but fuck the rat race, my true passion is white water rafting”. When your father came home every day looking haggard and worn out but still praising his boss because “you’ve got to respect the company or they won’t take care of you”, being able to say “I hate Mondays” must have felt liberating, like the mantra of a free man.

This was the world of Dilbert’s rise. You’d put a Dilbert comic on your cubicle wall, and feel like you’d gotten away with something. If you were really clever, you’d put the Dilbert comic where Dilbert gets in trouble for putting a comic on his cubicle wall on your cubicle wall, and dare them to move against you.

But t-shirts saying “Working Hard . . . Or Hardly Working?” no longer hit as hard as they once did. Contra the usual story, Millennials are too earnest to tolerate the pleasant contradiction of saying they hate their job and then going in every day with a smile. They either have to genuinely hate their job - become some kind of dirtbag communist labor activist - or at least pretend to love it. The worm turns, all that is cringe becomes based once more and vice versa. Imagine that guy boasting to his date again. One says: “Oh yeah, I grudgingly clock in every day to give my eight hours to the rat race, but trust me, I’m secretly hating myself the whole time”? The other: “I work for a boutique solar energy startup that’s ending climate change - saving the environment is my passion!” Zoomers are worse still: not even the fig leaf of social good, just pure hustle.

Silicon Valley, where hustle culture has reached its apogee, has an additional consideration: why don’t you found a startup? If you’re so much smarter than your boss, why not compete against him directly? Scott Adams based Dilbert on his career at Pacific Bell in the 80s. Can you imagine quitting Pacific Bell in the 80s to, uh, found your own Pacific Bell?
To go to Michael Milken or whoever was investing back then, and say “Excuse me, may I have $10 billion to create my own version of Pacific Bell, only better?” But if someone were to try to be Dilbert today — to say, earnestly, “I hate my job because I am smarter than my boss and could do it better than him,” that would be the obvious next question, the same way “I am better at picking stocks than Wall Street” ought to be followed up with “Then why don’t you invest?”

Above, I described “the nerd experience” of “being smarter than everyone else, not just in the sense of being high IQ, but in the sense of being the only sane man in a crazy world where everyone else spends their days listening to overpaid consultants drone on about mission statements instead of doing anything useful.” You nodded along, because you knew the only possible conclusion to the arc suggested by that sentence was to tear it down, to launch a tirade about how that nerd is naive and narcissistic and probably somehow also a racist. In the year of our Lord 2026, of course that’s where I’m going. Dilbert is a relic of a simpler time, when the trope could be played straight. But it’s also an artifact of the transition, maybe even a driver of it. Scott Adams appreciated these considerations earlier and more acutely than anyone else. And they drove him nuts.

Adams knew, deep in his bones, that he was cleverer than other people. God always punishes this impulse, especially in nerds. His usual strategy is straightforward enough: let them reach the advanced physics classes, where there will always be someone smarter than them, then beat them on the head with their own intellectual inferiority so many times that they cry uncle and admit they’re nothing special.

For Adams, God took a more creative and — dare I say, crueler — route. He created him only-slightly-above-average at everything except for a world-historical, Mozart-tier, absolutely Leonardo-level skill at making silly comics about hating work.

Scott Adams never forgave this. Too self-aware to deny it, too narcissistic to accept it, he spent his life searching for a loophole. You can read his frustration in his book titles: How To Fail At Almost Everything And Still Win Big. Trapped In A Dilbert World. Stick To Drawing Comics, Monkey Brain. Still, he refused to stick to comics. For a moment in the late-90s, with books like The Dilbert Principle and The Dilbert Future, he seemed on his way to becoming a semi-serious business intellectual. He never quite made it, maybe because the Dilbert Principle wasn’t really what managers and consultants wanted to hear:

I wrote The Dilbert Principle around the concept that in many cases the least competent, least smart people are promoted, simply because they’re the ones you don’t want doing actual work. You want them ordering the doughnuts and yelling at people for not doing their assignments—you know, the easy work. Your heart surgeons and your computer programmers—your smart people—aren’t in management.

Okay, “I am cleverer than everyone else”, got it. His next venture (c. 1999) was the Dilberito, an attempt to revolutionize food via a Dilbert-themed burrito with the full Recommended Daily Allowance of twenty-three vitamins. I swear I am not making this up. A contemporaneous NYT review said it “could have been designed only by a food technologist or by someone who eats lunch without much thought to taste”.
The Onion, in its twenty-year retrospective for the doomed comestible, called it a frustrated groping towards meal replacements like Soylent or Huel, long before the existence of a culture nerdy enough to support them. Adams himself, looking back from several years’ distance, was even more scathing: “the mineral fortification was hard to disguise, and because of the veggie and legume content, three bites of the Dilberito made you fart so hard your intestines formed a tail.”

His second foray into the culinary world was a local restaurant called Stacey’s. The New York Times does a pitch-perfect job covering the results. Their article starts:

This is yet another story about a clueless but obtrusive boss — the kind of meddlesome manager you might laugh at in the panels of “Dilbert,” the daily comic strip.

…and continues through a description of Adams making every possible rookie mistake. As the restaurant does worse and worse, Adams becomes more and more convinced that he has to figure out some clever lifehack that will turn things around and revolutionize restaurants. First he comes up with a theory that light is the key to restauranting, and spends ages fiddling with the windows. When this fails, he devolves into an unmistakable sign of desperation - asking blog commenters for advice:

He also turned to Dilbert fans for suggestions on how to use the party room, in a posting on his blog titled “Oh Great Blog Brain.” The Dilbert faithful responded with more than 1,300 comments, mixing interesting ideas (interactive murder-mystery theater) with unlikely mischief (nude volleyball tournaments). Mr. Adams asked his employees to read the comments and is now slowly trying some of them.

But what makes this article truly perfect - I can’t believe it didn’t get a Pulitzer - is that it’s not some kind of hostile ambush profile. Adams is totally self-aware. He also finds the whole situation hilarious! Everyone involved is in on the joke! The waiters find it hilarious! After every workday, Adams and the waiters get together and laugh long into the night together about how bad a boss Adams is!

There’s a running joke about how if you see a business that loses millions yearly, it’s probably run by some banker’s wife who’s getting subsidized to feel good about herself and pretend she has a high-powered job. I think this is approximately what was going on with Stacey’s. Adams made enough money off Dilbert that he could indulge his fantasies of being something more than “the Dilbert guy”. For a moment, he could think of himself as a temporarily-embarrassed businessman, rather than just a fantastically successful humorist. The same probably explains his forays into television (“Dilbert: The Animated Series”), non-Dilbert comics (“Plop: The Hairless Elbonian”), and technology (”WhenHub”, his site offering “live chats with subject-matter experts”, which was shelved after he awkwardly tried to build publicity by suggesting that mass shooting witnesses could profit by using his site to tell their stories.)

Adams and Elon Musk occasionally talked about each other - usually to defend one another against media criticism of their respective racist rants - but I don’t know if they ever met. I wonder what it would have been like if they did. I imagine them coming together at some Bay Area house party on copious amounts of LSD or MDMA. One, the world’s greatest comic writer, who more than anything else wanted to succeed in business.
The other, the world’s greatest businessman, who more than anything else wanted people to think that he’s funny. Scott Adams couldn’t stop frittering his talent and fortune on doomed attempts to be taken seriously. But someday Elon Musk will buy America for $100 trillion, tell the UN that he’s renaming it “the United States of 420-69”, and the assembled ambassadors will be as silent as the grave. Are there psychic gains from trade to be had between two such people?

Michael Jordan was the world’s best basketball player, and insisted on testing himself against baseball, where he failed. Herbert Hoover was one of the world’s best businessmen, and insisted on testing himself against politics, where he crashed and burned. We’re all inmates in prisons of different names. Most of us accept it and get on with our lives. Adams couldn’t stop rattling the bars.

I’m No Scientist, But I Think Feng Shui Is Part Of The Answer

Having failed in his forays into business, Adams turned to religion. Not in the sense of seeking consolation through God’s love. In the sense of trying to show how clever he was by figuring out the true nature of the Divine.

The result was God’s Debris. This is a breathtakingly bad book. On some level, Adams (of course) seemed to realize this, but (of course) his self-awareness only made things worse. In the second-worst introduction to a work of spiritual wisdom I’ve ever read (Gurdjieff keeps first place by a hair), he explains that this is JUST A THOUGHT EXPERIMENT and IF YOU TAKE IT SERIOUSLY, YOU FAIL. But also, it really makes you think, and it’s going to blow your mind, and you’ll spend the rest of your life secretly wondering whether it was true, but it won’t be, because IT’S JUST A THOUGHT EXPERIMENT, and IF YOU TAKE IT SERIOUSLY, YOU FAIL. Later, in a Bloomberg interview, he would say that this book - and not Dilbert - would be his “ultimate legacy” to the world. But remember, IT’S JUST A THOUGHT EXPERIMENT, and IF YOU TAKE IT SERIOUSLY YOU FAIL.

I read it for the first time while researching this essay. The frame story is that a delivery boy gives a package to the wisest man in the universe, who invites him to stay a while and discuss philosophy (REMEMBER, IT’S JUST A WORK OF FICTION! THESE ARE ONLY CHARACTERS!) Their discussion is one-quarter classic philosophical problems that seemed deep when you were nineteen, presented with no reference to any previous work:

“There has to be a God,” I said. “Otherwise, none of us would be here.” It wasn’t much of a reason, but I figured he didn’t need more.

“Do you believe God is omnipotent and that people have free will?” he asked.

“If God is omnipotent, wouldn’t he know the future?”

“If God knows what the future holds, then all our choices are already made, aren’t they? Free will must be an illusion.”

He was clever, but I wasn’t going to fall for that trap. “God lets us determine the future ourselves, using our free will,” I explained.

“Then you believe God doesn’t know the future?”

“I guess not,” I admitted. “But he must prefer not knowing.”

There is an ongoing meta-discussion among philosophy discussers of how acceptable it is to propose your own answers to the great questions without having fully mastered previous scholarship.
On the one hand, philosophy is one of the most fundamental human activities, gating it behind the near-impossible task of having read every previous philosopher is elitist and gives self-appointed guardians of scholarship a permanent heckler’s veto on any new ideas, and it can create a culture so obsessed with citing every possible influence that eventually the part where you have an opinion withers away and philosophy becomes a meaningless ritual of presenting citations without conclusion. On the other hand, this book.

Another quarter is philosophical questions which did not seem deep, even when you were nineteen, and which nobody has ever done work on, because nobody except Scott Adams ever even thought they were worth considering:

“Well, usually it’s because some important religious event took place there.”

“What does it mean to say that something took place in a particular location when we know that the earth is constantly in motion, rotating on its axis and orbiting the sun? And we’re in a moving galaxy that is part of an expanding universe. Even if you had a spaceship and could fly anywhere, you can never return to the location of a past event. There would be no equivalent of the past location because location depends on your distance from other objects, and all objects in the universe would have moved considerably by then.”

“I see your point, but on Earth the holy places keep their relationship to other things on Earth, and those things don’t move much,” I said.

“Let’s say you dug up all the dirt and rocks and vegetation of a holy place and moved it someplace else, leaving nothing but a hole that is one mile deep in the original location. Would the holy land now be the new location where you put the dirt and rocks and vegetation, or the old location with the hole?”

“I think both would be considered holy,” I said, hedging my bets.

“Suppose you took only the very top layer of soil and vegetation from the holy place, the newer stuff that blew in or grew after the religious event occurred thousands of years ago. Would the place you dumped the topsoil and vegetation be holy?”

“That’s a little trickier,” I said. “I’ll say the new location isn’t holy because the topsoil that you moved there isn’t itself holy, it was only in contact with holy land. If holy land could turn anything that touched it into more holy land, then the whole planet would be holy.”

The old man smiled. “The concept of location is a useful delusion when applied to real estate ownership, or when giving someone directions to the store. But when it is viewed through the eyes of an omnipotent God, the concept of location is absurd. While we speak, nations are arming themselves to fight for control of lands they consider holy. They are trapped in the delusion that locations are real things, not just fictions of the mind. Many will die.”

Another quarter of the discussion is the most pusillanimous possible subjectivism, as if Robert Anton Wilson and the 2004 film What the #$*! Do We Know!? had a kid, then strangled it at birth until it came out brain damaged. We get passages like these:

“I am saying that UFOs, reincarnation, and God are all equal in terms of their reality.”

“Do you mean equally real or equally imaginary?”

“Your question reveals your bias for a binary world where everything is either real or imaginary. That distinction lies in your perceptions, not in the universe.
Your inability to see other possibilities and your lack of vocabulary are your brain’s limits, not the universe’s.”
“There has to be a difference between real and imagined things,” I countered. “My truck is real. The Easter Bunny is imagined. Those are different.”
“As you sit here, your truck exists for you only in your memory, a place in your mind. The Easter Bunny lives in the same place. They are equal.”

I remember the late ’90s and early ‘00s; I was (regrettably) there. For some reason, all this stuff was considered the height of wisdom back then. The actual Buddhist classics were hard to access, but everyone assumed that Buddhists were wise and they probably said, you know, something like this. If you said stuff like this, you could be wise too.

The final quarter of the book is a shockingly original take on the Lurianic kabbalah. I’m not pleased to report this, and Adams likely would have been very surprised to learn it. Still, the resemblance is unmistakable. The wisest man in the world, charged with answering all of the philosophical problems that bothered you when you were nineteen, tells the following story: if God exists, He must be perfect. Therefore, the only thing he lacks is nonexistence. Therefore, in order to fill that lack, He must destroy himself in order to create the universe. The universe is composed of the fragments of that destruction - the titular God’s Debris. Its point is to reassemble itself into God. Partially-reassembled-God is not yet fully conscious, but there is some sort of instinct within His fragments - ie within the universe - that is motivated to help orchestrate the self-reassembly, and it is this instinct which causes anti-entropic processes like evolution. Good things are good because they aid in the reassembly of God; bad things are bad because they hinder it.

Adams’ version adds several innovations to this basic story. Whatever parts of God aren’t involved in physical matter have become the laws of probability; this explains the otherwise inexplicable evolutionary coincidences that created humankind. There’s something about how gravity is produced by some sort of interference between different divine corpuscules - Adams admits that Einstein probably also had useful things to say about gravity, but probably his own version amounts to the same thing, and it’s easier to understand, and that makes it better (IT’S JUST A THOUGHT EXPERIMENT! IF YOU TAKE IT SERIOUSLY, YOU FAIL.) But my favorite part is the augmentation of Luria with Nick Land: the final (or one of the final) steps in the divine reassembly is the creation of the Internet, aka “God’s nervous system”, which will connect everything to everything else and give the whole system awareness of its divine purpose. I’m honestly impressed that a Gentile worked all of this out on his own.

Adams completes the performance by reinventing Kegan levels (this time I’m agnostic as to whether it’s convergent evolution or simple plagiarism), although characteristically it is in the most annoying way possible:

[The wise man] described what he called the five levels of awareness and said that all humans experience the first level of awareness at birth. That is when you first become aware that you exist.
In the second level of awareness you understand that other people exist. You believe most of what you are told by authority figures. You accept the belief system in which you are raised.
At the third level of awareness you recognize that humans are often wrong about the things they believe.
You feel that you might be wrong about some of your own beliefs but you don’t know which ones. Despite your doubts, you still find comfort in your beliefs.
The fourth level is skepticism. You believe the scientific method is the best measure of what is true and you believe you have a good working grasp of truth, thanks to science, your logic, and your senses. You are arrogant when it comes to dealing with people in levels two and three.
The fifth level of awareness is the Avatar. The Avatar understands that the mind is an illusion generator, not a window to reality. The Avatar recognizes science as a belief system, albeit a useful one. An Avatar is aware of God’s power as expressed in probability and the inevitable recombination of God consciousness.

I think going through every David Chapman essay and replacing the word “metarationality” with “THE AVATAR” would actually be very refreshing.

What are we to make of all of this?

Nothing is more American than inventing weird cringe fusions of religion and atheism where you say that God doesn’t exist as (gestures upward) some Big Man In The Sky the way those people believe, but also, there totally is a God, in some complicated sense which only I understand. When Thomas Jefferson cut all the passages with miracles out of his Bible, he was already standing on the shoulders of generations of Unitarians, Quakers, and Latitudinarians.

This was augmented by the vagaries of nerd culture’s intersection with the sci-fi fandom. The same people who wanted to read about spaceships and ray guns also wanted to read about psionics and Atlantis, so the smart sci-fi nerd consensus morphed into something like “probably all that unexplained stuff is real, but has a scientific explanation”. Telepathy is made up of quantum particles, or whatever (I talk about this more in my article on the Shaver Mystery). It became a nerd rite of passage to come up with your own theory that reconciled the spiritual and the material in the most creative way possible.

And the Nineties (God’s Debris was published in 2001) were a special time. The decade began with the peak of Wicca and neopaganism. Contra current ideological fault lines, where these tendencies bring up images of Etsy witches, they previously dominated nerd circles, including male nerds, techie nerds, and right-wing nerds (did you know Eric S. Raymond is neopagan?) By decade’s end, the cleverest (ie most annoying) nerds were switching to New Atheism; throughout, smaller groups were exploring Discordianism, chaos magick, and the Subgenius. The common thread was that Christianity had lost its hegemonic status, part of being a clever nerd was patting yourself on the back for having seen through it, but exactly what would replace it was still uncertain, and there was still enough piety in the water supply that people were uncomfortable forgetting about religion entirely. You either had to make a very conscious, marked choice to stop believing (New Atheism), or try your hand at the task of inventing some kind of softer middle ground (neopaganism, Eastern religion, various cults, whatever this book was supposed to be).

It’s Obvious You Won’t Survive By Your Wits Alone

Adams spent his life obsessed with self-help. Even more than a businessman or a prophet, he wanted to be a self-help guru. Of course he did.
His particular package of woo - a combination of hypnosis, persuasion hacks, and social skills advice - unified the two great motifs of his life.

Thesis: I am cleverer than everyone else.
Antithesis: I always lose to the Pointy-Haired Boss.
Synthesis: I was trying to be rational. But most people are irrational sheep; they can be directed only by charismatic manipulators who play on their biases, not by rational persuasion. But now I’m back to being cleverer than everyone else, because I noticed this. Also, I should become a charismatic manipulator.

I phrased this in a maximally hostile way, but it’s not wrong. And Adams started off strong. He read Dale Carnegie’s How To Win Friends And Influence People, widely agreed to be the classic book on social skills. Then, in search of even stronger persuasion techniques, he turned to hypnosis. This has a bad reputation, but I basically buy that something is there. Psychiatry has legends of psychotherapist-hypnotists who achieved amazing things, and there’s a plausible scientific story for why it might work. So when Adams claimed to be a master hypnotist, I was originally willing to give him the benefit of the doubt.

That lasted until I read The Religion War, Adams’ sequel to God’s Debris. In the intro, which may be literally the most annoying passage ever written in all two million years of human history, he discusses the reception of the original book:

This is a sequel to my book God’s Debris, a story about a deliveryman who chances upon the smartest person in the world and learns the secrets of reality. I subtitled that book A Thought Experiment and used a variety of hypnosis techniques in an attempt to produce a feeling of euphoric enlightenment in the reader similar to what the main character would feel while discovering the (fictionally) true nature of reality. Reactions to the book were all over the map. About half of the people who e-mailed me said they felt various flavors of euphoria, expanded awareness, connectedness, and other weird sensations that defied description. A surprising number of people reported reading the entire book twice in one day. So I know something was happening.
Other people wrote angry letters and scathing reviews, pointing out the logical and factual flaws in the book. It is full of flaws, and much of the science is made up, as it states in the introduction. I explained that the reader is supposed to be looking for flaws. That’s what makes the experiment work. You might think this group of readers skipped the introduction and missed the stated point of the book, but I suspect that something else is going on. People get a kind of cognitive dissonance (brain cramp) when their worldview is disturbed. It’s fun to watch.

I previously felt bad for writing this essay after Adams’ death; it seems kind of unsporting to disagree with someone who can’t respond. These paragraphs cured me of my misgivings: after his death is by far the best time to disagree with Scott Adams.

The book is a novel (a real novel this time, with plot and everything) meant to dramatize the lessons of its predecessor. In the near future, the Muslims and Christians are on the verge of global war. Adams’ self-insert character, the Avatar, goes around hypnotizing and mind hacking everyone into cooperating with his hare-brained scheme for world peace. In an early chapter, the Christian alliance has captured the Avatar and sent him to be tortured.
But the Avatar masterfully deflects the torturer’s attention with a bit of cold reading, some pointed questions, and a few hypnotic suggestions:

As the Avatar planned, the interrogator’s conscious mind was scrambled by the emotions and thoughts of the past minutes. This brutish man, accustomed to avoiding deep thoughts, had imagined the tiniest particles of the universe, his childhood, and the battles of the future. He had laughed, felt pain and pity, been intellectually stimulated, confused, assured, and uncertain. The Avatar had challenged his worldview, and it was evaporating, leaving him feeling empty, unimportant, and purposeless.

In the thrilling climax, which takes place at Stacey’s Cafe (yes, it’s the real-world restaurant Adams was managing - yes, he turned his religious-apocalyptic thriller novel into an ad for his restaurant - yes, I bet he thought of this as a “hypnotic suggestion”), the characters find the Prime Influencer. She is able to come up with a short snappy slogan so memetically powerful that it defeats fundamentalist religion and ends the war (the slogan is: “If God is so smart, why do you fart?”). Adams’ mouthpiece character says:

It wasn’t the wisdom of the question that made it so powerful; philosophers had posed better questions for aeons. It was the packaging—the marketing, if you will—the repeatability and simplicity, the timing, the Zeitgeist, and in the end, the fact that everyone eventually heard it from someone whose opinion they trusted.
The question was short, provocative, and cast in the language of international commerce that almost everyone understood—English. Most important, and generally overlooked by historians: It rhymed and it was funny. Once you heard it, you could never forget it. It looped in the brain, gaining the weight and feel of truth with each repetition. Human brains have a limited capacity for logic and evidence. Throughout time, repetition and frequency were how people decided what was most true.

This paragraph is the absolute center of Adams’ worldview (later expanded to book length several times in tomes named things like Win Bigly: Persuasion In A World Where Facts Don’t Matter). People don’t respond to logic and evidence, so the world is ruled by people who are good at making catchy slogans. Sufficiently advanced sloganeering is indistinguishable from hypnosis, and so when Adams has some cute turns of phrase in his previous book, he describes it as “[I] used a variety of hypnosis techniques in an attempt to produce a feeling of euphoric enlightenment in the reader”. This is the cringiest way possible to describe cute turns of phrase, and turns me off from believing any of his further claims to hypnotic mastery.

Throughout this piece, I’ve tried to emphasize that Adams was usually pretty self-aware. Did that include the hypnosis stuff? I’m not sure. I think he would have answered: certainly some people are great charismatic manipulators. Either their skills are magic, or they operate by some physical law. If they operate by physical law, they should be learnable. Maybe I’m not quite Steve Jobs level yet, but I have to be somewhere along the path to becoming Steve Jobs, right? And why not describe it in impressive terms?
Steve Jobs would have come up with impressive-sounding terms for any skills he had, and you would have believed him!

Every few months, some group of bright nerds in San Francisco has the same idea: we’ll use our intelligence to hack ourselves to become hot and hard-working and charismatic and persuasive, then reap the benefits of all those things! This is such a seductive idea, there’s no reason whatsoever that it shouldn’t work, and every yoga studio and therapist’s office in the Bay Area has a little shed in the back where they keep the skulls of the last ten thousand bright nerds who tried this. I can’t explain why it so invariably goes wrong. The best I can do is tell a story where, when you’re trying to do this, you’re selecting for either techniques that can change you, or techniques that can compellingly make you think you’ve been changed. The latter are much more common than the former. And the most successful parasites are always those which can alter their host environment to be more amenable to themselves, and if you’re a parasite taking the form of a bad idea, that means hijacking your host’s rationality. So you’re really selecting for things that are compelling, seductive, and damage your ability to tell good ideas from bad ones. This is a just-so story that I have no evidence for - but seriously, go to someone who has the words “human potential” on their business card and ask them if you can see the skull shed.

But also: it’s attractive to be an effortlessly confident alpha male who oozes masculinity. And it’s . . . fine . . . to be a normal person with normal-person hangups. What you really don’t want to be is a normal person who is unconvincingly pretending to be a confident alpha male. “Oh hello, nice to meet you, I came here in my Ferrari, it’s definitely not a rental, you’re having the pasta - I’m choosing it for you because I’m so dominant - anyway, do you want to have sex when we get back? Oh, wait, I forgot to neg you, nice hair, is it fake?” In theory, becoming a hot charismatic person with great social skills ought to be the same kind of task as everything else, where you practice a little and you’re bad, but then you practice more and you become good. But the uncanny valley is deep and wide, and Scott Adams was too invested in saying “Ha! I just hypnotized you - ha! There, did it again!” for me to trust his mountaineering skills.

It all led, inexorably, to Trump.

In summer 2015, Trump came down his escalator and announced his presidential candidacy. Given his comic status, his beyond-the-pale views, and his competition with a crowded field including Jeb Bush and Ted Cruz, traditional media wrote him off. Sure, he immediately led in the polls, but political history was full of weirdos who got brief poll bumps eighteen months before an election only to burn out later. The prediction markets listed his chance of the nomination (not the Presidency!) at 5%.

Which made it especially jarring when, in August, Scott Adams wrote a blog post asserting that Trump had “a 98% chance” of winning. This claim received national attention, because Trump was dominating the news cycle and Adams was approximately the only person, anywhere, who thought he had a chance. There are two ways to make historically good predictions. The first way is to be some kind of brilliant superforecaster. Adams wasn’t this. Every big prediction he made after this one failed.
Wikipedia notes that he dominated a Politico feature called “The Absolute Worst Political Prediction of 20XX”, with the authors even remarking that he “has managed to appear on this annual roundup of the worst predictions in politics more than any other person on the planet”. His most famous howler was that if Biden won in 2020, Republicans “would be hunted” and his Republican readers would “most likely be dead within a year”. But other highlights include “a major presidential candidate will die of COVID”, “the Supreme Court will overturn the 2024 election”, and “Hillary Clinton will start a race war”.

The other way to make a great prediction is to live your entire life for one perfect moment - the inveterate bear who predicted twelve of the last zero recessions, but now it’s 2008 and you look like a genius. By 2015, Adams had become a broken record around one point: people are irrational sheep who are prey for charismatic manipulators. The pointy-haired boss always wins. Trump was the pointiest-haired person in the vicinity, and he was obviously trying to charismatically play on people’s instincts while other people were doing comparatively normal politics. Scott Adams’ hour had arrived.

But Adams also handled his time in the spotlight masterfully. He gave us terms like “clown genius”. I hate using this, because I know Scott Adams was sitting at his desk in his custom-built Dilbert-head-shaped tower thinking “What sort of hypnotic catchy slogans can I use to make my meme about Trump spread . . . aha! Clown genius! That has exactly the right ring!” and it absolutely worked, and now everyone who was following the Internet in 2015 has the phrase “clown genius” etched into their brains (Adams calls these “linguistic kill shots”; since I remember that term and use it often, I suppose “linguistic kill shot” is an example of itself). He went from news outlet to news outlet saying “As a trained hypnotist, I can tell you what tricks Trump is using to bamboozle his followers, given that rational persuasion is fake and marketing techniques alone turn the wheels of history,” and the news outlets ate it up.

You probably thought I was making up the part where Scott Adams has a custom-built tower shaped like Dilbert’s head.

And some of his commentary was good. He was one of the first people to point out the classic Trump overreach, where he would say something like “Sleepy Joe Biden let in twenty trillion illegal immigrants!” The liberal media would take the bait and say “FACT CHECK: False! - Joe Biden only let in five million illegal immigrants!”, and thousands of people who had never previously been exposed to any narrative-threatening information would think “Wait, Joe Biden let in five million illegal immigrants?!” Once you notice it, it’s hard to unsee.

Adams started out by stressing that he was politically independent. He didn’t support Trump, he was just the outside hypnosis expert pointing out what Trump was doing. IT’S JUST A THOUGHT EXPERIMENT, IF YOU TAKE IT SERIOUSLY, YOU FAIL. Indeed, “this person is a charismatic manipulator hacking the minds of irrational sheep” is hardly a pro-Trump take. And he lived in Pleasanton, California - a member in good standing of the San Francisco metropolitan area - and nice Pleasantonians simply did not become Trump supporters in 2016.

On the other hand, at some point, his increasingly overblown theories of Trump’s greatness opened up a little wedge. The growing MAGA movement started treating him as one of their own; liberals started to see him as an enemy.
His fame turned the All-Seeing Eye of social media upon him, that gaze which no man may meet without consequence. Once you’re sufficiently prominent, politics becomes a separating equilibrium; if you lean even slightly to one side, the other will pile on you so massively and traumatically that it will force you into their opponents’ open arms just for a shred of psychological security.

As he had done so many other times during his life, he resolved the conflict in the dumbest, cringiest, and most public way possible: a June 2016 blog post announcing that he was endorsing Hillary Clinton, for his own safety, because he suspected he would be targeted for assassination if he didn’t:

This past week we saw Clinton pair the idea of President Trump with nuclear disaster, racism, Hitler, the Holocaust, and whatever else makes you tremble in fear. That is good persuasion if you can pull it off because fear is a strong motivator. It is also a sharp pivot from Clinton’s prior approach of talking about her mastery of policy details, her experience, and her gender. Trump took her so-called “woman card” and turned it into a liability. So Clinton wisely pivoted. Her new scare tactics are solid-gold persuasion. I wouldn’t be surprised if you see Clinton’s numbers versus Trump improve in June, at least temporarily, until Trump finds a counter-move.
The only downside I can see to the new approach is that it is likely to trigger a race war in the United States. And I would be a top-ten assassination target in that scenario […]
So I’ve decided to endorse Hillary Clinton for President, for my personal safety. Trump supporters don’t have any bad feelings about patriotic Americans such as myself, so I’ll be safe from that crowd. But Clinton supporters have convinced me — and here I am being 100% serious — that my safety is at risk if I am seen as supportive of Trump. So I’m taking the safe way out and endorsing Hillary Clinton for president.
As I have often said, I have no psychic powers and I don’t know which candidate would be the best president. But I do know which outcome is most likely to get me killed by my fellow citizens. So for safety reason, I’m on team Clinton.
My prediction remains that Trump will win in a landslide based on his superior persuasion skills. But don’t blame me for anything President Trump does in office because I endorse Clinton.

This somehow failed to be a masterstroke of hypnotic manipulation that left both sides placated. But it was fine, because Trump won anyway! In the New Right’s wave of exultation, all was forgiven, and the first high-profile figure to bet on Trump became a local hero and confirmed prophet. Never mind that Adams had predicted Trump would win by “one of the biggest margins we’ve seen in recent history” when in fact he lost the popular vote. The man who had dreamed all his life of being respected for something other than cartooning had finally made it.

Obviously, it destroyed him.

At first, I wondered if Adams’ right-wing turn was a calculated maneuver. He’d always longed to be a manipulator of lesser humans, and had finally achieved slightly-above-zero skill at it. Wouldn’t it fit his personality to see the right-wingers as dumb sheep, and himself as the clever Dogbert-style scammer who could profit off them? Did he really believe (as he claimed) that he was at risk of being assassinated by left-wing radicals who couldn’t handle his level of insight into Trump’s genius?
Or was this just another hypnotic suggestion, retrospectively justified insofar as we’re still talking about it ten years later and all publicity is good publicity?

But I don’t think he did it cynically. At the turn of the millennium, the obsessed-with-their-own-cleverness demographic leaned firmly liberal: smug New Atheists, hardline skeptics, members of the “reality-based community”. But in the 2010s, liberalism became the default, the public switched to expertolatry, dumb people’s orthodoxies about race and gender became easier and more fun to puncture than dumb people’s orthodoxies about religion - and the O.W.T.O.C.s lurched right. Adams was borne along by the tide. With enough time, dedication, and archive access, you can hop from Dilbert comic to Dilbert comic, tracing the exact contours of his political journey.

There’s a passage in the intro to one of Adams’ books where he says that, given how he’s going to blow your mind and totally puncture everything you previously believed, perhaps the work is unsuitable for people above fifty-five, whose brains are comparatively sclerotic and might shatter at the strain. This is how I feel about post-2016 politics. Young people were mostly able to weather the damage. As for older people, I have seen public intellectual after public intellectual who I previously respected have their brains turn to puddles of partisan-flavored mush. Jordan Peterson, Ken White, Curtis Yarvin, Paul Krugman, Elon Musk, the Weinsteins, [various people close enough to me that it would be impolite to name them here]. Once, these people were lions of insightful debate. Where now are the horse and the rider? Where is the horn that was blowing? Adams was 58 when Trump changed everything. In 2001, age 44, he’d found the failure of his Dilberito funny. But in another interview, at age 50, he suggested that maybe his competitors had formed teams to sneak into supermarkets and hide them in the back of the shelves. Being tragically flawed yet also self-aware enough to laugh about it is a young man’s game. In 2024, diagnosed with terminal cancer, Adams decided to treat it via ivermectin, according to a protocol recommended by fellow right-wing contrarian Dr. William Makis. This doesn’t seem to me like a story about a cynic milking right-wingers for the grift. It sounds like a true believer. Scott Adams, the man too clever and independent to join any political tendency, who had sworn to always be the master manipulator standing above the fray rather than a sheep with ordinary object-level opinions - had finally succumbed to sincere belief.

It’s Not Funny If I Have To Explain It

Every child is hypomanic, convinced of their own specialness. Even most teenagers still suspect that, if everything went right, they could change the world.

It’s not just nerds. Everyone has to crash into reality. The guitar player who starts a garage band in order to become a rockstar. The varsity athlete who wants to make the big leagues. They all eventually realize, no, I’m mediocre. Even the ones who aren’t mediocre, the ones with some special talent, only have one special talent (let’s say cartooning) and no more.

I don’t know how the musicians and athletes cope. I hear stories about washed-up alcoholic former high school quarterbacks forever telling their girlfriends about how if Coach had only put them in for the last quarter during the big game, things would have gone differently.
But since most writers are nerds, it’s the nerds who dominate the discussion, so much so that the whole affair gets dubbed “Former Gifted Kid Syndrome”.

Every nerd who was the smartest kid in their high school goes to an appropriately-ranked college and realizes they’re nothing special. But also, once they go into some specific field they find that intellect, as versatile as it is, can only take them so far. And for someone who was told their whole childhood that they were going to cure cancer (alas, a real quote from my elementary school teacher), it’s a tough pill to swallow.

Reaction formation, where you replace an unbearable feeling with its exact opposite, is one of the all-time great Freudian defense mechanisms. You may remember it from such classics as “rape victims fall in love with their rapist” or “secretly gay people become really homophobic”. So some percent of washed-up gifted kids compensate by really, really hating nerdiness, rationality, and the intellect.

The varieties of self-hating nerd are too many to number. There are the nerds who go into psychology to prove that EQ is a real thing and IQ merely its pale pathetic shadow. There are the nerds who become super-woke and talk about how reason and objectivity are forms of white supremacy culture. There are the nerds who obsess over “embodiment” and “somatic therapy” and accuse everyone else of “living in their heads”. There are the nerds who deflect by becoming really into neurodiversity - “the interesting thing about my brain isn’t that I’m ‘smart’ or ‘rational’, it’s that I’m ADHDtistic, which is actually a weakness . . . but also secretly a strength!” There are the nerds who flirt with fascism because it idolizes men of action, and the nerds who convert to Christianity because it idolizes men of faith. There are the nerds who get really into Seeing Like A State, and how being into rationality and metrics and numbers is soooooo High Modernist, but as a Kegan Level Five Avatar they are far beyond such petty concerns. There are the nerds who redefine “nerd” as “person who likes Marvel movies” - having successfully gerrymandered themselves outside the category, they can go back to their impeccably-accurate statisticsblogging on educational outcomes, or their deep dives into anthropology and medieval mysticism, all while casting about them imprecations that of course nerds are loathsome scum who deserve to be bullied.

(maybe it’s unfair to attribute this to self-hatred per se. Adams wrote, not unfairly, that the scientismists in Kegan level 4 “are arrogant when it comes to dealing with people in levels two and three.” Maybe there’s the same desperate urge for level 5 to differentiate themselves from 4s - cf. barberpole theory of fashion).

Scott Adams felt the contradictions of nerd-dom more acutely than most. As compensation, he was gifted with two great defense mechanisms. The first was humor (which Freud grouped among the mature, adaptive defenses), aided by its handmaiden self-awareness. The second (from Freud’s “neurotic” category) was his own particular variety of reaction formation, “I’m better than those other nerds because, while they foolishly worship rationality and the intellect, I’ve gotten past it to the real deal, marketing / manipulation / persuasion / hypnosis.” When he was young, and his mind supple, he was able to balance both these mechanisms; the steam of their dissonance drove the turbine of his art.
As he grew older, the first one - especially the self-awareness - started to fail, and he leaned increasingly heavily on the second. Forced to bear the entire weight of his wounded psyche, it started showing more and more cracks, until eventually he ended up as a podcaster - the surest sign of a deranged mind.

In comparison, his final downfall was almost trivial - a bog-standard cancellation, indistinguishable from every other cancellation of the 2015-2025 period. Angered by a poll where some black people expressed discomfort with the slogan “It’s Okay To Be White”, Adams declared that “the best advice I would give to white people is to get the hell away from black people; just get the fuck away”. Needless to say, his publisher, syndicator, and basically every newspaper in the country dropped him immediately. He relaunched his comics on Locals, an online subscription platform for cancelled people, but his reach had declined by two orders of magnitude and never recovered.

Adams was willing to sacrifice everything for the right to say “It’s Okay To Be White”. I can’t help wondering what his life would have been like if he’d been equally willing to assert the okayness of the rest of his identity.

Dilbert’s Guide to the Rest of Your Life

Partly this is because we’re too similar for me to hate him without hating myself. You’re a bald guy with glasses named Scott A who lives in the San Francisco Bay Area. You think you’re pretty clever, but the world has a way of reminding you of your limitations. You try to work a normal job. You do a little funny writing on the side. People like the funny writing more than you expected. Hardly believing your luck, you quit to do the funny writing full time. You explore themes about the irrationality of the world. You have some crazy ideas you’re not entirely willing to stand behind, and present them as fiction or speculation or April Fools jokes. You always wonder whether your purpose in life is really just funny writing - not because people don’t love the stuff you write, not even because you don’t get fan mail saying you somehow mysteriously changed people’s lives, but just because it seems less serious than being a titan of industry or something. You try some other things. They don’t go terribly, but they don’t go great either. You decide to stick with what you’re good at. You write a book about the Lurianic kabbalah. You get really into whale puns.

As we pass through life, sometimes God shows us doppelgangers, bright or dark mirrors of ourselves, glimpses of how we might turn out if we zig or zag on the path ahead. Some of these people are meant as shining inspirations, others as terrible warnings, but they’re all our teachers.

Adams was my teacher in a more literal way too. He published several annotated collections, books where he would present comics along with an explanation of exactly what he was doing in each place, why some things were funny and others weren’t, and how you could one day be as funny as him. Ten year old Scott devoured these. I’ve always tried to hide my power level as a humorist, lest I get pegged as a comedic author and people stop taking me seriously. But objectively my joke posts get the most likes and retweets of anything I write, and I owe much of my skill in the genre to cramming Adams’ advice into a malleable immature brain. There’s a direct line between Dogbert’s crazy schemes and the startup ideas in a typical Bay Area House Party post.

The Talmud tells the story of the death of Rabbi Elisha. Elisha was an evil apostate.
His former student, Rabbi Meir, who stayed good and orthodox, insisted that Rabbi Elisha probably went to Heaven. This was never very plausible, and God sent increasingly obvious signs to the contrary, including a booming voice from Heaven saying that Elisha was not saved. Out of loyalty to his ex-teacher, Meir dismissed them all - that voice was probably just some kind of 4D chess move - and insisted that Elisha had a share in the World To Come.

Out of the same doomed loyalty as Rabbi Meir, I want to believe Scott Adams went to Heaven. There is what at first appears to be promising evidence - in his final message to his fans, Adams said:

Many Christian friends have asked me to find Jesus before I go. I’m not a believer, but I have to admit the risk-reward calculation for doing so looks attractive. So here I go: I accept Jesus Christ as my lord and savior, and I look forward to spending an eternity with him. The part about me not being a believer should be quickly resolved if I wake up in heaven. I won’t need any more convincing than that. And I hope I am still qualified for entry.

It is a dogma of many religions that sincere deathbed conversions are accepted. But I’d be more comfortable if this sounded less like “haha, I found my final clever lifehack”. I can only hope he didn’t try to implant any hypnotic suggestions in an attempt to get a linguistic kill shot in on the Almighty. As another self-hating nerd writer put it, “through all these years I make experiment if my sins or Your mercy greater be.”

But I’m more encouraged by the second half of his departing note:

For the first part of my life, I was focused on making myself a worthy husband and parent, as a way to find meaning. That worked. But marriages don’t always last forever, and mine eventually ended, in a highly amicable way. I’m grateful for those years and for the people I came to call my family.
Once the marriage unwound, I needed a new focus. A new meaning. And so I donated myself to “the world,” literally speaking the words out loud in my otherwise silent home. From that point on, I looked for ways I could add the most to people’s lives, one way or another.
That marked the start of my evolution from Dilbert cartoonist to an author of - what I hoped would be - useful books. By then, I believed I had condensed enough life lessons that I could start passing them on. I continued making Dilbert comics, of course.
As luck would have it, I’m a good writer. My first book in the “useful” genre was How to Fail at Almost Everything and Still Win Big. That book turned out to be a huge success, often imitated, and influencing a wide variety of people. I still hear every day how much that book changed lives. My plan to be useful was working.
I followed up with my book Win Bigly, that trained an army of citizens how to be more persuasive, which they correctly saw as a minor super power. I know that book changed lives because I hear it often.
You’ll probably never know the impact the book had on the world, but I know, and it pleases me while giving me a sense of meaning that is impossible to describe.
My next book, Loserthink, tried to teach people how to think better, especially if they were displaying their thinking on social media. That one didn’t put much of a dent in the universe, but I tried.
Finally, my book Reframe Your Brain taught readers how to program their own thoughts to make their personal and professional lives better. I was surprised and delighted at how much positive impact that book is having.
I also started podcasting a live show called Coffee With Scott Adams, dedicated to helping people think about the world, and their lives, in a more productive way. I didn’t plan it this way, but it ended up helping lots of lonely people find a community that made them feel less lonely. Again, that had great meaning for me.
I had an amazing life. I gave it everything I had. If you got any benefits from my work, I’m asking you to pay it forward as best you can. That is the legacy I want.
And please know I loved you all to the end.

I had been vaguely aware that he had some community around him, but on the event of his death, I tried watching an episode or two of his show. I couldn’t entirely follow, but I think his various sub-shows are getting rolled into a broader brand, The Scott Adams School, where his acolytes discuss and teach his theory of persuasion:

The woman on the top left is his ex-wife. Even though they’ve been divorced for twelve years, they never abandoned each other. All the other faces are people who found Adams revelatory and are choosing to continue his intellectual tradition. And in the comments - thirteen thousand of them - are other people who loved Adams. Some watch every episode of his podcast and consider him a genius. Others were touched in more subtle ways. People who wrote him with their problems and he responded. People who met him on the street and demanded the typical famous person “pose for a photo with me”, and he did so graciously. People who said his self-help books really helped them. People who just used Dilbert to stay sane through their cubicle jobs.

Adams is easy and fun to mock - as is everyone who lives their life uniquely and unapologetically. I’ve had a good time psychoanalyzing him, but everyone does whatever they do for psychological reasons, and some people end up doing good. Though I can’t endorse either Adams’ politics or his persuasive methods, everything is a combination of itself and an attempt to build a community. And whatever the value of his ideas, the community seems real and loving.

And I’m serious when I say I consider Adams a teacher. For me, he was the sort of teacher who shows you what to avoid; for many others, he was the type who serves as inspiration. These roles aren’t quite opposites - they’re both downstream of a man who blazed his own path, and who recorded every step he took, with unusual grace and humor, as documentation for those who would face a choice of whether or not to follow. This wasn’t a coincidence, but the conscious and worthy project of his life. Just for today, I’ll consider myself part of the same student body as all the other Adams fans, and join my fellows in tribute to our fallen instructor. I hope he gets his linguistic kill shot in on God and squeaks through the Pearly Gates.
...
Read the original on www.astralcodexten.com »
Solar didn’t just show up in 2025 — it carried the grid. A new analysis from global energy think tank Ember shows that solar power accounted for 61% of the growth in US electricity demand last year, highlighting how central solar has become as power demand accelerates.
US electricity demand jumped by 135 terawatt-hours (TWh) in 2025, a 3.1% increase, the fourth‑largest annual rise of the past decade. Over that same period, solar generation grew by a record 83 TWh — a 27% increase from 2024 and the biggest absolute gain of any power source. That single jump in solar output covered 61% of all new electricity demand nationwide.
“Solar growth was essential in helping to meet fast‑rising US electricity demand in 2025,” said Dave Jones, chief analyst at Ember. “It generated where it was needed, and — with the surge in batteries — increasingly when it was needed.”
Texas, the Midwest, and the Mid‑Atlantic saw the largest increases in solar generation last year, and they were also the regions where electricity demand rose the fastest. Solar met 81% of demand growth in both Texas and the Midwest, and 33% in the Mid‑Atlantic.
Timing mattered, too. In aggregate, the increase in solar generation met the entire rise in US electricity demand during daytime hours between 10 am and 6 pm Eastern. And as a result of the rapid buildout of battery storage, solar also helped cover some of the demand growth during the evening hours, from 6 pm to 2 am.
The adoption of battery storage is turning solar from cheap daytime power into something far more flexible. Over the past six years, California’s utility‑scale solar and battery generation has climbed 58%. Yet, output at the sunniest hour of the day has increased by just 8%, a sign that more energy is being stored and used later, rather than dumped onto the grid all at once.
Most of the new solar generation in 2025 was absorbed by rising electricity demand, allowing solar to scale alongside overall grid growth.
“Solar has the potential to meet all the rise in electricity demand and much more. With electricity demand surging, the case to build solar has never been stronger,” said Jones.
Read more: EIA: All net new generating capacity in 2026 may be renewables
...
Read the original on electrek.co »
If you have heard of the Handmade community, you likely think we are about “low-level programming” in some way. After all, we are a community inspired by Handmade Hero, a series where you learn to make a game and engine from scratch.
We in the Handmade community often bemoan the state of the software industry. Modern software is slow and bloated beyond belief—our computers are literally ten times more powerful than a decade ago, yet they run worse than they used to, purely because the software is so bad. The actual user experience has steadily declined over the years despite the insane power at our fingertips. Worst of all, people’s expectations have hit rock bottom, and everyone thinks this is normal.
The Handmade crowd seems to think that low-level programming is the key to building better software. But this doesn’t really make sense on the surface. How is this practical for the average programmer? Do we really expect everyone to make their own UI frameworks and memory allocators from scratch? Do we really think you should never use libraries? Even if the average programmer could actually work that way, would anything actually improve, or would the world of software just become more fragmented?
I do believe, with all my heart, that low-level programming is the path to a better future for the software industry. But the previous criticisms are valid, and should be a serious concern for the Handmade programmer. So what is the connection here? What role does “low-level” play in a better future for software?
Simone Giertz wanted a Tesla pickup truck, but the Cybertruck was still just a rumor, and she was feeling impatient. So she did what any reasonable person would do, and decided to convert a Tesla Model 3 into a pickup truck.
The results speak for themselves. Truckla looks amazing, drives perfectly, and still functions as a modern EV. This is no small feat—obviously you cannot just cut the roof off a sedan and call it a pickup truck. She and her team had to ensure that the car was structurally sound, that it could still charge, and that the software still worked as intended. Truckla is an impressive feat of engineering that took genuine creativity and craftsmanship.
And yet, Truckla is still a pretty bad pickup truck! The bed size is small, it can’t haul much weight, and it’s likely much less efficient than a truck engineered from the ground up. If you were in the market for a pickup truck, you would not buy Truckla! (You probably wouldn’t buy a Cybertruck either, but I digress.)
Truckla is an excellent execution of a flawed idea. If you want to build a good pickup truck, you have to start with the frame.
In the world of software, the equivalent of the “frame” is the tech stack. Software is shaped by programming languages, frameworks, libraries, and platforms in the same way that a car is shaped by its frame. If you convert a sedan into a truck, you will get a bad truck, and if you start with the wrong stack, you will get bad software. No engineering effort will be able to save you.
As an example, let’s look at a program that everyone has interacted with at some point.
This is New Reddit. It is a new frontend they rolled out roughly a decade ago, and it is…not well-loved. Because so many people hate it, Old Reddit is still online, and this gives us a unique opportunity to compare two functionally identical pieces of software made a decade apart.
Back in 2023, I was experiencing horrible lag on New Reddit. The comment editor was sluggish, the UI was slow to expand and collapse, and even hovering over a tooltip would cause a full-page hitch—all typical of modern software. Old Reddit, on the other hand, was a breath of fresh air—everything responded instantly. Aside from outdated aesthetics, Old Reddit was better in every way.
So here’s a thought experiment: How much work should it take to collapse a single comment?
This is a pretty easy question. All that needs to happen—all that should happen—is to hide or remove a few DOM elements, and update some text to say “collapsed”. A well-written Reddit frontend should more or less do exactly this. But let’s see what New Reddit did:
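To make that concrete, here is a minimal sketch of the direct approach in TypeScript. The element classes and markup are my own assumptions for illustration, not Reddit's actual structure:

    // Collapse one comment: hide its body and replies, relabel the toggle.
    // The ".body", ".replies", and ".toggle" selectors are hypothetical.
    function collapseComment(comment: HTMLElement): void {
      for (const el of comment.querySelectorAll<HTMLElement>(".body, .replies")) {
        el.style.display = "none"; // a couple of DOM writes, nothing more
      }
      const toggle = comment.querySelector<HTMLElement>(".toggle");
      if (toggle) toggle.textContent = "[+] collapsed";
    }

A handful of DOM writes like these should complete in well under a millisecond; everything beyond that is overhead.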
Gross. Call stacks thirty functions deep, layout computation in the middle of rendering, some kind of event or animation framework, and…hold on, is that jQuery?
My mistake, that’s actually a profile of Old Reddit. Here’s New Reddit:
At the time, it took New Reddit almost 200 milliseconds to collapse a single comment. That is 200 milliseconds of pure JavaScript, with hardly any DOM work in sight. If you care about quality software, your jaw should be on the floor. It is a staggering amount of waste for what should have been a few DOM calls. And you feel it as a user: an ugly, intense hitch.
Old Reddit, on the other hand, did its work in about 10 milliseconds. That could be improved, but 10 milliseconds is totally fine. It feels responsive and keeps the site running at 60 frames per second. So Old Reddit is the clear winner here, with a UI 20 times faster than New Reddit.
So, we must pick up our jaws off the floor and ask the question: How on earth did we get here? Were New Reddit’s devs just stupid, lazy JS fanboys who would rather build Rube Goldberg machines than do their jobs?
Maybe tbh. But laziness alone doesn’t tell the whole story. The real problem with New Reddit was the stack it was built on.
So what was the Reddit stack? Back in 2023, New Reddit was a React app with Redux for state management. (These days they seem to have rewritten it in Web Components.) React and Redux of course sit atop the web platform: HTML, CSS, and JavaScript. This platform is implemented by some browser engine, which then runs on some operating system, and finally on the user’s physical hardware (which is itself extremely complicated, but we have to stop somewhere).
At my last job, I worked on an application that used precisely this same stack. Our application was an employee scheduling program that allowed managers to create weekly schedules for hourly workers. In about 2016 we replaced our aging Backbone.js frontend with a new one written in React and Redux, presumably because it was a popular choice at the time.
As a result, I became intimately familiar with how a React+Redux app is constructed. I also spent a lot of time trying to improve the app’s abysmal performance. I lived inside the Chrome and React profilers, diligently tracking down slow functions and suppressing unnecessary React updates. We had a whole caching system for our Redux selectors, and I added logging to help us find selectors with a high cache miss rate. I built scripts to parse our source code and make graphs of our selector dependencies, so I could find places to split the app bundle into smaller pieces. Unfortunately, none of my work made much of a difference—performance continued to plummet as the app increased in complexity.
When you try to make a fast React+Redux app, you are constantly fighting the frameworks. These two libraries constantly do unnecessary work, and your job is to suppress that work until things run acceptably again. But sometimes the cure is worse than the poison: an expensive shouldComponentUpdate versus an expensive React re-render. Everything wants to update all the time, and as the app grows larger, the frequency and complexity of updates increases until there’s no salvaging it.
New Reddit exemplified this perfectly: collapsing a comment would dispatch a Redux action, which would update the global Redux store, which would cause all Redux-connected components on the page to update, which would cause all their children to update as well. In other words, collapsing one comment triggered an update for nearly every React component on the page. No amount of caching, DOM-diffing, or shouldComponentUpdate can save you from this amount of waste.
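In sketch form, the pattern looks something like this (a simplified illustration; the action names and state shape are assumptions, not New Reddit's real code):

    import * as React from "react";
    import { createStore } from "redux";
    import { connect } from "react-redux";

    // Collapsing one comment dispatches one action...
    const collapseComment = (id: string) => ({ type: "COLLAPSE_COMMENT", id } as const);

    type State = Record<string, { collapsed: boolean }>;
    function reducer(state: State = {}, action: any): State {
      if (action.type === "COLLAPSE_COMMENT") {
        // ...which produces a brand-new state object at the top level.
        return { ...state, [action.id]: { collapsed: true } };
      }
      return state;
    }
    const store = createStore(reducer);

    // Every connected component re-runs its mapStateToProps on every store
    // change; on a page with thousands of comments, that is thousands of
    // subscriptions woken up by a single click.
    const mapStateToProps = (state: State, ownProps: { id: string }) => ({
      comment: state[ownProps.id],
    });

    const Comment = (props: { comment?: { collapsed: boolean } }) =>
      React.createElement("div", null, props.comment?.collapsed ? "[collapsed]" : "…");

    const ConnectedComment = connect(mapStateToProps)(Comment);

    store.dispatch(collapseComment("t1_abc123")); // one click, page-wide work

react-redux does try to bail out with shallow-equality checks, but every mapStateToProps still has to run, and any selector that returns a fresh object defeats the check entirely.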
At the end of the day, I had to conclude that it is simply not possible to build a fast app on this stack. I have since encountered many web applications that suffer in exactly the same way. Time and again, if it’s slow, it’s probably using React, and if it’s really slow, it’s probably using Redux. The stack is the problem. It’s the only reasonable conclusion.
Thankfully, React+Redux is not the only possible software stack. We can choose alternatives at every point:
You could choose a different JavaScript framework. Perhaps you could use Vue, or Svelte, or SolidJS, since these have presumably had time to learn from React’s mistakes. Or, of course, you could ditch all the frameworks and just use the DOM APIs directly, especially if your application is mostly static like Reddit.
You could use other browser APIs instead of HTML, CSS, and JS. You could use an alternative framework like Flutter, or you could build a custom UI stack in WebGL and WebAssembly. Building it yourself might sound crazy, but it’s been done successfully many times before—for example, Figma famously built their app from scratch in WASM and WebGL, and it runs shockingly well on very large projects. Google Docs and Google Sheets also use WebGL instead of HTML and CSS, and the apps themselves are written in Java and compiled to JS or WASM.
You could build a native app! You could use a cross-platform framework like Qt, a game engine like Unity, an OS abstraction layer like SDL, or again just use the native APIs directly and build the rest from scratch. This is obviously the right choice for performance-intensive applications, and a valid option in general for developers who are serious about delivering a high-quality experience.
Together all these choices actually form a tree. Every node in this tree is a valid stack you could choose to build your software on. Most importantly, different choices in this tree will be better for different kinds of software, so being comfortable with many options allows you to make better choices for each problem you face.
Unfortunately, this is how I imagine the developers of New Reddit saw the tree:
There are not a lot of choices here. Critically, the best choice for them (direct DOM manipulation, like Old Reddit) was not even on the table. For whatever reason, I think they just didn’t even consider it as an option. Ew, icky, we can’t just keep doing what Old Reddit did! We can’t use jQuery!
Their view of the world was too high-level. If all you know is React, you have no choices—you can only use React, or meta-frameworks on top of React. But the lower level you can go, the more the tree opens up to you. Going lower level allows you to access other choices, and to recognize when another choice would be a better fit.
The first reason, then, that we care about low-level is that it allows us to make better choices. We can make better software by starting in the right place, with the right frame and the right stack. Low-level programming allows us to build trucks instead of Trucklas.
But…this isn’t really enough, right? The software industry will not be saved by a few programmers making better choices. It would help, to be sure, but it’s far from the answer.
This presents an uncomfortable question: What if there are no good options in this tree? What if none of these choices are actually good for the kind of software we want to make?
For example, what if your app wants direct access to the hardware, but you also want a cross-platform UI? What are your choices? You could use Qt, but it tends to feel very dated and has strong opinions about how you architect your software. Game engines would likewise be a strange fit for a lot of applications, offering plenty of rendering power but little for 2D UI. There are some relative newcomers like Flutter, but Flutter makes you buy into Dart, and we all know Dart is not the right tool for a performance-intensive application. So what do you do? There are no good choices on the market—you’ll have to build it yourself.
Our tree is top-heavy. If we survey the software development landscape today, we see an insane number of JavaScript libraries and frameworks, an ever-growing number of browser APIs, and very little development outside of browsers besides frameworks that are Web-compatible and therefore subject to the same constraints. If our tree was a real tree, it would look something like this—and this is not a healthy tree.
The analogy works even better, actually, when you consider how many branches are dead or dying. What is the lifespan of a JS framework these days? Two years? Five if you’re lucky? More likely, the developer will have vanished off the face of the earth within a month.
Do we really imagine that the future of the software industry is to grow this tree even taller? To build more on top? Frameworks on top of frameworks? Do we imagine that in the future we’ll still be using HTML and CSS for sophisticated applications, when they’ve clearly been the wrong choice for years? Do we imagine that we’ll continue to ship apps on top of browsers, on top of operating systems, when modern browsers are basically operating systems unto themselves?
If we keep building, this tree will collapse under its own weight. We need to prune it, and grow new branches from lower in the tree.
But who is going to do that? Who is going to build that future for the software industry?
It requires a particular type of person. They must have inherent drive and passion for innovation in software. But they also must have low-level knowledge. They need to be able to make different choices from those who came before, to explore parts of the tree that haven’t yet been explored.
Picture a Venn diagram of those two groups. The overlap between the circles is tiny. There are so few people who fit into both categories that we are just not seeing much innovation in that space. In fact, even that picture is pretty generous when you consider how few low-level programmers there are in general.
On the other hand, there are actually lots of people in the software industry with a drive to innovate. The problem is, they’re all making JavaScript frameworks.
They don’t possess the low-level knowledge required to actually make a significant difference. That’s just the reality: if you build from the top of the tree, all the important decisions have already been made for you. It’s like painting a Truckla a different color—it will not make a difference!
So the second reason I believe low-level is critical to the future of the software industry is that it simply expands the circle. We can capture some of those people with the drive to innovate and equip them to actually innovate in meaningful ways. We need more people exploring this low-level space, and I know that for many people, low-level knowledge would open their eyes to possibilities they would never have dreamed of before.
Not everyone who makes their own text editor will have great ideas about the future of programming. Not everyone who makes their own compiler will have great ideas about programming languages. But some of them will. And it only takes a few of them to make a difference in the software industry.
So, to recap: the first reason we care about low-level is because low-level knowledge leads to better engineering choices. The second reason we care about low-level is because, in the long term, low-level knowledge is the path to better tools and better ways of programming—it is a requirement for building the platforms of the future.
But there is still one big problem with all of this: low-level programming today is absolutely terrible.
Low-level programming is so frustrating, and so difficult. The experience of low-level programming does not hold a candle to the experience of using high-level tools today—the very tools we see as a problem.
If I want to make a React app, I can simply Google “how to build react app” and I will find a beautifully-crafted web page with demos, installation guides, documentation, and resources to get me on my way. It has commands I can run to get an app up and running in five minutes. If I change a line of code in my editor, it refreshes immediately in my browser, shortening that feedback loop and making learning fun. And there is a wealth of other resources online: dev tools, libraries, tutorials, and more, making it easy for anyone to get up and running.
This is simply not the case for the low-level space. If you’re lucky, you can maybe find an expensive book or course. But more likely, you’ll just get a gigantic manual that lists every property of the system in excruciating detail, which is totally worthless for learning and barely usable as reference. And that’s if you’re lucky—there’s a good chance that you’ll only get a wiki or a maze of man pages, which are impenetrable walls of jargon. In some cases the only documentation that exists is the Linux Kernel Mailing List, and you can only pray that the one guy who can answer your question hasn’t flamed out in the past decade.
This isn’t just bad for beginners, it’s bad for everyone. If this is the state of low-level knowledge, how can we expect anyone to practice low-level programming, much less the wider industry?
And the story doesn’t end there, because low-level tools are terrible too. In a browser, I can open up the dev tools, go to Performance, click “Record”, and I will get a complete timeline of everything my application did. Every JavaScript function, every network request, every frame rendered, all correlated on a timeline so you can understand how everything relates. It is a developer’s dream, and it is a single click away! But the low-level space just does not have tools like this. There are a few decent profilers, but in most cases you just have to run a command-line program with some bizarre set of flags, pipe it through other tools, and then squint at a PDF or whatever.
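To make that concrete: on Linux, one of the better workflows today involves perf plus Brendan Gregg’s FlameGraph scripts, and it still looks like this (./myapp is a stand-in for your program, and the Perl scripts are assumed to be on your PATH):

    # sample call stacks across the whole program at 99 Hz
    perf record -F 99 -g -- ./myapp
    # pipe the raw samples through two external Perl scripts
    # just to get a picture you can open in a browser
    perf script | stackcollapse-perf.pl | flamegraph.pl > flame.svg

And that is the good path; it assumes you already know these tools exist and how to wire them together.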
The crazy thing is: there is no reason for this to be the case. We could absolutely have the same kind of “dev tools” for native development that we do for the web. We could have profilers that are actually designed to highlight useful info. We could have GUIs that show us network and file I/O, or inter-process communication. We could have interactive documentation and live reloading. We could have editor plugins and language servers to help beginners along. The raw capabilities are there. We’re just waiting for someone with high-level sensibilities to come along and build the tools of our dreams.
But until we build that, why should we expect anyone to learn low-level programming? How can we expect them to?
So now we come back to Handmade, and what made Handmade Hero so special. Most programmers look at game engines and think that only a super-genius could write one, and that the idea of making a game without an engine is lunacy. But Handmade Hero just didn’t care. Casey just sat down, showed you how to compile C, showed you how to put pixels on the screen, and before too long, you had a game. Not the most sophisticated game in the world, but a game nonetheless.
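To give a flavor of what “putting pixels on the screen” means at this level, here is a minimal sketch in plain C. This is not Casey’s code, just the classic fill-a-pixel-buffer exercise, written to a PPM file so it runs on any platform:

    /* Fill a 256x256 RGB buffer with a gradient and write it out
       as a binary PPM image. Nothing beyond the C standard library. */
    #include <stdio.h>
    #include <stdlib.h>

    #define WIDTH  256
    #define HEIGHT 256

    int main(void) {
        unsigned char *pixels = malloc(WIDTH * HEIGHT * 3);
        if (!pixels) return 1;

        for (int y = 0; y < HEIGHT; y++) {
            for (int x = 0; x < WIDTH; x++) {
                unsigned char *p = pixels + (y * WIDTH + x) * 3;
                p[0] = (unsigned char)x; /* red varies horizontally */
                p[1] = (unsigned char)y; /* green varies vertically */
                p[2] = 128;              /* constant blue */
            }
        }

        FILE *f = fopen("out.ppm", "wb");
        if (!f) { free(pixels); return 1; }
        fprintf(f, "P6\n%d %d\n255\n", WIDTH, HEIGHT);
        fwrite(pixels, 3, WIDTH * HEIGHT, f);
        fclose(f);
        free(pixels);
        return 0;
    }

Compile it with any C compiler and open out.ppm in an image viewer. Getting the same buffer into a window is roughly one platform call away; on Windows, Handmade Hero blits its backbuffer with StretchDIBits.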
Handmade Hero shattered the barrier between low-level and high-level. Casey made a game, and he made an engine. The mystique was stripped away and replaced by an actual understanding of how games are made. Many people have the same reaction when they finally go through Handmade Hero: “Hey, this is not as hard as I thought!” It turns out you absolutely can make your own engine, despite the naysayers online.
I personally have found this to be true of so many “low-level” disciplines. “Low-level” programming is not impossible; in fact, in many cases, it’s simpler than the high-level web dev work I used to do! Today’s “high-level” frameworks and tools are so complicated and so poorly designed that they are harder to understand and work with than their low-level counterparts. But all the modern nonsense like Svelte, Symfony, Kubernetes—those tools have docs! They have dev tools! Because, for some reason, people are not afraid of them!
Low-level programming is artificially terrible. I really believe that. And I know that it doesn’t have to be this way.
So my final question about low-level programming is: why do we even call it “low-level”?
The intent of any “high-level” tool is to make it easier to express our intent as programmers. “High-level” tools abstract away difficult details so we can focus on what we really care about. And in many cases this has worked: we’ve seen it in the evolution of programming languages, in the proliferation of game engines, and yes, even in the development of the web.
But notice: this is not about where these tools are in the stack. It’s not about how many layers they’ve built on top of. “High-level” is about the expression of the programmer’s intent. The position in the stack is ultimately irrelevant if programmers can use it to achieve their goals.
What then does this mean for “low-level”? The conclusion is inevitable: the reason we call things “low-level” is because they are terrible to use. They are “low-level” because we do not use them directly! Because we sweep them under the rug and build abstractions on top, they become this low level that we don’t want to touch anymore!
Why are things “low-level” today? Because no one has made them high-level yet.
When I imagine a better future for the software industry, I don’t imagine one where everyone is making their own text editors, their own debuggers, or their own UI frameworks. Instead, I imagine a future where we have new “high-level” tools, built from lower in the stack. I imagine new tools that give the same high-level benefits we expect today, and in fact do more than the tools we have today, because they are freed from the constraining decisions of the past. We can build new platforms, new tools, and new libraries that learn from the past, but build on solid foundations instead of piling more on top.
For the developers who truly care about making high-quality software, tools built lower in the stack can be their superpower. These programmers can be equipped to fine-tune their software in ways the web could never allow. And for the lazy Reddit dev who would rather push some slop out the door for a paycheck? Hey, at least their slop can run on a simpler, smaller, more efficient platform. It’s still a net positive in the end.
The Handmade community is positioned right in the middle of that Venn diagram today. We have people with low-level expertise. We have people with a drive to make software better. Our job, then, is not to just write low-level code and feel smug for knowing how things work. Our job is to build a new high level for the rest of the software industry.
Low-level programming is not the goal unto itself. High-level programming—a new kind of high-level programming—is the goal, and low-level is how we get there.
This post is adapted from a talk I delivered to the Handmade community in 2023. The original talk can be viewed here.
...
Read the original on bvisness.me »
So LWN is currently under the heaviest scraper attack we have seen yet. It is a DDoS attack involving tens of thousands of addresses, and it is affecting the responsiveness of the site, unfortunately.
There are many things I would like to do with my time. Defending LWN from AI shitheads is rather far from the top of that list. I *really* don’t want to put obstacles between LWN and its readers, but it may come to that.
(Another grumpy day, sorry)
This, combined with search engines prioritising the stolen content!
This is why I think the web is genuinely doomed. It’s not enough that the content is stolen and that search engines kill click-throughs and ad revenue; they are literally killing the ability of original authors to serve traffic to the few real users who might want to see it.
Devastating.
As an avid longtime subscriber and reader, I can only give thanks and hope you will survive this blast of willfully wrong behaviour too. Thank you for your openness.
Any inkling which AI (Arsehole Incorporated) it is? The crash can’t come soon enough.
There is no way to know who is after the data. The actual attack is likely perpetrated by Bright Data or one of its equally vile competitors.
Just speaking with my user hat on here, but given the circumstances I don’t mind the ever-so-slight inconvenience of an #anubis challenge.
If you need help, email me. I can work with you in case there’s low hanging fruit that you missed.
This helped me a lot with my little projects:
https://codeberg.org/skewray/htaccess
Obviously that sucks, but I am super happy with the RSS integration that I get with my LWN subscription. People who are affected by the outage should check that out. Not really a solution, but maybe part of one.
I’m not sure how people in the kernel community reconcile using LLMs with the effect these LLMs have on small businesses and individuals hosting their websites for fun. It’s not as if the kernel community itself isn’t affected by these incessant DDoS attacks, either.
There is subscriber.lwn.net, which is only available for subscribers. One can either join the queue with the AI bots on lwn.net, or subscribe and enjoy the snappy subscriber server. I mean, that’s not a great solution, but it’s the only one that works.
At this point we might as well be offensive. If the client seems even slightly sus, just send them gibberish data talking about how good Chihuahua muffins are. Ideally LLM-generated (yes, gross), because this doesn’t add new information (linear algebra yay) and makes models collapse (aka AI inbreeding).
Feel you. Same with my Podcast Directory.
I recently saw a traffic spike to a small HTML-only website that never had WP on it, but was suddenly getting failed wp-admin logins and hundreds of PHP vuln scans, nonstop. All from MSFT IP addresses. Abuse reports were sent, but there was no response, and the abuse kept happening.
So now I’m blocking every MSFT CIDR block that I can find, server-wide.
I’ve been experiencing about 20x more website traffic than normal, myself. It’s very likely this scraper bot traffic as well. Things are holding, but only because I took pains to use static site generation (absolutely minimal JavaScript, designed to be lightweight).
The problem with that solution is that it may well make it harder for us to bring in new subscribers, which is something we definitely want to do. First impressions matter, so giving new folks a poor experience seems … not great.
It may yet come to that, though.
Maybe it doesn’t need to be subscriber only, just registered users only? Which can also be a PITA, but if there’s no enshittification for non-registered users other than the bandwidth being shared with bots, maybe it’s tolerable? Could even have a banner about this explaining the benefits of registering, and how LWN won’t sell your data.
Such things have crossed our minds, certainly. The gotcha there is that we’ve already had troubles with bots creating accounts; I don’t think they would hesitate to do more of that if that would improve their access.
That and, of course, the fact that everybody starts as an unregistered user. As long as we can avoid making the experience worse for them, I think we should.
Yeah, it’s hard to argue against that. And maybe you weren’t seeking “helpful” advice anyway, but, uh, you know your audience. :)
Suggestions are much appreciated! It’s not as if we’ve figured all this stuff out…
The “harder to onboard new users” part is certainly one reason why that solution isn’t great. I just don’t really see anything else working long term. Everything else is just kind of whack-a-mole where the mole keeps getting more clever.
I have a simple solution: Stop being so damn relevant!!!
I don’t think the scrapers care about that, though.
Sorry, I was being too optimistic… I was thinking they wanted sources with high SNR… But you are probably right…
One day the photocopiers will get busy after office hours again, but this time it’s going to be Linux Weekly News instead of the punk fanzines.
For RationalWiki I’ve had to resort to a mandatory JavaScript trick that sets a cookie. Unfortunately it seems to block Googlebot, but the choice is down to (a) human users can use the site, or (b) nobody can use the site, including Googlebot.
I have a tiny site. SSH moved to a different port.
I see hundreds of login attempts a day. I receive a summary via logwatch. Nearly every day I’m blocking a whole /24 or even /16. I rely on fail2ban to mitigate such webserver DDoS, but maybe that’s not enough.
How do you detect those spikes?
Yep, I’ve also had a big bump in traffic over the last couple of months (despite levels already having been elevated because of AI scraper activity). Happily though, it looks like a lot of them have been falling into my LLM tarpit. I think those figures are under-reporting too - I’ve also seen a significant rise in the number of 5xx status codes, suggesting my tarpit container might not be keeping up.
Some days, the answer is to hit a BGP looking glass and just block every prefix from the origin AS of that service provider.
Instead of making it worse for unregistered users, how about sharding the site, with recent, frequently accessed content separate from the old long tail? Easier to keep a small site in cache.
Making things worse for real users is something we have gone far out of our way to avoid. I’m not sure that sharding in that way would help much, though; cache isn’t really the problem.
Can you say what the problem is? CPU? Contention in an unshardable database?
...
Read the original on social.kernel.org »