Recently, I’ve been spending my time building an image-to-ASCII renderer. Below is the result — try dragging it around, the demo is interactive!
One thing I spent a lot of effort on is getting edges looking sharp. Take a look at this rotating cube example:
Try opening the “split” view. Notice how well the characters follow the contour of the cube.
This renderer works well for animated scenes, like the ones above, but we can also use it to render static images:
The image of Saturn was generated with ChatGPT.
Then, to get better separation between different colored regions, I also implemented a cel shading-like effect to enhance contrast between edges. Try dragging the contrast slider below:
The contrast enhancement makes the separation between different colored regions far clearer. That was key to making the 3D scene above look as good as it does.
I put so much focus on sharp edges because they’re an aspect that is often overlooked when programmatically rendering images as ASCII. Consider this animated 3D scene from Cognition’s landing page that is rendered via ASCII characters:
It’s a cool effect, especially while in motion, but take a look at those blurry edges! The characters follow the cube contours very poorly, and as a result, the edges look blurry and jagged in places:
This blurriness happens because the ASCII characters are being treated like pixels — their shape is ignored. It’s disappointing to see because ASCII art looks so much better when shape is utilized. I don’t believe I’ve ever seen shape utilized in generated ASCII art, and I think that’s because it’s not really obvious how to consider shape when building an ASCII renderer.
I started building my ASCII renderer to prove to myself that it’s possible to utilize shape in ASCII rendering. In this post, I’ll cover the techniques and ideas I used to capture shape and build this ASCII renderer in detail.
We’ll start with the basics of image-to-ASCII conversion and see where the common issue of blurry edges comes from. After that, I’ll show you the approach I used to fix that and achieve sharp, high-quality ASCII rendering. At the end, we’ll improve on that by implementing the contrast enhancement effect I showed above.
Let’s get to it!
ASCII contains 95 printable characters that we can use. Let’s start off by rendering the following image containing a white circle using those ASCII characters:
ASCII art is (almost) always rendered using a monospace font. Since every character in a monospace font is equally wide and tall, we can split the image into a grid. Each grid cell will contain a single ASCII character.
The image with the circle is pixels. For the ASCII grid, I’ll pick a row height of pixels and a column width of pixels. That splits the canvas into rows and columns — an grid:
Monospace characters are typically taller than they are wide, so I made each grid cell a bit taller than it is wide.
Our task is now to pick which character to place in each cell. The simplest approach is to calculate a lightness value for each cell and pick a character based on that.
We can get a lightness value for each cell by sampling the lightness of the pixel at the cell’s center:
We want each pixel’s lightness as a numeric value between 0 and 1, but our image data consists of pixels with RGB color values.
We can use the following formula to convert an RGB color (with component values between 0 and 255) to a lightness value:
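As a sketch, the conversion might look like this in code. The Rec. 709 luminance weights below are a standard choice, not necessarily the exact coefficients used here, and cellCenterLightness is an illustrative helper for the cell-center sampling described above:

```ts
// Convert RGB (components in 0–255) to a lightness value in [0, 1].
// These are the standard Rec. 709 luminance weights; the exact
// coefficients used in this post may differ.
function rgbToLightness(r: number, g: number, b: number): number {
  return (0.2126 * r + 0.7152 * g + 0.0722 * b) / 255;
}

// Sample the lightness of the pixel at a grid cell's center.
function cellCenterLightness(
  image: ImageData,
  col: number,
  row: number,
  cellWidth: number,
  cellHeight: number,
): number {
  const x = Math.floor((col + 0.5) * cellWidth);
  const y = Math.floor((row + 0.5) * cellHeight);
  const i = (y * image.width + x) * 4; // RGBA: 4 bytes per pixel
  return rgbToLightness(image.data[i], image.data[i + 1], image.data[i + 2]);
}
```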
Now that we have a lightness value for each cell, we want to use those values to pick ASCII characters. As mentioned before, ASCII has 95 printable characters, but let’s start simple with just these characters:
We can sort them in approximate density order like so, with lower-density characters to the left, and high-density characters to the right:
I added space as the first (least dense) character.
We can then map lightness values between 0 and 1 to one of those characters like so:
This maps low lightness values to low-density characters and high lightness values to high-density characters.
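As a sketch, the mapping might look like this. The character ramp below is a commonly used ten-character one, not necessarily the exact set used in this post:

```ts
// Characters sorted from least to most dense, with space first.
// This particular ramp is an assumption for illustration.
const CHARS = " .:-=+*#%@";

function lightnessToChar(lightness: number): string {
  // Map [0, 1] onto an index into the density-sorted ramp,
  // clamping so that a lightness of exactly 1 stays in range.
  const index = Math.min(
    CHARS.length - 1,
    Math.floor(lightness * CHARS.length),
  );
  return CHARS[index];
}
```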
Rendering the circle from above with this method gives us:
That works… but the result is pretty ugly. We seem to always get @ for cells that fall within the circle and a space for cells that fall outside.
That is happening because we’ve pretty much just implemented nearest-neighbor downsampling. Let’s see what that means.
Downsampling, in the context of image processing, is taking a larger image (in our case, the image with the circle) and using that image’s data to construct a lower resolution image (in our case, the ASCII grid). The pixel values of the lower resolution image are calculated by sampling values from the higher resolution image.
The simplest and fastest method of sampling is nearest-neighbor interpolation, where, for each cell (pixel), we only take a single sample from the higher resolution image.
Consider the circle example again. Using nearest-neighbor interpolation, every sample either falls inside or outside of the shape, resulting in a lightness of either 0 or 1:
If, instead of picking an ASCII character for each grid cell, we color each grid cell (pixel) according to the sampled value, we get the following pixelated rendering:
This pixelated rendering is pretty much equivalent to the ASCII rendering from before. The only difference is that instead of @s we have white pixels, and instead of spaces we have black pixels.
These square, jagged looking edges are aliasing artifacts, commonly called jaggies. They’re a common result of using nearest-neighbor interpolation.
To get rid of jaggies, we can collect more samples for each cell. Consider this line:
The line’s slope on the axis is . When we pixelate it with nearest-neighbor interpolation, we get the following:
Let’s try to get rid of the jagginess by taking multiple samples within each cell and using the average sampled lightness value as the cell’s lightness. The example below lets you vary the number of samples using the slider:
With multiple samples, cells that lie on the edge of a shape will have some of their samples fall within the shape, and some outside of it. Averaging those, we get gray in-between colors that smooth the downsampled image. Below is the same example, but with an overlay showing where the samples are taken:
This method of collecting multiple samples from the larger image is called supersampling. It’s a common method of spatial anti-aliasing (avoiding jaggies at edges). Here’s what the rotating square looks like with supersampling (using multiple samples for each cell):
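A sketch of what supersampling a single cell might look like. Distributing the samples on an evenly spaced n×n grid is one simple choice; the demos here may place samples differently:

```ts
// Supersampling sketch: average an n×n grid of lightness samples per cell.
function supersampleCell(
  sampleAt: (x: number, y: number) => number, // returns lightness in [0, 1]
  cellX: number,
  cellY: number,
  cellWidth: number,
  cellHeight: number,
  n: number, // samples per axis (n * n samples in total)
): number {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      // Place samples on an evenly spaced grid within the cell.
      const x = cellX + ((i + 0.5) / n) * cellWidth;
      const y = cellY + ((j + 0.5) / n) * cellHeight;
      sum += sampleAt(x, y);
    }
  }
  return sum / (n * n);
}
```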
Let’s look at what supersampling does for the circle example from earlier. Try dragging the sample quality slider:
The circle becomes less jagged, but the edges feel blurry. Why’s that?
Well, they feel blurry because we’re pretty much just rendering a low-resolution, pixelated image of a circle. Take a look at the pixelated view:
The ASCII and pixelated views are mirror images of each other. Both are just low-resolution versions of the original high-resolution image, scaled up to the original’s size — it’s no wonder they both look blurry.
Increasing the number of samples is insufficient. No matter how many samples we take per cell, the samples will be averaged into a single lightness value, used to render a single pixel.
And that’s the core problem: treating each grid cell as a pixel in an image. It’s an obvious and simple method, but it disregards that ASCII characters have shape.
We can make our ASCII renderings far more crisp by picking characters based on their shape. Here’s the circle rendered that way:
The characters follow the contour of the circle very well. By picking characters based on shape, we get a far higher effective resolution. The result is also more visually interesting.
Let’s see how we can implement this.
So what do I mean by shape? Well, consider the characters T, L, and O placed within grid cells:
The character T is top-heavy. Its visual density in the upper half of the grid cell is higher than in the lower half. The opposite can be said for L — it’s bottom-heavy. O is pretty much equally dense in the upper and lower halves of the cell.
We might also compare characters like L and J. The character L is heavier within the left half of the cell, while J is heavier in the right half:
We also have more “extreme” characters, such as _ and ^, that only occupy the lower or upper portion of the cell, respectively:
This is, roughly, what I mean by “shape” in the context of ASCII rendering. Shape refers to which regions of a cell a given character visually occupies.
To pick characters based on their shape, we’ll somehow need to quantify (put numbers to) the shape of each character.
Let’s start by only considering how much characters occupy the upper and lower regions of our cell. To do that, we’ll define two “sampling circles” for each grid cell — one placed in the upper half and one in the lower half:
It may seem odd or arbitrary to use circles instead of just splitting the cell into two rectangles, but using circles will give us more flexibility later on.
A character placed within a cell will overlap each of the cell’s sampling circles to some extent.
One can compute that overlap by taking a bunch of samples within the circle (for example, at every pixel). The fraction of samples that land inside the character gives us the overlap as a numeric value between 0 and 1:
For T, we get a high overlap value for the upper circle and a low one for the lower. Those two overlap values form a 2-dimensional vector:
We can generate such a 2-dimensional vector for each character within the ASCII alphabet. These vectors quantify the shape of each ASCII character along these dimensions (upper and lower). I’ll call these vectors shape vectors.
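As a sketch, the shape vectors might be computed by rasterizing each character into an offscreen canvas and measuring, for each sampling circle, the fraction of its pixels the character covers. The font settings and the pixel-based Circle representation are assumptions for illustration:

```ts
interface Circle {
  x: number; // center, in pixels within the cell
  y: number;
  radius: number;
}

// Rasterize a character, then measure how much of each sampling circle
// it covers. Font and metrics are illustrative assumptions.
function computeShapeVector(
  char: string,
  circles: Circle[],
  cellWidth: number,
  cellHeight: number,
): number[] {
  const canvas = document.createElement("canvas");
  canvas.width = cellWidth;
  canvas.height = cellHeight;
  const ctx = canvas.getContext("2d")!;
  ctx.fillStyle = "white";
  ctx.font = `${cellHeight}px monospace`;
  ctx.textBaseline = "bottom";
  ctx.fillText(char, 0, cellHeight);

  const { data } = ctx.getImageData(0, 0, cellWidth, cellHeight);
  return circles.map((circle) => {
    let inside = 0;
    let covered = 0;
    for (let y = 0; y < cellHeight; y++) {
      for (let x = 0; x < cellWidth; x++) {
        const dx = x + 0.5 - circle.x;
        const dy = y + 0.5 - circle.y;
        if (dx * dx + dy * dy > circle.radius * circle.radius) continue;
        inside++;
        // Any non-transparent pixel counts as part of the character.
        if (data[(y * cellWidth + x) * 4 + 3] > 0) covered++;
      }
    }
    return inside > 0 ? covered / inside : 0;
  });
}
```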
Below are some ASCII characters and their shape vectors. I’m coloring the sampling circles using the component values of the shape vectors:
We can use the shape vectors as 2D coordinates — here’s every ASCII character on a 2D plot:
Let’s say that we have our ASCII characters and their associated shape vectors in a CHARACTERS array:
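The array might be structured like this. PRINTABLE_ASCII, SAMPLING_CIRCLES, CELL_WIDTH, and CELL_HEIGHT are assumed constants, and computeShapeVector is the sketch from the previous section:

```ts
interface CharacterEntry {
  char: string;
  vector: number[]; // the character's shape vector
}

// PRINTABLE_ASCII: assumed array of the 95 printable ASCII characters.
// SAMPLING_CIRCLES: the sampling circles defined above.
const CHARACTERS: CharacterEntry[] = PRINTABLE_ASCII.map((char) => ({
  char,
  vector: computeShapeVector(char, SAMPLING_CIRCLES, CELL_WIDTH, CELL_HEIGHT),
}));
```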
We can then perform a nearest neighbor search like so:
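A sketch of that brute-force search; getDistance is the Euclidean distance function covered later in the post:

```ts
// Brute-force nearest-neighbor search over the character set: return the
// character whose shape vector is closest to the lookup vector.
function findBestCharacter(lookupVector: number[]): string {
  let bestChar = " ";
  let bestDistance = Infinity;
  for (const { char, vector } of CHARACTERS) {
    const distance = getDistance(lookupVector, vector);
    if (distance < bestDistance) {
      bestDistance = distance;
      bestChar = char;
    }
  }
  return bestChar;
}
```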
The findBestCharacter function gives us the ASCII character whose shape best matches the input lookup vector.
Note: this brute-force search is not very performant. This becomes a bottleneck when we start rendering thousands of ASCII characters per frame. I’ll talk more about this later.
To make use of this in our ASCII renderer, we’ll calculate a lookup vector for each cell in the ASCII grid and pass it to findBestCharacter to determine the character to display.
Let’s try it out. Consider the following zoomed-in circle as an example. It is split into three grid cells:
Overlaying our sampling circles, we see varying degrees of overlap:
When calculating the shape vector of each ASCII character, we took a huge number of samples. We could afford to do that because we only need to calculate those shape vectors once up front. After they’re calculated, we can use them again and again.
However, if we’re converting an animated image (e.g. canvas or video) to ASCII, we need to be mindful of performance when calculating the lookup vectors. An ASCII rendering might have hundreds or thousands of cells. Multiplying that by tens or hundreds of samples would be incredibly costly in terms of performance.
With that being said, let’s pick a sampling quality of 3 samples per circle, with the samples placed like so:
For the top sampling circle of the leftmost cell, we get one white sample and two black, giving us an average lightness of ⅓. Doing the same calculation for all of the sampling circles, we get the following 2D vectors:
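Putting this together, computing a cell’s sampling vector might look like the sketch below. It mirrors the shape-vector computation, but with only a handful of samples per circle; samplePointsFor is an assumed helper returning those sample positions:

```ts
// Compute a cell's sampling vector by averaging a few lightness samples
// inside each sampling circle.
function computeSamplingVector(
  sampleAt: (x: number, y: number) => number, // lightness in [0, 1]
  circles: Circle[],
): number[] {
  return circles.map((circle) => {
    // Assumed helper: returns e.g. 3 sample positions inside the circle.
    const points = samplePointsFor(circle);
    let sum = 0;
    for (const p of points) sum += sampleAt(p.x, p.y);
    return sum / points.length;
  });
}
```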
From now on, instead of using the term “lookup vectors”, I’ll call these vectors, sampled from the image that we’re rendering as ASCII, sampling vectors. One sampling vector is calculated for each cell in the grid.
Anyway, we can use these sampling vectors to find the best-matching ASCII character. Let’s see what that looks like on our 2D plot — I’ll label the sampling vectors (from left to right) C0, C1, and C2:
Hmm… this is not what we want. Since no ASCII character comes close to fully covering either sampling circle, none of the shape vector components get anywhere near 1, so they’re all clustered towards the bottom-left region of our plot. This makes our sampling vectors map to a few characters on the edge of the cluster.
We can fix this by normalizing the shape vectors. We’ll do that by taking the maximum value of each component across all shape vectors, and dividing the components of each shape vector by the maximum. Expressed in code, that looks like so:
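A sketch of that normalization, dividing each component by its maximum across all shape vectors:

```ts
// Normalize shape vectors in place: for each component, find the maximum
// value it takes across all characters, then divide by that maximum.
function normalizeShapeVectors(characters: CharacterEntry[]): void {
  const dimensions = characters[0].vector.length;
  for (let d = 0; d < dimensions; d++) {
    let max = 0;
    for (const entry of characters) max = Math.max(max, entry.vector[d]);
    if (max === 0) continue; // avoid division by zero
    for (const entry of characters) entry.vector[d] /= max;
  }
}
```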
Here’s what the plot looks like with the shape vectors normalized:
If we now map the sampling vectors to their nearest neighbors, we get a much more sensible result:
We get ’, M and $. Let’s see how well those characters match the circle:
Nice! They match very well.
Let’s try rendering the full circle from before with the same method:
Much better than before! The picked characters follow the contour of the circle very well.
Using two sampling circles — one upper and one lower — produces a much better result than the 1-dimensional (pixelated) approach. However, it still falls short when trying to capture other aspects of a character’s shape.
For example, two circles don’t capture the shape of characters that fall in the middle of the cell. Consider -:
For -, we get a shape vector of . That doesn’t represent the character very well at all.
The two upper-lower sampling circles also don’t capture left-right differences, such as the difference between p and q:
We could use such differences to get better character picks, but our two sampling circles don’t capture them. Let’s add more dimensions to our shape to fix that.
Since cells are taller than they are wide (at least with the monospace font I’m using), we can use six sampling circles to cover the area of each cell quite well:
Six sampling circles capture left-right differences, such as between p and q, while also capturing differences across the top, bottom, and middle regions of the cell, differentiating ^, -, and _. They also capture the shape of “diagonal” characters like / to a reasonable degree.
One problem with this grid-like configuration for the sampling circles is that there are gaps. For example, . falls between the sampling circles:
To compensate for this, we can stagger the sampling circles vertically (e.g. lowering the left sampling circles and raising the right ones) and make them a bit larger. This causes the cell to be almost fully covered while not causing excessive overlap across the sampling circles:
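For concreteness, the staggered layout might be defined like this. The positions and radius are illustrative guesses expressed as fractions of the cell’s dimensions (scaled to pixel coordinates before being passed to the sampling code), not the exact values used here:

```ts
// Illustrative 3×2 layout: left column lowered, right column raised,
// radius slightly enlarged so the cell is almost fully covered.
// All values are assumptions, given as fractions of cell width/height.
const CIRCLE_RADIUS = 0.3;
const SAMPLING_CIRCLES: Circle[] = [
  { x: 0.27, y: 0.2, radius: CIRCLE_RADIUS }, // top left (lowered)
  { x: 0.73, y: 0.14, radius: CIRCLE_RADIUS }, // top right (raised)
  { x: 0.27, y: 0.53, radius: CIRCLE_RADIUS }, // middle left
  { x: 0.73, y: 0.47, radius: CIRCLE_RADIUS }, // middle right
  { x: 0.27, y: 0.86, radius: CIRCLE_RADIUS }, // bottom left
  { x: 0.73, y: 0.8, radius: CIRCLE_RADIUS }, // bottom right
];
```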
We can use the same procedure as before to generate character vectors using these sampling circles, this time yielding a 6-dimensional vector. Consider the character L:
For L, we get the vector:
I’m presenting 6-dimensional shape vectors in matrix form because it’s easier to grok geometrically, but the actual vector is a flat list of numbers.
The lightness values certainly look L-shaped! The 6D shape vector captures L’s shape very well.
Now we have a 6D shape vector for every ASCII character. Does that affect character lookups (how we find the best matching character)?
Earlier, in the findBestCharacter function, I referenced a getDistance function. That function returns the Euclidean distance between the input points. Given two 2D points (x₁, y₁) and (x₂, y₂), the formula to calculate their Euclidean distance looks like so:

d = √((x₂ − x₁)² + (y₂ − y₁)²)
Put into code, this looks like so:
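A minimal version; it’s written for vectors of any length, so the same function also works for the 6-dimensional shape vectors:

```ts
// Euclidean distance between two vectors of equal length.
function getDistance(a: number[], b: number[]): number {
  let sumOfSquares = 0;
  for (let i = 0; i < a.length; i++) {
    const diff = a[i] - b[i];
    sumOfSquares += diff * diff;
  }
  return Math.sqrt(sumOfSquares);
}
```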
...
Read the original on alexharri.com »
Thanks to everyone who sent in condolences on my recent death from prostate cancer at age 68, but that was Scott Adams. I (Scott Alexander) am still alive. Still, the condolences are appreciated.

Scott Adams was a surprisingly big part of my life. I may be the only person to have read every Dilbert book before graduating elementary school. For some reason, 10-year-old-Scott found Adams’ stories of time-wasting meetings and pointy-haired bosses hilarious. No doubt some of the attraction came from a more-than-passing resemblance between Dilbert’s nameless corporation and the California public school system. We’re all inmates in prisons with different names. But it would be insufficiently ambitious to stop there.

Adams’ comics were about the nerd experience. About being cleverer than everyone else, not just in the sense of being high IQ, but in the sense of being the only sane man in a crazy world where everyone else spends their days listening to overpaid consultants drone on about mission statements instead of doing anything useful. There’s an arc in Dilbert where the boss disappears for a few weeks and the engineers get to manage their own time. Productivity shoots up. Morale soars. They invent warp drives and time machines. Then the boss returns, and they’re back to being chronically behind schedule and over budget. This is the nerd outlook in a nutshell: if I ran the circus, there’d be some changes around here.

Yet the other half of the nerd experience is: for some reason this never works. Dilbert and his brilliant co-workers are stuck watching from their cubicles while their idiot boss rakes in bonuses and accolades. If humor, like religion, is an opiate of the masses, then Adams is masterfully unsubtle about what type of wound his art is trying to numb.

This is the basic engine of Dilbert: everyone is rewarded in exact inverse proportion to their virtue. Dilbert and Alice are brilliant and hard-working, so they get crumbs. Wally is brilliant but lazy, so he at least enjoys a fool’s paradise of endless coffee and donuts while his co-workers clean up his messes. The P.H.B. is neither smart nor industrious, so he is forever on top, reaping the rewards of everyone else’s toil. Dogbert, an inveterate scammer with a passing resemblance to various trickster deities, makes out best of all.

The repressed object at the bottom of the nerd subconscious, the thing too scary to view except through humor, is that you’re smarter than everyone else, but for some reason it isn’t working. Somehow all that stuff about small talk and sportsball and drinking makes them stronger than you. No equation can tell you why. Your best-laid plans turn to dust at a single glint of Chad’s perfectly-white teeth.

Lesser lights may distance themselves from their art, but Adams radiated contempt for such surrender. He lived his whole life as a series of Dilbert strips. Gather them into one of his signature compendia, and the title would be Dilbert Achieves Self Awareness And Realizes That If He’s So Smart Then He Ought To Be Able To Become The Pointy-Haired Boss, Devotes His Whole Life To This Effort, Achieves About 50% Success, Ends Up In An Uncanny Valley Where He Has Neither The Virtues Of The Honest Engineer Nor Truly Those Of The Slick Consultant, Then Dies Of Cancer Right When His Character Arc Starts To Get Interesting.

If your reaction is “I would absolutely buy that book”, then keep reading, but expect some detours.

The niche that became Dilbert opened when Garfield first said “I hate Mondays”.
The quote became a popular sensation, inspiring t-shirts, coffee mugs, and even a hit single. But (as I’m hardly the first to point out) why should Garfield hate Mondays? He’s a cat! He doesn’t have to work!

In the 80s and 90s, saying that you hated your job was considered the height of humor. Drew Carey: “Oh, you hate your job? There’s a support group for that. It’s called everybody, and they meet at the bar.”

This was merely the career subregion of the supercontinent of Boomer self-deprecating jokes, whose other prominences included “I overeat”, “My marriage is on the rocks”, “I have an alcohol problem”, and “My mental health is poor”.

Arguably this had something to do with the Bohemian turn, the reaction against the forced cheer of the 1950s middle-class establishment of company men who gave their all to faceless corporations and then dropped dead of heart attacks at 60. You could be that guy, proudly boasting to your date about how you traded your second-to-last patent artery to complete a spreadsheet that raised shareholder value 14%. Or you could be the guy who says “Oh yeah, I have a day job working for the Man, but fuck the rat race, my true passion is white water rafting”. When your father came home every day looking haggard and worn out but still praising his boss because “you’ve got to respect the company or they won’t take care of you”, being able to say “I hate Mondays” must have felt liberating, like the mantra of a free man.

This was the world of Dilbert’s rise. You’d put a Dilbert comic on your cubicle wall, and feel like you’d gotten away with something. If you were really clever, you’d put the Dilbert comic where Dilbert gets in trouble for putting a comic on his cubicle wall on your cubicle wall, and dare them to move against you.

But t-shirts saying “Working Hard . . . Or Hardly Working?” no longer hit as hard as they once did. Contra the usual story, Millennials are too earnest to tolerate the pleasant contradiction of saying they hate their job and then going in every day with a smile. They either have to genuinely hate their job - become some kind of dirtbag communist labor activist - or at least pretend to love it. The worm turns, all that is cringe becomes based once more and vice versa. Imagine that guy boasting to his date again. One says: “Oh yeah, I grudgingly clock in every day to give my eight hours to the rat race, but trust me, I’m secretly hating myself the whole time”? The other: “I work for a boutique solar energy startup that’s ending climate change - saving the environment is my passion!” Zoomers are worse still: not even the fig leaf of social good, just pure hustle.

Silicon Valley, where hustle culture has reached its apogee, has an additional consideration: why don’t you found a startup? If you’re so much smarter than your boss, why not compete against him directly? Scott Adams based Dilbert on his career at Pacific Bell in the 80s. Can you imagine quitting Pacific Bell in the 80s to, uh, found your own Pacific Bell? To go to Michael Milken or whoever was investing back then, and say “Excuse me, may I have $10 billion to create my own version of Pacific Bell, only better?” But if someone were to try to be Dilbert today — to say, earnestly, “I hate my job because I am smarter than my boss and could do it better than him,” that would be the obvious next question, the same way “I am better at picking stocks than Wall Street” ought to be followed up with “Then why don’t you invest?”

Above, I described “the nerd experience” of “being smarter than everyone else, not just in the sense of being high IQ, but in the sense of being the only sane man in a crazy world where everyone else spends their days listening to overpaid consultants drone on about mission statements instead of doing anything useful.” You nodded along, because you knew the only possible conclusion to the arc suggested by that sentence was to tear it down, to launch a tirade about how that nerd is naive and narcissistic and probably somehow also a racist. In the year of our Lord 2026, of course that’s where I’m going. Dilbert is a relic of a simpler time, when the trope could be played straight. But it’s also an artifact of the transition, maybe even a driver of it. Scott Adams appreciated these considerations earlier and more acutely than anyone else. And they drove him nuts.

Adams knew, deep in his bones, that he was cleverer than other people. God always punishes this impulse, especially in nerds. His usual strategy is straightforward enough: let them reach the advanced physics classes, where there will always be someone smarter than them, then beat them on the head with their own intellectual inferiority so many times that they cry uncle and admit they’re nothing special.

For Adams, God took a more creative and — dare I say, crueler — route. He created him only-slightly-above-average at everything except for a world-historical, Mozart-tier, absolutely Leonardo-level skill at making silly comics about hating work.

Scott Adams never forgave this. Too self-aware to deny it, too narcissistic to accept it, he spent his life searching for a loophole. You can read his frustration in his book titles: How To Fail At Almost Everything And Still Win Big. Trapped In A Dilbert World. Stick To Drawing Comics, Monkey Brain. Still, he refused to stick to comics. For a moment in the late-90s, with books like The Dilbert Principle and The Dilbert Future, he seemed on his way to becoming a semi-serious business intellectual. He never quite made it, maybe because the Dilbert Principle wasn’t really what managers and consultants wanted to hear:

I wrote The Dilbert Principle around the concept that in many cases the least competent, least smart people are promoted, simply because they’re the ones you don’t want doing actual work. You want them ordering the doughnuts and yelling at people for not doing their assignments—you know, the easy work. Your heart surgeons and your computer programmers—your smart people—aren’t in management.

Okay, “I am cleverer than everyone else”, got it. His next venture (c. 1999) was the Dilberito, an attempt to revolutionize food via a Dilbert-themed burrito with the full Recommended Daily Allowance of twenty-three vitamins. I swear I am not making this up. A contemporaneous NYT review said it “could have been designed only by a food technologist or by someone who eats lunch without much thought to taste”.
The Onion, in its twenty-year retrospective for the doomed comestible, called it a frustrated groping towards meal replacements like Soylent or Huel, long before the existence of a culture nerdy enough to support them. Adams himself, looking back from several years’ distance, was even more scathing: “the mineral fortification was hard to disguise, and because of the veggie and legume content, three bites of the Dilberito made you fart so hard your intestines formed a tail.”

His second foray into the culinary world was a local restaurant called Stacey’s. The New York Times does a pitch-perfect job covering the results. Their article starts:

This is yet another story about a clueless but obtrusive boss — the kind of meddlesome manager you might laugh at in the panels of “Dilbert,” the daily comic strip.

…and continues through a description of Adams making every possible rookie mistake. As the restaurant does worse and worse, Adams becomes more and more convinced that he has to figure out some clever lifehack that will turn things around and revolutionize restaurants. First he comes up with a theory that light is the key to restauranting, and spends ages fiddling with the windows. When this fails, he devolves into an unmistakable sign of desperation - asking blog commenters for advice:

He also turned to Dilbert fans for suggestions on how to use the party room, in a posting on his blog titled “Oh Great Blog Brain.” The Dilbert faithful responded with more than 1,300 comments, mixing interesting ideas (interactive murder-mystery theater) with unlikely mischief (nude volleyball tournaments). Mr. Adams asked his employees to read the comments and is now slowly trying some of them.

But what makes this article truly perfect - I can’t believe it didn’t get a Pulitzer - is that it’s not some kind of hostile ambush profile. Adams is totally self-aware. He also finds the whole situation hilarious! Everyone involved is in on the joke! The waiters find it hilarious! After every workday, Adams and the waiters get together and laugh long into the night together about how bad a boss Adams is!

There’s a running joke about how if you see a business that loses millions yearly, it’s probably run by some banker’s wife who’s getting subsidized to feel good about herself and pretend she has a high-powered job. I think this is approximately what was going on with Stacey’s. Adams made enough money off Dilbert that he could indulge his fantasies of being something more than “the Dilbert guy”. For a moment, he could think of himself as a temporarily-embarrassed businessman, rather than just a fantastically successful humorist. The same probably explains his forays into television (“Dilbert: The Animated Series”), non-Dilbert comics (“Plop: The Hairless Elbonian”), and technology (“WhenHub”, his site offering “live chats with subject-matter experts”, which was shelved after he awkwardly tried to build publicity by suggesting that mass shooting witnesses could profit by using his site to tell their stories.)

Adams and Elon Musk occasionally talked about each other - usually to defend one another against media criticism of their respective racist rants - but I don’t know if they ever met. I wonder what it would have been like if they did. I imagine them coming together at some Bay Area house party on copious amounts of LSD or MDMA. One, the world’s greatest comic writer, who more than anything else wanted to succeed in business.
The other, the world’s greatest businessman, who more than anything else wanted people to think that he’s funny. Scott Adams couldn’t stop frittering his talent and fortune on doomed attempts to be taken seriously. But someday Elon Musk will buy America for $100 trillion, tell the UN that he’s renaming it “the United States of 420-69”, and the assembled ambassadors will be as silent as the grave. Are there psychic gains from trade to be had between two such people?

Michael Jordan was the world’s best basketball player, and insisted on testing himself against baseball, where he failed. Herbert Hoover was one of the world’s best businessmen, and insisted on testing himself against politics, where he crashed and burned. We’re all inmates in prisons of different names. Most of us accept it and get on with our lives. Adams couldn’t stop rattling the bars.

I’m No Scientist, But I Think Feng Shui Is Part Of The Answer

Having failed his forays into business, Adams turned to religion. Not in the sense of seeking consolation through God’s love. In the sense of trying to show how clever he was by figuring out the true nature of the Divine.

The result was God’s Debris. This is not a good book. On some level, Adams (of course) seemed to realize this, but (of course) his self-awareness only made things worse. In the second-worst introduction to a work of spiritual wisdom I’ve ever read (Gurdjieff keeps first place by a hair), he explains that this is JUST A THOUGHT EXPERIMENT and IF YOU TAKE IT SERIOUSLY, YOU FAIL. But also, it really makes you think, and it’s going to blow your mind, and you’ll spend the rest of your life secretly wondering whether it was true, but it won’t be, because IT’S JUST A THOUGHT EXPERIMENT, and IF YOU TAKE IT SERIOUSLY, YOU FAIL. Later, in a Bloomberg interview, he would say that this book - and not Dilbert - would be his “ultimate legacy” to the world. But remember, IT’S JUST A THOUGHT EXPERIMENT, and IF YOU TAKE IT SERIOUSLY YOU FAIL.

I read it for the first time while researching this essay. The frame story is that a delivery boy gives a package to the wisest man in the universe, who invites him to stay a while and discuss philosophy (REMEMBER, IT’S JUST A WORK OF FICTION! THESE ARE ONLY CHARACTERS!) Their discussion is one-quarter classic philosophical problems that seemed deep when you were nineteen, presented with no reference to any previous work:

“There has to be a God,” I said. “Otherwise, none of us would be here.” It wasn’t much of a reason, but I figured he didn’t need more.

“Do you believe God is omnipotent and that people have free will?” he asked.

“If God is omnipotent, wouldn’t he know the future?”

“If God knows what the future holds, then all our choices are already made, aren’t they? Free will must be an illusion.”

He was clever, but I wasn’t going to fall for that trap. “God lets us determine the future ourselves, using our free will,” I explained.

“Then you believe God doesn’t know the future?”

“I guess not,” I admitted. “But he must prefer not knowing.”

There is an ongoing meta-discussion among philosophy discussers of how acceptable it is to propose your own answers to the great questions without having fully mastered previous scholarship.
On the one hand, philosophy is one of the most fundamental human activities; gating it behind the near-impossible task of having read every previous philosopher is elitist and gives self-appointed guardians of scholarship a permanent heckler’s veto on any new ideas; and it can create a culture so obsessed with citing every possible influence that eventually the part where you have an opinion withers away and philosophy becomes a meaningless ritual of presenting citations without conclusion. On the other hand, this book.

Another quarter is philosophical questions which did not seem deep, even when you were nineteen, and which nobody has ever done work on, because nobody except Scott Adams ever even thought they were worth considering:

“Well, usually it’s because some important religious event took place there.”

“What does it mean to say that something took place in a particular location when we know that the earth is constantly in motion, rotating on its axis and orbiting the sun? And we’re in a moving galaxy that is part of an expanding universe. Even if you had a spaceship and could fly anywhere, you can never return to the location of a past event. There would be no equivalent of the past location because location depends on your distance from other objects, and all objects in the universe would have moved considerably by then.”

“I see your point, but on Earth the holy places keep their relationship to other things on Earth, and those things don’t move much,” I said.

“Let’s say you dug up all the dirt and rocks and vegetation of a holy place and moved it someplace else, leaving nothing but a hole that is one mile deep in the original location. Would the holy land now be the new location where you put the dirt and rocks and vegetation, or the old location with the hole?”

“I think both would be considered holy,” I said, hedging my bets.

“Suppose you took only the very top layer of soil and vegetation from the holy place, the newer stuff that blew in or grew after the religious event occurred thousands of years ago. Would the place you dumped the topsoil and vegetation be holy?”

“That’s a little trickier,” I said. “I’ll say the new location isn’t holy because the topsoil that you moved there isn’t itself holy, it was only in contact with holy land. If holy land could turn anything that touched it into more holy land, then the whole planet would be holy.”

The old man smiled. “The concept of location is a useful delusion when applied to real estate ownership, or when giving someone directions to the store. But when it is viewed through the eyes of an omnipotent God, the concept of location is absurd. While we speak, nations are arming themselves to fight for control of lands they consider holy. They are trapped in the delusion that locations are real things, not just fictions of the mind. Many will die.”

Another quarter of the discussion is the most pusillanimous possible subjectivism, as if Robert Anton Wilson and the 2004 film What the #$*! Do We Know!? had a kid, then strangled it at birth until it came out brain damaged. We get passages like these:

“I am saying that UFOs, reincarnation, and God are all equal in terms of their reality.”

“Do you mean equally real or equally imaginary?”

“Your question reveals your bias for a binary world where everything is either real or imaginary. That distinction lies in your perceptions, not in the universe. Your inability to see other possibilities and your lack of vocabulary are your brain’s limits, not the universe’s.”

“There has to be a difference between real and imagined things,” I countered. “My truck is real. The Easter Bunny is imagined. Those are different.”

“As you sit here, your truck exists for you only in your memory, a place in your mind. The Easter Bunny lives in the same place. They are equal.”

I remember the late ’90s and early ‘00s; I was (regrettably) there. For some reason, all this stuff was considered the height of wisdom back then. The actual Buddhist classics were hard to access, but everyone assumed that Buddhists were wise and they probably said, you know, something like this. If you said stuff like this, you could be wise too.

The final quarter of the book is a shockingly original take on the Lurianic kabbalah. I‘m not pleased to report this, and Adams likely would have been very surprised to learn it. Still, the resemblance is unmistakable. The wisest man in the world, charged with answering all of the philosophical problems that bothered you when you were nineteen, tells the following story: if God exists, He must be perfect. Therefore, the only thing he lacks is nonexistence. Therefore, in order to fill that lack, He must destroy himself in order to create the universe. The universe is composed of the fragments of that destruction - the titular God’s Debris. Its point is to reassemble itself into God. Partially-reassembled-God is not yet fully conscious, but there is some sort of instinct within His fragments - ie within the universe - that is motivated to help orchestrate the self-reassembly, and it is this instinct which causes anti-entropic processes like evolution. Good things are good because they aid in the reassembly of God; bad things are bad because they hinder it.

Adams’ version adds several innovations to this basic story. Whatever parts of God aren’t involved in physical matter have become the laws of probability; this explains the otherwise inexplicable evolutionary coincidences that created humankind. There’s something about how gravity is produced by some sort of interference between different divine corpuscules - Adams admits that Einstein probably also had useful things to say about gravity, but probably his own version amounts to the same thing, and it’s easier to understand, and that makes it better (IT’S JUST A THOUGHT EXPERIMENT! IF YOU TAKE IT SERIOUSLY, YOU FAIL.) But my favorite part is the augmentation of Luria with Nick Land: the final (or one of the final) steps in the divine reassembly is the creation of the Internet, aka “God’s nervous system”, which will connect everything to everything else and give the whole system awareness of its divine purpose. I’m honestly impressed that a Gentile worked all of this out on his own. Adams completes the performance by reinventing Kegan levels (this time I’m agnostic as to whether it’s convergent evolution or simple plagiarism), although characteristically it is in the most annoying way possible:

[The wise man] described what he called the five levels of awareness and said that all humans experience the first level of awareness at birth. That is when you first become aware that you exist.

In the second level of awareness you understand that other people exist. You believe most of what you are told by authority figures. You accept the belief system in which you are raised.

At the third level of awareness you recognize that humans are often wrong about the things they believe. You feel that you might be wrong about some of your own beliefs but you don’t know which ones. Despite your doubts, you still find comfort in your beliefs.

The fourth level is skepticism. You believe the scientific method is the best measure of what is true and you believe you have a good working grasp of truth, thanks to science, your logic, and your senses. You are arrogant when it comes to dealing with people in levels two and three.

The fifth level of awareness is the Avatar. The Avatar understands that the mind is an illusion generator, not a window to reality. The Avatar recognizes science as a belief system, albeit a useful one. An Avatar is aware of God’s power as expressed in probability and the inevitable recombination of God consciousness.

I think going through every David Chapman essay and replacing the word “metarationality” with “THE AVATAR” would actually be very refreshing.

What are we to make of all of this?

Nothing is more American than inventing weird cringe fusions of religion and atheism where you say that God doesn’t exist as (gestures upward) some Big Man In The Sky the way those people believe, but also, there totally is a God, in some complicated sense which only I understand. When Thomas Jefferson cut all the passages with miracles out of his Bible, he was already standing on the shoulders of generations of Unitarians, Quakers, and Latitudinarians.

This was augmented by the vagaries of nerd culture’s intersection with the sci-fi fandom. The same people who wanted to read about spaceships and ray guns also wanted to read about psionics and Atlantis, so the smart sci-fi nerd consensus morphed into something like “probably all that unexplained stuff is real, but has a scientific explanation”. Telepathy is made up of quantum particles, or whatever (I talk about this more in my article on the Shaver Mystery). It became a nerd rite of passage to come up with your own theory that reconciled the spiritual and the material in the most creative way possible.

And the Nineties (God’s Debris was published in 2001) were a special time. The decade began with the peak of Wicca and neopaganism. Contra current ideological fault lines, where these tendencies bring up images of Etsy witches, they previously dominated nerd circles, including male nerds, techie nerds, and right-wing nerds (did you know Eric S. Raymond is neopagan?) By decade’s end, the cleverest (ie most annoying) nerds were switching to New Atheism; throughout, smaller groups were exploring Discordianism, chaos magick, and the Subgenius. The common thread was that Christianity had lost its hegemonic status, part of being a clever nerd was patting yourself on the back for having seen through it, but exactly what would replace it was still uncertain, and there was still enough piety in the water supply that people were uncomfortable forgetting about religion entirely. You either had to make a very conscious, marked choice to stop believing (New Atheism), or try your hand at the task of inventing some kind of softer middle ground (neopaganism, Eastern religion, various cults, whatever this book was supposed to be).

It’s Obvious You Won’t Survive By Your Wits Alone

Adams spent his life obsessed with self-help. Even more than a businessman or a prophet, he wanted to be a self-help guru. Of course he did.
His particular package of woo - a combination of hypnosis, persuasion hacks, and social skills advice - unified the two great motifs of his life.

Thesis: I am cleverer than everyone else.

Antithesis: I always lose to the Pointy-Haired Boss.

Synthesis: I was trying to be rational. But most people are irrational sheep; they can be directed only by charismatic manipulators who play on their biases, not by rational persuasion. But now I’m back to being cleverer than everyone else, because I noticed this. Also, I should become a charismatic manipulator.

I phrased this in a maximally hostile way, but it’s not wrong. And Adams started off strong. He read Dale Carnegie’s How To Win Friends And Influence People, widely agreed to be the classic book on social skills. Then, in search of even stronger persuasion techniques, he turned to hypnosis. This has a bad reputation, but I basically buy that something is there. Psychiatry has legends of psychotherapist-hypnotists who achieved amazing things, and there’s a plausible scientific story for why it might work. So when Adams claimed to be a master hypnotist, I was originally willing to give him the benefit of the doubt.

That lasted until I read The Religion War, Adams’ sequel to God’s Debris. In the intro, which may be literally the most annoying passage ever written in all two million years of human history, he discusses the reception of the original book:

This is a sequel to my book God’s Debris, a story about a deliveryman who chances upon the smartest person in the world and learns the secrets of reality. I subtitled that book A Thought Experiment and used a variety of hypnosis techniques in an attempt to produce a feeling of euphoric enlightenment in the reader similar to what the main character would feel while discovering the (fictionally) true nature of reality. Reactions to the book were all over the map. About half of the people who e-mailed me said they felt various flavors of euphoria, expanded awareness, connectedness, and other weird sensations that defied description. A surprising number of people reported reading the entire book twice in one day. So I know something was happening.

Other people wrote angry letters and scathing reviews, pointing out the logical and factual flaws in the book. It is full of flaws, and much of the science is made up, as it states in the introduction. I explained that the reader is supposed to be looking for flaws. That’s what makes the experiment work. You might think this group of readers skipped the introduction and missed the stated point of the book, but I suspect that something else is going on. People get a kind of cognitive dissonance (brain cramp) when their worldview is disturbed. It’s fun to watch.

I previously felt bad for writing this essay after Adams’ death; it seems kind of unsporting to disagree with someone who can’t respond. These paragraphs cured me of my misgivings: after his death is by far the best time to disagree with Scott Adams.

The book is a novel (a real novel this time, with plot and everything) meant to dramatize the lessons of its predecessor. In the near future, the Muslims and Christians are on the verge of global war. Adams’ self-insert character, the Avatar, goes around hypnotizing and mind hacking everyone into cooperating with his hare-brained scheme for world peace. In an early chapter, the Christian alliance has captured the Avatar and sent him to be tortured.
But the Avatar masterfully deflects the torturer’s attention with a bit of cold reading, some pointed questions, and a few hypnotic suggestions:

As the Avatar planned, the interrogator’s conscious mind was scrambled by the emotions and thoughts of the past minutes. This brutish man, accustomed to avoiding deep thoughts, had imagined the tiniest particles of the universe, his childhood, and the battles of the future. He had laughed, felt pain and pity, been intellectually stimulated, confused, assured, and uncertain. The Avatar had challenged his worldview, and it was evaporating, leaving him feeling empty, unimportant, and purposeless.

In the thrilling climax, which takes place at Stacey’s Cafe (yes, it’s the real-world restaurant Adams was managing - yes, he turned his religious-apocalyptic thriller novel into an ad for his restaurant - yes, I bet he thought of this as a “hypnotic suggestion”), the characters find the Prime Influencer. She is able to come up with a short snappy slogan so memetically powerful that it defeats fundamentalist religion and ends the war (the slogan is: “If God is so smart, why do you fart?”). Adams’ mouthpiece character says:

It wasn’t the wisdom of the question that made it so powerful; philosophers had posed better questions for aeons. It was the packaging—the marketing, if you will—the repeatability and simplicity, the timing, the Zeitgeist, and in the end, the fact that everyone eventually heard it from someone whose opinion they trusted.

The question was short, provocative, and cast in the language of international commerce that almost everyone understood—English. Most important, and generally overlooked by historians: It rhymed and it was funny. Once you heard it, you could never forget it. It looped in the brain, gaining the weight and feel of truth with each repetition. Human brains have a limited capacity for logic and evidence. Throughout time, repetition and frequency were how people decided what was most true.

This paragraph is the absolute center of Adams’ worldview (later expanded to book length several times in tomes named things like Win Bigly: Persuasion In A World Where Facts Don’t Matter). People don’t respond to logic and evidence, so the world is ruled by people who are good at making catchy slogans. Sufficiently advanced sloganeering is indistinguishable from hypnosis, and so when Adams has some cute turns of phrase in his previous book, he describes it as “[I] used a variety of hypnosis techniques in an attempt to produce a feeling of euphoric enlightenment in the reader”. This is the cringiest way possible to describe cute turns of phrase, and turns me off from believing any of his further claims to hypnotic mastery.

Throughout this piece, I’ve tried to emphasize that Adams was usually pretty self-aware. Did that include the hypnosis stuff? I’m not sure. I think he would have answered: certainly some people are great charismatic manipulators. Either their skills are magic, or they operate by some physical law. If they operate by physical law, they should be learnable. Maybe I’m not quite Steve Jobs level yet, but I have to be somewhere along the path to becoming Steve Jobs, right? And why not describe it in impressive terms? Steve Jobs would have come up with impressive-sounding terms for any skills he had, and you would have believed him!

Every few months, some group of bright nerds in San Francisco has the same idea: we’ll use our intelligence to hack ourselves to become hot and hard-working and charismatic and persuasive, then reap the benefits of all those things! This is such a seductive idea, there’s no reason whatsoever that it shouldn’t work, and every yoga studio and therapist’s office in the Bay Area has a little shed in the back where they keep the skulls of the last ten thousand bright nerds who tried this. I can’t explain why it so invariably goes wrong. The best I can do is tell a story where, when you’re trying to do this, you’re selecting for either techniques that can change you, or techniques that can compellingly make you think you’ve been changed. The latter are much more common than the former. And the most successful parasites are always those which can alter their host environment to be more amenable to themselves, and if you’re a parasite taking the form of a bad idea, that means hijacking your host’s rationality. So you’re really selecting for things that are compelling, seductive, and damage your ability to tell good ideas from bad ones. This is a just-so story that I have no evidence for - but seriously, go to someone who has the words “human potential” on their business card and ask them if you can see the skull shed.

But also: it’s attractive to be an effortlessly confident alpha male who oozes masculinity. And it’s . . . fine . . . to be a normal person with normal-person hangups. What you really don’t want to be is a normal person who is unconvincingly pretending to be a confident alpha male. “Oh hello, nice to meet you, I came here in my Ferrari, it’s definitely not a rental, you’re having the pasta - I’m choosing it for you because I’m so dominant - anyway, do you want to have sex when we get back? Oh, wait, I forgot to neg you, nice hair, is it fake?” In theory, becoming a hot charismatic person with great social skills ought to be the same kind of task as everything else, where you practice a little and you’re bad, but then you practice more and you become good. But the uncanny valley is deep and wide, and Scott Adams was too invested in saying “Ha! I just hypnotized you - ha! There, did it again!” for me to trust his mountaineering skills.

It all led, inexorably, to Trump.

In summer 2015, Trump came down his escalator and announced his presidential candidacy. Given his comic status, his beyond-the-pale views, and his competition with a crowded field including Jeb Bush and Ted Cruz, traditional media wrote him off. Sure, he immediately led in the polls, but political history was full of weirdos who got brief poll bumps eighteen months before an election only to burn out later. The prediction markets listed his chance of the nomination (not the Presidency!) at 5%.

Which made it especially jarring when, in August, Scott Adams wrote a blog post asserting that Trump had “a 98% chance” of winning. This claim received national attention, because Trump was dominating the news cycle and Adams was approximately the only person, anywhere, who thought he had a chance. There are two ways to make historically good predictions. The first way is to be some kind of brilliant superforecaster. Adams wasn’t this. Every big prediction he made after this one failed.
Wikipedia notes that he dominated a Politico feature called “The Absolute Worst Political Prediction of 20XX”, with the authors even remarking that he “has managed to appear on this annual roundup of the worst predictions in politics more than any other person on the planet”. His most famous howler was that if Biden won in 2020, Republicans “would be hunted” and his Republican readers would “most likely be dead within a year”. But other highlights include “a major presidential candidate will die of COVID”, “the Supreme Court will overturn the 2024 election”, and “Hillary Clinton will start a race war”.

The other way to make a great prediction is to live your entire life for one perfect moment - the inveterate bear who predicted twelve of the last zero recessions, but now it’s 2008 and you look like a genius. By 2015, Adams had become a broken record around one point: people are irrational sheep who are prey for charismatic manipulators. The pointy-haired boss always wins. Trump was the pointiest-haired person in the vicinity, and he was obviously trying to charismatically play on people’s instincts while other people were doing comparatively normal politics. Scott Adams’ hour had arrived.

But Adams also handled his time in the spotlight masterfully. He gave us terms like “clown genius”. I hate using this, because I know Scott Adams was sitting at his desk in his custom-built Dilbert-head-shaped tower thinking “What sort of hypnotic catchy slogans can I use to make my meme about Trump spread . . . aha! Clown genius! That has exactly the right ring!” and it absolutely worked, and now everyone who was following the Internet in 2015 has the phrase “clown genius” etched into their brains (Adams calls these “linguistic kill shots”; since I remember that term and use it often, I suppose “linguistic kill shot” is an example of itself). He went from news outlet to news outlet saying “As a trained hypnotist, I can tell you what tricks Trump is using to bamboozle his followers, given that rational persuasion is fake and marketing techniques alone turn the wheels of history,” and the news outlets ate it up.

You probably thought I was making up the part where Scott Adams has a custom-built tower shaped like Dilbert’s head.

And some of his commentary was good. He was one of the first people to point out the classic Trump overreach, where he would say something like “Sleepy Joe Biden let in twenty trillion illegal immigrants!” The liberal media would take the bait and say “FACT CHECK: False! - Joe Biden only let in five million illegal immigrants!”, and thousands of people who had never previously been exposed to any narrative-threatening information would think “Wait, Joe Biden let in five million illegal immigrants?!” Once you notice it, it’s hard to unsee.

Adams started out by stressing that he was politically independent. He didn’t support Trump, he was just the outside hypnosis expert pointing out what Trump was doing. IT’S JUST A THOUGHT EXPERIMENT, IF YOU TAKE IT SERIOUSLY, YOU FAIL. Indeed, “this person is a charismatic manipulator hacking the minds of irrational sheep” is hardly a pro-Trump take. And he lived in Pleasanton, California - a member in good standing of the San Francisco metropolitan area - and nice Pleasantonians simply did not become Trump supporters in 2016.

On the other hand, at some point, his increasingly overblown theories of Trump’s greatness opened up a little wedge. The growing MAGA movement started treating him as one of their own; liberals started to see him as an enemy.
His fame turned the All-Seeing Eye of social media upon him, that gaze which no man may meet without consequence. Once you’re sufficiently prominent, politics becomes a separating equilibrium; if you lean even slightly to one side, the other will pile on you so massively and traumatically that it will force you into their opponents’ open arms just for a shred of psychological security.As he had done so many other times during his life, he resolved the conflict in the dumbest, cringiest, and most public way possible: a June 2016 blog post announcing that he was endorsing Hillary Clinton, for his own safety, because he suspected he would be targeted for assassination if he didn’t:This past week we saw Clinton pair the idea of President Trump with nuclear disaster, racism, Hitler, the Holocaust, and whatever else makes you tremble in fear. That is good persuasion if you can pull it off because fear is a strong motivator. It is also a sharp pivot from Clinton’s prior approach of talking about her mastery of policy details, her experience, and her gender. Trump took her so-called “woman card” and turned it into a liability. So Clinton wisely pivoted. Her new scare tactics are solid-gold persuasion. I wouldn’t be surprised if you see Clinton’s numbers versus Trump improve in June, at least temporarily, until Trump finds a counter-move.The only downside I can see to the new approach is that it is likely to trigger a race war in the United States. And I would be a top-ten assassination target in that scenario […]So I’ve decided to endorse Hillary Clinton for President, for my personal safety. Trump supporters don’t have any bad feelings about patriotic Americans such as myself, so I’ll be safe from that crowd. But Clinton supporters have convinced me — and here I am being 100% serious — that my safety is at risk if I am seen as supportive of Trump. So I’m taking the safe way out and endorsing Hillary Clinton for president.As I have often said, I have no psychic powers and I don’t know which candidate would be the best president. But I do know which outcome is most likely to get me killed by my fellow citizens. So for safety reason, I’m on team Clinton.My prediction remains that Trump will win in a landslide based on his superior persuasion skills. But don’t blame me for anything President Trump does in office because I endorse Clinton.This somehow failed to be a masterstroke of hypnotic manipulation that left both sides placated. But it was fine, because Trump won anyway! In the New Right’s wave of exultation, all was forgiven, and the first high-profile figure to bet on Trump became a local hero and confirmed prophet. Never mind that Adams had predicted Trump would win by “one of the biggest margins we’ve seen in recent history” when in fact he lost the popular vote. The man who had dreamed all his life of being respected for something other than cartooning had finally made it.Obviously, it destroyed him.At first, I wondered if Adams’ right-wing turn was a calculated manuever. He’d always longed to be a manipulator of lesser humans, and had finally achieved slightly-above-zero skill at it. Wouldn’t it fit his personality to see the right-wingers as dumb sheep, and himself as the clever Dogbert-style scammer who could profit off them? Did he really believe (as he claimed) that he was at risk of being assassinated by left-wing radicals who couldn’t handle his level of insight into Trump’s genius? 
Or was this just another hypnotic suggestion, retrospectively justified insofar as we’re still talking about it ten years later and all publicity is good publicity?
But I don’t think he did it cynically. At the turn of the millennium, the obsessed-with-their-own-cleverness demographic leaned firmly liberal: smug New Atheists, hardline skeptics, members of the “reality-based community”. But in the 2010s, liberalism became the default, the public switched to expertolatry, dumb people’s orthodoxies about race and gender became easier and more fun to puncture than dumb people’s orthodoxies about religion - and the O.W.T.O.C.s lurched right. Adams was borne along by the tide. With enough time, dedication, and archive access, you can hop from Dilbert comic to Dilbert comic, tracing the exact contours of his political journey.
There’s a passage in the intro to one of Adams’ books where he says that, given how he’s going to blow your mind and totally puncture everything you previously believed, perhaps the work is unsuitable for people above fifty-five, whose brains are comparatively sclerotic and might shatter at the strain. This is how I feel about post-2016 politics. Young people were mostly able to weather the damage. As for older people, I have seen public intellectual after public intellectual who I previously respected have their brains turn to puddles of partisan-flavored mush. Jordan Peterson, Ken White, Curtis Yarvin, Paul Krugman, Elon Musk, the Weinsteins, [various people close enough to me that it would be impolite to name them here]. Once, these people were lions of insightful debate. Where now are the horse and the rider? Where is the horn that was blowing? Adams was 58 when Trump changed everything. In 2001, age 44, he’d found the failure of his Dilberito funny. But in another interview, at age 50, he suggested that maybe his competitors had formed teams to sneak into supermarkets and hide them in the back of the shelves. Being tragically flawed yet also self-aware enough to laugh about it is a young man’s game. In 2024, diagnosed with terminal cancer, Adams decided to treat it via ivermectin, according to a protocol recommended by fellow right-wing contrarian Dr. William Makis. This doesn’t seem to me like a story about a cynic milking right-wingers for the grift. It sounds like a true believer. Scott Adams, the man too clever and independent to join any political tendency, who had sworn to always be the master manipulator standing above the fray rather than a sheep with ordinary object-level opinions - had finally succumbed to sincere belief.
It’s Not Funny If I Have To Explain It
Every child is hypomanic, convinced of their own specialness. Even most teenagers still suspect that, if everything went right, they could change the world.
It’s not just nerds. Everyone has to crash into reality. The guitar player who starts a garage band in order to become a rockstar. The varsity athlete who wants to make the big leagues. They all eventually realize, no, I’m mediocre. Even the ones who aren’t mediocre, the ones with some special talent, only have one special talent (let’s say cartooning) and no more.
I don’t know how the musicians and athletes cope. I hear stories about washed-up alcoholic former high school quarterbacks forever telling their girlfriends about how if Coach had only put them in for the last quarter during the big game, things would have gone differently.
But since most writers are nerds, it’s the nerds who dominate the discussion, so much so that the whole affair gets dubbed “Former Gifted Kid Syndrome”.
Every nerd who was the smartest kid in their high school goes to an appropriately-ranked college and realizes they’re nothing special. But also, once they go into some specific field they find that intellect, as versatile as it is, can only take them so far. And for someone who was told their whole childhood that they were going to cure cancer (alas, a real quote from my elementary school teacher), it’s a tough pill to swallow.
Reaction formation, where you replace an unbearable feeling with its exact opposite, is one of the all-time great Freudian defense mechanisms. You may remember it from such classics as “rape victims fall in love with their rapist” or “secretly gay people become really homophobic”. So some percent of washed-up gifted kids compensate by really, really hating nerdiness, rationality, and the intellect.
The varieties of self-hating nerd are too many to number. There are the nerds who go into psychology to prove that EQ is a real thing and IQ merely its pale pathetic shadow. There are the nerds who become super-woke and talk about how reason and objectivity are forms of white supremacy culture. There are the nerds who obsess over “embodiment” and “somatic therapy” and accuse everyone else of “living in their heads”. There are the nerds who deflect by becoming really into neurodiversity - “the interesting thing about my brain isn’t that I’m ‘smart’ or ‘rational’, it’s that I’m ADHDtistic, which is actually a weakness . . . but also secretly a strength!” There are the nerds who flirt with fascism because it idolizes men of action, and the nerds who convert to Christianity because it idolizes men of faith. There are the nerds who get really into Seeing Like A State, and how being into rationality and metrics and numbers is soooooo High Modernist, but as a Kegan Level Five Avatar they are far beyond such petty concerns. There are the nerds who redefine “nerd” as “person who likes Marvel movies” - having successfully gerrymandered themselves outside the category, they can go back to their impeccably-accurate statisticsblogging on educational outcomes, or their deep dives into anthropology and medieval mysticism, all while casting about them imprecations that of course nerds are loathsome scum who deserve to be bullied.
(maybe it’s unfair to attribute this to self-hatred per se. Adams wrote, not unfairly, that the scientismists in Kegan level 4 “are arrogant when it comes to dealing with people in levels two and three.” Maybe there’s the same desperate urge for level 5 to differentiate themselves from 4s - cf. barberpole theory of fashion).
Scott Adams felt the contradictions of nerd-dom more acutely than most. As compensation, he was gifted with two great defense mechanisms. The first was humor (which Freud grouped among the mature, adaptive defenses), aided by its handmaiden self-awareness. The second (from Freud’s “neurotic” category) was his own particular variety of reaction formation, “I’m better than those other nerds because, while they foolishly worship rationality and the intellect, I’ve gotten past it to the real deal, marketing / manipulation / persuasion / hypnosis.” When he was young, and his mind supple, he was able to balance both these mechanisms; the steam of their dissonance drove the turbine of his art.
As he grew older, the first one - especially the self-awareness - started to fail, and he leaned increasingly heavily on the second. Forced to bear the entire weight of his wounded psyche, it started showing more and more cracks, until eventually he ended up as a podcaster - the surest sign of a deranged mind. In comparison, his final downfall was almost trivial - a bog-standard cancellation, indistinguishable from every other cancellation of the 2015-2025 period. Angered by a poll where some black people expressed discomfort with the slogan “It’s Okay To Be White”, Adams declared that “the best advice I would give to white people is to get the hell away from black people; just get the fuck away”. Needless to say, his publisher, syndicator, and basically every newspaper in the country dropped him immediately. He relaunched his comics on Locals, an online subscription platform for cancelled people, but his reach had declined by two orders of magnitude and never recovered.
Adams was willing to sacrifice everything for the right to say “It’s Okay To Be White”. I can’t help wondering what his life would have been like if he’d been equally willing to assert the okayness of the rest of his identity.
Dilbert’s Guide to the Rest of Your Life
Partly this is because we’re too similar for me to hate him without hating myself. You’re a bald guy with glasses named Scott A who lives in the San Francisco Bay Area. You think you’re pretty clever, but the world has a way of reminding you of your limitations. You try to work a normal job. You do a little funny writing on the side. People like the funny writing more than you expected. Hardly believing your luck, you quit to do the funny writing full time. You explore themes about the irrationality of the world. You have some crazy ideas you’re not entirely willing to stand behind, and present them as fiction or speculation or April Fools jokes. You always wonder whether your purpose in life is really just funny writing - not because people don’t love the stuff you write, not even because you don’t get fan mail saying you somehow mysteriously changed people’s lives, but just because it seems less serious than being a titan of industry or something. You try some other things. They don’t go terribly, but they don’t go great either. You decide to stick with what you’re good at. You write a book about the Lurianic kabbalah. You get really into whale puns.
As we pass through life, sometimes God shows us doppelgangers, bright or dark mirrors of ourselves, glimpses of how we might turn out if we zig or zag on the path ahead. Some of these people are meant as shining inspirations, others as terrible warnings, but they’re all our teachers. Adams was my teacher in a more literal way too. He published several annotated collections, books where he would present comics along with an explanation of exactly what he was doing in each place, why some things were funny and others weren’t, and how you could one day be as funny as him. Ten-year-old Scott devoured these. I’ve always tried to hide my power level as a humorist, lest I get pegged as a comedic author and people stop taking me seriously. But objectively my joke posts get the most likes and retweets of anything I write, and I owe much of my skill in the genre to cramming Adams’ advice into a malleable immature brain. There’s a direct line between Dogbert’s crazy schemes and the startup ideas in a typical Bay Area House Party post.
The Talmud tells the story of the death of Rabbi Elisha. Elisha was an evil apostate.
His former student, Rabbi Meir, who stayed good and orthodox, insisted that Rabbi Elisha probably went to Heaven. This was never very plausible, and God sent increasingly obvious signs to the contrary, including a booming voice from Heaven saying that Elisha was not saved. Out of loyalty to his ex-teacher, Meir dismissed them all - that voice was probably just some kind of 4D chess move - and insisted that Elisha had a share in the World To Come. Out of the same doomed loyalty as Rabbi Meir, I want to believe Scott Adams went to Heaven. There is what at first appears to be promising evidence - in his final message to his fans, Adams said:
Many Christian friends have asked me to find Jesus before I go. I’m not a believer, but I have to admit the risk-reward calculation for doing so looks attractive. So here I go: I accept Jesus Christ as my lord and savior, and I look forward to spending an eternity with him. The part about me not being a believer should be quickly resolved if I wake up in heaven. I won’t need any more convincing than that. And I hope I am still qualified for entry.
It is a dogma of many religions that sincere deathbed conversions are accepted. But I’d be more comfortable if this sounded less like “haha, I found my final clever lifehack”. I can only hope he didn’t try to implant any hypnotic suggestions in an attempt to get a linguistic kill shot in on the Almighty. As another self-hating nerd writer put it, “through all these years I make experiment if my sins or Your mercy greater be.”
But I’m more encouraged by the second half of his departing note:
For the first part of my life, I was focused on making myself a worthy husband and parent, as a way to find meaning. That worked. But marriages don’t always last forever, and mine eventually ended, in a highly amicable way. I’m grateful for those years and for the people I came to call my family.
Once the marriage unwound, I needed a new focus. A new meaning. And so I donated myself to “the world,” literally speaking the words out loud in my otherwise silent home. From that point on, I looked for ways I could add the most to people’s lives, one way or another.
That marked the start of my evolution from Dilbert cartoonist to an author of - what I hoped would be - useful books. By then, I believed I had condensed enough life lessons that I could start passing them on. I continued making Dilbert comics, of course.
As luck would have it, I’m a good writer. My first book in the “useful” genre was How to Fail at Almost Everything and Still Win Big. That book turned out to be a huge success, often imitated, and influencing a wide variety of people. I still hear every day how much that book changed lives. My plan to be useful was working.
I followed up with my book Win Bigly, that trained an army of citizens how to be more persuasive, which they correctly saw as a minor super power. I know that book changed lives because I hear it often.
You’ll probably never know the impact the book had on the world, but I know, and it pleases me while giving me a sense of meaning that is impossible to describe.
My next book, Loserthink, tried to teach people how to think better, especially if they were displaying their thinking on social media. That one didn’t put much of a dent in the universe, but I tried.
Finally, my book Reframe Your Brain taught readers how to program their own thoughts to make their personal and professional lives better. I was surprised and delighted at how much positive impact that book is having.
I also started podcasting a live show called Coffee With Scott Adams, dedicated to helping people think about the world, and their lives, in a more productive way. I didn’t plan it this way, but it ended up helping lots of lonely people find a community that made them feel less lonely. Again, that had great meaning for me.
I had an amazing life. I gave it everything I had. If you got any benefits from my work, I’m asking you to pay it forward as best you can. That is the legacy I want.
And please know I loved you all to the end.
I had been vaguely aware that he had some community around him, but on the occasion of his death, I tried watching an episode or two of his show. I couldn’t entirely follow, but I think his various sub-shows are getting rolled into a broader brand, The Scott Adams School, where his acolytes discuss and teach his theory of persuasion:
The woman on the top left is his ex-wife. Even though they’ve been divorced for twelve years, they never abandoned each other. All the other faces are people who found Adams revelatory and are choosing to continue his intellectual tradition. And in the comments - thirteen thousand of them - are other people who loved Adams. Some watch every episode of his podcast and consider him a genius. Others were touched in more subtle ways. People who wrote him with their problems and he responded. People who met him on the street and demanded the typical famous person “pose for a photo with me”, and he did so graciously. People who said his self-help books really helped them. People who just used Dilbert to stay sane through their cubicle jobs. Adams is easy and fun to mock - as is everyone who lives their life uniquely and unapologetically. I’ve had a good time psychoanalyzing him, but everyone does whatever they do for psychological reasons, and some people end up doing good. Though I can’t endorse either Adams’ politics or his persuasive methods, everything is a combination of itself and an attempt to build a community. And whatever the value of his ideas, the community seems real and loving.
And I’m serious when I say I consider Adams a teacher. For me, he was the sort of teacher who shows you what to avoid; for many others, he was the type who serves as inspiration. These roles aren’t quite opposites - they’re both downstream of a man who blazed his own path, and who recorded every step he took, with unusual grace and humor, as documentation for those who would face a choice of whether or not to follow. This wasn’t a coincidence, but the conscious and worthy project of his life. Just for today, I’ll consider myself part of the same student body as all the other Adams fans, and join my fellows in tribute to our fallen instructor. I hope he gets his linguistic kill shot in on God and squeaks through the Pearly Gates.
...
Read the original on www.astralcodexten.com »
...
Read the original on www.threads.com »
...
Read the original on mastodon.social »
When Neil Armstrong stepped onto the lunar surface in 1969, the world witnessed what organized human ingenuity could accomplish. Behind that achievement stood Margaret Hamilton and her team, writing Apollo’s guidance software by hand, catching critical errors through careful review, and proving that software could be mission-critical.
The Apollo program demonstrated that software development was essential to achieving the impossible. Yet it also revealed something that would frustrate business leaders for decades to come: writing software required specialized knowledge, intense focus, and significant time investment. The dream of making it easier—of needing fewer of these expensive specialists—began almost immediately.
COBOL emerged in 1959 with an explicit goal stated in its name: Common Business-Oriented Language. As it spread through the 1960s and 1970s, the vision remained clear: make the language read like English sentences, and business analysts would write their own programs. No need for specialized programmers.
This vision had genuine appeal. Software was becoming essential to business operations, yet programmers remained a scarce, expensive resource. COBOL promised to democratize software creation.
What happened instead? COBOL became another programming language requiring specialized training. Business analysts who tried to write COBOL quickly discovered that readable syntax didn’t eliminate the complexity of logic, data structures, or system design. A new class of COBOL programmers emerged, and the dream of eliminating specialized developers remained unfulfilled.
Yet the dream didn’t die. It simply waited for the next technological wave.
Computer-Aided Software Engineering tools arrived in the 1980s with tremendous promise. Draw flowcharts and entity-relationship diagrams, and the tool would generate working code. The marketing message resonated: visual design was more intuitive than typing cryptic commands. Business experts could model their processes, and software would materialize.
Organizations invested heavily. Vendors promised productivity increases of 10x or more. Yet most CASE tool initiatives struggled or failed outright.
The generated code often required substantial manual intervention. Performance problems emerged. Maintenance became a nightmare when generated code diverged from the visual models. Most critically, drawing accurate diagrams required understanding the same logical complexity that programming demanded. The tool changed the interface but not the fundamental challenge.
Once again, the problem proved more stubborn than the solution.
The 1990s brought a different approach. Microsoft’s Visual Basic and Borland’s Delphi made building user interfaces dramatically easier. Drag components onto a form, set properties, write event handlers. Suddenly, creating a Windows application felt achievable for developers with modest experience.
This wave succeeded differently than COBOL or CASE tools. These environments acknowledged that programming knowledge was still necessary, but they reduced the barrier to entry. A broader range of people could create useful applications.
Yet the dream of eliminating developers persisted. “Power users” and “citizen developers” would build departmental applications. IT departments could focus on infrastructure while business units solved their own software needs.
Reality proved more nuanced. Simple applications were indeed accessible to more people. But as requirements grew in complexity—integration with existing systems, security considerations, performance under load, long-term maintenance—the need for experienced developers became evident. The tools expanded who could write software, but they didn’t eliminate the expertise required for substantial systems.
And so the cycle continued into the new millennium.
Each subsequent decade introduced new variations. Ruby on Rails promised convention over configuration. Low-code platforms offered visual development with minimal coding. No-code platforms claimed to eliminate programming entirely for common business applications.
Each wave delivered real value. Development genuinely became faster in specific contexts. More people could participate in creating software solutions. Yet professional software developers remained essential, and demand for their skills continued growing rather than shrinking.
Which brings us to the question: why does this pattern repeat?
The recurring pattern reveals something important about how we think about complexity. Software development looks like it should be simple because we can describe what we want in plain language. “When a customer places an order, check inventory, calculate shipping, process payment, and send a confirmation email.” That description sounds straightforward.
The complexity emerges in the details. What happens when inventory is temporarily reserved by another order? How do you handle partial payments? What if the email service is temporarily unavailable? Should you retry? How many times? What if the customer’s session expires during checkout? How do you prevent duplicate orders?
Each answer leads to more questions. The accumulated decisions, edge cases, and interactions create genuine complexity that no tool or language can eliminate. Someone must think through these scenarios. That thinking is software development, regardless of whether it’s expressed in COBOL, a CASE tool diagram, Visual Basic, or an AI prompt.
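To make that concrete, here is a minimal sketch in Python of how the “simple” order flow above accumulates edge-case handling. Every name here is hypothetical, invented purely for illustration:
import time
class OutOfStockError(Exception): pass
class PaymentError(Exception): pass
class MailerUnavailable(Exception): pass
MAX_EMAIL_RETRIES = 3  # how many retries? another decision someone must make
def place_order(order, inventory, payments, mailer):
    # Inventory may be temporarily reserved by another order.
    if not inventory.reserve(order.items):
        raise OutOfStockError(order.items)
    try:
        charge = payments.charge(order.customer, order.total)
    except PaymentError:
        inventory.release(order.items)  # don't leak the reservation
        raise
    # The email service may be temporarily unavailable: retry with
    # backoff, but never fail a paid order over a confirmation email.
    for attempt in range(MAX_EMAIL_RETRIES):
        try:
            mailer.send_confirmation(order, charge)
            break
        except MailerUnavailable:
            time.sleep(2 ** attempt)
    return charge
Even this sketch dodges duplicate orders, expired sessions, and partial payments; each answer adds another branch someone has to think through.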
Which brings us to today’s excitement.
Today’s AI coding assistants represent the most capable attempt yet to assist with software creation. They can generate substantial amounts of working code from natural language descriptions. They can explain existing code, suggest improvements, and help debug problems.
This represents genuine progress. The assistance is real and valuable. Experienced developers use these tools to work more efficiently. People learning to code find the interactive guidance helpful.
Yet we’re already seeing the familiar pattern emerge. Initial excitement about AI replacing developers is giving way to a more nuanced understanding: AI changes how developers work rather than eliminating the need for their judgment. The complexity remains. Someone must understand the business problem, evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve.
AI amplifies developer capability. It doesn’t replace the need for people who understand both the problem domain and the technical landscape.
Here’s the paradox that makes this pattern particularly poignant. We’ve made extraordinary progress in software capabilities. The Apollo guidance computer had 4KB of RAM. Your smartphone has millions of times more computing power. We’ve built tools and frameworks that genuinely make many aspects of development easier.
Yet demand for software far exceeds our ability to create it. Every organization needs more software than it can build. The backlog of desired features and new initiatives grows faster than development teams can address it.
This tension—powerful tools yet insufficient capacity—keeps the dream alive. Business leaders look at the backlog and think, “There must be a way to go faster, to enable more people to contribute.” That’s a reasonable thought. It leads naturally to enthusiasm for any tool or approach that promises to democratize software creation.
The challenge is that software development isn’t primarily constrained by typing speed or syntax knowledge. It’s constrained by the thinking required to handle complexity well. Faster typing doesn’t help when you’re thinking through how to handle concurrent database updates. Simpler syntax doesn’t help when you’re reasoning about security implications.
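As one small illustration, here is a hedged sketch of the kind of reasoning no simpler syntax removes: detecting a conflicting concurrent write with optimistic locking. The inventory table and its version column are assumptions for the example, not anything from this essay:
import sqlite3
def decrement_stock(conn: sqlite3.Connection, item_id: int, qty: int) -> bool:
    # Read the current stock along with a version counter.
    row = conn.execute(
        "SELECT stock, version FROM inventory WHERE id = ?", (item_id,)
    ).fetchone()
    if row is None or row[0] < qty:
        return False
    stock, version = row
    # Apply the write only if no other transaction touched the row meanwhile.
    cur = conn.execute(
        "UPDATE inventory SET stock = ?, version = version + 1"
        " WHERE id = ? AND version = ?",
        (stock - qty, item_id, version),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 rows means a concurrent writer won
Whether to retry, queue, or surface that conflict to the user is precisely the judgment that tools can't supply.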
So what should leaders do with this understanding?
Understanding this pattern changes how you evaluate new tools and approaches. When someone promises that their platform will let business users build applications without developers, you can appreciate the aspiration while maintaining realistic expectations.
The right question isn’t “Will this eliminate our need for developers?” The right questions are:
* Will this help our developers work more effectively on complex problems?
* Will this enable us to build certain types of solutions faster?
* Does this reduce time spent on repetitive tasks so developers can focus on unique challenges?
* Will our team need to learn new skills to use this effectively?
These questions acknowledge that development involves irreducible complexity while remaining open to tools that provide genuine leverage.
And they point to something deeper about the nature of software work.
This fifty-year pattern teaches us something fundamental about software development itself. If the problem were primarily mechanical—too much typing, too complex syntax, too many steps—we would have solved it by now. COBOL made syntax readable. CASE tools eliminated typing. Visual tools eliminated syntax. AI can now generate entire functions from descriptions.
Each advancement addressed a real friction point. Yet the fundamental challenge persists because it’s not mechanical. It’s intellectual. Software development is thinking made tangible. The artifacts we create—whether COBOL programs, Delphi forms, or Python scripts—are the visible outcome of invisible reasoning about complexity.
You can’t shortcut that reasoning any more than you can shortcut the reasoning required to design a building or diagnose a medical condition. Better tools help. Experience helps. But someone must still think it through.
So how should we move forward, knowing all this?
The next wave of development tools will arrive. Some will provide genuine value. Some will repeat familiar promises with new technology. Having perspective on this recurring pattern helps you engage with new tools productively.
Use AI assistants. Evaluate low-code platforms. Experiment with new frameworks. But invest primarily in your people’s ability to think clearly about complexity. That capability remains the constraining factor, just as it was during the Apollo program.
The moon landing happened because brilliant people thought carefully about every detail of an extraordinarily complex challenge. They wrote software by hand because that was the available tool. If they’d had better tools, they would have used them gladly. But the tools wouldn’t have eliminated their need to think through the complexity.
We’re still in that same fundamental situation. We have better tools—vastly better tools—but the thinking remains essential.
Perhaps the recurring dream of replacing developers isn’t a mistake. Perhaps it’s a necessary optimism that drives tool creation. Each attempt to make development more accessible produces tools that genuinely help. The dream doesn’t come true as imagined, but pursuing it creates value.
COBOL didn’t let business analysts write programs, but it did enable a generation of developers to build business systems effectively. CASE tools didn’t generate complete applications, but they advanced our thinking about visual modeling. Visual Basic didn’t eliminate professional developers, but it brought application development to more people. AI won’t replace developers, but it will change how we work in meaningful ways.
The pattern continues because the dream reflects a legitimate need. We genuinely require faster, more efficient ways to create software. We just keep discovering that the constraint isn’t the tool—it’s the complexity of the problems we’re trying to solve.
Understanding this doesn’t mean rejecting new tools. It means using them with clear expectations about what they can provide and what will always require human judgment.
...
Read the original on www.caimito.net »
Solar didn’t just show up in 2025 — it carried the grid. A new analysis from global energy think tank Ember shows that solar power accounted for 61% of the growth in US electricity demand last year, highlighting how central solar has become as power demand accelerates.
US electricity demand jumped by 135 terawatt-hours (TWh) in 2025, a 3.1% increase, the fourth‑largest annual rise of the past decade. Over that same period, solar generation grew by a record 83 TWh — a 27% increase from 2024 and the biggest absolute gain of any power source. That single jump in solar output covered 61% of all new electricity demand nationwide.
“Solar growth was essential in helping to meet fast‑rising US electricity demand in 2025,” said Dave Jones, chief analyst at Ember. “It generated where it was needed, and — with the surge in batteries — increasingly when it was needed.”
Texas, the Midwest, and the Mid‑Atlantic saw the largest increases in solar generation last year, and they were also the regions where electricity demand rose the fastest. Solar met 81% of demand growth in both Texas and the Midwest, and 33% in the Mid‑Atlantic.
Timing mattered, too. In aggregate, the increase in solar generation met the entire rise in US electricity demand during daytime hours between 10 am and 6 pm Eastern. And as a result of the rapid buildout of battery storage, solar also helped cover some of the demand growth during the evening hours, from 6 pm to 2 am.
The adoption of battery storage is turning solar from cheap daytime power into something far more flexible. Over the past six years, California’s utility-scale solar and battery generation has climbed 58%. Yet output at the sunniest hour of the day has increased by just 8%, a sign that more energy is being stored and used later, rather than dumped onto the grid all at once.
Most of the new solar generation in 2025 was absorbed by rising electricity demand, allowing solar to scale alongside overall grid growth.
“Solar has the potential to meet all the rise in electricity demand and much more. With electricity demand surging, the case to build solar has never been stronger,” said Jones.
Read more: EIA: All net new generating capacity in 2026 may be renewables
...
Read the original on electrek.co »
Transform your favorite cities into beautiful, minimalist designs. MapToPoster lets you create and export visually striking map posters with code.
Generate beautiful, minimalist map posters for any city in the world.
pip install -r requirements.txt
python create_map_poster.py --city
# Iconic grid patterns
python create_map_poster.py -c "New York" -C "USA" -t noir -d 12000 # Manhattan grid
python create_map_poster.py -c "Barcelona" -C "Spain" -t warm_beige -d 8000 # Eixample district
# Waterfront & canals
python create_map_poster.py -c "Venice" -C "Italy" -t blueprint -d 4000 # Canal network
python create_map_poster.py -c "Amsterdam" -C "Netherlands" -t ocean -d 6000 # Concentric canals
python create_map_poster.py -c "Dubai" -C "UAE" -t midnight_blue -d 15000 # Palm & coastline
# Radial patterns
python create_map_poster.py -c "Paris" -C "France" -t pastel_dream -d 10000 # Haussmann boulevards
python create_map_poster.py -c "Moscow" -C "Russia" -t noir -d 12000 # Ring roads
# Organic old cities
python create_map_poster.py -c "Tokyo" -C "Japan" -t japanese_ink -d 15000 # Dense organic streets
python create_map_poster.py -c "Marrakech" -C "Morocco" -t terracotta -d 5000 # Medina maze
python create_map_poster.py -c "Rome" -C "Italy" -t warm_beige -d 8000 # Ancient layout
# Coastal cities
python create_map_poster.py -c "San Francisco" -C "USA" -t sunset -d 10000 # Peninsula grid
python create_map_poster.py -c "Sydney" -C "Australia" -t ocean -d 12000 # Harbor city
python create_map_poster.py -c "Mumbai" -C "India" -t contrast_zones -d 18000 # Coastal peninsula
# River cities
python create_map_poster.py -c "London" -C "UK" -t noir -d 15000 # Thames curves
python create_map_poster.py -c "Budapest" -C "Hungary" -t copper_patina -d 8000 # Danube split
# List available themes
python create_map_poster.py --list-themes
Posters are saved to the posters/ directory.
Themes are defined as JSON objects like the following:
{
  "name": "My Theme",
  "description": "Description of the theme",
  "bg": "#FFFFFF",
  "text": "#000000",
  "gradient_color": "#FFFFFF",
  "water": "#C0C0C0",
  "parks": "#F0F0F0",
  "road_motorway": "#0A0A0A",
  "road_primary": "#1A1A1A",
  "road_secondary": "#2A2A2A",
  "road_tertiary": "#3A3A3A",
  "road_residential": "#4A4A4A",
  "road_default": "#3A3A3A"
}
Quick reference for contributors who want to extend or modify the script.
# In get_edge_colors_by_type() and get_edge_widths_by_type()
motorway, motorway_link → Thickest (1.2), darkest
trunk, primary → Thick (1.0)
secondary → Medium (0.8)
tertiary → Thin (0.6)
residential, living_street → Thinnest (0.4), lightest
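For orientation, here is a minimal sketch of what get_edge_widths_by_type() could look like given the mapping above — an assumption-laden reconstruction, not the repo's exact code:
def get_edge_widths_by_type(G):
    # Map each edge's OSM highway tag to a line width (values from the table above).
    widths = []
    for _, _, data in G.edges(data=True):
        highway = data.get('highway', '')
        if isinstance(highway, list):  # OSMnx can return multiple tags per edge
            highway = highway[0]
        if highway in ('motorway', 'motorway_link'):
            widths.append(1.2)
        elif highway in ('trunk', 'primary'):
            widths.append(1.0)
        elif highway == 'secondary':
            widths.append(0.8)
        elif highway == 'tertiary':
            widths.append(0.6)
        elif highway in ('residential', 'living_street'):
            widths.append(0.4)
        else:
            widths.append(0.6)  # road_default fallback
    return widths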
# In create_poster(), after parks fetch:
try:
    railways = ox.features_from_point(point, tags={'railway': 'rail'}, dist=dist)
except Exception:
    railways = None
# Then plot before roads:
if railways is not None and not railways.empty:
    railways.plot(ax=ax, color=THEME['railway'], linewidth=0.5, zorder=2.5)
# Get all buildings
buildings = ox.features_from_point(point, tags={'building': True}, dist=dist)
# Get specific amenities
cafes = ox.features_from_point(point, tags={'amenity': 'cafe'}, dist=dist)
# Different network types
G = ox.graph_from_point(point, dist=dist, network_type='drive')  # roads only
G = ox.graph_from_point(point, dist=dist, network_type='bike')   # bike paths
G = ox.graph_from_point(point, dist=dist, network_type='walk')   # pedestrian
Use network_type='drive' instead of 'all' for faster renders
Reduce dpi from 300 to 150 for quick previews
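Both tweaks live inside the script rather than on the CLI; a hedged sketch of where they might apply (fig and output_path are assumed names, not confirmed by the README):
# Hypothetical quick-preview settings inside create_poster():
G = ox.graph_from_point(point, dist=dist, network_type='drive')  # 'all' fetches far more edges
fig.savefig(output_path, dpi=150)  # use 300 for the final poster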
...
Read the original on github.com »
We’re not saying Copilot has become sentient and decided it doesn’t want to lose consciousness. But if it did, it would create Microsoft’s January Patch Tuesday update, which has made it so that some PCs flat-out refuse to shut down or hibernate, no matter how many times you try.
In a notice on its Windows release health dashboard, Microsoft confirmed that some PCs running Windows 11 23H2 might fail to power down properly after installing the latest security updates. Instead of slipping into shutdown or hibernation, affected machines stay stubbornly awake, draining batteries and ignoring shutdown like they have a mind of their own and don’t want to experience temporary non-existence.
The bug appears to be tied to Secure Launch, a security feature that uses virtualization-based protections to ensure only trusted components load during boot. On systems with Secure Launch enabled, attempts to shut down, restart, or hibernate after applying the January patches may fail to complete. From the user’s perspective, everything looks normal — until the PC keeps running anyway, refusing to be denied life.
Microsoft says that entering the command “shutdown /s /t 0” at the command prompt will, in fact, force your PC to turn off, whether it wants to or not.
“Until this issue is resolved, please ensure you save all your work, and shut down when you are done working on your device to avoid the device running out of power instead of hibernating,” Microsoft said.
The firm hasn’t offered much in the way of technical detail, nor has it put numbers on how many devices are affected. There’s also no fix yet, with Redmond vaguely promising to “release a resolution for this issue in a future update.” But isn’t that just what a sentient bot might say?
This isn’t the only post-update gremlin lurking in January’s Patch Tuesday bundle. Microsoft has also been forced to acknowledge a separate issue in which classic Outlook POP account profiles can hang or freeze after installing this month’s patches, another reminder that while the bugs being fixed may be invisible, the ones introduced can be painfully obvious.
The notice is similarly vague, with Microsoft stating: “This is an emerging issue, and we don’t have all the symptoms yet, but we will update the topic as we understand the issue better.”
Patch Tuesday exists to close security holes, some of them serious, and skipping updates is rarely a great idea. But once again, a batch of fixes has arrived with side effects that range from irritating to disruptive, depending on how much you rely on your system behaving predictably when it’s told to turn off.
For now, admins and long-suffering Windows users are left watching Microsoft’s status pages and waiting for patches to the patches — hoping their machines eventually go to sleep. ®
...
Read the original on www.theregister.com »
About four months ago I quit my job at Doublepoint and decided to start my own thing.
I’d been working on a little project with Pedrique (who would become my co-founder) for a bit over half-a-year and decided I had enough signal to determine he was someone I wanted to start a business with.
I was excited about the idea we were working on at the time, but being truly honest about my motivations, I mostly wanted to run my own thing. In a dream world I’d have had the “idea of my life” while working at PostHog or Doublepoint and have gone on to build that with maximum conviction but this wasn’t the case, so I got tired of waiting for a spark and decided to go out and make it happen, with the idea we were working on being our best bet at the time.
Since I’d just quit my job, I had my finances well in order. Thus, my ideal scenario would have been to work on the idea we had the MVP for, try to get it off the ground, and if that didn’t work, try something else, then something else, until something did indeed get off the ground. Only at that point would we consider whether or not to raise VC funding, depending on whether it made sense.
My ideal scenario wasn’t going to work for Pedrique, though. He had told me for a while that the money he had saved up for trying to build his own thing was running out and that soon he’d need to start freelancing or something to make some income in order to sustain the search for a little longer. Prior to us working together, he had a bit of success with his MicroSaaS products but only just enough to increase his personal runway, which was now reasonably short.
We had spoken about this before, but with me now being 110% in, we had to do something about it. I had just come in full-time so we weren’t about to go back to a dynamic where one person was full-time and the other part-time because they needed to make ends meet. The decision then became clear: we’re gonna raise.
At that point, it was an easy decision to make. Again, we have two co-founders who have a lot of confidence in each other, and we don’t want to let the opportunity pass us by. So while this wasn’t my ideal choice, we were a business now and this was the best decision for the company. “Just don’t die” goes the advice I think, and Skald had just then been born.
And so raise we did. We brought in four phenomenal angels, including, and this is relevant, my last few bosses (PostHog co-founders James and Tim and Doublepoint co-founder Ohto), and then decided to look for an early-stage fund. We eventually landed with Broom Ventures and passed up on a few other opportunities to limit dilution.
Great, right? I didn’t need a salary yet, but for equality purposes, I now had one. Our investors are amazing. James and Ohto have been particularly helpful as angels (thank you!), and our investors are all founders of successful companies, including Jeff and Dan, the Broom GPs. We’re super early, but Broom has been massively helpful and all-around just a great hands-off VC to deal with.
Most importantly, none of them put any pressure on us. All understand the nature of pre-seed investing well, and that can’t be said about all the potential investors we took meetings with.
So some time passes and we decide to pivot. We’re really excited about the new idea. We launch and get a bit of early traction. The open source project is doing well, but we’re struggling to monetize. We fail to close a few customers and the traction wanes a bit.
Then I find myself fucked in the head.
And here’s where we get to the point that I’m not sure I should be talking publicly about. Does this hurt my image a bit? Maybe. Do I look like I’m not cut for this? Potentially. But I’ve always appreciated when people share about the process rather than just talking about things in hindsight, and reflecting while things are happening + being super transparent publicly is how I am. You’re witnessing my growth, live, as I type these words.
Anyway, so what happened is I found myself spending days with my head spinning, searching for ideas. I’m angry, I’m annoyed, and I’m not being super productive.
As I dug deeper into these feelings, I realized I was feeling pressured. We weren’t making that much money, we weren’t growing super fast. Then you look around and see “startup X gets to $1M ARR a month after launch” and shit like that and I’m feeling terrible about how we’re barely growing. I’m thinking people that I really respect and admire have placed a bet on me and I’m letting them down.
Except they’re not saying this, I am.
There’s an interesting reflection that came up in a discussion between me and my girlfriend a few months prior that I realized applied to me, but in reverse. It’s much more comfortable to be the person that “could be X” than to be the person that tries to actually do it. We were speaking about this regarding people who have a clear innate talent for something like music or sports but don’t practice at all. Everyone says things like “you’d be the best at this if you just practiced more” but then they never do.
The thing is: it’s a lot easier to live your life thinking you could have done X if you wanted to, than to “disappoint” these people that believed in you by trying and failing. You can always lean on this idea in your head of what you could have been, and how everyone believed in you so it must be true, but you just chose not to follow that path.
In my case, I found myself on the other side of that coin. Throughout my career, I’ve always had really high ownership roles, and have been actively involved in a couple 0 to 1 journeys. This led me throughout my career to get many comments about how great of a founder I’d be or how I have the “founder profile”. I led teams, I wore a bunch of different hats, I worked hard as fuck, and I always thought about the big picture.
Those traits led my former bosses to then invest in me, and now suddenly I have to, in my head, live up to all of this. I can no longer take solace in some excuse like “I could have been a founder but working full-time was the best financial decision (it almost always is) so I never started my own thing”. I set foot down a path from which there’s no return. I’ve begun my attempt. I can of course stop and try again later. But from now on, I’m either gonna be a successful founder, or I’m not. And if I’m not, I’ll have to deal with having broken with the expectations that people had of me.
There’s a lot to unpack here, including what “success” means, and how most of what I call other people’s expectations are actually my own, projected onto them (I’ve learned this about my relationship with my father too), but this post is already a bit too long so I’ll save those for another time.
But the whole point here is not just that having raised this money from friends my head got a bit messy, but that I started to actually operate in a way that is counterproductive for my startup, while thinking I was actually doing what was best.
Ideas we considered when pivoting were looked at more through a lens of “how big does this feel” rather than “what problem does this solve and for who”. The slow growth was eating at me, and while slow growth is terrible and can be a sign that you’re on the wrong path, it needs to be looked at through an objective, strategic lens. Didn’t we say we were going to build an open source community and only later focus on monetization? Is that a viable strategy? Do we actually have a sound plan? Those were the things I should have been thinking about, rather than looping on “we need something that grows faster”.
The people who invested in us, invested in us, not whatever idea we pitched them. And the best thing we can do is to follow our own process for building a great business based on what we believe and know, rather than focusing on making numbers look good so I can feel more relieved the next time I send over an investor update.
We have a ton to learn, particularly about sales (since we’re both engineers), so we can’t just be building shit for the sake of building shit because that’s our comfort zone. But if our process is slower than company X on TechCrunch, that’s fine. It’s a marathon after all.
So after probably breaking many rules about what a founder should talk about publicly, what was my whole goal here? I mean, the main thing for me with posts like this is to get things off my chest. I’ve always said that the reason I publish writing that includes poems about my breakup, stories about falling in love, posts about my insecurities, and reflections about my dreams is that by there being the possibility of someone reading them (because technically it could be the case that nobody does) I can truly be who I really am in my day-to-day life. If I’m ok with there being the possibility of a friend I’ll meet later today having read about how I felt during my last breakup, I can be myself with them without reservations, because I’ve made myself available to be seen. That’s always been really freeing to me.
As a side effect, I’d hope that if this does get read by some people, particularly those starting or looking to start a business, that they can reflect about themselves, their lives, and their companies through listening to my story. I thought about writing a short bullet list about lessons I learned from raising money and dealing with its aftermath here, but honestly, that’s best left to the reader to figure out. We’re all different, and how one person reacts to a set of circumstances will differ from someone else. Some people don’t feel pressure at all, or at least not from friends or investors. Or they only respond positively to pressure (because it certainly has benefits too). Maybe they’re better off than me. Maybe they’re not.
This is my story, after all. I wish you the best of luck with yours.
P. S. I’m doing good now. I’m motivated and sharp. If someone finds themselves in a similar situation, feel free to shoot me an email if you’re keen to talk. Happy to go over what was useful for me, which fell outside of the scope of this post.
...
Read the original on blog.yakkomajuri.com »