10 interesting stories served every morning and every evening.
Users who aren’t verified as adults will not be able to access age-restricted servers and channels, won’t be able to speak in Discord’s livestream-like “stage” channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.
Direct messages and servers that are not age-restricted will continue to function normally, but users won’t be able to send messages or view content in an age-restricted server until they complete the age check process, even if it’s a server they were part of before age verification rolled out. Savannah Badalich, Discord’s global head of product policy, said in an interview with The Verge that those servers will be “obfuscated” with a black screen until the user verifies they’re an adult. Users also won’t be able to join any new age-restricted servers without verifying their age.
Discord’s global age verification launch is part of a wave of similar moves at other online platforms, driven by an international legal push for age checks and stronger child safety measures. This is not the first time Discord has implemented some form of age verification, either. It initially rolled out age checks for users in the UK and Australia last year, which some users figured out how to circumvent using Death Stranding’s photo mode. Badalich says Discord “immediately fixed it after a week,” but expects users will continue finding creative ways to try getting around the age checks, adding that Discord will “try to bug bash as much as we possibly can.”
It’s not just teens trying to cheat the system who might attempt to dodge age checks. Adult users could avoid verifying, as well, due to concerns around data privacy, particularly if they don’t want to use an ID to verify their age. In October, one of Discord’s former third-party vendors suffered a data breach that exposed users’ age verification data, including images of government IDs.
A government ID might still be required for age verification in its global rollout. According to Discord, to remove the new “teen-by-default” changes and limitations, “users can choose to use facial age estimation or submit a form of identification to [Discord’s] vendor partners, with more options coming in the future.”
The first option uses AI to analyze a user’s video selfie, which Discord says never leaves the user’s device. If the age group estimate (teen or adult) from the selfie is incorrect, users can appeal it or verify with a photo of an identity document instead. That document will be verified by a third-party vendor, but Discord says the images of those documents “are deleted quickly — in most cases, immediately after age confirmation.”
Badalich also says after the October data breach, Discord “immediately stopped doing any sort of age verification flows with that vendor” and is now using a different third-party vendor. She adds, “We’re not doing biometric scanning [or] facial recognition. We’re doing facial estimation. The ID is immediately deleted. We do not keep any information around like your name, the city that you live in, if you used a birth certificate or something else, any of that information.”
However, some users may not have to go through either form of age verification. Discord is also rolling out an age inference model that analyzes metadata like the types of games a user plays, their activity on Discord, and behavioral signals like signs of working hours or the amount of time they spend on Discord.
“If we have a high confidence that they are an adult, they will not have to go through the other age verification flows,” Badalich says.
She goes on to explain that the addition of age assurance will mainly impact adult content: “A majority of people on Discord are not necessarily looking at explicit or graphic content. When we say that, we’re really talking about things that are truly adult content [and] age inappropriate for a teen. So, the way that it will work is a majority of people are not going to see a change in their experience.”
Even so, there’s still a risk that some users will leave Discord as a result of the age verification rollout. “We do expect that there will be some sort of hit there, and we are incorporating that into what our planning looks like,” Badalich says. “We’ll find other ways to bring users back.”
...
Read the original on www.theverge.com »
Not sure if it’s just me, but I often get a primal satisfaction whenever I see intricate patterns emerging out of seemingly disordered environments.
Think about the galleries of ant colonies, the absurdly perfect hexagons of honeycombs, or the veins on a leaf. No architect, no blueprint. Just simple rules stacking on each other that result in beautiful patterns. I can’t explain why, but seeing those structures always felt good.
Humans do this too. And for me, one of the most fascinating patterns we’ve come up with is roads.
Sometimes I imagine aliens from faraway galaxies discovering Earth long after we’re gone. Forests reclaimed by nature, cities reduced to rubble, yet between them, a faint pattern is still visible - the road network. I like to think they will feel the same way I do when looking at nature’s patterns: “Man, someone really thought this through.”
I’ve got to say, roads have fascinated me since I was a kid.
I still remember playing SimCity 2000 for the first time when I was about five or six years old. I didn’t understand much. Definitely didn’t know what zoning, taxes, or demand were. But roads fascinated me from the start.
I think roads lie at the heart of every city builder. They’re the fabric on which cities are built. Since that moment, I’ve played almost every modern-themed city builder out there. In the meantime, I’ve also started noticing roads in the real world, examining them in more detail.
Despite every game bringing an improvement over the one before, something always felt… off.
SimCity 4 added elevation and diagonal roads. SimCity 2013 introduced curved roads. Then came Cities: Skylines with a ton of freedom. You could now freely place roads and merge them into intersections at any angle, and build flyovers at different elevations to construct crazy, yet unrealistic, interchanges. I think this was the largest breakthrough.
But something was still nagging me. Highway ramps were unrealistically sharp or wobbly, lanes that were supposed to be high-speed bent too sharply at certain points, and the corner radii of intersections looked strange.
I mean look at this. This is probably what highway engineers have nightmares about.
And then came the mods. Mods changed everything. The great community enabled a new kind of freedom. One could build almost anything: perfect merge lanes, realistic markings, and smooth transitions. It was a total game-changer. I am particularly proud of this 5-lane turbo roundabout:
But even then, mods didn’t feel completely natural. They were still limited by the game’s original system.
Cities: Skylines 2 pushed it even further, with lanes and markings becoming even more realistic. I think at this point, an untrained eye won’t know the difference from reality.
Then I stopped stumbling around and started asking why. I tried to understand how engineers design roads and how game developers code them.
That’s when I ran straight into the fundamental issue - right at the base of it. And it comes down to something every developer knows about and loves:
If you’re a Unity or Unreal developer, or have played with basically any vector graphics editing software, you already know them well. Bezier curves are an elegant, intuitive, and incredibly powerful way to smoothly interpolate between two points while taking into account some direction of movement (the tangent).
That’s exactly what roads are supposed to do, right? Of course, developers naturally think they are the perfect tool.
They’ve got their beauty, I need to admit. But hidden beneath the surface lies an uncomfortable truth.
You see, the shapes of roads in real life come from an underlying essential fact: the wheel axles of a vehicle. No matter how you drive a car, the distance between the left and right wheels remains constant. You can notice this in tyre tracks in snow or sand. Two perfectly parallel paths, always the same distance apart, maintaining a consistent curved shape.
Here’s the issue with Bezier splines: they don’t preserve shape and curvature when offset.
At gentle curves, they kinda look fine, but once you have tighter bends, the math falls apart. In mathy terms: The offset of a Bezier curve is not a Bezier curve.
When game engines try to generate a road mesh along a Bezier spline, the geometry often fails at tight angles. The inner edge curves at a different rate than the outer edge. This creates “pinching,” self-intersecting geometry.
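To make the pinching concrete, here is a minimal numeric sketch (the control points and the 1.5-unit offset are arbitrary choices, not taken from any engine): sample a cubic Bezier, push each sample along its normal, and count how many offset segments end up running against the direction of travel.

import numpy as np

# Hairpin-shaped cubic Bezier; the bend is tighter than the offset
# distance, which is exactly where pinching appears.
P = np.array([[0, 0], [3, 4], [-3, 4], [0, 0]], dtype=float)

def bezier(t):
    u = 1 - t
    return u**3 * P[0] + 3*u**2*t * P[1] + 3*u*t**2 * P[2] + t**3 * P[3]

def tangent(t):
    u = 1 - t
    return 3*u**2 * (P[1] - P[0]) + 6*u*t * (P[2] - P[1]) + 3*t**2 * (P[3] - P[2])

ts = np.linspace(0, 1, 400)
pts = np.array([bezier(t) for t in ts])
tans = np.array([tangent(t) for t in ts])
normals = np.column_stack([-tans[:, 1], tans[:, 0]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

inner = pts + 1.5 * normals  # the "inner lane edge", 1.5 units to the left

# Wherever an offset segment points against the direction of travel,
# a generated road mesh would fold over itself.
steps = np.diff(inner, axis=0)
folded = (np.einsum('ij,ij->i', steps, tans[:-1]) < 0).sum()
print(f"{folded} of {len(steps)} offset segments reverse direction")

On a gentle curve the count is zero; once the bend gets tighter than the offset distance, the inner edge folds back on itself.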
Here is the best example of how they start to fail in extreme scenarios.
To sum up: Bézier curves are unconstrained. The freedom they enable is exactly the “Achilles’ heel”. Real roads are engineered with the constraints of real motion in mind. A car’s path can’t magically self-intersect.
Ok, so what preserves parallelism? If you’ve been through kindergarten, you’re already familiar with it: the CIRCLE.
It has an almost magical property: no matter how much you offset it, the result is still a circular arc, perfectly parallel with the initial one. So satisfying.
Scrapping Bezier curves for circle arcs also yields a nice, unexpected bonus. To procedurally build intersections, the engine has to perform curve-curve intersection operations many times per frame. The intersection between two Bezier curves is notoriously complex: polynomial root finding, iterative numerical methods, de Casteljau subdivision plus bounding boxes, and multiple convergence checks, versus a simple, plain O(1) formula for circle arcs.
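For intuition, here is what that closed form looks like in a few lines of Python (plain circle-circle intersection; a full arc-arc test would additionally check that the hits fall inside each arc’s angular range):

import math

def circle_intersections(c1, r1, c2, r2):
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    # Separate, contained, or concentric circles: no crossing points.
    if d > r1 + r2 or d < abs(r1 - r2) or d == 0:
        return []
    a = (d*d + r1*r1 - r2*r2) / (2*d)     # distance from c1 to the chord
    h = math.sqrt(max(r1*r1 - a*a, 0.0))  # half the chord length
    mx, my = x1 + a*(x2 - x1)/d, y1 + a*(y2 - y1)/d
    ox, oy = h*(y2 - y1)/d, h*(x2 - x1)/d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]

print(circle_intersections((0, 0), 5, (6, 0), 5))  # [(3.0, -4.0), (3.0, 4.0)]

Constant-time arithmetic, no iteration, no convergence checks.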
By stitching together circular arcs of different radii, you can create any shape while adhering to proper engineering principles.
But this is not the end of the story. Circle arcs have issues as well (Oh no). The problem with circles in infrastructure is that they have constant curvature. What this means is that when entering a circular curve from a straight line, the lateral force jumps from 0 to a fixed constant value (determined by the radius of the circle). If you were in a car or train entering at high speed into this kind of curve, it would feel terrible.
Civil engineers have to account for this as well. So then, what curve maintains parallelism when offset and has a smoothly increasing curvature?
Introducing: transition curves - most famously, the clothoid.
A clothoid gradually increases curvature over distance. You start almost straight, then slowly turn tighter and tighter. The steering wheel rotates smoothly. The forces ramp up naturally, and a passenger’s body barely notices the transition.
These curves provide comfortable rides at high speeds by maintaining parallel offsets and continuous curvature changes.
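Numerically, a clothoid is surprisingly approachable to trace. A hedged sketch (the parameter A and the step size are arbitrary picks, and real implementations evaluate Fresnel integrals rather than stepping):

import math

A = 50.0   # clothoid parameter: larger means a gentler transition
ds = 0.1   # integration step along the curve, in meters
x, y, s = 0.0, 0.0, 0.0
path = []
while s < 100.0:
    theta = s * s / (2 * A * A)  # heading = integral of curvature s/A^2
    x += ds * math.cos(theta)
    y += ds * math.sin(theta)
    s += ds
    path.append((x, y))

# Curvature ramps linearly from 0 up to s/A^2 = 100/2500 = 0.04 (a 25 m
# radius) instead of jumping there instantly like a raw circular arc.
print(f"traced {len(path)} points, final curvature {s / (A*A):.3f} 1/m")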
And they are also… a math nightmare. Differential geometry. Integrals. Oh my… Which is probably why most games don’t even dare.
Vehicles move slowly on city streets. For intersections of urban roads, circular arcs are more than a decent choice.
Does everything I just rambled about matter? Do 99% of city-builder players care what shape the corner radius of the intersection has? Most likely, no. Then why bother?
First, because of curiosity. Like any other nerd overly obsessed with the nitty-gritty details of a very specific subject, I just wanted to see how I would implement it. Like challenging the status quo.
Second, even if established titles might not accurately render roads, they are still light-years ahead of the solutions an indie developer can find online. The tutorials and assets for this are just sad. I personally got bored with grids, and I just wanted to build a better solution to share with anyone who wants to build a city builder.
In the next blog post, I’ll discuss more technicalities and dive into how I’ve built my own solution. If you want to follow along or get notified when I release this asset, scribble your email below.
...
Read the original on sandboxspirit.com »
A friend of mine recently attended an open forum panel about how engineering orgs can better support their engineers. The themes that came up were not surprising:
* Sacrificing quality makes it hard to feel proud of the work.
* No acknowledgement of current velocity.
* If we sprint to deliver, the expectation becomes to keep sprinting, forever.
I’ve been hearing variations of this for a while now, but now I’m also hearing and agreeing with “AI doesn’t always speed us up”.
Developers used to google things. You’d read a StackOverflow answer, or an article, or a GitHub issue. You did some research, verified it against your own context, and came to your own conclusion. Nobody said “Google did it for me” or “it was the top result so it must be true.”
Now I’m starting to hear “AI did it for me.”
That’s either overhyping what happened, or it means the developer didn’t come to their own conclusion. Both are bad. If someone on my team ever did say Google wrote their code because they copied a StackOverflow answer, I’d be worried about the same things I’m worried about now with AI: did you actually understand what you pasted?
Vibe coding is fun. At first. For prototyping or low-stakes personal projects, it’s useful. But when the stakes are real, every line of code has consequences.
On a personal project, I asked an AI agent to add a test to a specific file. The file was 500 lines before the request and 100 lines after. I asked why it deleted all the other content. It said it didn’t. Then it said the file didn’t exist before. I showed it the git history and it apologised, said it should have checked whether the file existed first. (Thank you git).
Now imagine that in a healthcare codebase instead of a side project.
AI assistance can cost more time than it saves. That sounds backwards, but it’s what happened here. I spent longer arguing with the agent and recovering the file than I would have spent writing the test myself.
Using AI as an investigation tool, and not jumping straight to AI as solution provider, is a step that some people skip. AI-assisted investigation is an underrated skill that’s not easy, and it takes practice to know when AI is wrong. Using AI-generated code can be effective, but if we give AI more of the easy code-writing tasks, we can fall into the trap where AI assistance costs more time than it saves.
Most people miss this about AI-assisted development. Writing code is the easy part of the job. It always has been. The hard part is investigation, understanding context, validating assumptions, and knowing why a particular approach is the right one for this situation. When you hand the easy part to AI, you’re not left with less work. You’re left with only the hard work. And if you skipped the investigation because AI already gave you an answer, you don’t have the context to evaluate what it gave you.
Reading and understanding other people’s code is much harder than writing code. AI-generated code is other people’s code. So we’ve taken the part developers are good at (writing), offloaded it to a machine, and left ourselves with the part that’s harder (reading and reviewing), but without the context we’d normally build up by doing the writing ourselves.
My friend’s panel raised a point I keep coming back to: if we sprint to deliver something, the expectation becomes to keep sprinting. Always. Tired engineers miss edge cases, skip tests, ship bugs. More incidents, more pressure, more sprinting. It feeds itself.
This is a management problem, not an engineering one. When leadership sees a team deliver fast once (maybe with AI help, maybe not), that becomes the new baseline. The conversation shifts from “how did they do that?” to “why can’t they do that every time?”
My friend was saying:
When people claim AI makes them 10x more productive, maybe it’s turning them from a 0.1x engineer to a 1x engineer. So technically yes, they’ve been 10x’d. The question is whether that’s a productivity gain or an exposure of how little investigating they were doing before.
Burnout and shipping slop will eat whatever productivity gains AI gives you. You can’t optimise your way out of people being too tired to think clearly.
I’ve used the phrase “AI is senior skill, junior trust” to explain how AI coding agents work in practice. They’re highly skilled at writing code but we have to trust their output like we would a junior engineer. The code looks good and probably works, but we should check more carefully because they don’t have the experience.
Another way to look at it: an AI coding agent is like a brilliant person who reads really fast and just walked in off the street. They can help with investigations and could write some code, but they didn’t go to that meeting last week to discuss important background and context.
Developers need to take responsible ownership of every line of code they ship. Not just the lines they wrote, the AI-generated ones too.
If you’re cutting and pasting AI output because someone set an unrealistic velocity target, you’ve got a problem 6 months from now when a new team member is trying to understand what that code does. Or at 2am when it breaks. “AI wrote it” isn’t going to help you in either situation.
The other day there was a production bug. A user sent an enquiry to the service team a couple of hours after a big release. There was an edge case timezone display bug. The developer who made the change had 30 minutes before they had to leave to teach a class, and it was late enough for me to already be at home. So I used AI to help investigate, letting it know the bug must be based on recent changes and explaining how we could reproduce. Turned out some deprecated methods were taking priority over the current timezone-aware ones, so the timezone was never converting correctly. Within 15 minutes I had the root cause, a solution idea, and investigation notes in the GitHub issue. The developer confirmed the fix, others tested and deployed, and I went downstairs to grab my DoorDash dinner.
No fire drill. No staying late. AI did the investigation grunt work, I provided the context and verified, the developer confirmed the solution. That’s AI helping with the hard part.
...
Read the original on www.blundergoat.com »
...
Read the original on www.githubstatus.com »
Let’s start by asking ourselves: what color SHOULD the sky be?
Or, one step further back, what color should anything be?
And the answer is: the color of anything is due to the wavelength of photons coming from that thing and hitting your eye.
These sidenotes are optional to read, but I’ll use them for giving the fuller technical details when I’ve abbreviated things in the main body of the text.
In this case, the color you see is determined by the wavelengths of light entering your eye since (1) you may be seeing a pure frequency, but in almost all cases, (2) you’re seeing many frequencies, which your brain interprets as a single color.
For instance, the sensation of turquoise at a specific point can be caused by (a) photons of wavelength 500nm emanating from that point, (b) a specific combo of photons of wavelengths 470nm and 540nm, or (c) (most realistically) photons of a huge number of wavelengths, probably peaking somewhere around 500nm.
In the text, I am a bit fast and loose with the difference.
When sunlight hits Earth’s atmosphere, most colors of photons pass through unencumbered. But blue photons have a tendency to ricochet around a lot.
This causes them to disperse all throughout the atmosphere. They disperse so far and wide, and are so numerous, that you can look at any part of the sky on a clear afternoon and, at that moment, blue photons will be shooting from that point straight to your eyes.
Therefore the sky is blue.
Most colors of light pass through the atmosphere relatively unencumbered. You only see them when you look at the sun, where they contribute to the whiteness of the sun’s light. Blue, however, bounces around a lot, getting spread all over the sky. Because blue photons hit our eyeballs from every angle of the sky, the whole sky appears blue.
This is true and all, but it kicks the can down the road. Why blue? Why not red?
In short, it’s because blue and violet have the closest frequencies to a “resonant frequency” of nitrogen and oxygen molecules’ electron clouds.
There’s a lot there, so we’ll unpack it below. But first, here’s an (interactive) demo.
This demo is a simplification. In reality, 99.999% of photons pass through (neither scattering nor absorbing), even at the resonant frequency. Pretty boring to watch!
When a photon passes through/near a small molecule (like N₂ or O₂, which make up 99% of our atmosphere), it causes the electron cloud around the molecule to “jiggle”. This jiggling is at the same frequency as the photon itself — meaning violet photons cause faster jiggling than red photons.
In any case, for reasons due to the internal structure of the molecule, there are certain resonant frequencies of each molecule’s electron cloud. As the electron clouds vibrate closer and closer to these resonant frequencies, the vibrations get larger and larger.
The stronger the electron cloud’s oscillations, the more likely a passing photon (a) is deflected in a new direction rather than (b) passes straight through.
For both N₂ and O₂, the lowest resonant frequency is in the ultraviolet range. So as the visible colors increase in frequency towards ultraviolet, we see more and more deflection, or “scattering”.
“Scattering” is the scientific term of art for molecules deflecting photons. Linguistically, it’s used somewhat inconsistently. You’ll hear both “blue light scatters more” (the subject is the light) and “atmospheric molecules scatter blue light more” (the subject is the molecule). In any case, they mean the same thing 🤷♂️
In fact, violet is 10x more likely to scatter than red.
Math talk: scattering increases proportional to the FOURTH power of the frequency. So higher frequency light means WAY more scattering.
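That fourth-power law is where the “10x” figure comes from. A quick back-of-the-envelope check, using rough endpoint wavelengths of the visible spectrum:

# Rayleigh scattering scales with frequency^4, i.e. with 1/wavelength^4.
violet_nm = 400  # rough wavelength of violet light
red_nm = 700     # rough wavelength of red light

ratio = (red_nm / violet_nm) ** 4
print(f"violet scatters ~{ratio:.1f}x more than red")  # ~9.4x, the "10x" above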
So why isn’t the sky violet? Great question – we’ll cover that in a sec.
I just want to point out two other things that (a) you can see in the demo above, and (b) are useful for later in this article.
First, when light gets really close to — and eventually exactly at — the resonant frequency of the molecule’s electron cloud, it gets absorbed far more than scattered! The photon simply disappears into the electron cloud (and the electron cloud bumps up one energy level). This isn’t important for understanding the color of Earth’s sky… but there are other skies out there 😉
Second, did you notice that even red scatters some? Like, yes, blue scatters 10x more. But the sky is actually every color, just mostly blue/violet. This is why the sky is light blue. If white light is all visible colors of light mixed together equally, light blue is all visible colors mixed together — but biased towards blue.
What would the sky look like if it was only blue? Check it out.
I’ll just end by saying, this dynamic (where scattering increases sharply with the frequency of light) applies to far more than just N and O. In fact, any small gaseous molecule — carbon dioxide, hydrogen, helium, etc. — would preferentially scatter blue, yielding a blue sky at day.
As you saw above, violet scatters more than blue. So why isn’t the sky purple? The dumb but true answer is: our eyes are just worse at seeing violet. It’s the very highest frequency of light we can see; it’s riiight on the edge of our perception.
But! — if we could see violet as well as blue, the sky would appear violet.
We might as well tackle the elephant in the room: if we could see ultraviolet (which is the next higher frequency after violet), would the sky actually be ultraviolet?
And the answer is not really. If we could see UV, the sky would be a UV-tinted violet, but it wouldn’t be overwhelmingly ultraviolet. First, because the sun emits less UV light than visible light. And second, some of that UV light is absorbed by the ozone layer, so it never reaches Earth’s surface.
You can see both of those effects in the solar radiation spectrum chart:
The sun emits the most visible light, with UV frequencies falling off very steeply. Augmenting this effect is that the ozone layer in particular absorbs a lot of UV before it can reach Earth’s surface.
Why is the sunset red?
So the obvious next question is why is the sky red at dusk and dawn?
It’s because the sunlight has to travel through way more atmosphere when you’re viewing it at a low angle, and this extended jaunt through the atmosphere gives ample opportunity for allll the blue to scatter away — and even a good deal of the green too!
Simply put, the blue photons (and to a lesser degree, the green) have either (a) gone off into space or (b) hit the earth somewhere else before they reach your eyes.
When the sun is on the horizon (e.g. sunrise or sunset), the photons it emits travel through 40x as much atmosphere to reach your eyes as they would at midday. So blue’s 10x propensity to scatter means it’s simply gone by the time it would’ve reached your eyes. Even green is significantly dampened. Red light, which hardly scatters at all, just cruises on through.
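You can put rough numbers on this with a Beer-Lambert-style sketch. The per-atmosphere scattering fraction below is an assumed, illustrative value; only the 10x blue-to-red ratio comes from the text above:

import math

k_blue = 0.3         # assumed optical depth for blue over one vertical atmosphere
k_red = k_blue / 10  # red scatters roughly 10x less

for paths in (1, 40):  # overhead sun vs. sun on the horizon
    t_blue = math.exp(-k_blue * paths)  # Beer-Lambert: T = exp(-k * L)
    t_red = math.exp(-k_red * paths)
    print(f"{paths:>2} atmosphere(s): {t_blue:.1%} of blue survives, {t_red:.1%} of red")

# At 40 path lengths, blue is essentially gone (exp(-12) is about 0.0006%)
# while most red cruises through (exp(-1.2) is about 30%).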
Again, you can play with this and see for yourself 😎
The answer to this question is the second of three “domains” you should understand in order to have a working model of atmosphere color. The physics are different from the small-molecule scattering above.
Clouds are made up of a huge number of tiny water droplets. These droplets are so small (around 0.02 millimeters in diameter) that they remain floating in the air. But compared to small gas molecules like N₂ and O₂, these droplets are enormous. A single water droplet may contain 100 trillion H₂O molecules!
So, it’s not as simple as “the photons cause the hundreds of trillions of electrons to jiggle”. Instead, it’s more like the light has entered a very tiny prism or glass bead.
In a prism, white light can reflect around, bounce off exterior or interior surfaces, and even reflect differently depending on frequency — creating a rainbow effect.
The droplet is just as complex. Some of the photons hitting the droplet bounce off the surface. Some enter it, bounce around inside once, twice, etc. — and leave again. Perhaps a few are absorbed. As with a prism, different wavelengths of light will reflect at different angles. The specifics aren’t important — you should just get the general gist.
So whatever white (or slightly yellowish) light came from the direction of the sun leaves in many random directions. Think of every color, shooting off in different directions! And then multiply that by a quadrillion droplets! In sum, you just see every frequency of photon coming from every part of the cloud.
And that means the cloud is white!
This prism-like behavior of the tiny droplets that comprise clouds scales up. Anything larger that light can enter — drizzle, raindrops, hail — will also tend towards white.
But that raises the question — what about things in between tiny molecules (N₂, O₂) and the relatively enormous prism-like droplets? How do those things act?
Well, the dust in the sky of Mars is a great example 😉
Why is the sky on Mars red?
The answer to this question is the third of three “domains” you should understand in order to have a working model of atmosphere color. The physics are different from both the small-molecule scattering and large-droplet prism-dynamics above.
The Martian sky is red because it’s full of tiny, iron-rich dust particles that absorb blue — leaving only red to scatter.
Yeah, yeah, I hear you. This answer is can-kicking! “Dust, schmust. Why does it absorb blue?”, you demand.
OK, so the answer is actually fairly straightforward. And it generalizes. Here’s the rule: whenever you have solid particles in the atmosphere (very small ones, approximately the size of the wavelength of visible light), they generally tend to turn the air warm colors — red, orange, yellow.
If you live in an area with wildfires, you’ve probably seen this effect here on Earth!
To really understand the reason, let’s back up and talk about some chemistry.
Compared to tiny gas molecules, solid particles tend to have a much wider range of light frequencies that they absorb.
For instance, we discussed how N₂ and O₂ have specific resonant frequencies at which they hungrily absorb UV photons. Move slightly away from those frequencies, and absorption drops off a cliff.
But even for a tiny dust nanoparticle, there are many constituent molecules, each in slightly different configurations, each being jostled slightly differently by its neighbors. Consequently, the constituent molecules all have slightly different preferences of which frequency to absorb.
Because the “peak” absorption of the molecules is usually violet or ultraviolet (as it is with small gases), blues/violets will make it to the surface much less than oranges/reds.
Approximate light absorption from Martian dust as a function of wavelength
Of course, a reasonable question is why are blue and violet absorbed so strongly by these dust particles?
Well, those are the only photons with enough energy to bump the dust molecules’ electrons up to a new energy state.
So, the exact specifics depend on the molecules in question, but generally, the level of energy needed to bump up the electron energy state in a dust or smog particle’s molecules corresponds to violet or UV photons.
This is actually true of solids in general, not just atmospheric dust or aerosols. If you’ve ever heard that purple was “the color of kings” or that the purple dye of antiquity was worth its weight in gold, it’s true! To get something purple, you’d need to find a material whose electrons were excited by low-energy red photons, but had no use for higher-energy violet photons.
So this is why the Martian sky is red — and why reds and browns are more common in nature (for solid things, at least) than purple and blue.
Why is the Martian sunset blue?
It’s less famous than the red daytime sky of Mars, but the Martian sunset is blue!
Martian sunset photo taken by the Spirit rover.
In the last section, we talked about Martian dust absorbing violet/blue. But the dust also scatters light — which it can do in ways totally unrelated to how it absorbs. (Remember, since photons can — and usually do — cruise straight through a molecule, scattering and absorption can have their own interesting frequency-dependent characteristics. They don’t simply sum to 100%.)
Small atmospheric particles, like dust and smog, are equal-opportunity scatterers. The absolute probability they’ll scatter a photon does not change significantly with the photon’s wavelength. However, different-frequency photons can be more or less likely to scatter in different directions.
For our purposes, it suffices to know that Martian dust — like many atmospheric particles of similar size — generally scatters blue light closer to the direction it was already going. Red light has a higher probability of deflecting at a greater angle.
Because red light deflects MORE and blue light LESS when scattering off dust particles, the area directly around the sun will be blue — even though more blue is absorbed en route.
When molecules deflect photons by only a tiny angle, it’s called “forward scattering”. Forward scattering is most pronounced for larger particles, like dust or smog aerosols. It’s actually so strong on Mars that even at midday, red light doesn’t fill the sky evenly — the sky opposite the sun is noticeably darker!
But because blue forward-scatters more strongly off Martian dust, the Martian sunset has a blue halo.
At the beginning of this article, I said being able to predict something is a good measure of how well you understand it. Let’s do that now. Let’s build a model for predicting the sky color on new planets/moons, or during different scenarios on our own planet.
Here are the three general rules of thumb we’ve already talked about.
Atmospheric gases tend to be much, much smaller than the wavelengths of visible light. In these cases, they tend to preferentially scatter blue/violet/UV. This means that gaseous atmospheres are usually blue or blue-green.
Uranus: upper atmosphere is 98% hydrogen and helium. We don’t have pictures from the surface.
Neptune: upper atmosphere is 99% hydrogen and helium. We don’t have pictures from the surface.
This is pleasingly true for Earth, Uranus, and Neptune.
You may recall Neptune as looking like a much darker, richer blue. However, more recent analysis by Patrick Irwin shows the true color is very likely closer to what’s shown here.
It’s also worth noting that Neptune and Uranus’s blue color is made noticeably richer by the red-absorbing methane in their atmospheres.
When visible light hits particles that are in the ballpark of its own wavelength, things get more complicated and can differ on a case-by-case basis.
These particles are typically either:
Dust: solid particles kicked up into the atmosphere from the surface
Haze: solid particles formed by chemical reactions in the atmosphere
All three significantly dusty/hazy atmospheres in our solar system hold to this rule!
Titan’s sky is orange due to a haze of tholins (organic molecules)
Venus’s sky is yellow due to a haze of sulfurous compounds
When visible light hits clouds of droplets (or ice crystals) that are much bigger than light’s wavelength, the droplets act akin to a vast army of floating prisms, sending out all colors in all directions.
Consequently, clouds tend to appear white, gray, or desaturated hues.
Venus: high-altitude clouds of sulfuric acid (!). The tan/orange is from the aforementioned haze.
Putting it all together
The largest and most complex atmosphere in our solar system is Jupiter. But we know enough to start making some smart guesses about it!
QUIZ: looking at this picture, what can you say about Jupiter’s atmosphere? Answers below the image, so take a guess before scrolling 😉
Here’s a comparison of how a basic guess — informed by our simplistic model — compares to scientific consensus.
Clouds, probably of ice because of coldness
Small atmospheric molecules. But potentially a chemically odd haze, if something absorbed the visible spectrum pretty strongly?
The Galileo probe that descended into Jupiter entered one of these spots. Its most surprising finding was how dry Jupiter’s atmosphere seemed to be. But knowing it fell between the cloud bands, this makes total sense. Instead of ice crystals, it found hydrogen and helium.
...
Read the original on explainers.blog »
With the help of a custom-developed algorithm, CacheTVS, we have exhaustively checked every single view on Earth in search of the coveted longest line of sight on the planet. Based on the method we detail here, we present the greatest view of all:
The longest line of sight on the planet, at 530km, from the Hindu Kush to Pik Dankova
Longest lines of sight tend to group together around peaks and ridges. So the following are more our own curated list than the technically correct runners-up. We chose them based on being in notably different regions of the world.
The second longest line of sight, at 504km, from Antioquia to Pico Cristobal in Colombia
Now we go right to the other side of the world to Colombia in South America. We’ve found a line of sight from the department of Antioquia to Pico Cristobal, Colombia’s highest mountain.
The third longest line of sight, at 483km, from Mount Elbrus in Russia to the Pontic Mountains in Turkey
Our pick for the third greatest view is from Mount Elbrus in Russia to the Pontic Mountains in Turkey. Notably, Russia does not share a land border with Turkey, so it’s interesting that you can still see one from the other.
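A rough sanity check on why sightlines this long are geometrically possible: add up the two peaks’ horizon distances over a refraction-corrected Earth radius (the standard 7/6 factor). The peak heights below are illustrative round numbers, not the actual summits:

import math

R_EFF = 6371e3 * 7 / 6  # effective Earth radius with standard refraction

def horizon_km(height_m):
    # Distance to the horizon for an observer at the given height.
    return math.sqrt(2 * R_EFF * height_m) / 1000

# Two high peaks, roughly 7,500 m and 5,500 m (illustrative values):
print(f"max sightline: {horizon_km(7500) + horizon_km(5500):.0f} km")  # ~620 km

Terrain and air clarity eat into that geometric ceiling, which is what makes a real 530km view so remarkable.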
We have around 4.5 billion lines of sight for you to explore in an interactive map at map.alltheviews.world
...
Read the original on alltheviews.world »
This project uses a WEMOS D1 Mini ESP8266 module and an Arduino sketch to connect to an NTP (Network Time Protocol) server to automatically retrieve and display the local time on an inexpensive analog quartz clock. The ESP8266 reconnects to the NTP server every 15 minutes, which keeps the clock accurate. The clock also automatically adjusts for daylight saving time.
WEMOS D1 Mini ESP8266 Module with EERAM IC and Components on a Piece of Perfboard
I’m using an analog clock with a quartz movement I found at my local Walmart for $3.88. Whatever analog clock you decide to use, its quartz movement will need to be modified so that it can be controlled by the ESP8266 module. Open up the movement (most of them snap together without any fasteners), disconnect the internal coil of the Lavet stepping motor from its quartz oscillator and then solder a wire to each of the coil’s leads to make connections for the ESP8266. If you search around on the web you’ll find articles showing how others have done it. Be careful when working with the coil. The coil’s wires are typically thinner than a human hair and extremely fragile.
The sketch: AnalogClock.ino should be (I hope) clear enough, but here, in brief, is a summary of how it operates. Ten times each second the ESP8266 compares the time displayed on the analog clock to the actual time retrieved from an NTP server. If the analog clock lags behind the actual time, the ESP8266 advances the clock’s second hand until the clock agrees with the actual time. If the time displayed on the analog clock is ahead of the actual time, the ESP8266 simply waits until the actual time catches up with the analog clock since it can’t move the clock’s hands backwards.
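In rough pseudocode form (a hedged Python sketch of that catch-up logic, not the actual AnalogClock.ino), the loop looks like this:

SECONDS_PER_FACE = 12 * 60 * 60  # an analog face repeats every 12 hours

def seconds_behind(displayed, actual):
    # How far the face lags actual time, wrapped to the 12-hour dial.
    return (actual - displayed) % SECONDS_PER_FACE

def tick(displayed, actual):
    lag = seconds_behind(displayed, actual)
    if 0 < lag <= SECONDS_PER_FACE // 2:
        return displayed + 1  # pulse the Lavet motor: advance one second
    return displayed          # clock is ahead: wait for real time to catch up

# Example: face shows 10:59:50, NTP says 11:00:00 -> ten catch-up pulses.
displayed, actual = 10*3600 + 59*60 + 50, 11*3600
while displayed != actual:
    displayed = tick(displayed, actual)
print("face synchronized")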
The ESP8266 advances the analog clock’s second hand by generating bipolar pulses, alternately positive and negative, to the clock’s Lavet motor coil. Because of differences in clock mechanisms, you may need to increase or decrease the “PULSETIME” constant in the sketch by a few milliseconds to make your mechanism step reliably. Experimentally, I found that 30 milliseconds works best for my movement.
The biggest problem with using these cheap analog clocks for a project like this is that the clocks don’t provide any type of feedback to indicate the position of the clock’s hands. Thus if power is interrupted to the ESP8266 controlling the clock, the ESP8266 “forgets” where the clock’s hands are positioned. To get around this problem, the positions of the hour, minute and second hands are stored in a Microchip 47L04 Serial EERAM (4Kbit SRAM with EEPROM backup) and updated each second as the clock’s hands positions change. If power is interrupted, the ESP8266 can retrieve the last position of the clock’s hands from the EERAM when power is reapplied.
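The idea in miniature (a Python sketch with a hypothetical nvram dict standing in for the 47L04; the real sketch talks to the EERAM over I2C):

nvram = {}  # hypothetical stand-in for the 47L04's nonvolatile storage

def save_hands(hours, minutes, seconds):
    # Called every second as the hands move.
    nvram.update(hours=hours, minutes=minutes, seconds=seconds)

def restore_hands():
    # Called at boot: trust the saved positions if present; otherwise
    # the user must enter them via the setup web page.
    if nvram:
        return nvram["hours"], nvram["minutes"], nvram["seconds"]
    return None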
The very first time that the sketch is run, the user will be directed to a simple web page (see below) served by the ESP8266 which is used to tell it where the analog clock’s hands are initially positioned. From that point on, the ESP8266 will use the data stored in the EERAM to “remember” the positions of the clock’s hands.
Once the ESP8266 finishes its initialization and starts operation, it serves a simple web page showing the clock’s status. The status page can optionally show a graphic image of the clock’s face, drawn using either Scalable Vector Graphics or the HTML Canvas element, or no image at all.
Analog Clock Status Page Using Scalable Vector Graphics to Draw the Clock Face
Analog Clock Status Page Using the HTML Canvas Element to Draw the Clock Face
...
Read the original on github.com »
Anthropic recently published a blog post about building a C compiler entirely with Claude. They called it CCC (Claude’s C Compiler) and claimed it could compile the Linux kernel. 100% of the code was written by Claude Opus 4.6; a human only guided the process by writing test cases. That sounded interesting enough to test the claim and benchmark CCC against the industry standard GCC.
The source code of CCC is available at claudes-c-compiler. It is written entirely in Rust, targeting x86-64, i686, AArch64 and RISC-V 64. The frontend, SSA-based IR, optimizer, code generator, peephole optimizers, assembler, linker and DWARF debug info generation are all implemented from scratch with zero compiler-specific dependencies. That is a lot of work for an AI to do.
Before we jump into the comparison, it helps to understand what happens when you compile a C program. There are four stages involved.
Image credit: The four stages of the gcc compiler
Preprocessor: Handles #include, #define and other directives. It takes the source code and produces expanded source code.
Compiler: Takes the preprocessed source code and translates it into assembly language. This is where the real heavy lifting happens, understanding the C language, type checking, optimizations, register allocation and so on.
Assembler: Converts the assembly language into machine code (object files). It has to know the exact instruction encoding for the target CPU architecture.
Linker: Takes one or more object files and combines them into a single executable. It resolves references between files, sets up memory layout and produces the final binary.
Writing a programming language is hard (even prior to vibe coding). Writing a compiler is on another level entirely. A programming language defines the rules. A compiler has to understand those rules, translate them into machine instructions, optimize the output for speed and size, handle edge cases across different CPU architectures and produce correct code every single time.
GCC has been in development since 1987. That is close to 40 years of work by thousands of contributors. It supports dozens of architectures, hundreds of optimization passes and millions of edge cases that have been discovered and fixed over the decades. The optimization passes alone (register allocation, function inlining, loop unrolling, vectorization, dead code elimination, constant propagation) represent years of PhD-level research. This is one of the reasons why it’s ubiquitous.
This is why CCC being able to compile real C code at all is noteworthy. But it also explains why the output quality is far from what GCC produces. Building a compiler that parses C correctly is one thing. Building one that produces fast and efficient machine code is a completely different challenge.
Ironically, among the four stages, the compiler (translation to assembly) is the most approachable one for an AI to build. It is mostly about pattern matching and rule application: take C constructs and map them to assembly patterns.
The assembler is harder than it looks. It needs to know the exact binary encoding of every instruction for the target architecture. x86-64 alone has thousands of instruction variants with complex encoding rules (REX prefixes, ModR/M bytes, SIB bytes, displacement sizes). Getting even one bit wrong means the CPU will do something completely unexpected.
The linker is arguably the hardest. It has to handle relocations, symbol resolution across multiple object files, different section types, position-independent code, thread-local storage, dynamic linking and format-specific details of ELF binaries. The Linux kernel linker script alone is hundreds of lines of layout directives that the linker must get exactly right.
The Linux kernel is one of the most complex C codebases in the world. It has millions of lines of code, uses GCC-specific extensions, inline assembly, linker scripts and countless tricks that push the compiler to its limits. It is not a good first test for a new compiler.
SQLite, on the other hand, is distributed as a single amalgamation file (one big .c file). It is standard C, well-tested and self-contained. If your compiler can handle SQLite, it can handle a lot. If it cannot handle SQLite correctly, there is no point testing anything bigger.
That is why I tested both. SQLite tells us about correctness and runtime performance. The kernel tells us about scale and compatibility.
CCC was built with the gcc_m16 Cargo feature, which delegates 16-bit real-mode boot code (-m16 flag) to GCC. This is needed because CCC’s i686 backend produces code too large for the 32KB real-mode limit. The x86_64 C code is compiled entirely by CCC.
A ccc_wrapper.sh script routes .S assembly files to GCC (CCC does not process assembly) and all .c files to CCC.
Compilers are usually measured on the scenarios below, so the tests are designed around them.
Same hardware — identical VM specs for both compilers
Both run to completion — no tests killed prematurely
CCC gets help where needed — gcc_m16 feature for boot code, wrapper for assembly files
Same benchmark script — benchmark_sqlite.sh runs identically on both VMs
The benchmark was designed to be CPU-bound:
* No correlated subqueries (O(n^2) queries were replaced with GROUP BY)
The fair comparison is CCC vs GCC at -O0 (no optimization): CCC takes 87s vs GCC’s 65s — CCC is 1.3x slower. The “5x faster” number only appears because GCC is doing 7 minutes of optimization work that CCC simply skips.
CCC compiled every single C source file in the Linux 6.9 kernel without a single compiler error (0 errors, 96 warnings). This is genuinely impressive for a compiler built entirely by an AI.
However, the build failed at the linker stage with around 40,784 undefined reference errors. The errors follow two patterns:
__jump_table relocations — CCC generates incorrect relocation entries for kernel jump labels (used for static keys/tracepoints)
These are linker-visible bugs in CCC’s relocation/symbol generation, not C language compilation bugs. This is a good example of why the linker is the hardest part. The compiler did its job fine, but the generated relocations were not quite right for the kernel’s complex linker script.
CCC -O0 and -O2 produce byte-identical binaries (4,374,024 bytes). CCC has 15 SSA optimization passes, but they all run at every optimization level. There is no tiered optimization — the -O flag is accepted but completely ignored.
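That finding is easy to reproduce by hashing the two build outputs (the paths below are placeholders for wherever the two binaries land):

import hashlib

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Per the results above, this prints True: -O2 output is byte-identical to -O0.
print(sha256("sqlite3-ccc-O0") == sha256("sqlite3-ccc-O2"))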
When you ask GCC to compile with -O2, it performs dozens of extra optimization passes:
* Register allocation: fitting variables into CPU registers so they do not spill to slow memory
* Vectorization: using SIMD instructions (SSE/AVX) to process multiple values at once
GCC’s -O2 spends 7 minutes doing this work, and the payoff is clear: the resulting binary runs 1.7x faster (6.1s vs 10.3s).
CCC does none of this at any optimization level. Comparing “CCC compile time vs GCC -O2 compile time” is like comparing a printer that only prints in black-and-white vs one that does full color. The black-and-white printer is faster, but it isn’t doing the same job.
CCC-compiled SQLite is functionally correct — it produces the same query results as GCC-compiled SQLite. All 5 crash/edge-case tests passed. But it is very slow.
No failures were observed during these tests.
The per-query breakdown shows that CCC’s slowdown is not uniform. Simple queries are only 1-7x slower, but complex operations involving nested loops blow up:
The pattern is clear: operations that involve nested iteration (subqueries, JOINs) are orders of magnitude slower, while simple sequential operations are only slightly slower.
Modern CPUs have a small set of fast storage locations called registers. A good compiler tries to keep frequently used variables in these registers. When there are more variables than registers, the compiler “spills” them to the stack (regular RAM), which is much slower.
CCC’s biggest performance problem is excessive register spilling. SQLite’s core execution engine sqlite3VdbeExec is a single function with 100+ local variables and a massive switch statement. CCC does not have good register allocation, so it spills almost all variables to the stack.
movq -0x1580(%rbp), %rax ; load from deep stack offset
movq %rax, -0x2ae8(%rbp) ; store to another deep stack offset
movq -0x1588(%rbp), %rax ; load next value
movq %rax, -0x2af0(%rbp) ; store to next offset
; … dozens more memory-to-memory copies
CCC uses stack offsets up to -0x2ae8 (11,000 bytes deep) for a function with 32 variables. Every operation goes: stack -> rax -> stack, using %rax as a shuttle register.
CCC is 4.2x slower than GCC O0 for register-heavy code. In sqlite3VdbeExec with 100+ variables and 200+ switch cases, this ratio compounds to 100x+.
CCC runs the same 15-pass SSA pipeline at all optimization levels.
This means -O2 provides zero benefit. Every binary CCC produces is effectively -O0 quality, regardless of what flag you pass.
The 2.78x code bloat means more instruction cache misses, which compounds the register spilling penalty.
CCC-compiled binaries lack internal function symbols (nm reports 0 symbols, readelf shows only 90 PLT stubs vs GCC’s 1,500+ functions). This makes profiling and debugging impossible.
The NOT IN (subquery) pattern causes SQLite to execute a nested loop: for each of the around 100,000 rows in the outer table, it scans through around 10,000 rows in the inner table. That is roughly 1 billion iterations through SQLite’s main execution function (sqlite3VdbeExec), which is basically a giant switch statement.
With CCC’s roughly 4x per-iteration overhead from register spilling, plus extra cache misses from the 2.78x larger binary (the CPU cannot keep all the instructions in its fast cache), the slowdown compounds:
* Cache pressure: around 2-3x additional penalty (instructions do not fit in L1/L2 cache)
This is why simple queries (INSERT, DROP TABLE) are only 1-2x slower, but nested operations blow up to 100,000x+ slower.
Correctness: Compiled every C file in the kernel (0 errors) and produced correct SQLite output for all queries
Stability: Zero crashes, zero segfaults across all tests
Memory usage: 5.9x more RAM for compilation (1.6 GB vs 272 MB for SQLite)
Compilation speed: Could only be compared at -O0, as CCC does not optimize beyond that. CCC is around 1.3x slower than GCC (87s vs 65s)
Within hours of Anthropic releasing CCC, someone opened issue #1 – “Hello world does not compile”. The example straight from the README did not work on a fresh Fedora or Ubuntu install:
$ ./target/release/ccc -o hello hello.c
/usr/include/stdio.h:34:10: error: stddef.h: No such file or directory
/usr/include/stdio.h:37:10: error: stdarg.h: No such file or directory
ccc: error: 2 preprocessor error(s) in hello.c
Meanwhile, GCC compiled it just fine. The issue was that CCC’s preprocessor did not search the right system include paths for stddef.h and stdarg.h (these come from the compiler, not the C library). It got 288 thumbs-up reactions, over 200 comments and turned into one of those legendary GitHub threads where people tag @claude asking it to fix the bug, ask @grok for summaries and post comments like “my job is safe”.
Someone got it working on Compiler Explorer and remarked that the assembly output “reminds me of the quality of an undergraduate’s compiler assignment”. Which, to be fair, is both harsh and not entirely wrong when you look at the register spilling patterns.
The issue is still open at the time of writing.
Claude’s C Compiler is a remarkable achievement. It is a working C compiler built entirely by an AI that can correctly compile 2,844 files from the Linux kernel without a single error. It produces functionally correct code (verified with SQLite — all queries return correct results, all crash tests pass).
But it is not ready for real use:
The output code is very slow. CCC-compiled SQLite takes 2 hours to run a benchmark that GCC finishes in 10 seconds. The root cause is poor register allocation — CCC uses a single register as a shuttle to move values between stack locations, turning every operation into multiple memory accesses.
The “compiles the kernel” claim needs a footnote. CCC compiles all the C source files, but the final binary cannot be produced because CCC generates incorrect relocations for kernel data structures (__jump_table, __ksymtab).
Optimization flags are decorative. Passing -O2 or -O3 to CCC does literally nothing — the output binary is byte-identical to -O0.
For Anthropic’s stated goal of demonstrating that Claude can build complex software, CCC is a genuine success. For anyone wanting to compile software to actually run efficiently, GCC (or Clang, or any production compiler) remains the only real option.
All scripts, results and graphs are available at compare-claude-compiler
Part of this work was assisted by AI. The Python scripts used to generate benchmark results and graphs were written with AI assistance. The benchmark design, test execution, analysis and writing were done by a human with AI helping where needed.
...
Read the original on harshanu.space »
One of the surprising (at least to me) consequences of the fall of Twitter is the rise of LinkedIn as a social media site. I saw some interesting posts I wanted to call attention to:
First, Simon Wardley on building things without understanding how they work:
And here’s Bruce Perens, whose post is very much in conversation with them, even though he’s not explicitly responding to either of them.
Finally, here’s the MIT engineering professor Louis Bucciarelli from his book Designing Engineers, written back in 1994. Here I’m just copying and pasting the quotes from my previous post on active knowledge.
A few years ago, I attended a national conference on technological literacy… One of the main speakers, a sociologist, presented data he had gathered in the form of responses to a questionnaire. After a detailed statistical analysis, he had concluded that we are a nation of technological illiterates. As an example, he noted how few of us (less than 20 percent) know how our telephone works.
This statement brought me up short. I found my mind drifting and filling with anxiety. Did I know how my telephone works?
I squirmed in my seat, doodled some, then asked myself, What does it mean to know how a telephone works? Does it mean knowing how to dial a local or long-distance number? Certainly I knew that much, but this does not seem to be the issue here.
No, I suspected the question to be understood at another level, as probing the respondent’s knowledge of what we might call the “physics of the device.” I called to mind an image of a diaphragm, excited by the pressure variations of speaking, vibrating and driving a coil back and forth within a magnetic field… If this was what the speaker meant, then he was right: Most of us don’t know how our telephone works.
Indeed, I wondered, does [the speaker] know how his telephone works? Does he know about the heuristics used to achieve optimum routing for long distance calls? Does he know about the intricacies of the algorithms used for echo and noise suppression? Does he know how a signal is transmitted to and retrieved from a satellite in orbit? Does he know how AT&T, MCI, and the local phone companies are able to use the same network simultaneously? Does he know how many operators are needed to keep this system working, or what those repair people actually do when they climb a telephone pole? Does he know about corporate financing, capital investment strategies, or the role of regulation in the functioning of this expansive and sophisticated communication system?
Does anyone know how their telephone works?
There’s a technical interview question that goes along the lines of: “What happens when you type a URL into your browser’s address bar and hit enter?” You can talk about what happens at all sorts of different levels (e.g., HTTP, DNS, TCP, IP, …). But does anybody really understand all of the levels? Do you know about the interrupts that fire inside of your operating system when you actually strike the enter key? Do you know which modulation scheme is being used by the 802.11ax Wi-Fi protocol in your laptop right now? Could you explain the difference between quadrature amplitude modulation (QAM) and quadrature phase shift keying (QPSK), and could you determine which one your laptop is currently using? Are you familiar with the relaxed memory model of the ARM processor? Or how garbage collection works inside of the JVM? Do you understand how the field effect transistors inside the chip implement digital logic?
I remember talking to Brendan Gregg about how he conducted technical interviews, back when we both worked at Netflix. He told me that he was interested in identifying the limits of a candidate’s knowledge, and how they reacted when they reached that limit. So, he’d keep asking deeper questions about their area of knowledge until they reached a point where they didn’t know anymore. And then he’d see whether they would actually admit “I don’t know the answer to that”, or whether they would bluff. He knew that nobody understood the system all of the way down.
In their own ways, Wardley, Jacob, Perens, and Bucciarelli are all correct.
Wardley’s right that it’s dangerous to build things where we don’t understand the underlying mechanism of how they actually work. This is precisely why magic is used as an epithet in our industry. Magic refers to frameworks that deliberately obscure the underlying mechanisms in service of making it easier to build within that framework. Ruby on Rails is the canonical example of a framework that uses magic.
Jacob is right that AI is changing the way that normal software development work gets done. It’s a new capability that has proven itself to be so useful that it clearly isn’t going away. Yes, it represents a significant shift in how we build software, it moves us further away from how the underlying stuff actually works, but the benefits exceed the risks.
Perens is right that the scenario that Wardley fears has, in some sense, already come to pass. Modern CPU architectures and operating systems contain significant complexity, and many software developers are blissfully unaware of how these things really work. Yes, they have mental models of how the system below them works, but those mental models are incorrect in fundamental ways.
Finally, Bucciarelli is right that systems like telephony are so inherently complex, have been built on top of so many different layers in so many different places, that no one person can ever actually understand how the whole thing works. This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.
...
Read the original on surfingcomplexity.blog »
...
Read the original on www.githubstatus.com »