10 interesting stories served every morning and every evening.




1 1,013 shares, 115 trendiness

Discord will require a face scan or ID for full access next month

Users who aren’t verified as adults will not be able to access age-restricted servers and channels, won’t be able to speak in Discord’s livestream-like “stage” channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.

Direct messages and servers that are not age-restricted will continue to function normally, but users won’t be able to send messages or view content in an age-restricted server until they complete the age check process, even if it’s a server they were part of before age verification rolled out. Savannah Badalich, Discord’s global head of product policy, said in an interview with The Verge that those servers will be “obfuscated” with a black screen until the user verifies they’re an adult. Users also won’t be able to join any new age-restricted servers without verifying their age.

Discord’s global age verification launch is part of a wave of similar moves at other online platforms, driven by an international legal push for age checks and stronger child safety measures. This is not the first time Discord has implemented some form of age verification, either. It initially rolled out age checks for users in the UK and Australia last year, which some users figured out how to circumvent using Death Stranding’s photo mode. Badalich says Discord “immediately fixed it after a week,” but expects users will continue finding creative ways to try getting around the age checks, adding that Discord will try to “bug bash as much as we possibly can.”

It’s not just teens try­ing to cheat the sys­tem who might at­tempt to dodge age checks. Adult users could avoid ver­i­fy­ing, as well, due to con­cerns around data pri­vacy, par­tic­u­larly if they don’t want to use an ID to ver­ify their age. In October, one of Discord’s for­mer third-party ven­dors suf­fered a data breach that ex­posed users’ age ver­i­fi­ca­tion data, in­clud­ing im­ages of gov­ern­ment IDs.

A government ID might still be required for age verification in its global rollout. According to Discord, to remove the new “teen-by-default” changes and limitations, users can choose to use facial age estimation or “submit a form of identification to [Discord’s] vendor partners, with more options coming in the future.”

The first option uses AI to analyze a user’s video selfie, which Discord says never leaves the user’s device. If the age group estimate (teen or adult) from the selfie is incorrect, users can appeal it or verify with a photo of an identity document instead. That document will be verified by a third-party vendor, but Discord says the images of those documents are deleted “quickly — in most cases, immediately after age confirmation.”

Badalich also says after the October data breach, Discord “immediately stopped doing any sort of age verification flows with that vendor” and is now using a different third-party vendor. She adds, “We’re not doing biometric scanning [or] facial recognition. We’re doing facial estimation. The ID is immediately deleted. We do not keep any information around like your name, the city that you live in, if you used a birth certificate or something else, any of that information.”

However, some users may not have to go through ei­ther form of age ver­i­fi­ca­tion. Discord is also rolling out an age in­fer­ence model that an­a­lyzes meta­data like the types of games a user plays, their ac­tiv­ity on Discord, and be­hav­ioral sig­nals like signs of work­ing hours or the amount of time they spend on Discord.

“If we have a high confidence that they are an adult, they will not have to go through the other age verification flows,” Badalich says.

She goes on to explain that the addition of age assurance will mainly impact adult content: “A majority of people on Discord are not necessarily looking at explicit or graphic content. When we say that, we’re really talking about things that are truly adult content [and] age inappropriate for a teen. So, the way that it will work is a majority of people are not going to see a change in their experience.”

Even so, there’s still a risk that some users will leave Discord as a result of the age verification rollout. “We do expect that there will be some sort of hit there, and we are incorporating that into what our planning looks like,” Badalich says. “We’ll find other ways to bring users back.”

...

Read the original on www.theverge.com »

2 572 shares, 25 trendiness

Art of Roads in Games

Not sure if it’s just me, but I of­ten get a pri­mal sat­is­fac­tion when­ever I see in­tri­cate pat­terns emerg­ing out of seem­ingly dis­or­dered en­vi­ron­ments.

Think about the gal­leries of ant colonies, the ab­surdly per­fect hexa­gons of hon­ey­combs, or the veins on a leaf. No ar­chi­tect, no blue­print. Just sim­ple rules stack­ing on each other that re­sult in beau­ti­ful pat­terns. I can’t ex­plain why, but see­ing those struc­tures al­ways felt good.

Humans do this too. And for me, one of the most fascinating patterns we’ve come up with is the road network.

Sometimes I imagine aliens from faraway galaxies discovering Earth long after we’re gone. Forests reclaimed by nature, cities reduced to rubble, yet between them, a faint pattern is still visible - the road network. I like to think they will feel the same way I do when looking at nature patterns: “Man, someone really thought this through.”

I’ve got to say, roads have fas­ci­nated me since I was a kid.

I still re­mem­ber play­ing SimCity 2000 for the first time when I was about five or six years old. I did­n’t un­der­stand much. Definitely did­n’t know what zon­ing, taxes, or de­mand were. But roads fas­ci­nated me from the start.

I think roads lie at the heart of every city builder. They’re the fabric on which cities are built. Since that moment, I’ve played almost every modern-themed city builder out there. In the meantime, I’ve also started noticing them in the real world. Examining them in more detail.

Despite every game bring­ing an im­prove­ment over the one be­fore, some­thing al­ways felt… off.

SimCity 4 added elevation and diagonal roads. SimCity 2013 introduced curved roads. Then came Cities: Skylines with a ton of freedom. You could now freely place roads and merge them into intersections at any angle, and build flyovers at different elevations to construct crazy, yet unrealistic, interchanges. I think this was the largest breakthrough.

But some­thing was still nag­ging me. Highway ramps were un­re­al­is­ti­cally sharp or wob­bly, lanes that were sup­posed to be high-speed bent too sharply at cer­tain points, and the cor­ner radii of in­ter­sec­tions looked strange.

I mean look at this. This is prob­a­bly what high­way en­gi­neers have night­mares about.

And then came the mods. Mods changed every­thing. The great com­mu­nity en­abled a new kind of free­dom. One could build al­most any­thing: per­fect merge lanes, re­al­is­tic mark­ings, and smooth tran­si­tions. It was a to­tal game-changer. I am par­tic­u­larly proud of this 5-lane turbo round­about:

But even then, mods did­n’t feel com­pletely nat­ural. They were still lim­ited by the game’s orig­i­nal sys­tem.

Cities: Skylines 2 pushed it even further, with lanes and markings becoming even more realistic. I think at this point, an untrained eye won’t know the difference from reality.

Then I stopped stum­bling around and started ask­ing why? I tried to un­der­stand how en­gi­neers de­sign roads and how game de­vel­op­ers code them.

That’s when I ran straight into the fundamental issue - right at the base of it. And it comes down to something every developer knows about and loves: Bezier curves.

If you’re a Unity or Unreal de­vel­oper or played with ba­si­cally any vec­tor graph­ics edit­ing soft­ware, you al­ready know them well. Bezier curves are an el­e­gant, in­tu­itive, and in­cred­i­bly pow­er­ful way to smoothly in­ter­po­late be­tween two points while tak­ing into con­sid­er­a­tion some di­rec­tion of move­ment (the tan­gent).

That’s ex­actly what roads are sup­posed to do, right? Of course, de­vel­op­ers nat­u­rally think they are the per­fect tool.

They’ve got their beauty, I need to ad­mit. But hid­den be­neath the sur­face lies an un­com­fort­able truth.

You see, the shapes of roads in real life come from an underlying essential fact: the wheel axles of a vehicle. No matter how you drive a car, the distance between the left and right wheels remains constant. You can notice this in tyre tracks in snow or sand. Two perfectly parallel paths, always the same distance apart, maintaining a consistent curved shape.

Here’s the is­sue with Bezier splines: they don’t pre­serve shape and cur­va­ture when off­set.

At gen­tle curves, they kinda look fine, but once you have tighter bends, the math falls apart. In mathy terms: The off­set of a Bezier curve is not a Bezier curve.

When game engines try to generate a road mesh along a Bezier spline, the geometry often fails at tight angles. The inner edge curves at a different rate than the outer edge. This creates “pinching,” self-intersecting geometry.
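This pinching is easy to reproduce numerically. Below is a minimal pure-Python sketch (all function names are my own, not from any engine): it samples a hairpin-shaped cubic Bezier, offsets it along segment normals the way a naive mesh generator would, and checks whether an offset edge ever reverses direction, the telltale sign of self-intersection.

```python
import math

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def offset_polyline(points, d):
    """Naive offset: push each point along its segment's left-hand normal."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        n = math.hypot(dx, dy)
        out.append((x0 - dy / n * d, y0 + dx / n * d))
    return out

def folds_back(poly):
    """True if any two consecutive segments point in opposite directions."""
    segs = [(x1 - x0, y1 - y0) for (x0, y0), (x1, y1) in zip(poly, poly[1:])]
    return any(ax * bx + ay * by < 0 for (ax, ay), (bx, by) in zip(segs, segs[1:]))

# A hairpin turn; its radius of curvature at the apex is well under 1 unit.
hairpin = [cubic_bezier((0, 0), (4, 0), (4, 2), (0, 2), i / 200) for i in range(201)]

inner = offset_polyline(hairpin, 1.0)   # inner edge: offset exceeds the apex radius
outer = offset_polyline(hairpin, -1.0)  # outer edge of the same road

print(folds_back(inner))  # True: the inner edge doubles back on itself
print(folds_back(outer))  # False: the outer edge stays well-behaved
```

Offsetting by more than the local radius of curvature flips the direction of travel on the inside edge, which is exactly the self-intersecting geometry described above.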

Here is the best ex­am­ple of how they start to fail in ex­treme sce­nar­ios.

To sum up: Bézier curves are unconstrained. The freedom they enable is exactly their “Achilles’ heel.” Real roads are engineered with the constraints of real motion in mind. A car’s path can’t magically self-intersect.

Ok, so what pre­serves par­al­lelism? If you’ve al­ready been through kinder­garten, you’re al­ready fa­mil­iar with it: It’s the CIRCLE.

It has an almost magical property: no matter how much you offset it, the result is still a circular arc. Perfectly parallel with the initial one. So satisfying.

Scrapping Bezier curves for circle arcs also yields a nice, unexpected bonus. To procedurally build intersections, the engine has to perform many curve-curve intersection operations per frame. The intersection of two Bezier curves is notoriously complex: polynomial root finding, iterative numerical methods, de Casteljau subdivision plus bounding boxes, and multiple convergence checks, versus a simple, plain O(1) formula for circle arcs.
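For a sense of how simple the circle case is, here is that O(1) circle-circle intersection sketched in Python (the function name is mine): a couple of square roots and divisions, no root finding, no iteration, no convergence checks.

```python
import math

def circle_circle_intersection(c0, r0, c1, r1):
    """Closed-form intersection of two circles.

    Returns a list of 0, 1, or 2 intersection points."""
    (x0, y0), (x1, y1) = c0, c1
    d = math.hypot(x1 - x0, y1 - y0)
    # No solutions: coincident centers, circles too far apart, or one inside the other.
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []
    # Distance from c0 to the chord joining the intersection points.
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = math.sqrt(max(r0**2 - a**2, 0.0))
    # Midpoint of the chord, on the line between the centers.
    mx = x0 + a * (x1 - x0) / d
    my = y0 + a * (y1 - y0) / d
    # Step perpendicular to that line by h in both directions.
    ox = h * (y1 - y0) / d
    oy = h * (x1 - x0) / d
    return sorted({(mx + ox, my - oy), (mx - ox, my + oy)})

# Two unit circles whose centers are 1 apart cross at x = 0.5, y = ±√3/2.
print(circle_circle_intersection((0, 0), 1, (1, 0), 1))
```

Tangent circles collapse the two points into one (the set deduplicates them), and disjoint circles return an empty list, so the caller gets the case analysis for free.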

By stitch­ing to­gether cir­cu­lar arcs of dif­fer­ent radii, you can cre­ate any shape while ad­her­ing to proper en­gi­neer­ing prin­ci­ples.

But this is not the end of the story. Circle arcs have issues as well (oh no). The problem with circles in infrastructure is that they have constant curvature. What this means is that when entering a circular curve from a straight line, the lateral force jumps from zero to a fixed constant value (determined by the radius of the circle). If you were in a car or train entering this kind of curve at high speed, it would feel terrible.
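To put a rough number on that jump (the speed and radius below are illustrative values I picked, not from the text): lateral, i.e. centripetal, acceleration is v²/r, which is zero on a straight and becomes a fixed value the instant a circular arc begins.

```python
def lateral_acceleration(speed_mps, radius_m):
    """Centripetal acceleration v^2 / r on a curve of radius r."""
    return speed_mps ** 2 / radius_m

v = 100 / 3.6  # 100 km/h expressed in m/s

on_straight = 0.0                        # zero curvature, zero lateral force
on_arc = lateral_acceleration(v, 300.0)  # the instant a 300 m arc begins

print(f"{on_arc:.2f} m/s^2")  # 2.57 m/s^2, arriving all at once
```

Roughly a quarter of a g appearing in a single instant is the jolt the paragraph above describes.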

Civil en­gi­neers have to ac­count for this as well. So then, what curve main­tains par­al­lelism when off­set and has a smoothly in­creas­ing cur­va­ture?

Introducing: transition curves - most famously, the clothoid.

A clothoid grad­u­ally in­creases cur­va­ture over dis­tance. You start al­most straight, then slowly turn tighter and tighter. The steer­ing wheel ro­tates smoothly. The forces ramp up nat­u­rally, and a pas­sen­ger’s body barely no­tices the tran­si­tion.

These curves pro­vide com­fort­able rides at high speeds by main­tain­ing par­al­lel off­sets and con­tin­u­ous cur­va­ture changes.

And they are also… a math night­mare. Differential geom­e­try. Integrals. Oh my… Which is prob­a­bly why most games don’t even dare.
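That said, if you only need points on the curve, the core idea fits in a few lines. A clothoid is defined by curvature growing linearly with arc length, κ(s) = s/A², so the heading angle is θ(s) = s²/(2A²), and the position follows by integrating cos θ and sin θ numerically. A sketch under that definition (the constants are my own):

```python
import math

def clothoid_points(a, length, n=1000):
    """Trace a clothoid (Euler spiral): curvature kappa(s) = s / a^2,
    so the heading angle is theta(s) = s^2 / (2 a^2).

    Integrates x(s) = integral of cos(theta) ds and
    y(s) = integral of sin(theta) ds with the midpoint rule."""
    ds = length / n
    x = y = 0.0
    pts = [(x, y)]
    for i in range(n):
        s = (i + 0.5) * ds              # midpoint of this arc-length step
        theta = s * s / (2 * a * a)
        x += math.cos(theta) * ds
        y += math.sin(theta) * ds
        pts.append((x, y))
    return pts

pts = clothoid_points(a=1.0, length=2.0)
# At the start the curve is straight (curvature 0); by s = 2 the curvature
# has smoothly ramped up to 2.0 (a turning radius of 0.5). No sudden jolt.
```

The exact closed form involves the Fresnel integrals, but for generating road geometry a numerical trace like this is usually all you need.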

Vehicles move slowly on city streets. For in­ter­sec­tions of ur­ban roads, cir­cu­lar arcs are more than a de­cent choice.

Does every­thing I just ram­bled about mat­ter? Do 99% of city-builder play­ers care what shape the cor­ner ra­dius of the in­ter­sec­tion has? Most likely, no. Then why bother?

First, because of curiosity. Like any other nerd overly obsessed with the nitty-gritty details of a very specific subject, I just wanted to see how I would implement it. Like challenging the status quo.

Second, even if established titles might not accurately render roads, they are still light-years ahead of the solutions an indie developer can find online. The tutorials and assets for this are just sad. I personally got bored with grids, and I just wanted to build a better solution to share with anyone who wants to build a city builder.

In the next blog post, I’ll dis­cuss more tech­ni­cal­i­ties and dive into how I’ve built my own so­lu­tion. If you want to fol­low along or get no­ti­fied when I re­lease this as­set, scrib­ble your email be­low.

...

Read the original on sandboxspirit.com »

3 494 shares, 19 trendiness

AI Makes the Easy Part Easier and the Hard Part Harder for Developers

A friend of mine re­cently at­tended an open fo­rum panel about how en­gi­neer­ing orgs can bet­ter sup­port their en­gi­neers. The themes that came up were not sur­pris­ing:

Sacrificing qual­ity makes it hard to feel proud of the work. No ac­knowl­edge­ment of cur­rent ve­loc­ity. If we sprint to de­liver, the ex­pec­ta­tion be­comes to keep sprint­ing, for­ever.

I’ve been hearing variations of this for a while now, but now I’m also hearing and agreeing with “AI doesn’t always speed us up.”

Developers used to google things. You’d read a StackOverflow answer, or an article, or a GitHub issue. You did some research, verified it against your own context, and came to your own conclusion. Nobody said “Google did it for me” or “it was the top result so it must be true.”

Now I’m starting to hear “AI did it for me.”

That’s ei­ther over­hyp­ing what hap­pened, or it means the de­vel­oper did­n’t come to their own con­clu­sion. Both are bad. If some­one on my team ever did say Google wrote their code be­cause they copied a StackOverflow an­swer, I’d be wor­ried about the same things I’m wor­ried about now with AI: did you ac­tu­ally un­der­stand what you pasted?

Vibe cod­ing is fun. At first. For pro­to­typ­ing or low-stakes per­sonal pro­jects, it’s use­ful. But when the stakes are real, every line of code has con­se­quences.

On a per­sonal pro­ject, I asked an AI agent to add a test to a spe­cific file. The file was 500 lines be­fore the re­quest and 100 lines af­ter. I asked why it deleted all the other con­tent. It said it did­n’t. Then it said the file did­n’t ex­ist be­fore. I showed it the git his­tory and it apol­o­gised, said it should have checked whether the file ex­isted first. (Thank you git).

Now imag­ine that in a health­care code­base in­stead of a side pro­ject.

AI as­sis­tance can cost more time than it saves. That sounds back­wards, but it’s what hap­pened here. I spent longer ar­gu­ing with the agent and re­cov­er­ing the file than I would have spent writ­ing the test my­self.

Using AI as an in­ves­ti­ga­tion tool, and not jump­ing straight to AI as so­lu­tion provider, is a step that some peo­ple skip. AI-assisted in­ves­ti­ga­tion is an un­der­rated skill that’s not easy, and it takes prac­tice to know when AI is wrong. Using AI-generated code can be ef­fec­tive, but if we give AI more of the easy code-writ­ing tasks, we can fall into the trap where AI as­sis­tance costs more time than it saves.

Most peo­ple miss this about AI-assisted de­vel­op­ment. Writing code is the easy part of the job. It al­ways has been. The hard part is in­ves­ti­ga­tion, un­der­stand­ing con­text, val­i­dat­ing as­sump­tions, and know­ing why a par­tic­u­lar ap­proach is the right one for this sit­u­a­tion. When you hand the easy part to AI, you’re not left with less work. You’re left with only the hard work. And if you skipped the in­ves­ti­ga­tion be­cause AI al­ready gave you an an­swer, you don’t have the con­text to eval­u­ate what it gave you.

Reading and un­der­stand­ing other peo­ple’s code is much harder than writ­ing code. AI-generated code is other peo­ple’s code. So we’ve taken the part de­vel­op­ers are good at (writing), of­floaded it to a ma­chine, and left our­selves with the part that’s harder (reading and re­view­ing), but with­out the con­text we’d nor­mally build up by do­ing the writ­ing our­selves.

My friend’s panel raised a point I keep com­ing back to: if we sprint to de­liver some­thing, the ex­pec­ta­tion be­comes to keep sprint­ing. Always. Tired en­gi­neers miss edge cases, skip tests, ship bugs. More in­ci­dents, more pres­sure, more sprint­ing. It feeds it­self.

This is a management problem, not an engineering one. When leadership sees a team deliver fast once (maybe with AI help, maybe not), that becomes the new baseline. The conversation shifts from “how did they do that?” to “why can’t they do that every time?”

My friend was say­ing:

When peo­ple claim AI makes them 10x more pro­duc­tive, maybe it’s turn­ing them from a 0.1x en­gi­neer to a 1x en­gi­neer. So tech­ni­cally yes, they’ve been 10x’d. The ques­tion is whether that’s a pro­duc­tiv­ity gain or an ex­po­sure of how lit­tle in­ves­ti­gat­ing they were do­ing be­fore.

Burnout and ship­ping slop will eat what­ever pro­duc­tiv­ity gains AI gives you. You can’t op­ti­mise your way out of peo­ple be­ing too tired to think clearly.

I’ve used the phrase “AI is senior skill, junior trust” to explain how AI coding agents work in practice. They’re highly skilled at writing code but we have to trust their output like we would a junior engineer’s. The code looks good and probably works, but we should check more carefully because they don’t have the experience.

Another way to look at it: an AI cod­ing agent is like a bril­liant per­son who reads re­ally fast and just walked in off the street. They can help with in­ves­ti­ga­tions and could write some code, but they did­n’t go to that meet­ing last week to dis­cuss im­por­tant back­ground and con­text.

Developers need to take re­spon­si­ble own­er­ship of every line of code they ship. Not just the lines they wrote, the AI-generated ones too.

If you’re cutting and pasting AI output because someone set an unrealistic velocity target, you’ve got a problem 6 months from now when a new team member is trying to understand what that code does. Or at 2am when it breaks. “AI wrote it” isn’t going to help you in either situation.

The other day there was a production bug. A user sent an enquiry to the service team a couple of hours after a big release. There was an edge-case timezone display bug. The developer who made the change had 30 minutes before they had to leave to teach a class, and it was late enough for me to already be at home. So I used AI to help investigate, letting it know the bug must be based on recent changes and explaining how we could reproduce it. Turned out some deprecated methods were taking priority over the current timezone-aware ones, so the timezone was never converting correctly. Within 15 minutes I had the root cause, a solution idea, and investigation notes in the GitHub issue. The developer confirmed the fix, others tested and deployed, and I went downstairs to grab my DoorDash dinner.

No fire drill. No stay­ing late. AI did the in­ves­ti­ga­tion grunt work, I pro­vided the con­text and ver­i­fied, the de­vel­oper con­firmed the so­lu­tion. That’s AI help­ing with the hard part.

...

Read the original on www.blundergoat.com »

4 468 shares, 54 trendiness

Notifications are delayed

...

Read the original on www.githubstatus.com »

5 363 shares, 49 trendiness

Why is the sky blue?

Let’s start by ask­ing our­selves: what color SHOULD the sky be?

Or, one step fur­ther back, what color should any­thing be?

And the an­swer is: the color of any­thing is due to the wave­length of pho­tons com­ing from that thing and hit­ting your eye.

These side­notes are op­tional to read, but I’ll use them for giv­ing the fuller tech­ni­cal de­tails when I’ve ab­bre­vi­ated things in the main body of the text.

In this case, the color you see is determined by the wavelengths of light entering your eye: (1) you may be seeing a pure frequency, but (2) in almost all cases, you’re seeing many frequencies, which your brain interprets as a single color.

For instance, the sensation of turquoise at a specific point can be caused by (a) photons of wavelength 500nm emanating from that point, (b) a specific combo of photons of wavelengths 470nm and 540nm, or (c) (most realistically) photons of a huge number of wavelengths, probably peaking somewhere around 500nm.

In the text, I am a bit fast and loose with the dif­fer­ence.

When sun­light hits Earth’s at­mos­phere, most col­ors of pho­tons pass through un­en­cum­bered. But blue pho­tons have a ten­dency to ric­o­chet around a lot.

This causes them to dis­perse all through­out the at­mos­phere. They dis­perse so far and wide, and are so nu­mer­ous, that you can look at any part of the sky on a clear af­ter­noon and, at that mo­ment, blue pho­tons will be shoot­ing from that point straight to your eyes.

Therefore the sky is blue.

Most col­ors of light pass through the at­mos­phere rel­a­tively un­en­cum­bered. You only see them when you look at the sun, where they con­tribute to the white­ness of the sun’s light. Blue, how­ever, bounces around a lot, get­ting spread all over the sky. Because blue pho­tons hit our eye­balls from every an­gle of the sky, the whole sky ap­pears blue.

This is true and all, but it kicks the can down the road. Why blue? Why not red?

In short, it’s because blue and violet have the closest frequencies to a “resonant frequency” of nitrogen and oxygen molecules’ electron clouds.

There’s a lot there, so we’ll un­pack it be­low. But first, here’s an (interactive) demo.

This demo is a sim­pli­fi­ca­tion. In re­al­ity, 99.999% of pho­tons pass through (neither scat­ter­ing nor ab­sorb­ing), even at the res­o­nant fre­quency. Pretty bor­ing to watch!

When a photon passes through/near a small molecule (like N₂ or O₂, which make up 99% of our atmosphere), it causes the electron cloud around the molecule to “jiggle”. This jiggling is at the same frequency as the photon itself — meaning violet photons cause faster jiggling than red photons.

In any case, for reasons due to the internal structure of the molecule, there are certain resonant frequencies of each molecule’s electron cloud. As the electron clouds vibrate closer and closer to these resonant frequencies, the vibrations get larger and larger.

The stronger the elec­tron cloud’s os­cil­la­tions, the more likely a pass­ing pho­ton (a) is de­flected in a new di­rec­tion rather than (b) passes straight through.

For both N₂ and O₂, the lowest resonant frequency is in the ultraviolet range. So as the visible colors increase in frequency towards ultraviolet, we see more and more deflection, or “scattering”.

“Scattering” is the scientific term of art for molecules deflecting photons. Linguistically, it’s used somewhat inconsistently. You’ll hear both “blue light scatters more” (the subject is the light) and “atmospheric molecules scatter blue light more” (the subject is the molecule). In any case, they mean the same thing 🤷‍♂️

In fact, vi­o­let is 10x more likely to scat­ter than red.

Math talk: scat­ter­ing in­creases pro­por­tional to the FOURTH power of the fre­quency. So higher fre­quency light means WAY more scat­ter­ing.
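That fourth-power relationship is easy to check numerically. Scattering strength goes as frequency⁴, equivalently 1/wavelength⁴, so comparing violet (~400nm) against red (~700nm):

```python
def rayleigh_ratio(short_nm, long_nm):
    """How much more strongly the shorter wavelength scatters,
    using the 1/wavelength^4 Rayleigh law."""
    return (long_nm / short_nm) ** 4

print(rayleigh_ratio(400, 700))  # 9.37890625, the "10x" figure for violet vs red
```

A mere 1.75x difference in wavelength becomes a nearly 10x difference in scattering once it's raised to the fourth power.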

So why is­n’t the sky vi­o­let? Great ques­tion – we’ll cover that in a sec.

I just want to point out two other things that (a) you can see in the demo above, and (b) are use­ful for later in this ar­ti­cle.

First, when light gets re­ally close to — and even­tu­ally ex­actly at — the res­o­nant fre­quency of the mol­e­cule’s elec­tron cloud, it gets ab­sorbed far more than scat­tered! The pho­ton sim­ply dis­ap­pears into the elec­tron cloud (and the elec­tron cloud bumps up one en­ergy level). This is­n’t im­por­tant for un­der­stand­ing the color of Earth’s sky… but there are other skies out there 😉

Second, did you no­tice that even red scat­ters some? Like, yes, blue scat­ters 10x more. But the sky is ac­tu­ally every color, just mostly blue/​vi­o­let. This is why the sky is light blue. If white light is all vis­i­ble col­ors of light mixed to­gether equally, light blue is all vis­i­ble col­ors mixed to­gether — but bi­ased to­wards blue.

What would the sky look like if it was only blue? Check it out.

I’ll just end by say­ing, this dy­namic (where scat­ter­ing in­creases sharply with the fre­quency of light) ap­plies to far more than just N and O. In fact, any small gaseous mol­e­cule — car­bon diox­ide, hy­dro­gen, he­lium, etc. — would pref­er­en­tially scat­ter blue, yield­ing a blue sky at day.

As you saw above, vi­o­let scat­ters more than blue. So why is­n’t the sky pur­ple? The dumb but true an­swer is: our eyes are just worse at see­ing vi­o­let. It’s the very high­est fre­quency of light we can see; it’s ri­i­ight on the edge of our per­cep­tion.

But! — if we could see vi­o­let as well as blue, the sky would ap­pear vi­o­let.

We might as well tackle the ele­phant in the room: if we could see ul­tra­vi­o­let (which is the next higher fre­quency af­ter vi­o­let), would the sky ac­tu­ally be ul­tra­vi­o­let?

And the an­swer is not re­ally. If we could see UV, the sky would be a UV-tinted vi­o­let, but it would­n’t be over­whelm­ingly ul­tra­vi­o­let. First, be­cause the sun emits less UV light than vis­i­ble light. And sec­ond, some of that UV light is ab­sorbed by the ozone layer, so it never ever reaches Earth’s sur­face.

You can see both of those ef­fects in the so­lar ra­di­a­tion spec­trum chart:

The sun emits the most vis­i­ble light, with UV fre­quen­cies falling off very steeply. Augmenting this ef­fect is that the ozone layer in par­tic­u­lar ab­sorbs a lot of UV be­fore it can reach Earth’s sur­face.

Why is the sun­set red?

So the ob­vi­ous next ques­tion is why is the sky red at dusk and dawn?

It’s be­cause the sun­light has to travel through way more at­mos­phere when you’re view­ing it at a low an­gle, and this ex­tended jaunt through the at­mos­phere gives am­ple op­por­tu­nity for al­lll the blue to scat­ter away — and even a good deal of the green too!

Simply put, the blue pho­tons (and to a lesser de­gree, the green) have ei­ther (a) gone off into space or (b) hit the earth some­where else be­fore they reach your eyes.

When the sun is on the hori­zon (e.g. sun­rise or sun­set), the pho­tons it emits travel through 40x as much at­mos­phere to reach your eyes as they would at mid­day. So blue’s 10x propen­sity to scat­ter means it’s sim­ply gone by the time it would’ve reached your eyes. Even green is sig­nif­i­cantly damp­ened. Red light, which hardly scat­ters at all, just cruises on through.
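Here's a back-of-the-envelope sketch of why the longer path matters so much. Assume photons survive a path of x atmospheres with probability exp(-kx) (Beer-Lambert attenuation), and give blue a scattering coefficient larger than red's by the 1/wavelength⁴ ratio. The coefficients are invented for illustration, not measured values:

```python
import math

def surviving_fraction(k, atmospheres):
    """Beer-Lambert: fraction of photons not yet scattered away after the trip."""
    return math.exp(-k * atmospheres)

# Illustrative per-atmosphere scattering coefficients (invented numbers),
# related by the 1/wavelength^4 Rayleigh law for ~700 nm red and ~450 nm blue.
k_red = 0.05
k_blue = k_red * (700 / 450) ** 4  # roughly 0.29

for path in (1, 40):  # midday overhead sun vs. sun on the horizon
    red = surviving_fraction(k_red, path)
    blue = surviving_fraction(k_blue, path)
    print(f"{path:>2} atmospheres: red {red:.2f}, blue {blue:.2f}")
```

Over one atmosphere both colors mostly get through; over forty, a healthy share of the red still arrives while the blue is essentially gone, which is why the low sun looks red.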

Again, you can play with this and see for your­self 😎

The an­swer to this ques­tion is the sec­ond of three domains” you should un­der­stand in or­der to have a work­ing model of at­mos­phere color. The physics are dif­fer­ent from the small-mol­e­cule scat­ter­ing above.

Clouds are made up of a huge number of tiny water droplets. These droplets are so small (around 0.02 millimeters in diameter) that they remain floating in the air. But compared to small gas molecules like N₂ and O₂, these droplets are enormous. A single water droplet may contain 100 trillion H₂O molecules!

So, it’s not as simple as “the photons cause the hundreds of trillions of electrons to jiggle”. Instead, it’s more like the light has entered a very tiny prism or glass bead.

In a prism, white light can re­flect around, bounce off ex­te­rior or in­te­rior sur­faces, and even re­flect dif­fer­ently de­pend­ing on fre­quency — cre­at­ing a rain­bow ef­fect.

The droplet is just as com­plex. Some of the pho­tons hit­ting the droplet bounce off the sur­face. Some en­ter it, bounce around in­side once, twice, etc. — and leave again. Perhaps a few are ab­sorbed. As with a prism, dif­fer­ent wave­lengths of light will re­flect at dif­fer­ent an­gles. The specifics aren’t im­por­tant — you should just get the gen­eral gist.

So what­ever white (or slightly yel­low­ish) light that came from the di­rec­tion of the sun is leav­ing in many ran­dom di­rec­tions. Think of every color, shoot­ing off in dif­fer­ent di­rec­tions! And then mul­ti­ply that by a quadrillion droplets! In sum, you just see every fre­quency of pho­ton com­ing from every part of the cloud.

And that means the cloud is white!

This behavior of the tiny droplets that comprise clouds scales up. Anything larger that light can enter — drizzle, raindrops, hail — will also tend towards white.

But that raises the question — what about things in between tiny molecules (N₂, O₂) and the relatively enormous prism-like droplets? How do those things act?

Well, the dust in the sky of Mars is a great ex­am­ple 😉

Why is the sky on Mars red?

The an­swer to this ques­tion is the third of three domains” you should un­der­stand in or­der to have a work­ing model of at­mos­phere color. The physics are dif­fer­ent from both the small-mol­e­cule scat­ter­ing and large-droplet prism-dy­nam­ics above.

The Martian sky is red be­cause it’s full of tiny, iron-rich dust par­ti­cles that ab­sorb blue — leav­ing only red to scat­ter.

Yeah, yeah, I hear you. This answer is can-kicking! “Dust, schmust. Why does it absorb blue?”, you demand.

OK, so the an­swer is ac­tu­ally fairly straight­for­ward. And it gen­er­al­izes. Here’s the rule: when­ever you have solid par­ti­cles in the at­mos­phere (very small ones, ap­prox­i­mately the size of the wave­length of vis­i­ble light), they gen­er­ally tend to turn the air warm col­ors — red, or­ange, yel­low.

If you live in an area with wild­fires, you’ve prob­a­bly seen this ef­fect here on Earth!

To re­ally un­der­stand the rea­son, let’s back up and talk about some chem­istry.

Compared to tiny gas mol­e­cules, solid par­ti­cles tend to have a much wider range of light fre­quen­cies that they ab­sorb.

For instance, we discussed how N₂ and O₂ have specific resonant frequencies at which they hungrily absorb UV photons. Move slightly away from those frequencies, and absorption drops off a cliff.

But even for a tiny dust nanopar­ti­cle, there are many con­stituent mol­e­cules, each in slightly dif­fer­ent con­fig­u­ra­tions, each be­ing jos­tled slightly dif­fer­ently by its neigh­bors. Consequently, the con­stituent mol­e­cules all have slightly dif­fer­ent pref­er­ences of which fre­quency to ab­sorb.

Because the “peak” absorption of the molecules is usually violet or ultraviolet (as it is with small gases), blues/violets will make it to the surface much less than oranges/reds.

Approximate light ab­sorp­tion from Martian dust as a func­tion of wave­length

Of course, a rea­son­able ques­tion is why are blue and vi­o­let ab­sorbed so strongly by these dust par­ti­cles?

Well, those are the only photons with enough energy to bump the dust molecules’ electrons up to a new energy state.

So, the ex­act specifics de­pend on the mol­e­cules in ques­tion, but gen­er­ally, the level of en­ergy needed to bump up the elec­tron en­ergy state in a dust or smog par­ti­cle’s mol­e­cules cor­re­sponds to vi­o­let or UV pho­tons.

This is actually true of solids in general, not just atmospheric dust or aerosols. If you’ve ever heard that purple was “the color of kings” or that the purple dye of antiquity was worth its weight in gold, it’s true! To get something purple, you’d need to find a material whose electrons were excited by low-energy red photons, but had no use for higher-energy violet photons.

So this is why the Martian sky is red — and why reds and browns are more com­mon in na­ture (for solid things, at least) than pur­ple and blue.

Why is the Martian sun­set blue?

It’s less fa­mous than the red day­time sky of Mars, but the Martian sun­set is blue!

Martian sun­set photo taken by the Spirit rover.

In the last section, we talked about Martian dust absorbing violet/blue. But the dust also scatters light, and how it scatters can be totally unrelated to how it absorbs. (Remember, since photons can — and usually do — cruise straight through a molecule, scattering and absorption each have their own interesting frequency-dependent characteristics. They don’t simply sum to 100%.)

Small at­mos­pheric par­ti­cles, like dust and smog, are equal-op­por­tu­nity scat­ter­ers. The ab­solute prob­a­bil­ity they’ll scat­ter a pho­ton does not change sig­nif­i­cantly with the pho­ton’s wave­length. However, dif­fer­ent-fre­quency pho­tons can be more or less likely to scat­ter in dif­fer­ent di­rec­tions.

For our pur­poses, it suf­fices to know that Martian dust — like many at­mos­pheric par­ti­cles of sim­i­lar size — gen­er­ally scat­ters blue light closer to the di­rec­tion it was al­ready go­ing. Red light has a higher prob­a­bil­ity of de­flect­ing at a greater an­gle.

Because red light de­flects MORE and blue light LESS when scat­ter­ing off dust par­ti­cles, the area di­rectly around the sun will be blue — even though more blue is ab­sorbed en route.

When molecules deflect photons by only a tiny angle, it’s called “forward scattering”. Forward scattering is most pronounced for larger particles, like dust or smog aerosols. It’s actually so strong on Mars that even at midday, red light doesn’t fill the sky evenly — the sky opposite the sun is noticeably darker!

But blue light’s tendency to scatter forward off Martian dust, staying close to its original direction, means the Martian sunset has a blue halo.

At the be­gin­ning of this ar­ti­cle, I said be­ing able to pre­dict some­thing is a good mea­sure of how well you un­der­stand it. Let’s do that now. Let’s build a model for pre­dict­ing the sky color on new plan­ets/​moons, or dur­ing dif­fer­ent sce­nar­ios on our own planet.

Here are the three gen­eral rules of thumb we’ve al­ready talked about.

Atmospheric gases tend to be much, much smaller than the wave­lengths of vis­i­ble light. In these cases, they tend to pref­er­en­tially scat­ter blue/​vi­o­let/​UV. This means that gaseous at­mos­pheres are usu­ally blue or blue-green.
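The rule of thumb behind this is Rayleigh scattering, whose strength goes as 1/λ⁴ for particles much smaller than the wavelength. A quick sketch of the ratio (my own back-of-envelope numbers, not the article's):

```python
def rayleigh_ratio(short_nm, long_nm):
    # How much more strongly the shorter wavelength scatters,
    # under the Rayleigh 1/lambda^4 rule (particles << wavelength)
    return (long_nm / short_nm) ** 4

# Blue (~450 nm) vs red (~650 nm) in a gaseous atmosphere:
blue_vs_red = rayleigh_ratio(450, 650)   # roughly 4.4x more scattering for blue
```

So in a pure-gas atmosphere, blue photons get bounced around the sky several times more often than red ones, which is why gaseous atmospheres skew blue.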

Uranus: up­per at­mos­phere is 98% hy­dro­gen and he­lium. We don’t have pic­tures from the sur­face.

Neptune: up­per at­mos­phere is 99% hy­dro­gen and he­lium. We don’t have pic­tures from the sur­face.

This is pleas­ingly true for Earth, Uranus, and Neptune.

You may re­call Neptune as look­ing like a much darker, richer blue. However, more re­cent analy­sis by Patrick Irwin shows the true color is very likely closer to what’s shown here.

It’s also worth not­ing that Neptune and Uranus’s blue color is made no­tice­ably richer by the red-ab­sorb­ing methane in their at­mos­pheres.

When vis­i­ble light hits par­ti­cles that are in the ball­park of its own wave­length, things get more com­pli­cated and can dif­fer on a case-by-case ba­sis.

These par­ti­cles are typ­i­cally ei­ther:

Haze: solid par­ti­cles formed by chem­i­cal re­ac­tions in the at­mos­phere

All three sig­nif­i­cantly dusty/​hazy at­mos­pheres in our so­lar sys­tem hold to this rule!

Titan’s sky is or­ange due to a haze of tholins (organic mol­e­cules)

Venus’s sky is yellow due to a haze of sulfurous compounds

When vis­i­ble light hits clouds of droplets (or ice crys­tals) that are much big­ger than light’s wave­length, the droplets act akin to a vast army of float­ing prisms, send­ing out all col­ors in all di­rec­tions.

Consequently, clouds tend to ap­pear white, gray, or de­sat­u­rated hues.

Venus: high-al­ti­tude clouds of sul­fu­ric acid (!). The tan/​or­ange is from the afore­men­tioned haze.

Putting it all to­gether

The largest and most com­plex at­mos­phere in our so­lar sys­tem is Jupiter. But we know enough to start mak­ing some smart guesses about it!

QUIZ: look­ing at this pic­ture, what can you say about Jupiter’s at­mos­phere? Answers be­low the im­age, so take a guess be­fore scrolling 😉

Here’s a com­par­i­son of how a ba­sic guess — in­formed by our sim­plis­tic model — com­pares to sci­en­tific con­sen­sus.

Clouds, prob­a­bly of ice be­cause of cold­ness

Small at­mos­pheric mol­e­cules. But po­ten­tially a chem­i­cally odd haze, if some­thing ab­sorbed the vis­i­ble spec­trum pretty strongly?

The Galileo probe that descended into Jupiter entered one of these spots. Its most surprising finding was how dry Jupiter’s atmosphere seemed to be. But knowing it fell between the clouds, this makes total sense. Instead of ice crystals, it found hydrogen and helium.

...

Read the original on explainers.blog »

6 360 shares, 27 trendiness

All The Views

With the help of a cus­tom-de­vel­oped al­go­rithm, CacheTVS, we have ex­haus­tively checked every sin­gle view on Earth in search of the cov­eted longest line of sight on the planet. Based on the method we de­tail here, we pre­sent the great­est view of all:

The longest line of sight on the planet, at 530km, from the Hindu Kush to Pik Dankova

Longest lines of sight tend to group together around peaks and ridges. So the following are more our own curated picks than the technically correct runners-up. We chose them based on being in notably different regions of the world.

The second longest line of sight, at 504km, from Antioquia to Pico Cristobal in Colombia

Now we go right to the other side of the world to Colombia in South America. We’ve found a line of sight from the de­part­ment of Antioquia to Pico Cristobal, Colombia’s high­est moun­tain.

The third longest line of sight, at 483km, from Mount Elbrus in Russia to the Pontic Mountains in Turkey

Our pick for the third greatest view is from Mount Elbrus in Russia to the Pontic Mountains in Turkey. Notably, Russia does not share a land border with Turkey, so it’s interesting that you can still see one from the other.

We have around 4.5 bil­lion lines of sight for you to ex­plore in an in­ter­ac­tive map at map.allthe­views.world

...

Read the original on alltheviews.world »

7 359 shares, 40 trendiness

jim11662418/ESP8266_WiFi_Analog_Clock: Uses an ESP8266 module and an Arduino sketch to display the local time on an inexpensive analog quartz clock.

This project uses a WEMOS D1 Mini ESP8266 module and an Arduino sketch to connect to an NTP (Network Time Protocol) server to automatically retrieve and display the local time on an inexpensive analog quartz clock. The ESP8266 reconnects to the NTP server every 15 minutes, which keeps the clock accurate. The clock also automatically adjusts for daylight saving time.

WEMOS D1 Mini ESP8266 Module with EERAM IC and Components on a Piece of Perfboard

I’m us­ing an ana­log clock with a quartz move­ment I found at my lo­cal Walmart for $3.88. Whatever ana­log clock you de­cide to use, its quartz move­ment will need to be mod­i­fied so that it can be con­trolled by the ESP8266 mod­ule. Open up the move­ment (most of them snap to­gether with­out any fas­ten­ers), dis­con­nect the in­ter­nal coil of the Lavet step­ping mo­tor from its quartz os­cil­la­tor and then sol­der a wire to each of the coil’s leads to make con­nec­tions for the ESP8266. If you search around on the web you’ll find ar­ti­cles show­ing how oth­ers have done it. Be care­ful when work­ing with the coil. The coil’s wires are typ­i­cally thin­ner than a hu­man hair and ex­tremely frag­ile.

The sketch, AnalogClock.ino, should be (I hope) clear enough, but here, in brief, is a summary of how it operates. Ten times each second the ESP8266 compares the time displayed on the analog clock to the actual time retrieved from an NTP server. If the analog clock lags behind the actual time, the ESP8266 advances the clock’s second hand until the clock agrees with the actual time. If the time displayed on the analog clock is ahead of the actual time, the ESP8266 simply waits until the actual time catches up with the analog clock, since it can’t move the clock’s hands backwards.
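The catch-up decision can be sketched roughly like this (a Python model with hypothetical names; the real logic lives in the Arduino sketch, and a 12-hour dial is assumed):

```python
DIAL_SECONDS = 12 * 60 * 60  # a 12-hour analog dial wraps every 43,200 s

def pulses_needed(displayed_s, actual_s):
    # How many second-hand pulses to emit this tick.
    # The hands only move forward, so if the clock is ahead
    # we emit nothing and let real time catch up.
    lag = (actual_s - displayed_s) % DIAL_SECONDS
    if lag > DIAL_SECONDS // 2:
        return 0   # clock is ahead of real time: just wait
    return lag     # clock is behind: advance this many seconds

pulses_needed(100, 107)   # clock 7 s behind -> 7 pulses
pulses_needed(107, 100)   # clock 7 s ahead  -> 0, wait it out
```

The modular arithmetic also handles the wrap past 12 o'clock, so a clock showing 11:59:55 when it is actually 12:00:05 is treated as 10 seconds behind, not 12 hours ahead.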

The ESP8266 advances the analog clock’s second hand by generating bipolar pulses, alternately positive and negative, to the clock’s Lavet motor coil. Because of differences in clock mechanisms, you may need to increase or decrease the PULSETIME constant in the sketch by a few milliseconds to make your mechanism step reliably. Experimentally, I found that 30 milliseconds works best for my movement.

The biggest problem with using these cheap analog clocks for a project like this is that the clocks don’t provide any type of feedback to indicate the position of the clock’s hands. Thus, if power is interrupted to the ESP8266 controlling the clock, the ESP8266 “forgets” where the clock’s hands are positioned. To get around this problem, the positions of the hour, minute and second hands are stored in a Microchip 47L04 Serial EERAM (4Kbit SRAM with EEPROM backup) and updated each second as the positions of the clock’s hands change. If power is interrupted, the ESP8266 can retrieve the last position of the clock’s hands from the EERAM when power is reapplied.

The very first time that the sketch is run, the user will be directed to a simple web page (see below) served by the ESP8266, which is used to tell it where the analog clock’s hands are initially positioned. From that point on, the ESP8266 will use the data stored in the EERAM to “remember” the positions of the clock’s hands.

Once the ESP8266 finishes its initialization and starts operation, it serves a simple web page showing the clock’s status. The status page can optionally show a graphic image of the clock’s face drawn using Scalable Vector Graphics or HTML Canvas, or no image at all.

Analog Clock Status Page Using Scalable Vector Graphics to Draw the Clock Face

Analog Clock Status Page Using the HTML Canvas Element to Draw the Clock Face

...

Read the original on github.com »

8 334 shares, 17 trendiness

CCC vs GCC

Anthropic recently published a blog post about building a C compiler entirely with Claude. They called it CCC (Claude’s C Compiler) and claimed it could compile the Linux kernel. 100% of the code was written by Claude Opus 4.6; a human only guided the process by writing test cases. That sounded interesting enough to test the claim and benchmark CCC against the industry standard, GCC.

The source code of CCC is available at claudes-c-compiler. It is written entirely in Rust, targeting x86-64, i686, AArch64 and RISC-V 64. The frontend, SSA-based IR, optimizer, code generator, peephole optimizers, assembler, linker and DWARF debug info generation are all implemented from scratch with zero compiler-specific dependencies. That is a lot of work for an AI to do.

Before we jump into the com­par­i­son, it helps to un­der­stand what hap­pens when you com­pile a C pro­gram. There are four stages in­volved.

Image credit: The four stages of the gcc com­piler

Preprocessor: Handles #include, #define and other di­rec­tives. It takes the source code and pro­duces ex­panded source code.

Compiler: Takes the pre­processed source code and trans­lates it into as­sem­bly lan­guage. This is where the real heavy lift­ing hap­pens, un­der­stand­ing the C lan­guage, type check­ing, op­ti­miza­tions, reg­is­ter al­lo­ca­tion and so on.

Assembler: Converts the as­sem­bly lan­guage into ma­chine code (object files). It has to know the ex­act in­struc­tion en­cod­ing for the tar­get CPU ar­chi­tec­ture.

Linker: Takes one or more ob­ject files and com­bines them into a sin­gle ex­e­cutable. It re­solves ref­er­ences be­tween files, sets up mem­ory lay­out and pro­duces the fi­nal bi­nary.

Writing a programming language is hard (even before vibe coding). Writing a compiler is on another level entirely. A programming language defines the rules. A compiler has to understand those rules, translate them into machine instructions, optimize the output for speed and size, handle edge cases across different CPU architectures and produce correct code every single time.

GCC has been in de­vel­op­ment since 1987. That is close to 40 years of work by thou­sands of con­trib­u­tors. It sup­ports dozens of ar­chi­tec­tures, hun­dreds of op­ti­miza­tion passes and mil­lions of edge cases that have been dis­cov­ered and fixed over the decades. The op­ti­miza­tion passes alone (register al­lo­ca­tion, func­tion in­lin­ing, loop un­rolling, vec­tor­iza­tion, dead code elim­i­na­tion, con­stant prop­a­ga­tion) rep­re­sent years of PhD-level re­search. This is one of the rea­sons why it’s ubiq­ui­tous.

This is why CCC be­ing able to com­pile real C code at all is note­wor­thy. But it also ex­plains why the out­put qual­ity is far from what GCC pro­duces. Building a com­piler that parses C cor­rectly is one thing. Building one that pro­duces fast and ef­fi­cient ma­chine code is a com­pletely dif­fer­ent chal­lenge.

Ironically, among the four stages, the com­piler (translation to as­sem­bly) is the most ap­proach­able one for an AI to build. It is mostly about pat­tern match­ing and rule ap­pli­ca­tion: take C con­structs and map them to as­sem­bly pat­terns.

The as­sem­bler is harder than it looks. It needs to know the ex­act bi­nary en­cod­ing of every in­struc­tion for the tar­get ar­chi­tec­ture. x86-64 alone has thou­sands of in­struc­tion vari­ants with com­plex en­cod­ing rules (REX pre­fixes, ModR/M bytes, SIB bytes, dis­place­ment sizes). Getting even one bit wrong means the CPU will do some­thing com­pletely un­ex­pected.

The linker is ar­guably the hard­est. It has to han­dle re­lo­ca­tions, sym­bol res­o­lu­tion across mul­ti­ple ob­ject files, dif­fer­ent sec­tion types, po­si­tion-in­de­pen­dent code, thread-lo­cal stor­age, dy­namic link­ing and for­mat-spe­cific de­tails of ELF bi­na­ries. The Linux ker­nel linker script alone is hun­dreds of lines of lay­out di­rec­tives that the linker must get ex­actly right.

The Linux ker­nel is one of the most com­plex C code­bases in the world. It has mil­lions of lines of code, uses GCC-specific ex­ten­sions, in­line as­sem­bly, linker scripts and count­less tricks that push the com­piler to its lim­its. It is not a good first test for a new com­piler.

SQLite, on the other hand, is dis­trib­uted as a sin­gle amal­ga­ma­tion file (one big .c file). It is stan­dard C, well-tested and self-con­tained. If your com­piler can han­dle SQLite, it can han­dle a lot. If it can­not han­dle SQLite cor­rectly, there is no point test­ing any­thing big­ger.

That is why I tested both. SQLite tells us about cor­rect­ness and run­time per­for­mance. The ker­nel tells us about scale and com­pat­i­bil­ity.

CCC was built with the gcc_m16 Cargo feature, which delegates 16-bit real-mode boot code (the -m16 flag) to GCC. This is needed because CCC’s i686 backend produces code too large for the 32KB real-mode limit. The x86_64 C code is compiled entirely by CCC.

A cc­c_wrap­per.sh script routes .S as­sem­bly files to GCC (CCC does not process as­sem­bly) and all .c files to CCC.

Compilers are usually measured on the scenarios below, so the tests are designed around them.

* Same hardware — identical VM specs for both compilers

* Both run to completion — no tests killed prematurely

* CCC gets help where needed — gcc_m16 feature for boot code, wrapper for assembly files

* Same benchmark script — benchmark_sqlite.sh runs identically on both VMs

The bench­mark was de­signed to be CPU-bound:

* No cor­re­lated sub­queries (O(n^2) queries were re­placed with GROUP BY)

The fair comparison is CCC vs GCC at -O0 (no optimization): CCC takes 87s vs GCC’s 65s — CCC is about 1.3x slower. The “5x faster” number only appears because GCC is doing 7 minutes of optimization work that CCC simply skips.

CCC com­piled every sin­gle C source file in the Linux 6.9 ker­nel with­out a sin­gle com­piler er­ror (0 er­rors, 96 warn­ings). This is gen­uinely im­pres­sive for a com­piler built en­tirely by an AI.

However, the build failed at the linker stage with around 40,784 un­de­fined ref­er­ence er­rors. The er­rors fol­low two pat­terns:

__jump_table re­lo­ca­tions — CCC gen­er­ates in­cor­rect re­lo­ca­tion en­tries for ker­nel jump la­bels (used for sta­tic keys/​tra­ce­points)

These are linker-visible bugs in CCC’s relocation/symbol generation, not C language compilation bugs. This is a good example of why the linker is the hardest part. The compiler did its job fine, but the generated relocations were not quite right for the kernel’s complex linker script.

CCC -O0 and -O2 pro­duce byte-iden­ti­cal bi­na­ries (4,374,024 bytes). CCC has 15 SSA op­ti­miza­tion passes, but they all run at every op­ti­miza­tion level. There is no tiered op­ti­miza­tion — the -O flag is ac­cepted but com­pletely ig­nored.

When you ask GCC to com­pile with -O2, it per­forms dozens of ex­tra op­ti­miza­tion passes:

* Register al­lo­ca­tion: fit­ting vari­ables into CPU reg­is­ters so they do not spill to slow mem­ory

* Vectorization: us­ing SIMD in­struc­tions (SSE/AVX) to process mul­ti­ple val­ues at once

GCC’s -O2 spends 7 minutes doing this work, and the payoff is clear: the resulting binary runs 1.7x faster (6.1s vs 10.3s).

CCC does none of this at any optimization level. Comparing “CCC compile time vs GCC -O2 compile time” is like comparing a printer that only prints in black-and-white vs one that does full color. The black-and-white printer is faster, but it isn’t doing the same job.

CCC-compiled SQLite is func­tion­ally cor­rect — it pro­duces the same query re­sults as GCC-compiled SQLite. All 5 crash/​edge-case tests passed. But it is very slow.

No fail­ures ob­served dur­ing these tests:

The per-query breakdown shows that CCC’s slowdown is not uniform. Simple queries are only 1-7x slower, but complex operations involving nested loops blow up:

The pat­tern is clear: op­er­a­tions that in­volve nested it­er­a­tion (subqueries, JOINs) are or­ders of mag­ni­tude slower, while sim­ple se­quen­tial op­er­a­tions are only slightly slower.

Modern CPUs have a small set of fast stor­age lo­ca­tions called reg­is­ters. A good com­piler tries to keep fre­quently used vari­ables in these reg­is­ters. When there are more vari­ables than reg­is­ters, the com­piler spills” them to the stack (regular RAM), which is much slower.

CCC’s biggest performance problem is excessive register spilling. SQLite’s core execution engine, sqlite3VdbeExec, is a single function with 100+ local variables and a massive switch statement. CCC does not have good register allocation, so it spills almost all variables to the stack.

movq -0x1580(%rbp), %rax  ; load from deep stack offset
movq %rax, -0x2ae8(%rbp)  ; store to another deep stack offset
movq -0x1588(%rbp), %rax  ; load next value
movq %rax, -0x2af0(%rbp)  ; store to next offset
; ... dozens more memory-to-memory copies

CCC uses stack off­sets up to -0x2ae8 (11,000 bytes deep) for a func­tion with 32 vari­ables. Every op­er­a­tion goes: stack -> rax -> stack, us­ing %rax as a shut­tle reg­is­ter.

CCC is 4.2x slower than GCC -O0 for register-heavy code. In sqlite3VdbeExec, with 100+ variables and 200+ switch cases, this ratio compounds to 100x+.

CCC runs the same 15-pass SSA pipeline at all op­ti­miza­tion lev­els:

This means -O2 pro­vides zero ben­e­fit. Every bi­nary CCC pro­duces is ef­fec­tively -O0 qual­ity, re­gard­less of what flag you pass.

The 2.78x code bloat means more in­struc­tion cache misses, which com­pounds the reg­is­ter spilling penalty.

CCC-compiled binaries lack internal function symbols (nm reports 0 symbols, readelf shows only 90 PLT stubs vs GCC’s 1,500+ functions). This makes profiling and debugging impossible.

The NOT IN (subquery) pat­tern causes SQLite to ex­e­cute a nested loop: for each of the around 100,000 rows in the outer table, it scans through around 10,000 rows in the in­ner table. That is roughly 1 bil­lion it­er­a­tions through SQLite’s main ex­e­cu­tion func­tion (sqlite3VdbeExec), which is ba­si­cally a gi­ant switch state­ment.
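To see why the shape of the query matters so much, here is a small illustration (mine, not SQLite's actual code) of the same NOT IN semantics done as a nested loop versus with a hash set:

```python
def not_in_nested(outer, inner):
    # Nested-loop semantics: every outer row scans every inner row,
    # so the work is len(outer) * len(inner) comparisons
    return [x for x in outer if all(x != y for y in inner)]

def not_in_hashed(outer, inner):
    # Hash the inner side once; each outer row is then an O(1) lookup
    inner_set = set(inner)
    return [x for x in outer if x not in inner_set]

outer = list(range(1000))
inner = list(range(0, 1000, 2))  # the even numbers
assert not_in_nested(outer, inner) == not_in_hashed(outer, inner)
```

Both return the odd numbers, but the nested version does on the order of len(outer) × len(inner) comparisons. At the scale described above (around 100,000 × 10,000 rows), that is the billion-iteration nested loop, and every one of those iterations pays CCC's per-iteration overhead.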

With CCC’s roughly 4x per-iteration overhead from register spilling, plus extra cache misses from the 2.78x larger binary (the CPU cannot keep all the instructions in its fast cache), the slowdown compounds:

* Cache pres­sure: around 2-3x ad­di­tional penalty (instructions do not fit in L1/L2 cache)

This is why sim­ple queries (INSERT, DROP TABLE) are only 1-2x slower, but nested op­er­a­tions blow up to 100,000x+ slower.

Correctness: Compiled every C file in the ker­nel (0 er­rors) and pro­duced cor­rect SQLite out­put for all queries

Stability: Zero crashes, zero seg­faults across all tests

Memory us­age: 5.9x more RAM for com­pi­la­tion (1.6 GB vs 272 MB for SQLite)

Compilation speed: Could only be compared at -O0, since CCC does not optimize beyond this. CCC is around a third slower than GCC (87s vs 65s)

Within hours of Anthropic releasing CCC, someone opened issue #1, “Hello world does not compile”. The example straight from the README did not work on a fresh Fedora or Ubuntu install:

$ ./target/release/ccc -o hello hello.c

/usr/include/stdio.h:34:10: er­ror: std­def.h: No such file or di­rec­tory

/usr/include/stdio.h:37:10: er­ror: stdarg.h: No such file or di­rec­tory

ccc: er­ror: 2 pre­proces­sor er­ror(s) in hello.c

Meanwhile, GCC compiled it just fine. The issue was that CCC’s preprocessor did not search the right system include paths for stddef.h and stdarg.h (these come from the compiler, not the C library). It got 288 thumbs-up reactions, over 200 comments and turned into one of those legendary GitHub threads where people tag @claude asking it to fix the bug, ask @grok for summaries and post comments like “my job is safe”.

Someone got it working on Compiler Explorer and remarked that the assembly output “reminds me of the quality of an undergraduate’s compiler assignment”. Which, to be fair, is both harsh and not entirely wrong when you look at the register spilling patterns.

The is­sue is still open at the time of writ­ing.

Claude’s C Compiler is a re­mark­able achieve­ment. It is a work­ing C com­piler built en­tirely by an AI that can cor­rectly com­pile 2,844 files from the Linux ker­nel with­out a sin­gle er­ror. It pro­duces func­tion­ally cor­rect code (verified with SQLite — all queries re­turn cor­rect re­sults, all crash tests pass).

But it is not ready for real use:

The out­put code is very slow. CCC-compiled SQLite takes 2 hours to run a bench­mark that GCC fin­ishes in 10 sec­onds. The root cause is poor reg­is­ter al­lo­ca­tion — CCC uses a sin­gle reg­is­ter as a shut­tle to move val­ues be­tween stack lo­ca­tions, turn­ing every op­er­a­tion into mul­ti­ple mem­ory ac­cesses.

The “compiles the kernel” claim needs a footnote. CCC compiles all the C source files, but the final binary cannot be produced because CCC generates incorrect relocations for kernel data structures (__jump_table, __ksymtab).

Optimization flags are dec­o­ra­tive. Passing -O2 or -O3 to CCC does lit­er­ally noth­ing — the out­put bi­nary is byte-iden­ti­cal to -O0.

For Anthropic’s stated goal of demon­strat­ing that Claude can build com­plex soft­ware, CCC is a gen­uine suc­cess. For any­one want­ing to com­pile soft­ware to ac­tu­ally run ef­fi­ciently, GCC (or Clang, or any pro­duc­tion com­piler) re­mains the only real op­tion.

All scripts, re­sults and graphs are avail­able at com­pare-claude-com­piler

Part of this work was as­sisted by AI. The Python scripts used to gen­er­ate bench­mark re­sults and graphs were writ­ten with AI as­sis­tance. The bench­mark de­sign, test ex­e­cu­tion, analy­sis and writ­ing were done by a hu­man with AI help­ing where needed.

...

Read the original on harshanu.space »

9 275 shares, 16 trendiness

Nobody knows how the whole system works

One of the sur­pris­ing (at least to me) con­se­quences of the fall of Twitter is the rise of LinkedIn as a so­cial me­dia site. I saw some in­ter­est­ing posts I wanted to call at­ten­tion to:

First, Simon Wardley on build­ing things with­out un­der­stand­ing how they work:

And here’s Bruce Perens, whose post is very much in con­ver­sa­tion with them, even though he’s not ex­plic­itly re­spond­ing to ei­ther of them.

Finally, here’s the MIT engineering professor Louis Bucciarelli, from his book Designing Engineers, written back in 1994. Here I’m just copying and pasting the quotes from my previous post on active knowledge.

A few years ago, I at­tended a na­tional con­fer­ence on tech­no­log­i­cal lit­er­acy… One of the main speak­ers, a so­ci­ol­o­gist, pre­sented data he had gath­ered in the form of re­sponses to a ques­tion­naire. After a de­tailed sta­tis­ti­cal analy­sis, he had con­cluded that we are a na­tion of tech­no­log­i­cal il­lit­er­ates. As an ex­am­ple, he noted how few of us (less than 20 per­cent) know how our tele­phone works.

This state­ment brought me up short. I found my mind drift­ing and fill­ing with anx­i­ety. Did I know how my tele­phone works?

I squirmed in my seat, doo­dled some, then asked my­self, What does it mean to know how a tele­phone works? Does it mean know­ing how to dial a lo­cal or long-dis­tance num­ber? Certainly I knew that much, but this does not seem to be the is­sue here.

No, I suspected the question to be understood at another level, as probing the respondent’s knowledge of what we might call the “physics of the device.” I called to mind an image of a diaphragm, excited by the pressure variations of speaking, vibrating and driving a coil back and forth within a magnetic field… If this was what the speaker meant, then he was right: Most of us don’t know how our telephone works.

Indeed, I won­dered, does [the speaker] know how his tele­phone works? Does he know about the heuris­tics used to achieve op­ti­mum rout­ing for long dis­tance calls? Does he know about the in­tri­ca­cies of the al­go­rithms used for echo and noise sup­pres­sion? Does he know how a sig­nal is trans­mit­ted to and re­trieved from a satel­lite in or­bit? Does he know how AT&T, MCI, and the lo­cal phone com­pa­nies are able to use the same net­work si­mul­ta­ne­ously? Does he know how many op­er­a­tors are needed to keep this sys­tem work­ing, or what those re­pair peo­ple ac­tu­ally do when they climb a tele­phone pole? Does he know about cor­po­rate fi­nanc­ing, cap­i­tal in­vest­ment strate­gies, or the role of reg­u­la­tion in the func­tion­ing of this ex­pan­sive and so­phis­ti­cated com­mu­ni­ca­tion sys­tem?

Does any­one know how their tele­phone works?

There’s a technical interview question that goes along the lines of: “What happens when you type a URL into your browser’s address bar and hit enter?” You can talk about what happens at all sorts of different levels (e.g., HTTP, DNS, TCP, IP, …). But does anybody really understand all of the levels? Do you know about the interrupts that fire inside of your operating system when you actually strike the enter key? Do you know which modulation scheme is being used by the 802.11ax Wi-Fi protocol in your laptop right now? Could you explain the difference between quadrature amplitude modulation (QAM) and quadrature phase shift keying (QPSK), and could you determine which one your laptop is currently using? Are you familiar with the relaxed memory model of the ARM processor? How garbage collection works inside of the JVM? Do you understand how the field effect transistors inside the chip implement digital logic?

I remember talking to Brendan Gregg about how he conducted technical interviews, back when we both worked at Netflix. He told me that he was interested in identifying the limits of a candidate’s knowledge, and how they reacted when they reached that limit. So, he’d keep asking deeper questions about their area of knowledge until they reached a point where they didn’t know anymore. And then he’d see whether they would actually admit “I don’t know the answer to that”, or whether they would bluff. He knew that nobody understood the system all of the way down.

In their own ways, Wardley, Jacob, Perens, and Bucciarelli are all cor­rect.

Wardley’s right that it’s dangerous to build things where we don’t understand the underlying mechanism of how they actually work. This is precisely why “magic” is used as an epithet in our industry. Magic refers to frameworks that deliberately obscure the underlying mechanisms in service of making it easier to build within that framework. Ruby on Rails is the canonical example of a framework that uses magic.

Jacob is right that AI is chang­ing the way that nor­mal soft­ware de­vel­op­ment work gets done. It’s a new ca­pa­bil­ity that has proven it­self to be so use­ful that it clearly is­n’t go­ing away. Yes, it rep­re­sents a sig­nif­i­cant shift in how we build soft­ware, it moves us fur­ther away from how the un­der­ly­ing stuff ac­tu­ally works, but the ben­e­fits ex­ceed the risks.

Perens is right that the sce­nario that Wardley fears has, in some sense, al­ready come to pass. Modern CPU ar­chi­tec­tures and op­er­at­ing sys­tems con­tain sig­nif­i­cant com­plex­ity, and many soft­ware de­vel­op­ers are bliss­fully un­aware of how these things re­ally work. Yes, they have men­tal mod­els of how the sys­tem be­low them works, but those men­tal mod­els are in­cor­rect in fun­da­men­tal ways.

Finally, Bucciarelli is right that sys­tems like tele­phony are so in­her­ently com­plex, have been built on top of so many dif­fer­ent lay­ers in so many dif­fer­ent places, that no one per­son can ever ac­tu­ally un­der­stand how the whole thing works. This is the fun­da­men­tal na­ture of com­plex tech­nolo­gies: our knowl­edge of these sys­tems will al­ways be par­tial, at best. Yes, AI will make this sit­u­a­tion worse. But it’s a sit­u­a­tion that we’ve been in for a long time.

...

Read the original on surfingcomplexity.blog »

10 248 shares, 41 trendiness

Incident with Issues, Actions and Git Operations

...

Read the original on www.githubstatus.com »
