10 interesting stories served every morning and every evening.




1 2,647 shares, 101 trendiness

Moon – Bartosz Ciechanowski

Let’s take a look at the Moon as seen from space in all its sun­lit glory. You can drag it around to change your point of view, and you can also use the slider to con­trol the date and time:

In this con­ve­nient view, we can freely pan the cam­era around to see the Moon and its mar­velous craters and moun­tains from var­i­ous an­gles. Unfortunately, we don’t have that free­dom of mo­tion in our daily ex­pe­ri­ence — the Moon wan­ders on its own path across the daily and nightly skies.

We can simulate these travels below, where you can see the current position of the Moon in the sky. You can drag that panorama around to adjust your viewing direction — this lets you see the breadth of the sky both above and below the horizon. By dragging the sliders you can witness how the position of the Moon changes in the sky across the days and hours of your local time. As the Moon's placement in the sky shifts, the little arrow will guide you to its position.

You can also drag the little figurine on the globe in the bottom-right corner to see how the sky looks at that location on Earth. If your browser allows it, clicking or tapping the button will automatically put the figurine at your current location. This may all feel quite overwhelming at the moment, but we'll eventually see how all these pieces fit together:

Over the course of one day, the Moon travels on an arc in the sky, almost completing a loop around the Earth. As the days pass, the Moon's illumination also visibly changes.

You’ll prob­a­bly ad­mit that it’s a lit­tle hard to fo­cus on the tiny Moon as it shifts its po­si­tion in the sky. To make things eas­ier to see, I’ll zoom in the cam­era and lock its po­si­tion on the Moon:

Notice that across a single day the Moon seems to rotate, and over longer stretches of time it quite visibly wobbles. These wobbly variations let us occasionally see some hidden parts on the "edges" of the Moon, but our neighbor ultimately shows us only one of its sides. In our space-floating demo we could easily see the Moon from all sides, but on Earth we can never see most of the far side of the Moon.

Over the course of a month, the lighting on the Moon also changes dramatically. The line between the lit and unlit parts of the Moon, known as the terminator, sweeps across the Moon, revealing the details of its surface. Although the Moon has a spherical shape, the fully lit Moon looks more like a flat disk.

In this ar­ti­cle I’ll ex­plain all the ef­fects we’ve just seen, and we’ll also learn about grav­ity, ocean tides, and eclipses. Let’s be­gin by ex­plor­ing how ce­les­tial bod­ies move through space and how their mere pres­ence in­flu­ences the mo­tion of their neigh­bors.

Let me in­tro­duce a lit­tle cos­mic play­ground in which we’ll do our ex­per­i­ments. Inside it, I put a lit­tle planet that floats freely in space. You can drag the planet around to change its po­si­tion. The ar­row sym­bol­izes the ini­tial ve­loc­ity of this body — you can tweak this ve­loc­ity by drag­ging the dashed out­line at the end of the ar­row. To get things go­ing, you can press the but­ton in the bot­tom-left cor­ner:

Notice that I’m draw­ing a ghost trail be­hind the mov­ing planet, mak­ing it eas­ier to track its mo­tion. As you can see, once you let the planet go, it trav­els through space in a straight line, only to even­tu­ally get out of vis­i­ble bounds.

Let’s com­pli­cate things a lit­tle by adding an­other body to this sand­box. You can tweak the po­si­tions and ve­loc­i­ties of both bod­ies to see how their mu­tual pres­ence im­pacts one an­other. I’m also mark­ing the thin lines of tra­jec­to­ries that the bod­ies will take even be­fore you let things go, mak­ing it eas­ier to plan their mo­tion:

The mo­tion we see now is­n’t as straight­for­ward as be­fore. In some sce­nar­ios, the two bod­ies travel past each other af­ter tweak­ing their ini­tial tra­jec­to­ries. In other con­fig­u­ra­tions, both ob­jects roam through space to­gether, per­ma­nently locked in a swing­ing dance.

You may have also man­aged to make the two bod­ies run into each other. We’ll even­tu­ally see a more re­al­is­tic vi­su­al­iza­tion of that sce­nario, but in this sim­pli­fied sim­u­la­tion when two ob­jects col­lide, they just stick to­gether and con­tinue their cou­pled jour­ney.

What’s re­spon­si­ble for all these ef­fects is the force of grav­ity act­ing on the ob­jects. Let’s ex­plore that in­ter­ac­tion up close. As be­fore, you can drag the two bod­ies around, and you can also change their masses us­ing the slid­ers be­low:

The ar­rows rep­re­sent the force of grav­ity act­ing on the two bod­ies  — the longer the ar­row, the larger the force. For com­plete­ness, I’m dis­play­ing the val­ues and units of masses and dis­tances, but the num­bers aren’t par­tic­u­larly im­por­tant here. What mat­ters is that when we in­crease ei­ther the mass of the first body m1 or the mass of the sec­ond body m2, the force of grav­ity grows too.

Moreover, the mag­ni­tude of grav­ity also de­pends on the dis­tance r be­tween the ob­jects. As bod­ies move far­ther apart, the grav­ity weak­ens. Notice how the forces act­ing on each body have the same mag­ni­tude, but they point to­wards the other body, which in­di­cates an at­trac­tive force.

If you paid close at­ten­tion to the lengths of the ar­rows, you might have no­ticed that the force de­creases quite rapidly with dis­tance. We can vi­su­al­ize this with a plot, in which the white line shows the mag­ni­tude of grav­ity as a func­tion of dis­tance. More pre­cisely, it shows that grav­ity is in­versely pro­por­tional to the square of that dis­tance:

Let’s take a very brief math­e­mat­i­cal in­ter­lude to de­scribe what we’ve seen in more de­tail. All these de­pen­den­cies are cap­tured in the fol­low­ing equa­tion for the force of grav­ity F, be­tween two ob­jects with masses m1 and m2 sep­a­rated by dis­tance r:
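
In standard notation, that equation is Newton's law of universal gravitation:

$$F = G \, \frac{m_1 m_2}{r^2}$$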

The grav­i­ta­tional con­stant G seen in front of the right-hand side of the equa­tion is in­cred­i­bly small, mak­ing grav­ity a very weak force. We have no is­sues lift­ing every­day ob­jects de­spite the might of the mass of the en­tire Earth pulling them down.

While the strength of grav­ity be­tween any two bod­ies is equal, the re­sult­ing change in mo­tion is not. You may re­call from el­e­men­tary physics classes that force F is equal to mass m times ac­cel­er­a­tion a. We can en­cap­su­late this idea in a pair of sim­ple for­mu­las that tie these val­ues for the first and sec­ond body:
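
With a1 and a2 denoting the accelerations of the first and second body, those are:

$$F = m_1 a_1 \qquad\qquad F = m_2 a_2$$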

By plug­ging in the equa­tion for the force of grav­ity F and re­duc­ing the masses, we end up with a set of two equa­tions for ac­cel­er­a­tions of the bod­ies:
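
Substituting F = G m1 m2 / r² and canceling each body's own mass gives:

$$a_1 = G \, \frac{m_2}{r^2} \qquad\qquad a_2 = G \, \frac{m_1}{r^2}$$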

Notice that the ac­cel­er­a­tion of the first body a1 de­pends on the mass of the sec­ond body m2. Similarly, the ac­cel­er­a­tion of the sec­ond body a2 de­pends on the mass of the first body m1. Let’s see this in prac­tice in the demon­stra­tion be­low, where I’m tem­porar­ily mak­ing the big body twenty times more mas­sive than the small body:

Notice that the body with smaller mass dras­ti­cally changes its course, while the mo­tion of the larger body is only mar­gin­ally af­fected. This tracks with our day-to-day ex­pe­ri­ence, where every item left hang­ing in the air very vis­i­bly ac­cel­er­ates to­wards the stag­ger­ingly mas­sive Earth, but our planet does­n’t jump out of its way to meet the falling ob­ject.

Now that we un­der­stand that it’s the force of grav­ity that makes the bod­ies move to­wards each other, let’s do a bet­ter job of track­ing the mo­tions of these ob­jects over time. Right now our cam­era is fixed in space, so the two bod­ies of­ten fly out of vis­i­ble bounds. Thankfully, we can eas­ily fix this by mov­ing the cam­era with the bod­ies.

In the demon­stra­tion be­low, I’m pre­sent­ing the same sce­nario from two dif­fer­ent van­tage points. On the left, I’m show­ing the scene from the fa­mil­iar point of view that’s fixed in space — you can plan the tra­jec­to­ries of the two bod­ies on that side.

On the right, you can see this simulation from the point of view of the camera that's tied to the motion of these objects. I'm marking the position of that camera with a white dot on the thin line joining the bodies. By dragging the slider you can move the camera between them:

With the cam­era fol­low­ing the bod­ies we can now track their mo­tion for­ever. More im­por­tantly, we can also see the rel­a­tive mo­tion of the two ob­jects. When you make the bod­ies move to­gether, you can wit­ness how from the per­spec­tive of the teal body, it’s the yel­low body that or­bits around the teal body, but from the per­spec­tive of the yel­low body, it’s the other way around.

Better yet, if we po­si­tion the cam­era halfway, or even any­where else be­tween the two bod­ies, both ob­jects seem to or­bit the cam­era. The per­cep­tion of rel­a­tive mo­tion de­pends on the point of view, but there is one point that’s par­tic­u­larly use­ful for ob­ser­va­tion. In this next demon­stra­tion, I’ve added a lit­tle white trail to the cam­era it­self. Watch how the path of the cam­era in space changes as you repo­si­tion it with the slider:

In gen­eral, the cam­era tra­verses some squig­gly path in space. However, there is one spe­cial po­si­tion be­tween the two bod­ies for which the cam­era trav­els in a per­fectly straight line. This point is known as the barycen­ter, and it’s lo­cated at the cen­ter of mass of these ob­jects.

Let’s ex­plore the con­cept of the barycen­ter a lit­tle closer. In the demon­stra­tion be­low, you can once again drag the bod­ies around to change the dis­tance be­tween them, and you can also use the slid­ers to tweak their masses. The cen­ter of mass of these two bod­ies is marked with a black and white sym­bol:

The equa­tion in the bot­tom part ex­plains the place­ment of the cen­ter of mass of these two ob­jects — it is lo­cated at a point where its dis­tance from the first body r1 mul­ti­plied by that body’s mass m1, equals that point’s dis­tance from the sec­ond body r2 mul­ti­plied by its mass m2.

This sim­ple rule be­comes slightly more com­pli­cated when more than two bod­ies are in­volved. In those sce­nar­ios, the po­si­tion of the cen­ter of mass is the weighted av­er­age of the po­si­tions of all the bod­ies, where the masses of these bod­ies serve, very ap­pro­pri­ately, as weights.
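
In symbols, with r1 and r2 measured from each body to the barycenter, the two-body rule and its many-body generalization read:

$$m_1 r_1 = m_2 r_2 \qquad\qquad \vec{x}_{\mathrm{cm}} = \frac{\sum_i m_i \vec{x}_i}{\sum_i m_i}$$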

We'll only be interested in the center of mass of two bodies, so the demonstration we've just seen fits our needs well. Notice that as the bodies move farther away, the barycenter also migrates to stay at a constant proportion of the distance separating the objects. Moreover, if one of the bodies is much more massive than the other, the center of mass could lie inside that larger body.

In our space sim­u­la­tor, the mass of the teal body is three times the mass of the yel­low body, so the barycen­ter of this sys­tem lies three-quar­ters of the way be­tween the yel­low and teal ob­jects:

The motion of the barycenter shows us that the tangled dance of two celestial bodies hides a much simpler linear motion through space and some additional motion of the two bodies around that barycenter.

Let’s try to see that other mo­tion more clearly by mak­ing one more mod­i­fi­ca­tion to the right side of the demon­stra­tions we’ve seen. Notice that the trails left by the bod­ies linger in space, but ide­ally, we’d also want to see the paths taken by the bod­ies rel­a­tive to the mov­ing cam­era.

To make this work we can at­tach a lit­tle draw­ing plane to the cam­era it­self — I’m out­lin­ing that plane be­low with a thin rec­tan­gle. Then, as the bod­ies move around, they can trace their trails on that plane as well:

With this new method we can see the paths the bodies took relative to the moving camera. When seen from this perspective, we can finally reveal that, in most practical scenarios, the two orbiting bodies trace ellipses relative to each other.

Depending on the initial conditions, some of those ellipses are larger, and some are smaller. Some are almost circular, and some are quite elongated. Changing the position of the camera with the slider changes the relative sizes of these two ellipses, but they maintain their overall proportions. The ellipse of motion of one body seen from the perspective of the other is the same for both bodies; it just shifts in space.

As you may have seen on this blog be­fore, an el­lipse can be more for­mally char­ac­ter­ized by its ec­cen­tric­ity and the size of its semi-ma­jor axis, which you can con­trol us­ing the slid­ers be­low:

Eccentricity spec­i­fies how elon­gated an el­lipse is. It can be de­fined as the ra­tio of the length of the dark pink seg­ment to the length of the semi-ma­jor axis. That seg­ment spans the dis­tance be­tween the cen­ter of the el­lipse and one of the two fo­cus points, which are also jointly known as foci. When we watch or­bital mo­tion from the per­spec­tive of the or­bited body, that body is al­ways in one of the fo­cus points of the or­bital el­lipse of the or­bit­ing body.

I've also marked two special points on the orbital ellipse. At apoapsis, the orbiting body is at its farthest distance from the orbited body, and at periapsis the orbiting body is closest to that body. These two points are collectively known as apsides, and the line joining them is known as the line of apsides. The simple rule for remembering which apsis is which is that apoapsis is the one that's farther away from the orbited body.

We’ve just de­scribed the or­bital el­lipse and its ap­sides as seen from the point of view of the larger body, but in our cos­mic play­ground we’ve seen how mov­ing the cam­era around with a slider can change the per­cep­tion of mo­tion:

With a two-body sys­tem like this one we ac­tu­ally have some flex­i­bil­ity in de­scrib­ing which body or­bits which. We typ­i­cally say that it’s the less mas­sive ob­ject that or­bits the more mas­sive one, but the ob­server on the smaller body would just see the mo­tion of the larger neigh­bor around it.

For us, it will often be useful to describe things from the point of view of the barycenter — we've seen earlier how that special point lets us decompose the motion of two solitary bodies into movement along a straight line and orbiting motion around that barycenter.

That par­tic­u­lar view­point also lets us ex­plain an­other ir­reg­u­lar mo­tion we can see in these el­lip­ti­cal or­bits. Notice that as the two bod­ies are close to each other, they swing across their tra­jec­to­ries much faster.

You can see it best when looking at the dashed segments I've drawn on the elliptical orbits — traversal of each brighter or darker section takes the same amount of time. These lines are visibly longer when the bodies are close, which reflects their faster motion as they travel a longer distance over the same period.

This non-uni­form mo­tion can also be seen in the an­gu­lar ve­loc­ity of the or­bital mo­tion, which de­scribes how many de­grees per sec­ond an or­bit­ing body sweeps through. In this next demon­stra­tion the blue line ro­tates with con­stant an­gu­lar ve­loc­ity, so in every sec­ond it goes across the same num­ber of de­grees. As you can see, the or­ange line join­ing two bod­ies ro­tates with vary­ing speed:

Notice how the or­ange line is some­times ahead of and some­times be­hind the blue line, which shows that the or­bital mo­tion does­n’t have a con­stant an­gu­lar ve­loc­ity.

This un­usual be­hav­ior is more eas­ily ex­plained with the fol­low­ing con­trap­tion, where I put the two bod­ies on a gi­ant bar that spins around on an axis placed right at the cen­ter of mass of the two bod­ies. Using the slider you can change the dis­tance be­tween these ob­jects:

As the bod­ies get closer, the ro­ta­tion speeds up. Conversely, as the bod­ies move far­ther apart, the ro­ta­tion slows down. You can eas­ily recre­ate a ver­sion of this ex­per­i­ment by hold­ing heavy items in your hands and spin­ning on a desk chair with your arms spread out. As you pull them to­wards your torso, your ro­ta­tion will speed up.

These are ex­am­ples of con­ser­va­tion of an­gu­lar mo­men­tum in which the speed of rev­o­lu­tion and the mass dis­tri­b­u­tion of a sys­tem are in­her­ently tied to­gether. Broadly speak­ing, when we dou­ble the dis­tance from the axis of ro­ta­tion, the an­gu­lar ve­loc­ity be­comes four times smaller.
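
In symbols, for a compact mass m circling the axis at distance r with angular velocity ω, the conserved angular momentum L ties the two together, which is why doubling r makes ω four times smaller:

$$L = m r^2 \omega = \mathrm{const} \quad\Rightarrow\quad \omega \propto \frac{1}{r^2}$$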

The space play­grounds we’ve looked at ear­lier work just like the demon­stra­tion with the bar, but in­stead of a slider, it’s the force of grav­ity that de­ter­mines the dis­tance be­tween the bod­ies. Gravity pulls the ob­jects closer to­gether, in­creas­ing the speeds at which they swing by each other. As the bod­ies move past their clos­est dis­tance, that in­creased speed shoots them out away from each other and the cy­cle con­tin­ues.

The de­tails on how this ac­tion cre­ates el­lip­ti­cal paths are beau­ti­fully cov­ered in the video on Feynman’s Lost Lecture, but for our needs it will be enough to just wit­ness once more how all the ini­tial val­ues of masses, po­si­tions, and ve­loc­i­ties of the two bod­ies de­cide every­thing about their mo­tion:

With a firmer grasp on orbital motion in space, we can finally see how everything we've learned affects the movement of our planet and its closest celestial neighbor.

Let's first look at the Moon and Earth side by side to compare their masses and sizes in metric and imperial units:

The Earth’s mean ra­dius is only around 3.67 times larger than that of the Moon. Since the vol­ume of a sphere grows with the third power of its ra­dius, and the Earth is on av­er­age much denser, our plan­et’s mass ends up be­ing around 81.3 times larger than the Moon’s.
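
As a quick sanity check of those two numbers, here is a short Python snippet; the mean radii and mean densities plugged in below are commonly quoted reference values, not figures taken from the article:

```python
# Rough check that the Earth/Moon mass ratio comes out near 81, using
# commonly quoted mean radii (km) and mean densities (kg/m^3).
earth_radius, moon_radius = 6371.0, 1737.4      # km
earth_density, moon_density = 5514.0, 3344.0    # kg/m^3

radius_ratio = earth_radius / moon_radius        # ~3.67
volume_ratio = radius_ratio ** 3                 # volume scales with r^3
mass_ratio = volume_ratio * (earth_density / moon_density)

print(f"radius ratio: {radius_ratio:.2f}")       # ~3.67
print(f"mass ratio:   {mass_ratio:.1f}")         # ~81
```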

Let’s try to repli­cate this table in our space sim­u­la­tor, where I added two bod­ies with sizes and masses match­ing those of the Earth and the Moon. Let’s see how these val­ues af­fect the mo­tion of the two ob­jects:

With our sim­u­lated Earth be­ing so mas­sive, we can quite eas­ily make this Moon or­bit the Earth with var­i­ous el­lipses. Unfortunately, while this sim­u­la­tion cor­rectly mim­ics the rel­a­tive sizes of the real Earth and Moon, it does­n’t re­flect the cos­mic scale of the dis­tance be­tween these two bod­ies.

Let’s see how far away the Moon re­ally is. In the demon­stra­tion be­low, you can use the slider to zoom away from the Earth un­til the Moon’s po­si­tion be­comes vis­i­ble:

If you drag the slider all the way to the right, you'll notice that I'm actually marking three distances between the centers of the Earth and the Moon. The orbit of the Moon doesn't form a perfect circle, so the separating distance varies as the Moon gets closest to the Earth at periapsis, and farthest away at apoapsis. The values shown here, in miles or kilometers, are the predicted maximum, mean, and minimum of that distance in the 21st century.

Let's see the orbit of the Moon in more detail. The following demonstration shows the motion of our neighbor from the perspective of the Earth itself. You can drag the demonstration around to change the viewing angle. The slider lets you control the speed of time:

With all the sizes and dis­tances repli­cated re­al­is­ti­cally, it may be hard to see these tiny bod­ies. To make things more leg­i­ble, you can press the but­ton in the bot­tom right cor­ner to tog­gle be­tween the real and ten times larger ar­ti­fi­cial siz­ing of these bod­ies.

With this three di­men­sional view we can now see that the Moon’s mo­tion lies in the or­bital plane that I’m mark­ing with a faint gray disc. To help us ori­ent our­selves in space, I’ve also added a line that marks a fixed ref­er­ence di­rec­tion point­ing at some very dis­tant stars.

On average, it takes the Moon 27.322 days (27 days, 7 hours, and 44 minutes) to complete the whole orbit, as measured by crossings of the reference line. That period is known as the sidereal month, where sidereal means "with respect to stars". This is only one of the four different types of lunar months that we'll explore in this article.

As the Moon or­bits the Earth, it traces the fa­mil­iar el­lip­ti­cal shape. We can quite clearly see how the el­lip­ti­cal ec­cen­tric­ity shifts the Moon’s path rel­a­tive to the per­fect cir­cle of the vi­su­al­iza­tion of the or­bital plane that I’ve drawn above.

Let’s take a closer look at some of the pa­ra­me­ters of the Moon’s or­bit. In this next demon­stra­tion I’m us­ing the cur­rent po­si­tion and ve­loc­ity of the Moon to cal­cu­late an el­lipse that best de­scribes the Moon’s or­bit at that mo­ment of time. I’m draw­ing this el­lipse with a dashed line, while the solid trail shows the ac­tual path the Moon took:

Since we’re mak­ing the el­lipse fit the cur­rent or­bital mo­tion, this ide­al­ized el­lipse matches the ac­tual trail very well in the vicin­ity of the or­bit­ing Moon. However, far­ther away from the Moon this best-fit­ting el­lipse di­verges from the path the Moon ac­tu­ally took. This shows us that while it’s pretty close, the Moon’s tra­jec­tory does­n’t form a per­fect el­lipse.

As we see in the labels, both eccentricity and the length of the semi-major axis of this "currently best-fitting" ellipse vary over time. Measured over a long period, the eccentricity of the Moon's orbit has the average value of 0.0549, while the semi-major axis has the average length of 239,071 mi (384,748 km).
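
If you'd like to reproduce this kind of fit yourself, here is a minimal sketch of computing the osculating (instantaneous best-fit) ellipse from a single position and velocity using the standard two-body formulas; the gravitational parameter and the sample state vector below are illustrative assumptions, not values used by the article:

```python
import numpy as np

# Osculating orbital elements from a position r_vec (m) and velocity
# v_vec (m/s) relative to the Earth. MU is Earth's gravitational
# parameter G*M; both it and the sample state are illustrative.
MU = 3.986004418e14  # m^3/s^2

def osculating_elements(r_vec, v_vec, mu=MU):
    r = np.linalg.norm(r_vec)
    v = np.linalg.norm(v_vec)
    # Semi-major axis from the vis-viva equation: v^2 = mu * (2/r - 1/a)
    a = 1.0 / (2.0 / r - v**2 / mu)
    # Eccentricity vector points from the focus toward periapsis
    e_vec = ((v**2 - mu / r) * r_vec - np.dot(r_vec, v_vec) * v_vec) / mu
    return a, np.linalg.norm(e_vec)

# Illustrative state: roughly lunar distance and orbital speed
r_vec = np.array([384_400e3, 0.0, 0.0])
v_vec = np.array([0.0, 1.022e3, 0.0])
a, e = osculating_elements(r_vec, v_vec)
print(f"semi-major axis: {a / 1e3:,.0f} km, eccentricity: {e:.3f}")
```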

Moreover, the fit­ted or­bital el­lipse not only changes its shape, but also its ori­en­ta­tion. The line of ap­sides of the el­lipse which joins the apoap­sis and the pe­ri­ap­sis wob­bles over time in a quite chaotic man­ner.

These ef­fects hap­pen be­cause the Earth and the Moon aren’t the sole bod­ies in space — they’re both part of the Solar System. True to its name, the Solar System is dom­i­nated by the Sun it­self, and it’s pri­mar­ily the ef­fects of the Sun’s grav­ity that cause all these per­tur­ba­tions of the Moon’s or­bit.

We’ll soon ex­plore the in­flu­ence of the Sun in more de­tail, but for now let’s fo­cus on the changes of the po­si­tions of apoap­sis and pe­ri­ap­sis. In the demon­stra­tion be­low, I’ve made time flow even faster than be­fore. Additionally, every time the Moon is at its clos­est to the Earth, that is when it’s at the pe­ri­ap­sis, I’m leav­ing a lit­tle marker on the or­bital plane:

Notice how the line of apsides wobbles back and forth, but across many months it overall makes steady progress, rotating, when seen from above, in the counter-clockwise direction. Averaged over a long time, this line of apsides makes a full rotation in 8.85 years (8 years and 310 days), which defines the period of the Moon's apsidal precession.

The markers that I drop when the Moon crosses the periapsis measure the anomalistic month. Notice that the lengths of anomalistic months vary a lot, as they happen on different parts of the orbit. Sometimes it takes the Moon less than 25 days to get closest to the Earth again, but sometimes it takes over 28 days to reach periapsis again. Over a long time the anomalistic month has a mean length of 27.554 days (27 days, 13 hours, and 3 minutes).

This period is a bit longer than the 27.322 days (27 days, 7 hours, and 44 minutes) of the sidereal month, which is tracked by the crossings of the reference line. When averaged over time, the line of apsides rotates steadily in the same direction as the Moon's orbital motion, so it takes the Moon a bit more time to catch up to periapsis.
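
The two periods are consistent with the precession rate quoted above: for a line of apsides that rotates steadily in the direction of orbital motion, 1/T_anomalistic ≈ 1/T_sidereal - 1/T_precession. A quick back-of-the-envelope check, assuming perfectly uniform precession:

```python
# Check: steady apsidal precession lengthens the anomalistic month via
#   1/T_anomalistic = 1/T_sidereal - 1/T_precession   (approximately)
T_sidereal = 27.322            # days
T_precession = 8.85 * 365.25   # apsidal precession period, in days

T_anomalistic = 1.0 / (1.0 / T_sidereal - 1.0 / T_precession)
print(f"{T_anomalistic:.3f} days")  # ~27.55 days, close to the quoted 27.554
```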

All the demon­stra­tions we’ve seen also show one more ef­fect that we did­n’t ac­count for in our sim­ple play­ground sim­u­la­tions — both the Earth and the Moon spin around their axes. You can see this more clearly in the demon­stra­tion be­low where I glued a blue ar­row to the sur­face of the Earth, and a gray ar­row to the sur­face of the Moon:

When viewed from the side, we can see that the axes of rotation of these two bodies aren't neatly perpendicular to the orbital plane, and they also spin at very different rates. Our planet takes roughly 23.93 hours (23 hours and 56 minutes), or almost one day, to complete a full revolution and point towards the reference direction again. The Moon rotates much more slowly, taking 27.322 days (27 days, 7 hours, and 44 minutes) to revolve just once and align with that direction again.

From above we can see that the gray ar­row fixed to the Moon’s sur­face gen­er­ally points to­wards the Earth, as in­di­cated by the thin line join­ing the two bod­ies. If you pay close at­ten­tion, you’ll no­tice that this ar­row is some­times point­ing a bit ahead of that di­rec­tion and some­times a bit be­hind that di­rec­tion.

This is a con­se­quence of the Moon’s non-cir­cu­lar or­bit — we’ve seen ear­lier how the an­gu­lar ve­loc­ity of an or­bit­ing body changes as it sweeps through its or­bital el­lipse. The Moon ro­tates around its axis with more or less con­stant speed, but the Moon’s an­gu­lar po­si­tion rel­a­tive to the Earth does­n’t ad­vance at a con­stant rate. As a re­sult, the two ro­tat­ing mo­tions don’t al­ways per­fectly can­cel each other out.

In a close-up view of the bod­ies you might have also no­ticed that the ro­ta­tion axis of the Moon is tilted rel­a­tive to its or­bital plane. Similarly, the axis of ro­ta­tion of our planet is also tilted rel­a­tive to that plane. Let’s briefly switch our point of view to align our­selves straight-up with the Earth’s ro­ta­tion axis:

From this perspective we can see that the Moon's orbital plane is inclined relative to our planet. Notice how the Moon's position relative to the Earth changes during its orbital motion — it is sometimes "above" and sometimes "below" our planet, revealing the truly three-dimensional aspects of the Moon's motion.

All the or­bital ob­ser­va­tions we’ve made will help to ex­plain some of the ef­fects we’ve seen at the be­gin­ning of this ar­ti­cle, where we looked at the Moon through the eyes of an ob­server on the ground. Before we in­ves­ti­gate these ef­fects, we need to build a bit more in­tu­ition on how ob­jects in space look to some­one view­ing them from the sur­face of Earth.

Let’s first place our­selves on Earth and look at the sky in which I ar­ti­fi­cially put three col­or­ful ce­les­tial bod­ies. You can drag the demon­stra­tion around to change which part of the sky you’re look­ing at. If you lose track of these bod­ies, the lit­tle ar­rows will guide you back to their area of the sky:

Although the mark­ers of the com­pass di­rec­tions are of some help, it may be quite hard to grasp how this view from the Earth’s sur­face cor­re­sponds to the more ex­ter­nal view from space we’ve got­ten used to.

Let me clar­ify things in the next demon­stra­tion, where the left side shows the same view we’ve just seen, and the right side shows the same scene, but as seen from space. I’ve also out­lined the sky view on the left with the four col­ored lines — as you pan around the land­scape on the left, you can see that square out­line re­flected on the right. I’ve also added a fig­urine that rep­re­sents a vastly en­larged ob­server stand­ing on the ground. The fig­urine’s body and its right hand al­ways point in the cur­rent di­rec­tion of ob­ser­va­tion:

With that ex­ter­nal view, we can see how the ob­server on the ground can’t see the sky in every pos­si­ble di­rec­tion. Half of it is ob­scured by the Earth it­self, with the hori­zon clip­ping the whole breadth of the sur­round­ing sky to only the vis­i­ble hemi­sphere.

Moreover, notice how the actual size of an object doesn't match its apparent size in the Earthly observer's sky. For example, the yellow and teal bodies are the same physical size, but the latter looks smaller in the sky. Similarly, the pink body is physically larger than the yellow one, but they appear similar in size from the observer's point of view.

We can un­der­stand these siz­ing ef­fects with the help of cones that shoot out from the po­si­tion of the ob­server to­wards the bod­ies in space. Note that these cones start on the ground here, be­cause the ac­tual ob­server is much smaller than the gi­gan­tic il­lus­tra­tive fig­urine.

The size of the in­ter­sec­tion of those cones with the hemi­sphere of the sky, or the size of the pro­jected area, de­ter­mines the vis­i­ble size. Intuitively, the far­ther away the ob­ject, the smaller it ap­pears. If the pro­jec­tion oc­cu­pies a larger frac­tion of the to­tal hemi­sphere, the ob­ject will look larger as well.

We can con­ve­niently de­scribe the size of ob­jects in the sky by mea­sur­ing the an­gle spanned by the vis­i­ble cone. In the demon­stra­tion be­low, I’m show­ing a flat side view of this cone. You can drag the yel­low body around to change its dis­tance from the ob­server. You can also use the slider to change the size of that body:

The closer the ob­ject is to the ob­server, or the larger the body, the greater the an­gle of the vis­i­ble cone. That an­gle is known as the an­gu­lar di­am­e­ter or an­gu­lar size of the ob­served ob­ject.
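
In symbols, for a body of physical diameter d observed from distance D, the angular diameter is:

$$\delta = 2 \arctan\!\left(\frac{d}{2D}\right) \approx \frac{d}{D} \quad \text{(for small angles)}$$

Plugging in the Moon's diameter of roughly 3,474 km and its mean distance of about 384,400 km gives δ ≈ 0.52°, which is why the Moon spans about half a degree of our sky.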

Having ex­pe­ri­enced how ob­jects in the night sky may look at a fixed mo­ment in time, let’s see how the Earth’s ro­ta­tion af­fects ob­ser­va­tions done from the ground. In the demon­stra­tion be­low, you can scrub through time with the slider to wit­ness the ef­fects of the spin of our planet:

This scene may seem a bit con­trived, be­cause the three ob­jects are just mag­i­cally float­ing in space at fixed po­si­tions. Fortunately, it’s a de­cent rep­re­sen­ta­tion of how all the stars in the night sky ap­pear to Earthly ob­servers — they’re dis­tant enough that over the course of a day they es­sen­tially don’t move rel­a­tive to the Earth’s cen­ter. As our planet spins, these three ob­jects seem to rise over the hori­zon, travel across the vis­i­ble sky, and then set be­low the hori­zon again.

You’ll prob­a­bly agree that it’s a lit­tle an­noy­ing to have to man­u­ally keep pan­ning through the night sky to look at these ob­jects, so on the left side of this next demon­stra­tion I’m au­to­mat­i­cally ad­just­ing the view­ing an­gle to track the teal body. On the right side, I’m lock­ing the cam­era on the fig­urine it­self. Don’t be mis­led by what you see here — the Earth is still ro­tat­ing around its axis, the cam­era just ro­tates with it:

As seen through the ob­server’s eyes on the left side, the other ob­jects now seem to ro­tate around the teal one, but this is purely a con­se­quence of the ob­server turn­ing on the ground to keep fac­ing the teal body.

You may have ex­pe­ri­enced some­thing sim­i­lar when watch­ing an air­plane fly­ing over your head. As the plane is ap­proach­ing, its front is closer to you and its tail is in the back, but af­ter the plane has passed over, you see the plane’s tail as be­ing closer to you, and its front is more dis­tant. In your eyes the plane has ro­tated, but in fact the plane has kept its course the en­tire time, and it was you who turned to keep an eye on it.

When these ce­les­tial bod­ies dis­ap­pear be­neath the hori­zon, it be­comes im­pos­si­ble to track them, but thank­fully in these com­puter sim­u­la­tions I can make the Earth trans­par­ent, giv­ing us an un­ob­structed view of the full sphere of the sur­round­ing space:

With this approach we can now see the entire trajectory of the three objects as an observer on Earth sees them. Because these objects don't move relative to the center of our planet, they travel on closed paths, returning to where they came from after the Earth completes one revolution around its axis over the course of 23.93 hours (23 hours and 56 minutes).

Let’s bring back the Moon into the pic­ture. In the sim­u­la­tion be­low, we can see the Moon in the starry sky as seen from the sur­face of the Earth. Note that I re­moved all the vi­sual ef­fects re­lated to sun­light, in­clud­ing the day­time blue sky and any il­lu­mi­na­tion changes on the sur­face of the Moon it­self.

...

Read the original on ciechanow.ski »

2 1,330 shares, 59 trendiness

OpenAI o3 Breakthrough High Score on ARC-AGI-Pub

OpenAI’s new o3 sys­tem - trained on the ARC-AGI-1 Public Training set - has scored a break­through 75.7% on the Semi-Private Evaluation set at our stated pub­lic leader­board $10k com­pute limit. A high-com­pute (172x) o3 con­fig­u­ra­tion scored 87.5%.

This is a sur­pris­ing and im­por­tant step-func­tion in­crease in AI ca­pa­bil­i­ties, show­ing novel task adap­ta­tion abil­ity never seen be­fore in the GPT-family mod­els. For con­text, ARC-AGI-1 took 4 years to go from 0% with GPT-3 in 2020 to 5% in 2024 with GPT-4o. All in­tu­ition about AI ca­pa­bil­i­ties will need to get up­dated for o3.

The mis­sion of ARC Prize goes be­yond our first bench­mark: to be a North Star to­wards AGI. And we’re ex­cited to be work­ing with the OpenAI team and oth­ers next year to con­tinue to de­sign next-gen, en­dur­ing AGI bench­marks.

ARC-AGI-2 (same for­mat - ver­i­fied easy for hu­mans, harder for AI) will launch along­side ARC Prize 2025. We’re com­mit­ted to run­ning the Grand Prize com­pe­ti­tion un­til a high-ef­fi­ciency, open-source so­lu­tion scor­ing 85% is cre­ated.

Read on for the full test­ing re­port.

We tested o3 against two ARC-AGI datasets:

At OpenAI’s di­rec­tion, we tested at two lev­els of com­pute with vari­able sam­ple sizes: 6 (high-efficiency) and 1024 (low-efficiency, 172x com­pute).

Here are the re­sults.

Note: o3 high-com­pute costs not avail­able as pric­ing and fea­ture avail­abil­ity is still TBD. The amount of com­pute was roughly 172x the low-com­pute con­fig­u­ra­tion.

Note on "tuned": OpenAI shared they trained the o3 we tested on 75% of the Public Training set. They have not shared more details. We have not yet tested the ARC-untrained model to understand how much of the performance is due to ARC-AGI data.

Due to vari­able in­fer­ence bud­get, ef­fi­ciency (e.g., com­pute cost) is now a re­quired met­ric when re­port­ing per­for­mance. We’ve doc­u­mented both the to­tal costs and the cost per task as an ini­tial proxy for ef­fi­ciency. As an in­dus­try, we’ll need to fig­ure out what met­ric best tracks ef­fi­ciency, but di­rec­tion­ally, cost is a solid start­ing point.

The high-efficiency score of 75.7% is within the budget rules of ARC-AGI-Pub (total costs stayed under the stated $10k compute limit).

The low-ef­fi­ciency score of 87.5% is quite ex­pen­sive, but still shows that per­for­mance on novel tasks does im­prove with in­creased com­pute (at least up to this level.)

Despite the significant cost per task, these numbers aren't just the result of applying brute force compute to the benchmark. OpenAI's new o3 model represents a significant leap forward in AI's ability to adapt to novel tasks. This is not merely incremental improvement, but a genuine breakthrough, marking a qualitative shift in AI capabilities compared to the prior limitations of LLMs. o3 is a system capable of adapting to tasks it has never encountered before, arguably approaching human-level performance in the ARC-AGI domain.

Of course, such gen­er­al­ity comes at a steep cost, and would­n’t quite be eco­nom­i­cal yet: you could pay a hu­man to solve ARC-AGI tasks for roughly $5 per task (we know, we did that), while con­sum­ing mere cents in en­ergy. Meanwhile o3 re­quires $17-20 per task in the low-com­pute mode. But cost-per­for­mance will likely im­prove quite dra­mat­i­cally over the next few months and years, so you should plan for these ca­pa­bil­i­ties to be­come com­pet­i­tive with hu­man work within a fairly short time­line.

o3's improvement over the GPT series proves that architecture is everything. You couldn't throw more compute at GPT-4 and get these results. Simply scaling up the things we were doing from 2019 to 2023 — take the same architecture, train a bigger version on more data — is not enough. Further progress is about new ideas.

ARC-AGI serves as a crit­i­cal bench­mark for de­tect­ing such break­throughs, high­light­ing gen­er­al­iza­tion power in a way that sat­u­rated or less de­mand­ing bench­marks can­not. However, it is im­por­tant to note that ARC-AGI is not an acid test for AGI — as we’ve re­peated dozens of times this year. It’s a re­search tool de­signed to fo­cus at­ten­tion on the most chal­leng­ing un­solved prob­lems in AI, a role it has ful­filled well over the past five years.

Passing ARC-AGI does not equate to achiev­ing AGI, and, as a mat­ter of fact, I don’t think o3 is AGI yet. o3 still fails on some very easy tasks, in­di­cat­ing fun­da­men­tal dif­fer­ences with hu­man in­tel­li­gence.

Furthermore, early data points sug­gest that the up­com­ing ARC-AGI-2 bench­mark will still pose a sig­nif­i­cant chal­lenge to o3, po­ten­tially re­duc­ing its score to un­der 30% even at high com­pute (while a smart hu­man would still be able to score over 95% with no train­ing). This demon­strates the con­tin­ued pos­si­bil­ity of cre­at­ing chal­leng­ing, un­sat­u­rated bench­marks with­out hav­ing to rely on ex­pert do­main knowl­edge. You’ll know AGI is here when the ex­er­cise of cre­at­ing tasks that are easy for reg­u­lar hu­mans but hard for AI be­comes sim­ply im­pos­si­ble.

Why does o3 score so much higher than o1? And why did o1 score so much higher than GPT-4o in the first place? I think this se­ries of re­sults pro­vides in­valu­able data points for the on­go­ing pur­suit of AGI.

My mental model for LLMs is that they work as a repository of vector programs. When prompted, they will fetch the program that your prompt maps to and "execute" it on the input at hand. LLMs are a way to store and operationalize millions of useful mini-programs via passive exposure to human-generated content.

This "memorize, fetch, apply" paradigm can achieve arbitrary levels of skills at arbitrary tasks given appropriate training data, but it cannot adapt to novelty or pick up new skills on the fly (which is to say that there is no fluid intelligence at play here.) This has been exemplified by the low performance of LLMs on ARC-AGI, the only benchmark specifically designed to measure adaptability to novelty — GPT-3 scored 0, GPT-4 scored near 0, GPT-4o got to 5%. Scaling up these models to the limits of what's possible wasn't getting ARC-AGI numbers anywhere near what basic brute enumeration could achieve years ago (up to 50%).

To adapt to nov­elty, you need two things. First, you need knowl­edge — a set of reusable func­tions or pro­grams to draw upon. LLMs have more than enough of that. Second, you need the abil­ity to re­com­bine these func­tions into a brand new pro­gram when fac­ing a new task — a pro­gram that mod­els the task at hand. Program syn­the­sis. LLMs have long lacked this fea­ture. The o se­ries of mod­els fixes that.

For now, we can only speculate about the exact specifics of how o3 works. But o3's core mechanism appears to be natural language program search and execution within token space — at test time, the model searches over the space of possible Chains of Thought (CoTs) describing the steps required to solve the task, in a fashion perhaps not too dissimilar to AlphaZero-style Monte-Carlo tree search. In the case of o3, the search is presumably guided by some kind of evaluator model. To note, Demis Hassabis hinted back in a June 2023 interview that DeepMind had been researching this very idea — this line of work has been a long time coming.

So while sin­gle-gen­er­a­tion LLMs strug­gle with nov­elty, o3 over­comes this by gen­er­at­ing and ex­e­cut­ing its own pro­grams, where the pro­gram it­self (the CoT) be­comes the ar­ti­fact of knowl­edge re­com­bi­na­tion. Although this is not the only vi­able ap­proach to test-time knowl­edge re­com­bi­na­tion (you could also do test-time train­ing, or search in la­tent space), it rep­re­sents the cur­rent state-of-the-art as per these new ARC-AGI num­bers.

Effectively, o3 represents a form of deep learning-guided program search. The model does test-time search over a space of "programs" (in this case, natural language programs — the space of CoTs that describe the steps to solve the task at hand), guided by a deep learning prior (the base LLM). The reason solving a single ARC-AGI task can end up taking tens of millions of tokens and costing thousands of dollars is that this search process has to explore an enormous number of paths through program space — including backtracking.
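
To make the general idea concrete, here is a deliberately simplified sketch of what search over natural-language "programs" guided by a learned evaluator could look like. Every name in it (generate_candidates, evaluator_score, the beam width) is hypothetical and only illustrates the shape of such a search under the assumptions described above; it is not a description of how o3 actually works.

```python
# Toy sketch of test-time search over natural-language "programs"
# (chains of thought), guided by an evaluator. Hypothetical pseudocode
# made runnable -- NOT o3's real implementation.
from dataclasses import dataclass

@dataclass
class Candidate:
    chain_of_thought: str   # the "program": natural-language reasoning steps
    score: float            # evaluator's estimate of how promising it is

def generate_candidates(task: str, prefix: str, n: int) -> list[str]:
    # Stand-in for sampling continuations from a base LLM.
    return [f"{prefix} step[{i}] for task: {task}" for i in range(n)]

def evaluator_score(task: str, chain_of_thought: str) -> float:
    # Stand-in for a learned evaluator model scoring a partial solution.
    return -len(chain_of_thought)  # placeholder heuristic

def search(task: str, beam_width: int = 4, depth: int = 3) -> Candidate:
    beam = [Candidate("", 0.0)]
    for _ in range(depth):
        expansions = [
            Candidate(cot, evaluator_score(task, cot))
            for cand in beam
            for cot in generate_candidates(task, cand.chain_of_thought, beam_width)
        ]
        # Keep only the most promising partial chains; a branch that was
        # favored earlier can fall out of the beam, i.e. be backtracked.
        beam = sorted(expansions, key=lambda c: c.score, reverse=True)[:beam_width]
    return beam[0]

print(search("example ARC task").chain_of_thought)
```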

There are, however, two significant differences between what's happening here and what I meant when I previously described "deep learning-guided program search" as the best path to get to AGI. Crucially, the programs generated by o3 are natural language instructions (to be "executed" by an LLM) rather than executable symbolic programs. This means two things. First, that they cannot make contact with reality via execution and direct evaluation on the task — instead, they must be evaluated for fitness via another model, and the evaluation, lacking such grounding, might go wrong when operating out of distribution. Second, the system cannot autonomously acquire the ability to generate and evaluate these programs (the way a system like AlphaZero can learn to play a board game on its own.) Instead, it is reliant on expert-labeled, human-generated CoT data.

It’s not yet clear what the ex­act lim­i­ta­tions of the new sys­tem are and how far it might scale. We’ll need fur­ther test­ing to find out. Regardless, the cur­rent per­for­mance rep­re­sents a re­mark­able achieve­ment, and a clear con­fir­ma­tion that in­tu­ition-guided test-time search over pro­gram space is a pow­er­ful par­a­digm to build AI sys­tems that can adapt to ar­bi­trary tasks.

First of all, open-source replication of o3, facilitated by the ARC Prize competition in 2025, will be crucial to move the research community forward. A thorough analysis of o3's strengths and limitations is necessary to understand its scaling behavior and the nature of its potential bottlenecks, and to anticipate what abilities further developments might unlock.

Moreover, ARC-AGI-1 is now saturating — besides o3's new score, the fact is that a large ensemble of low-compute Kaggle solutions can now score 81% on the private eval.

We’re go­ing to be rais­ing the bar with a new ver­sion — ARC-AGI-2 - which has been in the works since 2022. It promises a ma­jor re­set of the state-of-the-art. We want it to push the bound­aries of AGI re­search with hard, high-sig­nal evals that high­light cur­rent AI lim­i­ta­tions.

Our early ARC-AGI-2 test­ing sug­gests it will be use­ful and ex­tremely chal­leng­ing, even for o3. And, of course, ARC Prize’s ob­jec­tive is to pro­duce a high-ef­fi­ciency and open-source so­lu­tion in or­der to win the Grand Prize. We cur­rently in­tend to launch ARC-AGI-2 along­side ARC Prize 2025 (estimated launch: late Q1).

Going for­ward, the ARC Prize Foundation will con­tinue to cre­ate new bench­marks to fo­cus the at­ten­tion of re­searchers on the hard­est un­solved prob­lems on the way to AGI. We’ve started work on a third-gen­er­a­tion bench­mark which de­parts com­pletely from the 2019 ARC-AGI for­mat and in­cor­po­rates some ex­cit­ing new ideas.

Today, we’re also re­leas­ing data (results, at­tempts, and prompt) from our high-com­pute o3 test­ing and would like your help to an­a­lyze the re­sults. In par­tic­u­lar, we are very cu­ri­ous about the ~9% set of Public Eval tasks o3 was un­able to solve, even with lots of com­pute, yet are straight­for­ward for hu­mans.

We in­vite the com­mu­nity to help us as­sess the char­ac­ter­is­tics of both solved and un­solved tasks.

To get your ideas flow­ing, here are 3 ex­am­ples of tasks un­solved by high-com­pute o3.

See our full set of o3 test­ing data.

Here’s the prompt that was used in test­ing.

We’ve also cre­ated a new chan­nel in our Discord named oai-analy­sis and we’d love to hear your analy­sis and in­sights there. Or tag us on X/Twitter @arcprize.

To sum up — o3 rep­re­sents a sig­nif­i­cant leap for­ward. Its per­for­mance on ARC-AGI high­lights a gen­uine break­through in adapt­abil­ity and gen­er­al­iza­tion, in a way that no other bench­mark could have made as ex­plicit.

o3 fixes the fun­da­men­tal lim­i­ta­tion of the LLM par­a­digm — the in­abil­ity to re­com­bine knowl­edge at test time — and it does so via a form of LLM-guided nat­ural lan­guage pro­gram search. This is not just in­cre­men­tal progress; it is new ter­ri­tory, and it de­mands se­ri­ous sci­en­tific at­ten­tion.


...

Read the original on arcprize.org »

3 978 shares, 3 trendiness

OpenAI whistleblower found dead in San Francisco apartment

SAN FRANCISCO — A former OpenAI researcher known for blowing the whistle on the blockbuster artificial intelligence company, which is facing a swell of lawsuits over its business model, has died, authorities confirmed this week.

Suchir Balaji, 26, was found dead in­side his Buchanan Street apart­ment on Nov. 26, San Francisco po­lice and the Office of the Chief Medical Examiner said. Police had been called to the Lower Haight res­i­dence at about 1 p.m. that day, af­ter re­ceiv­ing a call ask­ing of­fi­cers to check on his well-be­ing, a po­lice spokesper­son said.

The medical examiner's office determined the manner of death to be suicide, and police officials this week said there is "currently, no evidence of foul play."

Information he held was ex­pected to play a key part in law­suits against the San Francisco-based com­pany.

Balaji’s death comes three months af­ter he pub­licly ac­cused OpenAI of vi­o­lat­ing U. S. copy­right law while de­vel­op­ing ChatGPT, a gen­er­a­tive ar­ti­fi­cial in­tel­li­gence pro­gram that has be­come a mon­ey­mak­ing sen­sa­tion used by hun­dreds of mil­lions of peo­ple across the world.

Its pub­lic re­lease in late 2022 spurred a tor­rent of law­suits against OpenAI from au­thors, com­puter pro­gram­mers and jour­nal­ists, who say the com­pany il­le­gally stole their copy­righted ma­te­r­ial to train its pro­gram and el­e­vate its value past $150 bil­lion.

The Mercury News and seven sis­ter news out­lets are among sev­eral news­pa­pers, in­clud­ing the New York Times, to sue OpenAI in the past year.

In an in­ter­view with the New York Times pub­lished Oct. 23, Balaji ar­gued OpenAI was harm­ing busi­nesses and en­tre­pre­neurs whose data were used to train ChatGPT.

"If you believe what I believe, you have to just leave the company," he told the outlet, adding that "this is not a sustainable model for the internet ecosystem as a whole."

Balaji grew up in Cupertino before attending UC Berkeley to study computer science. It was then he became a believer in the potential benefits that artificial intelligence could offer society, including its ability to cure diseases and stop aging, the Times reported. "I thought we could invent some kind of scientist that could help solve them," he told the newspaper.

But his out­look be­gan to sour in 2022, two years af­ter join­ing OpenAI as a re­searcher. He grew par­tic­u­larly con­cerned about his as­sign­ment of gath­er­ing data from the in­ter­net for the com­pa­ny’s GPT-4 pro­gram, which an­a­lyzed text from nearly the en­tire in­ter­net to train its ar­ti­fi­cial in­tel­li­gence pro­gram, the news out­let re­ported.

The practice, he told the Times, ran afoul of the country's "fair use" laws governing how people can use previously published work. In late October, he posted an analysis on his personal website arguing that point.

"No known factors seem to weigh in favor of ChatGPT being a fair use of its training data," Balaji wrote. "That being said, none of the arguments here are fundamentally specific to ChatGPT either, and similar arguments could be made for many generative AI products in a wide variety of domains."

Reached by this news agency, Balaji’s mother re­quested pri­vacy while griev­ing the death of her son.

In a Nov. 18 letter filed in federal court, attorneys for The New York Times named Balaji as someone who had "unique and relevant documents" that would support their case against OpenAI. He was among at least 12 people — many of them past or present OpenAI employees — the newspaper had named in court filings as having material helpful to their case, ahead of depositions.

Generative ar­ti­fi­cial in­tel­li­gence pro­grams work by an­a­lyz­ing an im­mense amount of data from the in­ter­net and us­ing it to an­swer prompts sub­mit­ted by users, or to cre­ate text, im­ages or videos.

When OpenAI re­leased its ChatGPT pro­gram in late 2022, it tur­bocharged an in­dus­try of com­pa­nies seek­ing to write es­says, make art and cre­ate com­puter code. Many of the most valu­able com­pa­nies in the world now work in the field of ar­ti­fi­cial in­tel­li­gence, or man­u­fac­ture the com­puter chips needed to run those pro­grams. OpenAI’s own value nearly dou­bled in the past year.

News outlets have argued that OpenAI and Microsoft — which is in business with OpenAI and also has been sued by The Mercury News — have plagiarized and stolen their articles, undermining the outlets' business models.

"Microsoft and OpenAI simply take the work product of reporters, journalists, editorial writers, editors and others who contribute to the work of local newspapers — all without any regard for the efforts, much less the legal rights, of those who create and publish the news on which local communities rely," the newspapers' lawsuit said.

OpenAI has staunchly refuted those claims, stressing that all of its work remains legal under "fair use" laws.

"We see immense potential for AI tools like ChatGPT to deepen publishers' relationships with readers and enhance the news experience," the company said when the lawsuit was filed.

If you or some­one you know is strug­gling with feel­ings of de­pres­sion or sui­ci­dal thoughts, the 988 Suicide & Crisis Lifeline of­fers free, round-the-clock sup­port, in­for­ma­tion and re­sources for help. Call or text the life­line at 988, or see the 988lifeline.org web­site, where chat is avail­able.

Jakob Rodgers is a se­nior break­ing news re­porter. Call, text or send him an en­crypted mes­sage via Signal at 510-390-2351, or email him at jrodgers@ba­yare­anews­group.com.

...

Read the original on www.mercurynews.com »

4 931 shares, 38 trendiness

School smartphone ban results in better sleep and improved mood

Psychologists at the University of York, who tested the im­pact that smart­phones have on chil­dren’s be­hav­iour for a new two-part doc­u­men­tary se­ries for Channel 4, found that a ban in school im­pacted pos­i­tively on sleep and mood.

Swiped: The School that Banned Smartphones airs this week on Channel 4

Swiped: The School that Banned Smartphones, hosted by Matt and Emma Willis, is based at The Stanway School in Colchester, and chal­lenged a group of Year 8 pupils to give up their smart­phones com­pletely for 21 days.

The ex­per­i­ment, led by Professor Lisa Henderson and Dr Emma Sullivan from the University, saw pupils un­dergo a se­ries of tests, with ex­perts mon­i­tor­ing their be­hav­ioural changes through­out the pe­riod, and re­peat­ing the tests at the end of the three weeks to con­clude what ef­fects giv­ing up your phone re­ally does have on your brain in­clud­ing sleep, well­be­ing and cog­ni­tion.

They found that stu­dents in the phone ban group ex­pe­ri­enced no­table im­prove­ments in their sleep. On av­er­age, they were falling asleep 20 min­utes faster than be­fore the ban, and re­ported get­ting a full hour of ex­tra rest each night.

Children in the phone ban group also went to bed, on average, 50 minutes earlier during the phone ban weeks than in the week before the ban: for example, bedtime was 10:12 pm one week post-ban, compared with 11:02 pm the week before the ban. These changes, which were self-reported, were also verified with sleep-tracking devices.

Better sleep also appeared to coincide with a boost in mood. Pupils in the phone ban group reported a 17% reduction in feelings related to depression and an 18% reduction in feelings related to anxiety, feeling generally less upset and nervous. Pupils who slept better even showed changes in their heart rate that signalled improved wellbeing.

Professor Lisa Henderson, from the University's Department of Psychology, said: "This experiment incorporated a much longer abstinence period than previous studies, allowing us to see how a smartphone ban in school could impact on sleep, wellbeing, cognitive abilities, and alertness.

"The results showed that a smartphone ban in children under the age of 14 could have a positive impact on sleep, and connected to improved sleep, a boost in overall mood."

Interestingly, the re­search did­n’t show sig­nif­i­cant im­prove­ments in cog­ni­tive abil­ity; the phone ban group showed a mod­est 3% boost in work­ing mem­ory, and there were no im­prove­ments in sus­tained at­ten­tion. Researchers sug­gest that these re­sults might mean that changes in cog­ni­tive abil­ity could take longer than the study pe­riod of 21 days to ma­te­ri­alise.

Dr Emma Sullivan, from the University's Department of Psychology, said: "Our results come at an important time when government ministers in the UK are thinking about the impact of smartphones on young people, and when other parts of the world, such as Australia, are introducing a social media ban for under 16s.

"Evidence gathering is crucial to make these big decisions that impact on the lives of young people, and whilst more work is needed on this, these first sets of results are an interesting start to begin to have these better informed conversations."

Swiped: The School that Banned Smartphones starts at 8pm, Wednesday, 11 December, on Channel 4.

...

Read the original on www.york.ac.uk »

5 763 shares, 31 trendiness

OpenERV

The TW4 de­cen­tral­ized Energy Recovery Ventilator trans­fers clean out­door air in­side and moves pol­luted air out­doors, while har­vest­ing ~90% of the heat en­ergy - so you get fresh air with­out the down­sides of heat­ing or cool­ing it.

The WM12 is ba­si­cally two TW4 mod­ules side by side in a piece of tough polypropy­lene foam, so you can put it in a win­dow.

The de­sign is now in beta.  People who re­quest a unit by email will be con­tacted one at a time as units be­come avail­able, start­ing with those most able to han­dle po­ten­tial com­pli­ca­tions that come with the beta phase.

...

Read the original on www.openerv.ca »

6 715 shares, 28 trendiness

The era of open voice assistants has arrived

We all de­serve a voice as­sis­tant that does­n’t har­vest our data and ar­bi­trar­ily limit fea­tures. In the same way Home Assistant made pri­vate and lo­cal home au­toma­tion a vi­able op­tion, we be­lieve the same can, and must be done for voice as­sis­tants.

Since we be­gan de­vel­op­ing our open-source voice as­sis­tant for Home Assistant, one key el­e­ment has been miss­ing - great hard­ware that’s sim­ple to set up and use. Hardware that hears you, gives you clear feed­back, and seam­lessly fits into the home. Affordable and high-qual­ity voice hard­ware will let more peo­ple join in on its de­vel­op­ment and al­low any­one to pre­view the fu­ture of voice as­sis­tants to­day. Setting a stan­dard for the next sev­eral years to base our de­vel­op­ment around.

We’re launching Home Assistant Voice Preview Edition to help accelerate our goal of not only matching the capabilities of existing voice assistants but surpassing them. This is inevitable: the makers of those assistants will focus their efforts on monetizing voice, while our community will be focused on improving open and private voice. We’ll support the languages big tech ignores and provide a real choice in how you run voice in your home.

The era of open, pri­vate voice as­sis­tants be­gins now, and we’d love for you to be part of it.

Our main goal with Voice Preview Edition was to make the best hard­ware to get started with Assist, Home Assistant’s built-in voice as­sis­tant. If you’re al­ready us­ing other third-party hard­ware to run Assist, this will be a big up­grade. We pri­or­i­tized its abil­ity to hear com­mands, giv­ing it an in­dus­try-lead­ing ded­i­cated au­dio proces­sor and dual mi­cro­phones - I’m al­ways blown away by how well it picks up my voice around the room.

Next, we en­sured it would blend into the home, giv­ing it a sleek but un­ob­tru­sive de­sign. That’s not to say it does­n’t have flair. When you get your hands on Voice Preview Edition the first thing you’ll no­tice is its pre­mium-feel­ing in­jec­tion-molded shell, which is semi-trans­par­ent, just like your fa­vorite 90s tech. The LED ring is also re­ally eye-catch­ing, and you can cus­tomize it to your heart’s con­tent from full gamer RGB to sub­tle glow.

It’s hard to con­vey how nice the ro­tary dial is to use; its sub­tle clicks paired with LED an­i­ma­tions are hard not to play with. Most im­por­tantly, the dial lets any­one in your home in­tu­itively ad­just the vol­ume. The same can be said for the mul­ti­pur­pose but­ton and mute switch (which phys­i­cally cuts power to the mi­cro­phone for ul­ti­mate pri­vacy). We knew for it to work best, it needed to be out in the open, and let’s just say that Home Approval Factor was very front of mind when de­sign­ing it.

We also worked hard to keep the price affordable and comparable to other voice assistant hardware at just $59 (that’s the recommended MSRP, and pricing will vary by retailer). This isn’t a preorder; it’s available now!

For some, our voice as­sis­tant is all they need; they just want to say a cou­ple of com­mands, set timers, man­age their shop­ping list, and con­trol their most used de­vices. For oth­ers, we un­der­stand they want to ask their voice as­sis­tant to make whale sounds or to tell them how tall Taylor Swift is - this voice as­sis­tant does­n’t en­tirely do those things (yet). We think there is still more we can do be­fore this is ready for every home, and un­til then, we’ll be sell­ing this Preview of the fu­ture of voice as­sis­tants. We’ve built the best hard­ware on the mar­ket, and set a new stan­dard for the com­ing years, al­low­ing us to fo­cus our de­vel­op­ment as we pre­pare our voice as­sis­tant for every home. Taking back our pri­vacy is­n’t for every­one - it’s a jour­ney - and we want as many peo­ple as pos­si­ble to join us early and make it bet­ter.

Many other voice as­sis­tants work with Home Assistant, but this one was built for Home Assistant. Unlike other voice hard­ware that can work with Assist, this does­n’t re­quire flash­ing firmware or any as­sem­bly. You plug it into power, and it is seam­lessly dis­cov­ered by Home Assistant. A wiz­ard in­stantly starts help­ing you set up your voice as­sis­tant, but crit­i­cally, if you haven’t used voice be­fore, it will quickly guide you through what you need to get the best ex­pe­ri­ence.

Get up and run­ning with Voice Preview Edition in min­utes with our new wiz­ard

This is not a DIY prod­uct. We’ve worked to make the ex­pe­ri­ence as smooth as pos­si­ble, with easy and fast up­dates and set­tings you can man­age from the Home Assistant UI.

If you have been fol­low­ing our work on voice, you know we’ve tried a lot of dif­fer­ent voice as­sis­tant hard­ware. Most avail­able Assist-capable hard­ware is bad at its most im­por­tant job - hear­ing your voice and then pro­vid­ing au­dio­vi­sual feed­back. That was re­ally what drove us to build Voice Preview Edition.

Voice Preview Edition’s mics and audio processor effortlessly hear commands through the loud music it is playing

Our Assist soft­ware could only do so much with sub­stan­dard au­dio, and its func­tion­al­ity is mas­sively im­proved with clear au­dio. The dual mi­cro­phones com­bined with the XMOS au­dio pro­cess­ing chip are what makes it so ca­pa­ble. Together, they al­low Voice Preview Edition to have echo can­cel­la­tion, sta­tion­ary noise re­moval, and auto gain con­trol, which all adds up to clearer au­dio. This com­bined with an ESP32-S3 with 8 MB of oc­tal PSRAM - one of the fastest ESP and RAM com­bi­na­tions avail­able - makes for an in­cred­i­bly re­spon­sive de­vice. This is the best Assist hard­ware you can buy to­day, and it will con­tinue to give a great ex­pe­ri­ence as Assist’s fea­ture set ex­pands in the years to come.

Assist can do some­thing al­most no other voice as­sis­tant can achieve - it can run with­out the in­ter­net 🤯. You can speak to your Voice Preview Edition, and those com­mands can be processed com­pletely within the walls of your home. At the time of writ­ing this, there are some pretty big caveats, specif­i­cally that you need to speak a sup­ported lan­guage and have pretty pow­er­ful hard­ware to run it (we rec­om­mend a Home Assistant sys­tem run­ning on an Intel N100 or bet­ter).

If you use low-powered Home Assistant hardware, there is an easy and affordable internet-based solution: Home Assistant Cloud. This privacy-focused service allows you to offload your speech-to-text and text-to-speech processing, all while being very responsive and keeping your energy bill low. Speech-to-text is the harder of the two to run locally, and our cloud processing is almost always more accurate for more languages (visit our language support checker here).

Our goal is for Assist to run eas­ily, af­ford­ably, and fully lo­cally for all lan­guages. As some­one who has seen the rapid de­vel­op­ment of this tech­nol­ogy over the past sev­eral years, I’m op­ti­mistic that this will hap­pen, but un­til then, many lan­guages have a good range of choices that pro­vide strong pri­vacy.

We are shar­ing the de­sign files if you want to 3D print a new case… these ones were in­evitable

We’re not just launch­ing a new prod­uct, we’re open sourc­ing all of it. We built this for the Home Assistant com­mu­nity. Our com­mu­nity does­n’t want a sin­gle voice as­sis­tant, they want the one that works for them — they want choice. Creating a voice as­sis­tant is hard, and un­til now, parts of the so­lu­tion were locked be­hind ex­pen­sive li­censes and pro­pri­etary soft­ware. With Voice Preview Edition be­ing open source, we hope to boot­strap an ecosys­tem of voice as­sis­tants.

We tried to make every as­pect of Voice Preview Edition cus­tomiz­able, which is ac­tu­ally pretty easy when you’re work­ing hand-in-hand with ESPHome and Home Assistant. It works great with the stock set­tings, but if you’re so in­clined, you can cus­tomize the Assist soft­ware, ESP32 firmware, and XMOS firmware.

Connecting Grove sensors allows you to use your Voice Preview Edition as a more traditional ESPHome device - here it is acting as a voice assistant and air monitor.

We also made the hard­ware easy to mod­ify, in­side and out. For in­stance, the in­cluded speaker is for alerts and voice prompts, but if you want to use it as a me­dia player, con­nect a speaker to the in­cluded 3.5mm head­phone jack and con­trol it with soft­ware like Music Assistant. The in­cluded DAC is very clean and ca­pa­ble of stream­ing loss­less au­dio. It can also be used as a very ca­pa­ble ESP32 de­vice. On the bot­tom of the de­vice is a Grove port (concealed un­der a cover that can be per­ma­nently re­moved), which al­lows you to con­nect a large ecosys­tem of sen­sors and ac­ces­sories.

We’ve also made it quite pain­less to open, with easy-to-ac­cess screws and no clips. We even in­cluded ex­posed pads on the cir­cuit board to make mod­i­fy­ing it more straight­for­ward. We’re pro­vid­ing all the 3D files so you can print your own com­po­nents… even car­toon char­ac­ter-in­spired ones. We’re not here to dic­tate what you can and can’t do with your de­vice, and we tried our best to stay out of your way.

The beauty of Home Assistant and ESPHome is that you are never alone when fix­ing an is­sue or adding a fea­ture. We made this de­vice so the com­mu­nity could start work­ing more closely to­gether on voice; we even con­sid­ered call­ing it the Community edi­tion. Ultimately, it is the com­mu­nity dri­ving for­ward voice - ei­ther by tak­ing part in its de­vel­op­ment or sup­port­ing its de­vel­op­ment by buy­ing of­fi­cial hard­ware or Home Assistant Cloud. So much has al­ready been done for voice, and I can’t wait to see the ad­vance­ments we make to­gether.

Home Assistant cham­pi­ons choice. Today, we’re pro­vid­ing one of the best choices for voice hard­ware. One that is truly pri­vate and to­tally open. I’m so proud of the team for build­ing such a great work­ing and feel­ing piece of hard­ware - this is a re­ally big leap for voice hard­ware. I ex­pect it to be the hard­ware bench­mark for open-voice pro­jects for years to come. I would also like to thank our lan­guage lead­ers who are ex­pand­ing the reach of this pro­ject, our testers of this Preview Edition, and any­one who has joined in our voice work over the past years.

The hardware really is only half the picture, and it’s the software that really brings this all together. Mike Hansen has just written the Voice Chapter 8 blog to accompany this launch, which explains all the things we’ve built over the past two years to make Assist work in the home today. He also highlights everything that Voice Preview Edition was built to help accelerate.

...

Read the original on www.home-assistant.io »

7 699 shares, 28 trendiness

LFGSS and Microcosm shutting down 16th March 2025 (the day before the Online Safety Act is enforced)

London Fixed Gear and Single-Speed is a com­mu­nity of pre­dom­i­nantly fixed gear and sin­gle-speed cy­clists in and around London, UK.

This site is sup­ported al­most ex­clu­sively by do­na­tions. Please con­sider do­nat­ing a small amount reg­u­larly.

...

Read the original on www.lfgss.com »

8 667 shares, 27 trendiness

Getting to Two Million Users as a One Woman Dev Team

Nadia Odunayo has so often been the smiling face on the door of this event, but did you know she’s the founder and (more impressively!) the one-woman development team behind The StoryGraph, a reading community of over a million book lovers? Her story is one of grit and insight, full of technical lessons about what it takes to execute on the “one person framework”.

Nadia Odunayo is the founder and CEO of The StoryGraph, the app that helps you to track your read­ing and choose which book to read next based on your mood and fa­vorite top­ics and themes. She pre­vi­ously worked at Pivotal Labs as a soft­ware en­gi­neer and orig­i­nally learnt to code at Makers Academy in London. In her spare time she loves to take dance class and, nat­u­rally, read!

...

Read the original on brightonruby.com »

9 663 shares, 26 trendiness

anvaka/map-of-github: Inspirational Mapping

This is a map of 400,000+ GitHub pro­jects. Each dot is a pro­ject. Dots are close to each other if they have a lot of com­mon stargaz­ers.

The first step was to fetch who gave stars to which repositories. For this I used a public data set of GitHub activity events on Google BigQuery, considering only events between Jan 2020 and March 2023. This gave me more than 350 million stars. (Side note: it’s mind-blowing to think that the Milky Way has more than 100 billion stars.)
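
For readers who want to reproduce this step, here is a minimal Python sketch of what such a pull could look like, assuming the public GH Archive dataset on BigQuery (monthly githubarchive tables, where a WatchEvent row corresponds to a star); the author’s actual query isn’t shown, so the table and column names are assumptions.

```python
# Hedged sketch: collect (user, repo) star pairs from one month of the
# public GH Archive dataset on BigQuery. Requires Google Cloud credentials.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT actor.login AS user, repo.name AS repo
FROM `githubarchive.month.202001`      -- one month; the map covers Jan 2020 - Mar 2023
WHERE type = 'WatchEvent'              -- a WatchEvent is a star
GROUP BY user, repo                    -- de-duplicate repeated star events
"""

for row in client.query(query).result():
    print(row.user, row.repo)
```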

In the second phase I computed the exact Jaccard similarity between each pair of repositories. This was too much for my home computer’s 24 GB of RAM; however, an AWS EC2 instance with 512 GB of RAM chewed through it in a few hours. (Side note: I tried other similarity measures too, but Jaccard gave the most believable results.)
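
As a concrete illustration of the metric, here is a minimal Python sketch of pairwise Jaccard similarity over stargazer sets; the repositories and users below are made up, and the real pipeline has to evaluate this for every pair of 400,000+ repositories, which is why so much RAM was needed.

```python
# Jaccard similarity between two repositories, each represented by the set
# of users who starred it: |A ∩ B| / |A ∪ B|.
def jaccard(stars_a: set[str], stars_b: set[str]) -> float:
    if not stars_a and not stars_b:
        return 0.0
    return len(stars_a & stars_b) / len(stars_a | stars_b)

repo_stars = {  # illustrative data only
    "example/repo-a": {"alice", "bob", "carol"},
    "example/repo-b": {"bob", "carol", "dave"},
}
# Shared stargazers {bob, carol} out of {alice, bob, carol, dave} -> 0.5
print(jaccard(repo_stars["example/repo-a"], repo_stars["example/repo-b"]))
```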

In the third phase I used a few clustering algorithms to group repositories together. I liked Leiden clustering the best and ended up with 1,000+ clusters.
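
One way to run this step is sketched below with the python-igraph and leidenalg libraries; the README doesn’t say which Leiden implementation was used, and the weighted edge list is purely illustrative.

```python
# Cluster a similarity graph with the Leiden algorithm. Nodes are
# repositories; edge weights are the Jaccard similarities from the
# previous phase (made-up values here).
import igraph as ig
import leidenalg

edges = [
    ("example/repo-a", "example/repo-b", 0.5),
    ("example/repo-c", "example/repo-d", 0.3),
]
graph = ig.Graph.TupleList([(a, b) for a, b, _ in edges], directed=False)
graph.es["weight"] = [w for _, _, w in edges]

partition = leidenalg.find_partition(
    graph, leidenalg.ModularityVertexPartition, weights="weight"
)
for cluster_id, members in enumerate(partition):
    print(cluster_id, [graph.vs[i]["name"] for i in members])
```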

In the fourth phase I used my own ngraph.forcelayout to compute the layout of nodes inside each cluster, and a separate configuration to get the global layout of the clusters.

The fifth phase was rendering the map. Unlike in my previous projects, I didn’t want to reinvent the wheel, so I ended up using maplibre. All I had to do was convert my data into GeoJSON format, generate tiles with tippecanoe, and configure the browsing experience.
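
A minimal sketch of that conversion is below; the coordinates and property names are made up, and the tippecanoe invocation at the end is only an example of how the tiles might be generated.

```python
# Turn laid-out repositories into a GeoJSON FeatureCollection of points,
# which tippecanoe can tile and maplibre can render.
import json

nodes = [  # (repo, x, y) from the force layout -- illustrative values
    ("example/repo-a", 12.3, -4.5),
    ("example/repo-b", -80.1, 33.7),
]

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [x, y]},
        "properties": {"name": repo},
    }
    for repo, x, y in nodes
]

with open("repos.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)

# Then, for example:  tippecanoe -o repos.mbtiles -zg repos.geojson
```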

A lot of the country labels were generated with the help of ChatGPT. If you find something wrong, you can right-click it, edit it, and send a pull request - I’d be grateful.

The query that I used to gen­er­ate la­bels was:

To implement the search box, I used a simple dump of all repositories, indexed by their first letter (or their author’s). So when you type “a” in the search box, I look up all repositories that start with “a” and show them to you with a fuzzy matcher on the client.
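
A small Python sketch of that first-letter index follows, with a plain substring filter standing in for the client-side fuzzy matcher; the repository names are illustrative.

```python
# Bucket repositories by the first letter of the repo name and of the
# author, so a query only has to scan one small bucket.
from collections import defaultdict

repos = ["anvaka/map-of-github", "anvaka/ngraph", "example/linux-tools"]

index: dict[str, list[str]] = defaultdict(list)
for full_name in repos:
    author, _, name = full_name.partition("/")
    index[name[0].lower()].append(full_name)
    index[author[0].lower()].append(full_name)

def search(prefix: str) -> list[str]:
    # Stand-in for the fuzzy matcher: a simple case-insensitive substring filter.
    bucket = index.get(prefix[0].lower(), [])
    return [r for r in bucket if prefix.lower() in r.lower()]

print(search("a"))  # both anvaka/* repositories
```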

Most of the time I like the data presented by this project better than the visual design of the map. If you have experience designing maps, or simply have a wonderful design vision for how it should look, please don’t hesitate to share - I’m still looking for a style that matches the data.

If you find this project useful and would like to support it, please join the support group. If you need any help with this project or have any questions, don’t hesitate to open an issue here or ping me on Twitter.

Thank you to all my friends and sup­port­ers who helped me to get this pro­ject off the ground: Ryan, Andrey, Alex, Dmytro. You are awe­some!

Thank you to my dear daugh­ter Louise for mak­ing a logo for this pro­ject. I love you!

Endless grat­i­tude to all open source con­trib­u­tors who made this pro­ject pos­si­ble. I’m stand­ing on the shoul­ders of gi­ants.

I’m releasing this repository under the MIT license. However, if you use the data in your own work, please consider giving attribution to this project.

...

Read the original on github.com »

10 636 shares, 25 trendiness

Futility Closet

Sophie Germain wrote, “It has been said that algebra is but written geometry and geometry is but diagrammatic algebra.”

...

Read the original on www.futilitycloset.com »
