Let’s take a look at the Moon as seen from space in all its sunlit glory. You can drag it around to change your point of view, and you can also use the slider to control the date and time:
In this convenient view, we can freely pan the camera around to see the Moon and its marvelous craters and mountains from various angles. Unfortunately, we don’t have that freedom of motion in our daily experience — the Moon wanders on its own path across the daily and nightly skies.
We can simulate these travels below, where you can see the current position of the Moon in the sky. You can drag that panorama around to adjust your viewing direction — this lets you see the breadth of the sky both above and below the horizon. By dragging the sliders you can witness how the position of the Moon changes in the sky across the days and hours of your local time. As the Moon’s placement in the sky shifts, the little arrow will guide you to its position.
You can also drag the little figurine on the globe in the bottom-right corner to see how the sky looks at that location on Earth. If your browser allows it, clicking or tapping the button will automatically put the figurine at your current location. This may all feel quite overwhelming at the moment, but we’ll eventually see how all these pieces fit together:
Over the course of one day, the Moon travels on an arc in the sky, almost completing a loop around the Earth. As the days pass, the Moon’s illumination also visibly changes.
You’ll probably admit that it’s a little hard to focus on the tiny Moon as it shifts its position in the sky. To make things easier to see, I’ll zoom in the camera and lock its position on the Moon:
Notice that across a single day the Moon seems to rotate, and over longer periods it quite visibly wobbles. These wobbly variations let us occasionally see some hidden parts on the “edges” of the Moon, but our neighbor ultimately shows us only one of its sides. In our space-floating demo we could easily see the Moon from all sides, but on Earth we can never see most of the far side of the Moon.
Over the course of a month, the lighting on the Moon also changes dramatically. The line between the lit and unlit parts of the Moon, known as the terminator, sweeps across the Moon, revealing the details of its surface. Although the Moon has a spherical shape, the fully lit Moon looks more like a flat disk.
In this article I’ll explain all the effects we’ve just seen, and we’ll also learn about gravity, ocean tides, and eclipses. Let’s begin by exploring how celestial bodies move through space and how their mere presence influences the motion of their neighbors.
Let me introduce a little cosmic playground in which we’ll do our experiments. Inside it, I put a little planet that floats freely in space. You can drag the planet around to change its position. The arrow symbolizes the initial velocity of this body — you can tweak this velocity by dragging the dashed outline at the end of the arrow. To get things going, you can press the button in the bottom-left corner:
Notice that I’m drawing a ghost trail behind the moving planet, making it easier to track its motion. As you can see, once you let the planet go, it travels through space in a straight line, only to eventually get out of visible bounds.
Let’s complicate things a little by adding another body to this sandbox. You can tweak the positions and velocities of both bodies to see how their mutual presence impacts one another. I’m also marking the thin lines of trajectories that the bodies will take even before you let things go, making it easier to plan their motion:
The motion we see now isn’t as straightforward as before. In some scenarios, the two bodies travel past each other after tweaking their initial trajectories. In other configurations, both objects roam through space together, permanently locked in a swinging dance.
You may have also managed to make the two bodies run into each other. We’ll eventually see a more realistic visualization of that scenario, but in this simplified simulation when two objects collide, they just stick together and continue their coupled journey.
What’s responsible for all these effects is the force of gravity acting on the objects. Let’s explore that interaction up close. As before, you can drag the two bodies around, and you can also change their masses using the sliders below:
The arrows represent the force of gravity acting on the two bodies — the longer the arrow, the larger the force. For completeness, I’m displaying the values and units of masses and distances, but the numbers aren’t particularly important here. What matters is that when we increase either the mass of the first body m1 or the mass of the second body m2, the force of gravity grows too.
Moreover, the magnitude of gravity also depends on the distance r between the objects. As bodies move farther apart, the gravity weakens. Notice how the forces acting on each body have the same magnitude, but they point towards the other body, which indicates an attractive force.
If you paid close attention to the lengths of the arrows, you might have noticed that the force decreases quite rapidly with distance. We can visualize this with a plot, in which the white line shows the magnitude of gravity as a function of distance. More precisely, it shows that gravity is inversely proportional to the square of that distance:
Let’s take a very brief mathematical interlude to describe what we’ve seen in more detail. All these dependencies are captured in the following equation for the force of gravity F, between two objects with masses m1 and m2 separated by distance r:
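$$ F = \frac{G \, m_1 m_2}{r^2} $$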
The gravitational constant G seen in front of the right-hand side of the equation is incredibly small, making gravity a very weak force. We have no issues lifting everyday objects despite the might of the mass of the entire Earth pulling them down.
While the strength of gravity between any two bodies is equal, the resulting change in motion is not. You may recall from elementary physics classes that force F is equal to mass m times acceleration a. We can encapsulate this idea in a pair of simple formulas that tie these values for the first and second body:
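$$ F = m_1 a_1, \qquad F = m_2 a_2 $$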
By plugging in the equation for the force of gravity F and reducing the masses, we end up with a set of two equations for accelerations of the bodies:
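$$ a_1 = \frac{G \, m_2}{r^2}, \qquad a_2 = \frac{G \, m_1}{r^2} $$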
Notice that the acceleration of the first body a1 depends on the mass of the second body m2. Similarly, the acceleration of the second body a2 depends on the mass of the first body m1. Let’s see this in practice in the demonstration below, where I’m temporarily making the big body twenty times more massive than the small body:
Notice that the body with smaller mass drastically changes its course, while the motion of the larger body is only marginally affected. This tracks with our day-to-day experience, where every item left hanging in the air very visibly accelerates towards the staggeringly massive Earth, but our planet doesn’t jump out of its way to meet the falling object.
Now that we understand that it’s the force of gravity that makes the bodies move towards each other, let’s do a better job of tracking the motions of these objects over time. Right now our camera is fixed in space, so the two bodies often fly out of visible bounds. Thankfully, we can easily fix this by moving the camera with the bodies.
In the demonstration below, I’m presenting the same scenario from two different vantage points. On the left, I’m showing the scene from the familiar point of view that’s fixed in space — you can plan the trajectories of the two bodies on that side.
On the right, you can see this simulation from the point of view of the camera that’s tied to the motion of these objects. I’m marking the position of that camera with a white dot on the thin line joining the bodies. By dragging the slider you can move the camera between them:
With the camera following the bodies we can now track their motion forever. More importantly, we can also see the relative motion of the two objects. When you make the bodies move together, you can witness how, from the perspective of the teal body, it’s the yellow body that orbits around it, while from the perspective of the yellow body, it’s the other way around.
Better yet, if we position the camera halfway, or even anywhere else between the two bodies, both objects seem to orbit the camera. The perception of relative motion depends on the point of view, but there is one point that’s particularly useful for observation. In this next demonstration, I’ve added a little white trail to the camera itself. Watch how the path of the camera in space changes as you reposition it with the slider:
In general, the camera traverses some squiggly path in space. However, there is one special position between the two bodies for which the camera travels in a perfectly straight line. This point is known as the barycenter, and it’s located at the center of mass of these objects.
Let’s explore the concept of the barycenter a little closer. In the demonstration below, you can once again drag the bodies around to change the distance between them, and you can also use the sliders to tweak their masses. The center of mass of these two bodies is marked with a black and white symbol:
The equation in the bottom part explains the placement of the center of mass of these two objects — it is located at a point where its distance from the first body r1 multiplied by that body’s mass m1, equals that point’s distance from the second body r2 multiplied by its mass m2.
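In symbols, with r1 and r2 measured from the two bodies to that point:

$$ m_1 r_1 = m_2 r_2 \quad\Longleftrightarrow\quad \frac{r_1}{r_1 + r_2} = \frac{m_2}{m_1 + m_2} $$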
This simple rule becomes slightly more complicated when more than two bodies are involved. In those scenarios, the position of the center of mass is the weighted average of the positions of all the bodies, where the masses of these bodies serve, very appropriately, as weights.
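Here is a quick sketch of that weighted average in code; the masses and positions are arbitrary example values chosen to match the two-body rule above:

```python
def center_of_mass(masses, positions):
    """Weighted average of positions, with the masses serving as the weights."""
    total_mass = sum(masses)
    return sum(m * x for m, x in zip(masses, positions)) / total_mass

# Two bodies on a line, 4 units apart, with a 3:1 mass ratio:
# the barycenter sits 1 unit from the heavy body and 3 units from the light one,
# so m1*r1 = 3*1 equals m2*r2 = 1*3, exactly as described above.
print(center_of_mass([3.0, 1.0], [0.0, 4.0]))  # -> 1.0
```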
We’ll only be interested in the center of mass of two bodies, so the demonstration we’ve just seen fits our needs well. Notice that as the bodies move farther apart, the barycenter also migrates to stay at a constant proportion of the distance separating the objects. Moreover, if one of the bodies is much more massive than the other, the center of mass could lie inside that larger body.
In our space simulator, the mass of the teal body is three times the mass of the yellow body, so the barycenter of this system lies three-quarters of the way between the yellow and teal objects:
The motion of the barycenter shows us that the tangled dance of two celestial bodies hides a much simpler linear motion through space and some additional motion of the two bodies around that barycenter.
Let’s try to see that other motion more clearly by making one more modification to the right side of the demonstrations we’ve seen. Notice that the trails left by the bodies linger in space, but ideally, we’d also want to see the paths taken by the bodies relative to the moving camera.
To make this work we can attach a little drawing plane to the camera itself — I’m outlining that plane below with a thin rectangle. Then, as the bodies move around, they can trace their trails on that plane as well:
With this new method we can see the paths the bodies took relative to the moving camera. When seen from this perspective, we can finally reveal that, in most practical scenarios, the two orbiting bodies trace ellipses relative to each other.
Depending on the initial conditions, some of those ellipses are larger, and some are smaller. Some are almost circular, and some are quite elongated. Changing the position of the camera with the slider changes the relative sizes of these two ellipses, but they maintain their overall proportions. The ellipse of motion of one body seen from the perspective of the other is the same for both bodies; it just shifts in space.
As you may have seen on this blog before, an ellipse can be more formally characterized by its eccentricity and the size of its semi-major axis, which you can control using the sliders below:
Eccentricity specifies how elongated an ellipse is. It can be defined as the ratio of the length of the dark pink segment to the length of the semi-major axis. That segment spans the distance between the center of the ellipse and one of the two focus points, which are also jointly known as foci. When we watch orbital motion from the perspective of the orbited body, that body is always in one of the focus points of the orbital ellipse of the orbiting body.
I’ve also marked two special points on the orbital ellipse. At apoapsis, the orbiting body is at its farthest distance from the orbited body, and at periapsis the orbiting body is closest to that body. These two points are collectively known as apsides, and the line joining them is known as the line of apsides. The simple rule for remembering which apsis is which is that apoapsis is the one that’s farther away from the orbited body.
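These quantities tie neatly back to the apsides. Writing r_apo and r_peri for the distances between the bodies at apoapsis and periapsis, the semi-major axis and eccentricity of the orbital ellipse are:

$$ a = \frac{r_{\text{apo}} + r_{\text{peri}}}{2}, \qquad e = \frac{r_{\text{apo}} - r_{\text{peri}}}{r_{\text{apo}} + r_{\text{peri}}} $$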
We’ve just described the orbital ellipse and its apsides as seen from the point of view of the larger body, but in our cosmic playground we’ve seen how moving the camera around with a slider can change the perception of motion:
With a two-body system like this one we actually have some flexibility in describing which body orbits which. We typically say that it’s the less massive object that orbits the more massive one, but the observer on the smaller body would just see the motion of the larger neighbor around it.
For us, it will be often useful to describe things from the point of view of the barycenter — we’ve seen earlier how that special point lets us decompose the motion of two solitary bodies into the movement on a straight line and the orbiting motion around that barycenter.
That particular viewpoint also lets us explain another irregular motion we can see in these elliptical orbits. Notice that when the two bodies are close to each other, they swing across their trajectories much faster.
You can see it best when looking at the dashed segments I’ve drawn on the elliptical orbits — traversal of each brighter or darker section takes the same amount of time. These lines are visibly longer when the bodies are close, which reflects their faster motion as they travel a longer distance over the same period.
This non-uniform motion can also be seen in the angular velocity of the orbital motion, which describes how many degrees per second an orbiting body sweeps through. In this next demonstration the blue line rotates with constant angular velocity, so in every second it goes across the same number of degrees. As you can see, the orange line joining two bodies rotates with varying speed:
Notice how the orange line is sometimes ahead of and sometimes behind the blue line, which shows that the orbital motion doesn’t have a constant angular velocity.
This unusual behavior is more easily explained with the following contraption, where I put the two bodies on a giant bar that spins around on an axis placed right at the center of mass of the two bodies. Using the slider you can change the distance between these objects:
As the bodies get closer, the rotation speeds up. Conversely, as the bodies move farther apart, the rotation slows down. You can easily recreate a version of this experiment by holding heavy items in your hands and spinning on a desk chair with your arms spread out. As you pull them towards your torso, your rotation will speed up.
These are examples of conservation of angular momentum in which the speed of revolution and the mass distribution of a system are inherently tied together. Broadly speaking, when we double the distance from the axis of rotation, the angular velocity becomes four times smaller.
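For a compact body of mass m circling the axis at distance r with angular velocity ω, this trade-off can be written as

$$ L = m r^2 \omega = \text{const} \quad\Rightarrow\quad \omega \propto \frac{1}{r^2}, $$

so doubling r indeed cuts the angular velocity to a quarter of its value.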
The space playgrounds we’ve looked at earlier work just like the demonstration with the bar, but instead of a slider, it’s the force of gravity that determines the distance between the bodies. Gravity pulls the objects closer together, increasing the speeds at which they swing by each other. As the bodies move past their closest distance, that increased speed shoots them out away from each other and the cycle continues.
The details on how this action creates elliptical paths are beautifully covered in the video on Feynman’s Lost Lecture, but for our needs it will be enough to just witness once more how all the initial values of masses, positions, and velocities of the two bodies decide everything about their motion:
With a firmer grasp on orbital motion in space, we can finally see how everything we’ve learned affects the movement of our planet and its closest celestial neighbor.
Let’s first look at the Moon and Earth side by side to compare their masses and sizes in imperial and metric units:
The Earth’s mean radius is only around 3.67 times larger than that of the Moon. Since the volume of a sphere grows with the third power of its radius, and the Earth is on average much denser, our planet’s mass ends up being around 81.3 times larger than the Moon’s.
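We can check that density claim with the numbers above:

$$ \frac{V_\oplus}{V_{\text{Moon}}} \approx 3.67^3 \approx 49.4, \qquad \frac{\bar\rho_\oplus}{\bar\rho_{\text{Moon}}} \approx \frac{81.3}{49.4} \approx 1.65, $$

so our planet is, on average, roughly 1.65 times denser than the Moon.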
Let’s try to replicate this table in our space simulator, where I added two bodies with sizes and masses matching those of the Earth and the Moon. Let’s see how these values affect the motion of the two objects:
With our simulated Earth being so massive, we can quite easily make this Moon orbit the Earth with various ellipses. Unfortunately, while this simulation correctly mimics the relative sizes of the real Earth and Moon, it doesn’t reflect the cosmic scale of the distance between these two bodies.
Let’s see how far away the Moon really is. In the demonstration below, you can use the slider to zoom away from the Earth until the Moon’s position becomes visible:
If you drag the slider all the way to the right, you’ll notice that I’m actually marking three distances between the centers of the Earth and the Moon. The orbit of the Moon doesn’t form a perfect circle, so the separating distance varies as the Moon gets closest to the Earth at periapsis, and farthest away at apoapsis. The values shown here, in miles and kilometers, are the predicted maximum, mean, and minimum of that distance in the 21st century.
Let’s see the orbit of the Moon in more detail. The following demonstration shows the motion of our neighbor from the perspective of the Earth itself. You can drag the view around to change the viewing angle. The slider lets you control the speed of time:
With all the sizes and distances replicated realistically, it may be hard to see these tiny bodies. To make things more legible, you can press the button in the bottom right corner to toggle between the real and ten times larger artificial sizing of these bodies.
With this three-dimensional view we can now see that the Moon’s motion lies in the orbital plane that I’m marking with a faint gray disc. To help us orient ourselves in space, I’ve also added a line that marks a fixed reference direction pointing at some very distant stars.
On average, it takes the Moon 27.322 days (27 days, 7 hours, and 44 minutes) to complete the whole orbit, as measured by crossings of the reference line. That period is known as the sidereal month, where sidereal means “with respect to stars”. This is only one of the four different types of lunar months that we’ll explore in this article.
As the Moon orbits the Earth, it traces the familiar elliptical shape. We can quite clearly see how the orbit’s eccentricity shifts the Moon’s path relative to the perfect circle of the orbital plane visualization that I’ve drawn above.
Let’s take a closer look at some of the parameters of the Moon’s orbit. In this next demonstration I’m using the current position and velocity of the Moon to calculate an ellipse that best describes the Moon’s orbit at that moment of time. I’m drawing this ellipse with a dashed line, while the solid trail shows the actual path the Moon took:
Since we’re making the ellipse fit the current orbital motion, this idealized ellipse matches the actual trail very well in the vicinity of the orbiting Moon. However, farther away from the Moon this best-fitting ellipse diverges from the path the Moon actually took. This shows us that while it’s pretty close, the Moon’s trajectory doesn’t form a perfect ellipse.
As we see in the labels, both eccentricity and the length of the semi-major axis of this “currently best-fitting” ellipse vary over time. Measured over a long period, the eccentricity of the Moon’s orbit has an average value of 0.0549, while the semi-major axis has an average length of 239,071 mi (384,748 km).
Moreover, the fitted orbital ellipse not only changes its shape, but also its orientation. The line of apsides of the ellipse, which joins the apoapsis and the periapsis, wobbles over time in a quite chaotic manner.
These effects happen because the Earth and the Moon aren’t the sole bodies in space — they’re both part of the Solar System. True to its name, the Solar System is dominated by the Sun itself, and it’s primarily the effects of the Sun’s gravity that cause all these perturbations of the Moon’s orbit.
We’ll soon explore the influence of the Sun in more detail, but for now let’s focus on the changes of the positions of apoapsis and periapsis. In the demonstration below, I’ve made time flow even faster than before. Additionally, every time the Moon is at its closest to the Earth, that is when it’s at the periapsis, I’m leaving a little marker on the orbital plane:
Notice how the line of apsides wobbles back and forth, but across many months it makes overall steady progress, rotating, when seen from above, in the counter-clockwise direction. Averaged over a long time, this line of apsides makes a full rotation in 8.85 years (8 years and 310 days), which defines the period of the Moon’s apsidal precession.
The markers that I drop when the Moon crosses the periapsis measure the anomalistic month. Notice that the lengths of anomalistic months vary a lot as they happen on different parts of the orbit. Sometimes it takes the Moon less than 25 days to get closest to the Earth again, but sometimes it takes over 28 days to reach periapsis again. Over a long time the anomalistic month has a mean length of 27.554 days (27 days, 13 hours, and 3 minutes).
This period is a bit longer than the 27.322 days (27 days, 7 hours, and 44 minutes) of the sidereal month, which is tracked by the crossings of the reference line. When averaged over time, the line of apsides rotates steadily in the same direction as the Moon’s orbital motion, so it takes the Moon a bit more time to catch up to periapsis.
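We can sanity-check how these two periods and the precession fit together. For two motions that circle in the same direction, the catch-up period follows from the difference of their rates; a rough calculation (using round figures, not exact values) reproduces the anomalistic month:

```python
# Rough cross-check: the Moon must catch up to a periapsis that itself drifts
# prograde, so 1/T_anomalistic = 1/T_sidereal - 1/T_apsidal (all in days).
sidereal_month = 27.322          # days
apsidal_period = 8.85 * 365.25   # days, ~8.85 years of apsidal precession

anomalistic_month = 1.0 / (1.0 / sidereal_month - 1.0 / apsidal_period)
print(f"{anomalistic_month:.3f} days")  # ~27.554 days, matching the mean value above
```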
All the demonstrations we’ve seen also show one more effect that we didn’t account for in our simple playground simulations — both the Earth and the Moon spin around their axes. You can see this more clearly in the demonstration below where I glued a blue arrow to the surface of the Earth, and a gray arrow to the surface of the Moon:
When viewed from the side, we can see that the axes of rotation of these two bodies aren’t neatly perpendicular to the orbital plane, and the bodies also spin at very different rates. Our planet takes roughly 23.93 hours (23 hours and 56 minutes), or almost one day, to complete a full revolution and point towards the reference direction again. The Moon rotates much more slowly, taking 27.322 days (27 days, 7 hours, and 44 minutes) to revolve just once and align with that direction again.
From above we can see that the gray arrow fixed to the Moon’s surface generally points towards the Earth, as indicated by the thin line joining the two bodies. If you pay close attention, you’ll notice that this arrow is sometimes pointing a bit ahead of that direction and sometimes a bit behind that direction.
This is a consequence of the Moon’s non-circular orbit — we’ve seen earlier how the angular velocity of an orbiting body changes as it sweeps through its orbital ellipse. The Moon rotates around its axis with more or less constant speed, but the Moon’s angular position relative to the Earth doesn’t advance at a constant rate. As a result, the two rotating motions don’t always perfectly cancel each other out.
In a close-up view of the bodies you might have also noticed that the rotation axis of the Moon is tilted relative to its orbital plane. Similarly, the axis of rotation of our planet is also tilted relative to that plane. Let’s briefly switch our point of view to align ourselves straight-up with the Earth’s rotation axis:
From this perspective we can see that the Moon’s orbital plane is inclined relative to our planet. Notice how the Moon’s position relative to the Earth changes during its orbital motion — it is sometimes “above” and sometimes “below” our planet, revealing the truly three-dimensional aspects of the Moon’s motion.
All the orbital observations we’ve made will help to explain some of the effects we’ve seen at the beginning of this article, where we looked at the Moon through the eyes of an observer on the ground. Before we investigate these effects, we need to build a bit more intuition on how objects in space look to someone viewing them from the surface of Earth.
Let’s first place ourselves on Earth and look at the sky in which I artificially put three colorful celestial bodies. You can drag the demonstration around to change which part of the sky you’re looking at. If you lose track of these bodies, the little arrows will guide you back to their area of the sky:
Although the markers of the compass directions are of some help, it may be quite hard to grasp how this view from the Earth’s surface corresponds to the more external view from space we’ve gotten used to.
Let me clarify things in the next demonstration, where the left side shows the same view we’ve just seen, and the right side shows the same scene, but as seen from space. I’ve also outlined the sky view on the left with the four colored lines — as you pan around the landscape on the left, you can see that square outline reflected on the right. I’ve also added a figurine that represents a vastly enlarged observer standing on the ground. The figurine’s body and its right hand always point in the current direction of observation:
With that external view, we can see how the observer on the ground can’t see the sky in every possible direction. Half of it is obscured by the Earth itself, with the horizon clipping the whole breadth of the surrounding sky to only the visible hemisphere.
Moreover, notice how the actual size of an object doesn’t match its size seen in the Earthly observer’s sky. For example, both yellow and teal bodies are of the same physical size, but the latter looks smaller in the sky. Similarly, the pink body is physically larger than the yellow one, but they appear similar in size from the observer’s point of view.
We can understand these sizing effects with the help of cones that shoot out from the position of the observer towards the bodies in space. Note that these cones start on the ground here, because the actual observer is much smaller than the gigantic illustrative figurine.
The size of the intersection of those cones with the hemisphere of the sky, or the size of the projected area, determines the visible size. Intuitively, the farther away the object, the smaller it appears. If the projection occupies a larger fraction of the total hemisphere, the object will look larger as well.
We can conveniently describe the size of objects in the sky by measuring the angle spanned by the visible cone. In the demonstration below, I’m showing a flat side view of this cone. You can drag the yellow body around to change its distance from the observer. You can also use the slider to change the size of that body:
The closer the object is to the observer, or the larger the body, the greater the angle of the visible cone. That angle is known as the angular diameter or angular size of the observed object.
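As a small illustration of that definition, here is the flat side-view triangle written out in code; the lunar figures are rough, commonly quoted values rather than anything taken from the demonstration:

```python
import math

def angular_diameter_deg(diameter: float, distance: float) -> float:
    """Angle subtended by an object of the given diameter at the given distance."""
    return math.degrees(2.0 * math.atan(diameter / (2.0 * distance)))

# The Moon, using approximate mean values in kilometers:
print(f"{angular_diameter_deg(3474.8, 384_400):.2f} degrees")  # ~0.52 degrees, about half a degree
```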
Having experienced how objects in the night sky may look at a fixed moment in time, let’s see how the Earth’s rotation affects observations done from the ground. In the demonstration below, you can scrub through time with the slider to witness the effects of the spin of our planet:
This scene may seem a bit contrived, because the three objects are just magically floating in space at fixed positions. Fortunately, it’s a decent representation of how all the stars in the night sky appear to Earthly observers — they’re distant enough that over the course of a day they essentially don’t move relative to the Earth’s center. As our planet spins, these three objects seem to rise over the horizon, travel across the visible sky, and then set below the horizon again.
You’ll probably agree that it’s a little annoying to have to manually keep panning through the night sky to look at these objects, so on the left side of this next demonstration I’m automatically adjusting the viewing angle to track the teal body. On the right side, I’m locking the camera on the figurine itself. Don’t be misled by what you see here — the Earth is still rotating around its axis, the camera just rotates with it:
As seen through the observer’s eyes on the left side, the other objects now seem to rotate around the teal one, but this is purely a consequence of the observer turning on the ground to keep facing the teal body.
You may have experienced something similar when watching an airplane flying over your head. As the plane is approaching, its front is closer to you and its tail is in the back, but after the plane has passed over, you see the plane’s tail as being closer to you, and its front is more distant. In your eyes the plane has rotated, but in fact the plane has kept its course the entire time, and it was you who turned to keep an eye on it.
When these celestial bodies disappear beneath the horizon, it becomes impossible to track them, but thankfully in these computer simulations I can make the Earth transparent, giving us an unobstructed view of the full sphere of the surrounding space:
With this approach we can now see the entire trajectory of the three objects as an observer on Earth sees them. Because these objects don’t move relative to the center of our planet, they travel on closed paths, returning to where they came from after the Earth completes one revolution around its axis over the course of 23.93 hours (23 hours and 56 minutes).
Let’s bring back the Moon into the picture. In the simulation below, we can see the Moon in the starry sky as seen from the surface of the Earth. Note that I removed all the visual effects related to sunlight, including the daytime blue sky and any illumination changes on the surface of the Moon itself.
...
Read the original on ciechanow.ski »
OpenAI’s new o3 system - trained on the ARC-AGI-1 Public Training set - has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%.
This is a surprising and important step-function increase in AI capabilities, showing novel task adaptation ability never seen before in the GPT-family models. For context, ARC-AGI-1 took 4 years to go from 0% with GPT-3 in 2020 to 5% in 2024 with GPT-4o. All intuition about AI capabilities will need to get updated for o3.
The mission of ARC Prize goes beyond our first benchmark: to be a North Star towards AGI. And we’re excited to be working with the OpenAI team and others next year to continue to design next-gen, enduring AGI benchmarks.
ARC-AGI-2 (same format - verified easy for humans, harder for AI) will launch alongside ARC Prize 2025. We’re committed to running the Grand Prize competition until a high-efficiency, open-source solution scoring 85% is created.
Read on for the full testing report.
We tested o3 against two ARC-AGI datasets:
At OpenAI’s direction, we tested at two levels of compute with variable sample sizes: 6 (high-efficiency) and 1024 (low-efficiency, 172x compute).
Here are the results.
Note: o3 high-compute costs not available as pricing and feature availability is still TBD. The amount of compute was roughly 172x the low-compute configuration.
Note on “tuned”: OpenAI shared they trained the o3 we tested on 75% of the Public Training set. They have not shared more details. We have not yet tested the ARC-untrained model to understand how much of the performance is due to ARC-AGI data.
Due to variable inference budget, efficiency (e.g., compute cost) is now a required metric when reporting performance. We’ve documented both the total costs and the cost per task as an initial proxy for efficiency. As an industry, we’ll need to figure out what metric best tracks efficiency, but directionally, cost is a solid starting point.
The high-efficiency score of 75.7% is within the budget rules of ARC-AGI-Pub (costs under the $10k limit).
The low-efficiency score of 87.5% is quite expensive, but still shows that performance on novel tasks does improve with increased compute (at least up to this level.)
Despite the significant cost per task, these numbers aren’t just the result of applying brute force compute to the benchmark. OpenAI’s new o3 model represents a significant leap forward in AI’s ability to adapt to novel tasks. This is not merely incremental improvement, but a genuine breakthrough, marking a qualitative shift in AI capabilities compared to the prior limitations of LLMs. o3 is a system capable of adapting to tasks it has never encountered before, arguably approaching human-level performance in the ARC-AGI domain.
Of course, such generality comes at a steep cost, and wouldn’t quite be economical yet: you could pay a human to solve ARC-AGI tasks for roughly $5 per task (we know, we did that), while consuming mere cents in energy. Meanwhile o3 requires $17-20 per task in the low-compute mode. But cost-performance will likely improve quite dramatically over the next few months and years, so you should plan for these capabilities to become competitive with human work within a fairly short timeline.
o3’s improvement over the GPT series proves that architecture is everything. You couldn’t throw more compute at GPT-4 and get these results. Simply scaling up the things we were doing from 2019 to 2023 — take the same architecture, train a bigger version on more data — is not enough. Further progress is about new ideas.
ARC-AGI serves as a critical benchmark for detecting such breakthroughs, highlighting generalization power in a way that saturated or less demanding benchmarks cannot. However, it is important to note that ARC-AGI is not an acid test for AGI — as we’ve repeated dozens of times this year. It’s a research tool designed to focus attention on the most challenging unsolved problems in AI, a role it has fulfilled well over the past five years.
Passing ARC-AGI does not equate to achieving AGI, and, as a matter of fact, I don’t think o3 is AGI yet. o3 still fails on some very easy tasks, indicating fundamental differences with human intelligence.
Furthermore, early data points suggest that the upcoming ARC-AGI-2 benchmark will still pose a significant challenge to o3, potentially reducing its score to under 30% even at high compute (while a smart human would still be able to score over 95% with no training). This demonstrates the continued possibility of creating challenging, unsaturated benchmarks without having to rely on expert domain knowledge. You’ll know AGI is here when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible.
Why does o3 score so much higher than o1? And why did o1 score so much higher than GPT-4o in the first place? I think this series of results provides invaluable data points for the ongoing pursuit of AGI.
My mental model for LLMs is that they work as a repository of vector programs. When prompted, they will fetch the program that your prompt maps to and “execute” it on the input at hand. LLMs are a way to store and operationalize millions of useful mini-programs via passive exposure to human-generated content.
This “memorize, fetch, apply” paradigm can achieve arbitrary levels of skills at arbitrary tasks given appropriate training data, but it cannot adapt to novelty or pick up new skills on the fly (which is to say that there is no fluid intelligence at play here.) This has been exemplified by the low performance of LLMs on ARC-AGI, the only benchmark specifically designed to measure adaptability to novelty — GPT-3 scored 0, GPT-4 scored near 0, GPT-4o got to 5%. Scaling up these models to the limits of what’s possible wasn’t getting ARC-AGI numbers anywhere near what basic brute enumeration could achieve years ago (up to 50%).
To adapt to novelty, you need two things. First, you need knowledge — a set of reusable functions or programs to draw upon. LLMs have more than enough of that. Second, you need the ability to recombine these functions into a brand new program when facing a new task — a program that models the task at hand. Program synthesis. LLMs have long lacked this feature. The o series of models fixes that.
For now, we can only speculate about the exact specifics of how o3 works. But o3’s core mechanism appears to be natural language program search and execution within token space — at test time, the model searches over the space of possible Chains of Thought (CoTs) describing the steps required to solve the task, in a fashion perhaps not too dissimilar to AlphaZero-style Monte-Carlo tree search. In the case of o3, the search is presumably guided by some kind of evaluator model. To note, Demis Hassabis hinted back in a June 2023 interview that DeepMind had been researching this very idea — this line of work has been a long time coming.
So while single-generation LLMs struggle with novelty, o3 overcomes this by generating and executing its own programs, where the program itself (the CoT) becomes the artifact of knowledge recombination. Although this is not the only viable approach to test-time knowledge recombination (you could also do test-time training, or search in latent space), it represents the current state-of-the-art as per these new ARC-AGI numbers.
Effectively, o3 represents a form of deep learning-guided program search. The model does test-time search over a space of “programs” (in this case, natural language programs — the space of CoTs that describe the steps to solve the task at hand), guided by a deep learning prior (the base LLM). The reason why solving a single ARC-AGI task can end up taking tens of millions of tokens and costing thousands of dollars is because this search process has to explore an enormous number of paths through program space — including backtracking.
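To make that description a little more concrete, here is a deliberately simplified sketch of such a search loop. This is speculative pseudocode in the spirit of the paragraph above, not OpenAI’s implementation; the helpers sample_cot, score_cot, and execute_cot are hypothetical stand-ins for the base LLM and the evaluator model:

```python
def solve_task(task, llm, evaluator, num_candidates=64):
    """Sketch of deep-learning-guided search over natural language 'programs' (CoTs)."""
    best_answer, best_score = None, float("-inf")
    for _ in range(num_candidates):
        # The base LLM proposes a chain of thought: a natural language
        # program describing steps that might solve the task.
        cot = llm.sample_cot(task)
        # An evaluator model estimates how promising that reasoning is,
        # since the "program" cannot be executed and checked directly.
        score = evaluator.score_cot(task, cot)
        if score > best_score:
            # "Executing" the program means having the LLM follow its own
            # instructions to produce a candidate answer.
            best_answer, best_score = llm.execute_cot(task, cot), score
    return best_answer
```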
There are however two significant differences between what’s happening here and what I meant when I previously described “deep learning-guided program search” as the best path to get to AGI. Crucially, the programs generated by o3 are natural language instructions (to be “executed” by a LLM) rather than executable symbolic programs. This means two things. First, that they cannot make contact with reality via execution and direct evaluation on the task — instead, they must be evaluated for fitness via another model, and the evaluation, lacking such grounding, might go wrong when operating out of distribution. Second, the system cannot autonomously acquire the ability to generate and evaluate these programs (the way a system like AlphaZero can learn to play a board game on its own.) Instead, it is reliant on expert-labeled, human-generated CoT data.
It’s not yet clear what the exact limitations of the new system are and how far it might scale. We’ll need further testing to find out. Regardless, the current performance represents a remarkable achievement, and a clear confirmation that intuition-guided test-time search over program space is a powerful paradigm to build AI systems that can adapt to arbitrary tasks.
First of all, open-source replication of o3, facilitated by the ARC Prize competition in 2025, will be crucial to move the research community forward. A thorough analysis of o3’s strengths and limitations is necessary to understand its scaling behavior and the nature of its potential bottlenecks, and to anticipate what abilities further developments might unlock.
Moreover, ARC-AGI-1 is now saturating — besides o3’s new score, the fact is that a large ensemble of low-compute Kaggle solutions can now score 81% on the private eval.
We’re going to be raising the bar with a new version — ARC-AGI-2 - which has been in the works since 2022. It promises a major reset of the state-of-the-art. We want it to push the boundaries of AGI research with hard, high-signal evals that highlight current AI limitations.
Our early ARC-AGI-2 testing suggests it will be useful and extremely challenging, even for o3. And, of course, ARC Prize’s objective is to produce a high-efficiency and open-source solution in order to win the Grand Prize. We currently intend to launch ARC-AGI-2 alongside ARC Prize 2025 (estimated launch: late Q1).
Going forward, the ARC Prize Foundation will continue to create new benchmarks to focus the attention of researchers on the hardest unsolved problems on the way to AGI. We’ve started work on a third-generation benchmark which departs completely from the 2019 ARC-AGI format and incorporates some exciting new ideas.
Today, we’re also releasing data (results, attempts, and prompt) from our high-compute o3 testing and would like your help to analyze the results. In particular, we are very curious about the ~9% set of Public Eval tasks o3 was unable to solve, even with lots of compute, yet are straightforward for humans.
We invite the community to help us assess the characteristics of both solved and unsolved tasks.
To get your ideas flowing, here are 3 examples of tasks unsolved by high-compute o3.
See our full set of o3 testing data.
Here’s the prompt that was used in testing.
We’ve also created a new channel in our Discord named oai-analysis and we’d love to hear your analysis and insights there. Or tag us on X/Twitter @arcprize.
To sum up — o3 represents a significant leap forward. Its performance on ARC-AGI highlights a genuine breakthrough in adaptability and generalization, in a way that no other benchmark could have made as explicit.
o3 fixes the fundamental limitation of the LLM paradigm — the inability to recombine knowledge at test time — and it does so via a form of LLM-guided natural language program search. This is not just incremental progress; it is new territory, and it demands serious scientific attention.
...
Read the original on arcprize.org »
SAN FRANCISCO — A former OpenAI researcher known for blowing the whistle on the blockbuster artificial intelligence company, which is facing a swell of lawsuits over its business model, has died, authorities confirmed this week.
Suchir Balaji, 26, was found dead inside his Buchanan Street apartment on Nov. 26, San Francisco police and the Office of the Chief Medical Examiner said. Police had been called to the Lower Haight residence at about 1 p.m. that day, after receiving a call asking officers to check on his well-being, a police spokesperson said.
The medical examiner’s office determined the manner of death to be suicide and police officials this week said there is “currently, no evidence of foul play.”
Information he held was expected to play a key part in lawsuits against the San Francisco-based company.
Balaji’s death comes three months after he publicly accused OpenAI of violating U. S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sensation used by hundreds of millions of people across the world.
Its public release in late 2022 spurred a torrent of lawsuits against OpenAI from authors, computer programmers and journalists, who say the company illegally stole their copyrighted material to train its program and elevate its value past $150 billion.
The Mercury News and seven sister news outlets are among several newspapers, including the New York Times, to sue OpenAI in the past year.
In an interview with the New York Times published Oct. 23, Balaji argued OpenAI was harming businesses and entrepreneurs whose data were used to train ChatGPT.
“If you believe what I believe, you have to just leave the company,” he told the outlet, adding that “this is not a sustainable model for the internet ecosystem as a whole.”
Balaji grew up in Cupertino before attending UC Berkeley to study computer science. It was then he became a believer in the potential benefits that artificial intelligence could offer society, including its ability to cure diseases and stop aging, the Times reported. “I thought we could invent some kind of scientist that could help solve them,” he told the newspaper.
But his outlook began to sour in 2022, two years after joining OpenAI as a researcher. He grew particularly concerned about his assignment of gathering data from the internet for the company’s GPT-4 program, which analyzed text from nearly the entire internet to train its artificial intelligence program, the news outlet reported.
The practice, he told the Times, ran afoul of the country’s “fair use” laws governing how people can use previously published work. In late October, he posted an analysis on his personal website arguing that point.
No known factors “seem to weigh in favor of ChatGPT being a fair use of its training data,” Balaji wrote. “That being said, none of the arguments here are fundamentally specific to ChatGPT either, and similar arguments could be made for many generative AI products in a wide variety of domains.”
Reached by this news agency, Balaji’s mother requested privacy while grieving the death of her son.
In a Nov. 18 letter filed in federal court, attorneys for The New York Times named Balaji as someone who had “unique and relevant documents” that would support their case against OpenAI. He was among at least 12 people — many of them past or present OpenAI employees — the newspaper had named in court filings as having material helpful to their case, ahead of depositions.
Generative artificial intelligence programs work by analyzing an immense amount of data from the internet and using it to answer prompts submitted by users, or to create text, images or videos.
When OpenAI released its ChatGPT program in late 2022, it turbocharged an industry of companies seeking to write essays, make art and create computer code. Many of the most valuable companies in the world now work in the field of artificial intelligence, or manufacture the computer chips needed to run those programs. OpenAI’s own value nearly doubled in the past year.
News outlets have argued that OpenAI and Microsoft — which is in business with OpenAI and has also been sued by The Mercury News — have plagiarized and stolen their articles, undermining their business models.
“Microsoft and OpenAI simply take the work product of reporters, journalists, editorial writers, editors and others who contribute to the work of local newspapers — all without any regard for the efforts, much less the legal rights, of those who create and publish the news on which local communities rely,” the newspapers’ lawsuit said.
OpenAI has staunchly refuted those claims, stressing that all of its work remains legal under “fair use” laws.
“We see immense potential for AI tools like ChatGPT to deepen publishers’ relationships with readers and enhance the news experience,” the company said when the lawsuit was filed.
If you or someone you know is struggling with feelings of depression or suicidal thoughts, the 988 Suicide & Crisis Lifeline offers free, round-the-clock support, information and resources for help. Call or text the lifeline at 988, or see the 988lifeline.org website, where chat is available.
Jakob Rodgers is a senior breaking news reporter. Call, text or send him an encrypted message via Signal at 510-390-2351, or email him at jrodgers@bayareanewsgroup.com.
...
Read the original on www.mercurynews.com »
Psychologists at the University of York, who tested the impact that smartphones have on children’s behaviour for a new two-part documentary series for Channel 4, found that a ban in school impacted positively on sleep and mood.
Swiped: The School that Banned Smartphones airs this week on Channel 4
Swiped: The School that Banned Smartphones, hosted by Matt and Emma Willis, is based at The Stanway School in Colchester, and challenged a group of Year 8 pupils to give up their smartphones completely for 21 days.
The experiment, led by Professor Lisa Henderson and Dr Emma Sullivan from the University, saw pupils undergo a series of tests, with experts monitoring their behavioural changes throughout the period, and repeating the tests at the end of the three weeks to determine what effects giving up a phone really has on the brain, including sleep, wellbeing and cognition.
They found that students in the phone ban group experienced notable improvements in their sleep. On average, they were falling asleep 20 minutes faster than before the ban, and reported getting a full hour of extra rest each night.
Children in the phone ban group also went to bed, on average, 50 minutes earlier during the phone ban weeks compared to the week before the ban: for example, bedtime was 10:12 pm one week post-ban, versus 11:02 pm the week before the ban. These changes, which were self-reported, were also verified with sleep-tracking devices.
Better sleep also appeared to coincide with a boost in mood. Pupils in the phone ban group reported a 17% reduction in feelings related to depression and an 18% reduction in feelings related to anxiety, feeling generally less upset and nervous. Pupils who slept better even showed changes in their heart rate that signalled improved wellbeing.
Professor Lisa Henderson, from the University’s Department of Psychology, said: “This experiment incorporated a much longer abstinence period than previous studies, allowing us to see how a smartphone ban in school could impact on sleep, wellbeing, cognitive abilities, and alertness.
“The results showed that a smartphone ban in children under the age of 14 could have a positive impact on sleep, and connected to improved sleep, a boost in overall mood.”
Interestingly, the research didn’t show significant improvements in cognitive ability; the phone ban group showed a modest 3% boost in working memory, and there were no improvements in sustained attention. Researchers suggest that these results might mean that changes in cognitive ability could take longer than the study period of 21 days to materialise.
Dr Emma Sullivan, from the University’s Department of Psychology, said: “Our results come at an important time when government ministers in the UK are thinking about the impact of smartphones on young people, and when other parts of the world, such as Australia, are introducing a social media ban for under 16s.
“Evidence gathering is crucial to make these big decisions that impact on the lives of young people, and whilst more work is needed on this, these first sets of results are an interesting start to begin to have these better informed conversations.”
Swiped: The School that Banned Smartphones starts at 8pm, Wednesday, 11 December, on Channel 4.
...
Read the original on www.york.ac.uk »
The TW4 decentralized Energy Recovery Ventilator transfers clean outdoor air inside and moves polluted air outdoors, while harvesting ~90% of the heat energy - so you get fresh air without the downsides of heating or cooling it.
The WM12 is basically two TW4 modules side by side in a piece of tough polypropylene foam, so you can put it in a window.
The design is now in beta. People who request a unit by email will be contacted one at a time as units become available, starting with those most able to handle potential complications that come with the beta phase.
...
Read the original on www.openerv.ca »
We all deserve a voice assistant that doesn’t harvest our data and arbitrarily limit features. In the same way Home Assistant made private and local home automation a viable option, we believe the same can, and must be done for voice assistants.
Since we began developing our open-source voice assistant for Home Assistant, one key element has been missing - great hardware that’s simple to set up and use. Hardware that hears you, gives you clear feedback, and seamlessly fits into the home. Affordable and high-quality voice hardware will let more people join in on its development and allow anyone to preview the future of voice assistants today. Setting a standard for the next several years to base our development around.
We’re launching Home Assistant Voice Preview Edition to help accelerate our goal of not only matching the capabilities of existing voice assistants but surpassing them. This is inevitable: They’ll focus their efforts on monetizing voice, while our community will be focused on improving open and private voice. We’ll support the languages big tech ignores and provide a real choice in how you run voice in your home.
The era of open, private voice assistants begins now, and we’d love for you to be part of it.
Our main goal with Voice Preview Edition was to make the best hardware to get started with Assist, Home Assistant’s built-in voice assistant. If you’re already using other third-party hardware to run Assist, this will be a big upgrade. We prioritized its ability to hear commands, giving it an industry-leading dedicated audio processor and dual microphones - I’m always blown away by how well it picks up my voice around the room.
Next, we ensured it would blend into the home, giving it a sleek but unobtrusive design. That’s not to say it doesn’t have flair. When you get your hands on Voice Preview Edition the first thing you’ll notice is its premium-feeling injection-molded shell, which is semi-transparent, just like your favorite ’90s tech. The LED ring is also really eye-catching, and you can customize it to your heart’s content from full gamer RGB to subtle glow.
It’s hard to convey how nice the rotary dial is to use; its subtle clicks paired with LED animations are hard not to play with. Most importantly, the dial lets anyone in your home intuitively adjust the volume. The same can be said for the multipurpose button and mute switch (which physically cuts power to the microphone for ultimate privacy). We knew for it to work best, it needed to be out in the open, and let’s just say that Home Approval Factor was very front of mind when designing it.
We also worked hard to keep the price affordable and comparable to other voice assistant hardware at just $59 (that’s the recommended MSRP, and pricing will vary by retailer). This isn’t a preorder, it’s available now!
For some, our voice assistant is all they need; they just want to say a couple of commands, set timers, manage their shopping list, and control their most used devices. For others, we understand they want to ask their voice assistant to make whale sounds or to tell them how tall Taylor Swift is - this voice assistant doesn’t entirely do those things (yet). We think there is still more we can do before this is ready for every home, and until then, we’ll be selling this Preview of the future of voice assistants. We’ve built the best hardware on the market, and set a new standard for the coming years, allowing us to focus our development as we prepare our voice assistant for every home. Taking back our privacy isn’t for everyone - it’s a journey - and we want as many people as possible to join us early and make it better.
Many other voice assistants work with Home Assistant, but this one was built for Home Assistant. Unlike other voice hardware that can work with Assist, this doesn’t require flashing firmware or any assembly. You plug it into power, and it is seamlessly discovered by Home Assistant. A wizard instantly starts helping you set up your voice assistant, but critically, if you haven’t used voice before, it will quickly guide you through what you need to get the best experience.
Get up and running with Voice Preview Edition in minutes with our new wizard
This is not a DIY product. We’ve worked to make the experience as smooth as possible, with easy and fast updates and settings you can manage from the Home Assistant UI.
If you have been following our work on voice, you know we’ve tried a lot of different voice assistant hardware. Most available Assist-capable hardware is bad at its most important job - hearing your voice and then providing audiovisual feedback. That was really what drove us to build Voice Preview Edition.
Voice Preview Edition's mics and audio processors effortlessly hear commands through the loud music it is playing
Our Assist software could only do so much with substandard audio, and its functionality is massively improved with clear audio. The dual microphones combined with the XMOS audio processing chip are what make it so capable. Together, they give Voice Preview Edition echo cancellation, stationary noise removal, and auto gain control, which all adds up to clearer audio. This, combined with an ESP32-S3 with 8 MB of octal PSRAM - one of the fastest ESP and RAM combinations available - makes for an incredibly responsive device. This is the best Assist hardware you can buy today, and it will continue to give a great experience as Assist's feature set expands in the years to come.
Assist can do something almost no other voice assistant can achieve - it can run without the internet 🤯. You can speak to your Voice Preview Edition, and those commands can be processed completely within the walls of your home. At the time of writing this, there are some pretty big caveats, specifically that you need to speak a supported language and have pretty powerful hardware to run it (we recommend a Home Assistant system running on an Intel N100 or better).
If you use low-powered Home Assistant hardware, there is an easy and affordable internet-based solution: Home Assistant Cloud. This privacy-focused service allows you to offload your speech-to-text and text-to-speech processing, all while being very responsive and keeping your energy bill low. Speech-to-text is the harder of the two to run locally, and our cloud processing is almost always more accurate for more languages (visit our language support checker here).
Our goal is for Assist to run easily, affordably, and fully locally for all languages. As someone who has seen the rapid development of this technology over the past several years, I’m optimistic that this will happen, but until then, many languages have a good range of choices that provide strong privacy.
We are sharing the design files if you want to 3D print a new case… these ones were inevitable
We’re not just launching a new product, we’re open sourcing all of it. We built this for the Home Assistant community. Our community doesn’t want a single voice assistant, they want the one that works for them — they want choice. Creating a voice assistant is hard, and until now, parts of the solution were locked behind expensive licenses and proprietary software. With Voice Preview Edition being open source, we hope to bootstrap an ecosystem of voice assistants.
We tried to make every aspect of Voice Preview Edition customizable, which is actually pretty easy when you’re working hand-in-hand with ESPHome and Home Assistant. It works great with the stock settings, but if you’re so inclined, you can customize the Assist software, ESP32 firmware, and XMOS firmware.
Connecting Grove sensors allows you to use your Voice Preview Edition as a more traditional ESPHome device - here it is acting as a voice assistant and air monitor.
We also made the hardware easy to modify, inside and out. For instance, the included speaker is for alerts and voice prompts, but if you want to use it as a media player, connect a speaker to the included 3.5mm headphone jack and control it with software like Music Assistant. The included DAC is very clean and capable of streaming lossless audio. It can also be used as a very capable ESP32 device. On the bottom of the device is a Grove port (concealed under a cover that can be permanently removed), which allows you to connect a large ecosystem of sensors and accessories.
We’ve also made it quite painless to open, with easy-to-access screws and no clips. We even included exposed pads on the circuit board to make modifying it more straightforward. We’re providing all the 3D files so you can print your own components… even cartoon character-inspired ones. We’re not here to dictate what you can and can’t do with your device, and we tried our best to stay out of your way.
The beauty of Home Assistant and ESPHome is that you are never alone when fixing an issue or adding a feature. We made this device so the community could start working more closely together on voice; we even considered calling it the Community Edition. Ultimately, it is the community driving voice forward - either by taking part in development directly or by supporting it through buying official hardware or Home Assistant Cloud. So much has already been done for voice, and I can't wait to see the advancements we make together.
Home Assistant champions choice. Today, we're providing one of the best choices for voice hardware - one that is truly private and totally open. I'm so proud of the team for building a piece of hardware that works and feels this great; it's a really big leap for voice hardware. I expect it to be the hardware benchmark for open-voice projects for years to come. I would also like to thank our language leaders who are expanding the reach of this project, our testers of this Preview Edition, and everyone who has joined in our voice work over the past years.
The hardware is really only half the picture; it's the software that brings this all together. Mike Hansen has just written the Voice Chapter 8 blog to accompany this launch, which explains everything we've built over the past two years to make Assist work in the home today. He also highlights how Voice Preview Edition was built to help accelerate that development.
...
Read the original on www.home-assistant.io »
London Fixed Gear and Single-Speed is a community of predominantly fixed gear and single-speed cyclists in and around London, UK.
This site is supported almost exclusively by donations. Please consider donating a small amount regularly.
...
Read the original on www.lfgss.com »
Nadia Odunayo has so often been the smiling face on the door of this event, but did you know she's the founder and (more impressively!) one-woman development team behind The StoryGraph, a reading community of over a million book lovers? Her story is one of grit, insight, and the technical detail of what it takes to execute on the "one person framework".
Nadia Odunayo is the founder and CEO of The StoryGraph, the app that helps you to track your reading and choose which book to read next based on your mood and your favorite topics and themes. She previously worked at Pivotal Labs as a software engineer and originally learnt to code at Makers Academy in London. In her spare time she loves to take dance classes and, naturally, read!
...
Read the original on brightonruby.com »
This is a map of 400,000+ GitHub projects. Each dot is a project. Dots are close to each other if they have a lot of common stargazers.
The first step was to fetch who starred which repositories. For this I used a public dataset of GitHub activity events on Google BigQuery, considering only events between January 2020 and March 2023. This gave me more than 350 million stars. (Side note: it's mind-blowing to think that the Milky Way has more than 100 billion stars.)
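As a rough sketch of what that fetch might look like - the table name and fields below are assumptions about the public githubarchive dataset, not the author's exact query:

```python
# Sketch: pull (user, repo) star pairs from the GH Archive public dataset on BigQuery.
# The table name and fields are assumptions, not the author's exact query; a star
# shows up as a WatchEvent in the activity stream.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT actor.login AS user, repo.name AS repo
FROM `githubarchive.month.202001`
WHERE type = 'WatchEvent'
GROUP BY user, repo
"""

rows = client.query(sql).result()
stars = [(row.user, row.repo) for row in rows]
print(f"fetched {len(stars)} star events")
```

Repeating a query like this over each month in the chosen window would yield the full set of star events to aggregate per repository.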
In the second phase I computed the exact Jaccard similarity between each pair of repositories. This was too much for my home computer's 24 GB of RAM, but an AWS EC2 instance with 512 GB of RAM chewed through it in a few hours. (Side note: I tried other similarity measures too, but Jaccard gave the most believable results.)
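The Jaccard similarity of two repositories here is just the overlap of their stargazer sets divided by their union. A minimal illustration of the formula (my own sketch, not the author's optimized pipeline):

```python
# Jaccard similarity between repositories, where each repo maps to the set of users
# who starred it. Illustrative sketch only; the real pipeline works at a far larger scale.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """|A intersect B| / |A union B|: 1.0 for identical stargazer sets, 0.0 for disjoint ones."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

stargazers = {
    "vuejs/vue": {"alice", "bob", "carol"},
    "facebook/react": {"bob", "carol", "dave"},
    "torvalds/linux": {"eve"},
}

similarities = {
    (r1, r2): jaccard(stargazers[r1], stargazers[r2])
    for r1, r2 in combinations(stargazers, 2)
}
print(similarities)  # e.g. ('vuejs/vue', 'facebook/react') -> 0.5
```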
In the third phase I used a few clustering algorithms to group repositories together. I liked Leiden clustering the best and ended up with 1,000+ clusters.
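The post doesn't say which Leiden implementation was used; a common choice in Python is the leidenalg package on top of igraph, so the sketch below is an assumption about tooling rather than the author's code:

```python
# Leiden clustering on a similarity graph. Assumes the python-igraph and leidenalg
# packages; the author may well have used a different implementation.
import igraph as ig
import leidenalg

repos = ["vuejs/vue", "facebook/react", "torvalds/linux", "git/git"]
# Edges between repos whose Jaccard similarity is above some threshold, weighted by it.
edges = [(0, 1, 0.5), (2, 3, 0.4)]

g = ig.Graph()
g.add_vertices(len(repos))
g.add_edges([(a, b) for a, b, _ in edges])
g.es["weight"] = [w for _, _, w in edges]

partition = leidenalg.find_partition(
    g, leidenalg.ModularityVertexPartition, weights="weight"
)
for cluster_id, members in enumerate(partition):
    print(cluster_id, [repos[i] for i in members])
```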
In the fourth phase I used my own ngraph.forcelayout to compute the layouts of nodes inside each cluster, and a separate configuration to get the global layout of the clusters.
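ngraph.forcelayout is the author's own JavaScript library. As a rough stand-in for the idea of a per-cluster force-directed layout, here is a sketch using networkx's spring_layout - a substitution on my part, not what the project actually runs:

```python
# Rough stand-in for a per-cluster force-directed layout. The real project uses the
# JavaScript ngraph.forcelayout; this sketch substitutes networkx's spring_layout.
import networkx as nx

clusters = {
    0: [("vuejs/vue", "facebook/react"), ("facebook/react", "angular/angular")],
    1: [("torvalds/linux", "git/git")],
}

layouts = {}
for cluster_id, edges in clusters.items():
    g = nx.Graph()
    g.add_edges_from(edges)
    # spring_layout runs a force-directed (Fruchterman-Reingold) simulation.
    layouts[cluster_id] = nx.spring_layout(g, seed=42)

print(layouts[0])  # repo name -> 2D coordinates within that cluster
```

A second, coarser layout pass over the clusters themselves would then place each cluster's local coordinates onto the global map.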
In the fifth phase I needed to render the map. Unlike in my previous projects, I didn't want to reinvent the wheel, so I ended up using maplibre. All I had to do was convert my data into GeoJSON format, generate tiles with tippecanoe, and configure the browsing experience.
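A minimal sketch of that conversion step - the property names and file names are illustrative assumptions, not the project's actual schema:

```python
# Convert laid-out repositories into a GeoJSON FeatureCollection that tools like
# tippecanoe can turn into map tiles. Property and file names are assumptions.
import json

points = [
    {"repo": "vuejs/vue", "x": 12.3, "y": -4.5, "cluster": 0},
    {"repo": "torvalds/linux", "x": -7.8, "y": 9.1, "cluster": 1},
]

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [p["x"], p["y"]]},
        "properties": {"name": p["repo"], "cluster": p["cluster"]},
    }
    for p in points
]

with open("points.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)

# Then, roughly: tippecanoe -o tiles.mbtiles points.geojson
```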
A lot of the country labels were generated with the help of ChatGPT. If you find something wrong, you can right-click it, edit it, and send a pull request - I'd be grateful.
The query that I used to generate labels was:
To implement the search box, I used a simple dump of all repositories, indexed by their first letter (or their author's first letter). So when you type "a" in the search box, I look up all repositories that start with "a" and show them to you with a fuzzy matcher on the client.
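A tiny sketch of that idea - bucket repositories by first letter, then fuzzy-match within the bucket; difflib here is a stand-in for whatever matcher the site actually uses:

```python
# First-letter index plus fuzzy matching, sketched with difflib as a stand-in for
# the client-side matcher the site actually uses.
from collections import defaultdict
from difflib import get_close_matches

repos = ["anvaka/ngraph", "apache/spark", "torvalds/linux", "angular/angular"]

# Index each repo under the first letter of its author and of its name.
index = defaultdict(set)
for full_name in repos:
    author, name = full_name.split("/")
    index[author[0].lower()].add(full_name)
    index[name[0].lower()].add(full_name)

def search(query: str, limit: int = 10) -> list[str]:
    bucket = list(index.get(query[0].lower(), set()))
    return get_close_matches(query, bucket, n=limit, cutoff=0.1)

print(search("angular"))  # 'angular/angular' ranks first in its bucket
```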
Most of the time I like the data presented by this project better than the visual design of the map. If you have experience designing maps, or just have a wonderful design vision for how it should look - please don't hesitate to share. I'm still looking for a style that matches the data.
If you find this project useful and would like to support it - please join the support group. If you need any help with this project or have any questions - don't hesitate to open an issue here or ping me on Twitter.
Thank you to all my friends and supporters who helped me to get this project off the ground: Ryan, Andrey, Alex, Dmytro. You are awesome!
Thank you to my dear daughter Louise for making a logo for this project. I love you!
Endless gratitude to all open source contributors who made this project possible. I’m standing on the shoulders of giants.
I’m releasing this repository under the MIT license. However, if you use the data in your own work, please consider giving attribution to this project.
...
Read the original on github.com »
Sophie Germain wrote, “It has been said that algebra is but written geometry and geometry is but diagrammatic algebra.”
...
Read the original on www.futilitycloset.com »