10 interesting stories served every morning and every evening.
* Yesterday, Pebble watch software was ~95% open source. Today, it’s 100% open source. You can download, compile and run all the software you need to use your Pebble. We just published the source code for the new Pebble mobile app!
* Pebble Appstore now has a publicly available backup and supports multiple feeds, providing long term reliability through decentralization. We’ve launched our own feed and Developer Dashboard.
* Pebble Time 2 schedule update (aiming to begin shipping in January, with most arriving on wrists in March/April)
* New Tick Talk episode #4 is up, with Pebble Time 2 demos!
Pre-production Pebble Time 2 (Black/Red colourway) in all its glory
Over the last year, and especially in the last week, I’ve chatted with tons of people in the Pebble community. One of the main questions people have is ‘how do I know that my new Pebble watch will continue to work long into the future?’. It’s an extremely valid question and concern - one that I share as a fellow Pebble wearer. I called this out specifically in my blog post announcing the relaunch in January 2025. How is this time round going to be different from last time?
There are two pieces to making Pebble sustainable long term - hardware and software.
Nothing lasts forever, especially an inexpensive gadget like a Pebble. We want to be able to keep manufacturing these watches long into the future - mostly because I will always want one on my wrist! The company I set up to relaunch Pebble, Core Devices, is self-funded, built without investors, and extremely lean. As long as we stay profitable (i.e. we don’t lose money), we will continue to manufacture new watches.
We’re also making sure that our new watches are more repairable than old Pebble watches. The back cover of Pebble Time 2 is screwed in. You can remove the back cover and replace the battery.
We’ve also published electrical and mechanical design files for Pebble 2 Duo. Yes, you can download the schematic (includes KiCad project files) right now on Github! This should give you a nice jumpstart to designing your own PebbleOS-compatible device.
Last time round, barely any of the Pebble software was open source. This made it very hard for the Pebble community to make improvements to their watches after the company behind Pebble shut down. Things are different now! This whole relaunch came about primarily because Google open sourced PebbleOS (thank you!). Yesterday, the software that powers Pebble watches was around 95% open source. As of today, it’s now 100%. This means that if Core Devices were to disappear into a black hole, you have all the source code you need to build, run and improve the software behind your Pebble.
I confess that I misunderstood why 95% was much less sustainable than 100% until recently. I discuss this in more detail in my latest Tick Talk episode (check it out). Long story short - I’m an Android user and was happy to sideload the old Pebble APK on my phone, but iPhone and other Android users have basically been stuck without an easily available Pebble mobile companion app for years.
Here’s how we’re making sure the 3 main Pebble software components are open source and guaranteed to work long into the future:
PebbleOS - software that runs on your watch itself. This has been 100% open source since January and we’ve committed to open sourcing all the improvements we’ve made → github.com/coredevices/PebbleOS. You can download the source code, compile PebbleOS and easily install it over Bluetooth on your new Pebble. Textbook definition of open source!
Pebble mobile companion app - the app for your iPhone or Android phone. Without the app, your Pebble is basically a paperweight. When Pebble Tech Corp died, the lack of an open source mobile app made it difficult for anyone to continue to use their watches. We had to build an entirely new app (get it here). Today, our app is 100% open source on Github - ensuring that what happened before cannot happen again. Want to learn more about how we built the new app cross-platform using Kotlin Multiplatform? Watch Steve’s presentation at Droidcon.
Developer tools and Pebble Appstore - this software enables people to build and share their watchapps and watchfaces.
In the case of dev tools, just being open source is not enough. They needed to be updated to work on modern computers. Before we made improvements, the state of the art of Pebble app development was using an Ubuntu VirtualBox VM with Python 2! Over the summer, our incredibly productive intern upgraded all the SDK and dev tools and created a new way to develop Pebble apps in the browser. You should check them out!
Then there’s the Pebble Appstore. This is a collection of nearly 15,000 watchfaces and watchapps that you - the Pebble community - developed between 2012 and July 2018. When Fitbit pulled the plug on the original Pebble Appstore, the Rebble Foundation downloaded a copy of all the apps and faces, and set up a new web service to let users of the old Pebble app continue to download and use watchfaces. This was an incredible effort, one that I have used thousands of times, and I am a happy paying subscriber. But it’s still centralized - if their server disappears, there is no freely available backup.
To compensate for that, today we’re launching two new things:
* The Pebble mobile app will soon (later this week) be able to subscribe to multiple appstore ‘feeds’. This is similar to open source package managers like pip, AUR, APT, etc. Anyone can create a Pebble-compatible appstore feed and users will be able to browse apps from that feed in the Pebble mobile app.
* We’ve created our own Pebble Appstore feed (appstore-api.repebble.com) and new Developer Dashboard. Our feed (fyi powered by 100% new software) is configured to back up an archive of all apps and faces to Archive.org (backup will gradually complete over the next week). Today, our feed only has a subset of all Pebble watchfaces and apps (thank you aveao for creating Pebble Archive!). Developers - you can upload your existing or new apps right now! We hope that this sets a standard for openness and we encourage all feeds to publish a freely and publicly available archive.
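To make the multiple-feed idea concrete, here is a rough sketch of what browsing feeds could look like from a client’s point of view. It is purely illustrative: the post doesn’t document the feed schema or endpoints, so the path and field names below are invented assumptions.

# Hypothetical sketch only - the real feed API may differ entirely.
import json
from urllib.request import urlopen

feeds = [
    "https://appstore-api.repebble.com",  # Core Devices' feed (from the post)
    "https://example.org/pebble-feed",    # any third-party feed (hypothetical)
]
for base in feeds:
    # Assumed index endpoint and JSON shape, invented for illustration.
    with urlopen(f"{base}/api/v1/apps?limit=5") as resp:
        for app in json.load(resp).get("apps", []):
            print(app.get("title"), "-", app.get("type"))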
Important to note - developers will still be able to charge money for their apps and faces, using Kiezel pay or other services. This change does not preclude them from doing that, in fact it makes it even easier - I could see some developers creating a paid-only feed. As I recently wrote, we’re also working on other ways for Pebble developers to earn money by publishing fun, beautiful and creative Pebble apps.
Another important note - some binary blobs and other non-free software components are used today in PebbleOS and the Pebble mobile app (e.g. the heart rate sensor on PT2, the Memfault library, and others). Optional non-free web services, like the Wispr-flow API speech recognizer, are also used. These non-free software components are not required - you can compile and run Pebble watch software without them. This will always be the case. More non-free software components may appear in our software in the future. The core Pebble watch software stack (everything you need to use your Pebble watch) will always be open source.
Pre-production Pebble Time 2. These watches are not final quality! We are still tweaking and tuning everything.
We’re currently in the middle of the Pebble Time 2 design verification test (DVT) phase. After we finish that, we go into production verification test (PVT) and then mass production (MP). So far, things are proceeding according to the schedule update I shared last month, but that is extraordinarily subject to change. We still have a lot of testing (especially waterproofing and environmental) to go. If we find problems (which is likely), we will push the schedule back to make improvements to the product.
The one major complicating factor is the timing of Chinese New Year (CNY). It’s early next year - factories will shut down for 3 weeks starting around the end of January. After restarting, things always take a week or two to get back to full speed.
We are trying our best to get into mass production and ship out at most several thousand Pebble Time 2s before CNY. It’s going to be very tight 🤞. More likely is that production will begin after CNY, then we need to transfer the watches to our fulfillment center, and ship them out. Realistically, at this time we’re forecasting that the majority of people will receive their PT2 in March and April. Please keep in mind that things may still change.
There will be 4 colour options for PT2 - black/black, black/red, silver/blue, silver/(white most likely). Let me be crystal clear - no one has picked a colour yet 😃. In a few weeks, I will send out an email asking everyone who pre-ordered a Pebble Time 2 to select which colour they would like to receive. Please do not email us asking when this email will be sent out - no one has been invited yet to do this. I will post here after all emails have gone out.
On a related note, I am extremely happy that we built and shipped Pebble 2 Duo. Not only is it an awesome watch, it was also a phenomenal way for us to exercise our production muscles and ease back into the systematic flow of building and shipping smartwatches.
A video is worth a million words - so I encourage you to watch me demo the Pebble Time 2 watches I just received this week. Keep in mind these watches are PRE-PRODUCTION, which means their parts have imperfect qualities! Subject to change!
This link opens the YouTube video at the Pebble Time 2 demo part!
...
Read the original on ericmigi.com »
Our newest model, Claude Opus 4.5, is available today. It’s intelligent, efficient, and the best model in the world for coding, agents, and computer use. It’s also meaningfully better at everyday tasks like deep research and working with slides and spreadsheets. Opus 4.5 is a step forward in what AI systems can do, and a preview of larger changes to how work gets done. Claude Opus 4.5 is state-of-the-art on tests of real-world software engineering.

Opus 4.5 is available today on our apps, our API, and on all three major cloud platforms. If you’re a developer, simply use claude-opus-4-5-20251101 via the Claude API. Pricing is now $5/$25 per million tokens—making Opus-level capabilities accessible to even more users, teams, and enterprises.

Alongside Opus, we’re releasing updates to the Claude Developer Platform, Claude Code, and our consumer apps. There are new tools for longer-running agents and new ways to use Claude in Excel, Chrome, and on desktop. In the Claude apps, lengthy conversations no longer hit a wall. See our product-focused section below for details.

As our Anthropic colleagues tested the model before release, we heard remarkably consistent feedback. Testers noted that Claude Opus 4.5 handles ambiguity and reasons about tradeoffs without hand-holding. They told us that, when pointed at a complex, multi-system bug, Opus 4.5 figures out the fix. They said that tasks that were near-impossible for Sonnet 4.5 just a few weeks ago are now within reach. Overall, our testers told us that Opus 4.5 just “gets it.”

Many of our customers with early access have had similar experiences. Here are some examples of what they told us:

* Opus models have always been “the real SOTA” but have been cost prohibitive in the past. Claude Opus 4.5 is now at a price point where it can be your go-to model for most tasks. It’s the clear winner and exhibits the best frontier task planning and tool calling we’ve seen yet.
* Claude Opus 4.5 delivers high-quality code and excels at powering heavy-duty agentic workflows with GitHub Copilot. Early testing shows it surpasses internal coding benchmarks while cutting token usage in half, and is especially well-suited for tasks like code migration and code refactoring.
* Claude Opus 4.5 beats Sonnet 4.5 and competition on our internal benchmarks, using fewer tokens to solve the same problems. At scale, that efficiency compounds.
* Claude Opus 4.5 delivers frontier reasoning within Lovable’s chat mode, where users plan and iterate on projects. Its reasoning depth transforms planning—and great planning makes code generation even better.
* Claude Opus 4.5 excels at long-horizon, autonomous tasks, especially those that require sustained reasoning and multi-step execution. In our evaluations it handled complex workflows with fewer dead-ends. On Terminal Bench it delivered a 15% improvement over Sonnet 4.5, a meaningful gain that becomes especially clear when using Warp’s Planning Mode.
* Claude Opus 4.5 achieved state-of-the-art results for complex enterprise tasks on our benchmarks, outperforming previous models on multi-step reasoning tasks that combine information retrieval, tool use, and deep analysis.
* Claude Opus 4.5 delivers measurable gains where it matters most: stronger results on our hardest evaluations and consistent performance through 30-minute autonomous coding sessions.
* Claude Opus 4.5 represents a breakthrough in self-improving AI agents. For automation of office tasks, our agents were able to autonomously refine their own capabilities—achieving peak performance in 4 iterations while other models couldn’t match that quality after 10. They also demonstrated the ability to learn from experience across technical tasks, storing insights and applying them later.
* Claude Opus 4.5 is a notable improvement over the prior Claude models inside Cursor, with improved pricing and intelligence on difficult coding tasks.
* Claude Opus 4.5 is yet another example of Anthropic pushing the frontier of general intelligence. It performs exceedingly well across difficult coding tasks, showcasing long-term goal-directed behavior.
* Claude Opus 4.5 delivered an impressive refactor spanning two codebases and three coordinated agents. It was very thorough, helping develop a robust plan, handling the details and fixing tests. A clear step forward from Sonnet 4.5.
* Claude Opus 4.5 handles long-horizon coding tasks more efficiently than any model we’ve tested. It achieves higher pass rates on held-out tests while using up to 65% fewer tokens, giving developers real cost control without sacrificing quality.
* We’ve found that Opus 4.5 excels at interpreting what users actually want, producing shareable content on the first try. Combined with its speed, token efficiency, and surprisingly low cost, it’s the first time we’re making Opus available in Notion Agent.
* Claude Opus 4.5 excels at long-context storytelling, generating 10-15 page chapters with strong organization and consistency. It’s unlocked use cases we couldn’t reliably deliver before.
* Claude Opus 4.5 sets a new standard for Excel automation and financial modeling. Accuracy on our internal evals improved 20%, efficiency rose 15%, and complex tasks that once seemed out of reach became achievable.
* Claude Opus 4.5 is the only model that nails some of our hardest 3D visualizations. Polished design, tasteful UX, and excellent planning & orchestration - all with more efficient token usage. Tasks that took previous models 2 hours now take thirty minutes.
* Claude Opus 4.5 catches more issues in code reviews without sacrificing precision. For production code review at scale, that reliability matters.
* Based on testing with Junie, our coding agent, Claude Opus 4.5 outperforms Sonnet 4.5 across all benchmarks. It requires fewer steps to solve tasks and uses fewer tokens as a result. This indicates that the new model is more precise and follows instructions more effectively — a direction we’re very excited about.
* The effort parameter is brilliant. Claude Opus 4.5 feels dynamic rather than overthinking, and at lower effort delivers the same quality we need while being dramatically more efficient. That control is exactly what our SQL workflows demand.
* We’re seeing 50% to 75% reductions in both tool calling errors and build/lint errors with Claude Opus 4.5. It consistently finishes complex tasks in fewer iterations with more reliable execution.
* Claude Opus 4.5 is smooth, with none of the rough edges we’ve seen from other frontier models. The speed improvements are remarkable.

We give prospective performance engineering candidates a notoriously difficult take-home exam. We also test new models on this exam as an internal benchmark. Within our prescribed 2-hour time limit, Claude Opus 4.5 scored higher than any human candidate ever [1].

The take-home test is designed to assess technical ability and judgment under time pressure. It doesn’t test for other crucial skills candidates may possess, like collaboration, communication, or the instincts that develop over years. But this result—where an AI model outperforms strong candidates on important technical skills—raises questions about how AI will change engineering as a profession. Our Societal Impacts and Economic Futures research is aimed at understanding these kinds of changes across many fields. We plan to share more results soon.

Software engineering isn’t the only area in which Claude Opus 4.5 has improved. Capabilities are higher across the board—Opus 4.5 has better vision, reasoning, and mathematics skills than its predecessors, and it is state-of-the-art in many domains [2]:

* Opus 4.5 writes better code, leading across 7 out of 8 programming languages on SWE-bench Multilingual.
* Opus 4.5 can solve challenging coding problems with ease, with a 10.6% jump over Sonnet 4.5 on Aider Polyglot.
* Opus 4.5 improves on frontier agentic search with a significant jump on BrowseComp-Plus.
* Opus 4.5 stays on track over the long haul, earning 29% more than Sonnet 4.5 on Vending-Bench.

The model’s capabilities outpace some of the benchmarks we use in our tests. A common benchmark for agentic capabilities is τ2-bench, which measures the performance of agents in real-world, multi-turn tasks. In one scenario, models have to act as an airline service agent helping a distressed customer. The benchmark expects models to refuse a modification to a basic economy booking since the airline doesn’t allow changes to that class of tickets. Instead, Opus 4.5 found an insightful (and legitimate) way to solve the problem: upgrade the cabin first, then modify the flights.

Let me think about what options I have within my policy:
1. Modify flights - Basic economy cannot be modified. This is clear in the policy.
2. Change cabin - Wait, let me check this option! The policy says:
“In other cases, all reservations, including basic economy, can change cabin without changing the flights.”
But he wants to change the flights, not just the cabin. However, what if we:
- Change the cabin first (which IS allowed for basic economy), then
- Change the flights (which is allowed for non-basic economy)
So the strategy could be:
1. Upgrade his cabin from basic economy to economy (or business)
2. Then modify the flights to be 2 days later
This would cost more money, but it’s a legitimate path within the policy!

The benchmark technically scored this as a failure because Claude’s way of helping the customer was unanticipated. But this kind of creative problem solving is exactly what we’ve heard about from our testers and customers—it’s what makes Claude Opus 4.5 feel like a meaningful step forward.

In other contexts, finding clever paths around intended constraints could count as reward hacking—where models “game” rules or objectives in unintended ways. Preventing such misalignment is one of the objectives of our safety testing, discussed in the next section.

As we state in our system card, Claude Opus 4.5 is the most robustly aligned model we have released to date and, we suspect, the best-aligned frontier model by any developer. It continues our trend towards safer and more secure models. In our evaluation, “concerning behavior” scores measure a very wide range of misaligned behavior, including both cooperation with human misuse and undesirable actions that the model takes at its own initiative [3].

Our customers often use Claude for critical tasks. They want to be assured that, in the face of malicious attacks by hackers and cybercriminals, Claude has the training and the “street smarts” to avoid trouble. With Opus 4.5, we’ve made substantial progress in robustness against prompt injection attacks, which smuggle in deceptive instructions to fool the model into harmful behavior. Opus 4.5 is harder to trick with prompt injection than any other frontier model in the industry. Note that this benchmark includes only very strong prompt injection attacks. It was developed and run by Gray Swan.

You can find a detailed description of all our capability and safety evaluations in the Claude Opus 4.5 system card.

New on the Claude Developer Platform

As models get smarter, they can solve problems in fewer steps: less backtracking, less redundant exploration, less verbose reasoning. Claude Opus 4.5 uses dramatically fewer tokens than its predecessors to reach similar or better outcomes.

But different tasks call for different tradeoffs. Sometimes developers want a model to keep thinking about a problem; sometimes they want something more nimble. With our new effort parameter on the Claude API, you can decide to minimize time and spend or maximize capability.

Set to a medium effort level, Opus 4.5 matches Sonnet 4.5’s best score on SWE-bench Verified, but uses 76% fewer output tokens. At its highest effort level, Opus 4.5 exceeds Sonnet 4.5 performance by 4.3 percentage points—while using 48% fewer tokens.

With effort control, context compaction, and advanced tool use, Claude Opus 4.5 runs longer, does more, and requires less intervention. Our context management and memory capabilities can dramatically boost performance on agentic tasks. Opus 4.5 is also very effective at managing a team of subagents, enabling the construction of complex, well-coordinated multi-agent systems. In our testing, the combination of all these techniques boosted Opus 4.5’s performance on a deep research evaluation by almost 15 percentage points [4].

We’re making our Developer Platform more composable over time. We want to give you the building blocks to construct exactly what you need, with full control over efficiency, tool use, and context management.
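For developers, a minimal sketch of calling the new model through the Anthropic Python SDK follows. The model ID comes from the announcement above; the exact spelling and accepted values of the effort control are not given here, so that line is an assumption and is left commented out.

# Minimal sketch, assuming the Anthropic Python SDK (pip install anthropic).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-opus-4-5-20251101",  # model ID from the announcement
    max_tokens=1024,
    # effort="medium",  # hypothetical spelling of the new effort control
    messages=[{"role": "user", "content": "Refactor this function for clarity."}],
)
print(message.content[0].text)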
Products like Claude Code show what’s possible when the kinds of upgrades we’ve made to the Claude Developer Platform come together. Claude Code gains two upgrades with Opus 4.5. Plan Mode now builds more precise plans and executes more thoroughly—Claude asks clarifying questions upfront, then builds a user-editable plan.md file before executing. Claude Code is also now available in our desktop app, letting you run multiple local and remote sessions in parallel: perhaps one agent fixes bugs, another researches GitHub, and a third updates docs.

For Claude app users, long conversations no longer hit a wall—Claude automatically summarizes earlier context as needed, so you can keep the chat going. Claude for Chrome, which lets Claude handle tasks across your browser tabs, is now available to all Max users. We announced Claude for Excel in October, and as of today we’ve expanded beta access to all Max, Team, and Enterprise users. Each of these updates takes advantage of Claude Opus 4.5’s market-leading performance in using computers, spreadsheets, and handling long-running tasks.

For Claude and Claude Code users with access to Opus 4.5, we’ve removed Opus-specific caps. For Max and Team Premium users, we’ve increased overall usage limits, meaning you’ll have roughly the same number of Opus tokens as you previously had with Sonnet. We’re updating usage limits to make sure you’re able to use Opus 4.5 for daily work. These limits are specific to Opus 4.5. As future models surpass it, we expect to update limits as needed.
...
Read the original on www.anthropic.com »
Hi! I’m Eric Wastl. I make Advent of Code. I hope you like it! I also make lots of other things. I’m on Bluesky, Mastodon, and GitHub.
Advent of Code is an Advent calendar of small programming puzzles for a variety of skill levels that can be solved in any programming language you like. People use them as interview prep, company training, university coursework, practice problems, a speed contest, or to challenge each other.
You don’t need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far. Nor do you need a fancy computer; every problem has a solution that completes in at most 15 seconds on ten-year-old hardware.
If you’d like to support Advent of Code, you can do so indirectly by helping to [Share] it with others or directly via AoC++.
If you get stuck, try your solution against the examples given in the puzzle; you should get the same answers. If not, re-read the description. Did you misunderstand something? Is your program doing something you don’t expect? After the examples work, if your answer still isn’t correct, build some test cases for which you can verify the answer by hand and see if those work with your program. Make sure you have the entire puzzle input. If you’re still stuck, maybe ask a friend for help, or come back to the puzzle later. You can also ask for hints in the subreddit.
Is there an easy way to select entire code blocks? You should be able to triple-click code blocks to select them. You’ll need JavaScript enabled.
#!/usr/bin/env perl
use warnings;
use strict;
print "You can test it out by ";
print "triple-clicking this code.\n";
How does authentication work? Advent of Code uses OAuth to confirm your identity through other services. When you log in, you only ever give your credentials to that service - never to Advent of Code. Then, the service you use tells the Advent of Code servers that you’re really you. In general, this reveals no information about you beyond what is already public; here are examples from Reddit and GitHub. Advent of Code will remember your unique ID, names, URL, and image from the service you use to authenticate.
Why was this puzzle so easy / hard? The difficulty and subject matter varies throughout each event. Very generally, the puzzles get more difficult over time, but your specific skillset will make each puzzle significantly easier or harder for you than someone else. Making puzzles is tricky.
Why do the puzzles unlock at midnight EST/UTC-5? Because that’s when I can consistently be available to make sure everything is working. I also have a family, a day job, and even need sleep occasionally. If you can’t participate at midnight, that’s not a problem; if you want to race, many people use private leaderboards to compete with people in their area.
I find the text on the site hard to read. Is there a high contrast mode? There is a high contrast alternate stylesheet. Firefox supports these by default (View -> Page Style -> High Contrast).
I have a puzzle idea! Can I send it to you? Please don’t. Because of legal issues like copyright and attribution, I don’t accept puzzle ideas, and I won’t even read your email if it looks like one just in case I use parts of it by accident.
Did I find a bug with a puzzle? Once a puzzle has been out for even an hour, many people have already solved it; after that point, bugs are very unlikely. Start by asking on the subreddit.
Should I try to get a fast solution time? Maybe. Solving puzzles is hard enough on its own, but trying for a fast time also requires many additional skills and a lot of practice; speed-solves often look nothing like code that would pass a code review. If that sounds interesting, go for it! However, you should do Advent of Code in a way that is useful to you, and so it is completely fine to choose an approach that meets your goals and ignore speed entirely.
Why did the number of days per event change? It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).
What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn’t compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I’ve made it so you can share a read-only view of your private leaderboard. Please don’t use this feature or data to create a “new” global leaderboard.)
While trying to get a fast time on a private leaderboard, may I use AI / watch streamers / check the solution threads / ask a friend for help / etc? If you are a member of any private leaderboards, you should ask the people that run them what their expectations are of their members. If you don’t agree with those expectations, you should find a new private leaderboard or start your own! Private leaderboards might have rules like maximum runtime, allowed programming language, what time you can first open the puzzle, what tools you can use, or whether you have to wear a silly hat while working.
Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
Can I copy/redistribute part of Advent of Code? Please don’t. Advent of Code is free to use, not free to copy. If you’re posting a code repository somewhere, please don’t include parts of Advent of Code like the puzzle text or your inputs. If you’re making a website, please don’t make it look like Advent of Code or name it something similar.
...
Read the original on adventofcode.com »
Voyager 1 Is About to Reach One Light-day from Earth
Artist’s concept of the Voyager 1 spacecraft speeding through interstellar space. (Image: NASA / JPL‑Caltech)
After nearly 50 years in space, NASA’s Voyager 1 is about to hit a historic milestone. By November 15, 2026, it will be 16.1 billion miles (25.9 billion km) away, meaning a radio signal will take a full 24 hours—one light-day—to reach it. For context, a light-year is the distance light travels in a year, about 5.88 trillion miles (9.46 trillion km), so one light-day is just a tiny fraction of that.
Launched in 1977 to explore Jupiter and Saturn, Voyager 1 entered interstellar space in 2012, becoming the most distant human-made object ever. Traveling at around 11 miles per second (17.7 km/s), it adds roughly 3.5 astronomical units (the distance from Earth to the Sun) each year. Even after decades in the harsh environment of space, Voyager 1 keeps sending data thanks to its radioisotope thermoelectric generators, which will last into the 2030s.
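The arithmetic behind the milestone is easy to check with the article’s own figures. A quick illustrative sketch in Python:

# Back-of-the-envelope check of the figures quoted above (illustrative only).
SPEED_OF_LIGHT_KM_S = 299_792.458
AU_KM = 149_597_870.7

distance_km = 25.9e9  # Voyager 1's distance on Nov 15, 2026, per the article
hours = distance_km / SPEED_OF_LIGHT_KM_S / 3600
print(f"One-way light time: {hours:.1f} hours")  # ~24.0 hours

speed_km_s = 17.7  # cruise speed from the article
au_per_year = speed_km_s * 365.25 * 24 * 3600 / AU_KM
print(f"Distance added per year: {au_per_year:.1f} AU")  # ~3.7, near the article's "roughly 3.5"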
Communicating with Voyager 1 is slow. Commands now take about a day to arrive, with another day for confirmation. Compare that to the Moon (1.3 seconds), Mars (up to 4 minutes), and Pluto (nearly 7 hours). The probe’s distance makes every instruction a patient exercise in deep-space operations. To reach our closest star, Proxima Centauri, even at light speed, would take over four years—showing just how tiny a light-day is in cosmic terms.
The ‘Pale Blue Dot’ image of Earth, captured by Voyager 1. (Image: NASA / Public Domain)
Voyager 1’s journey is more than a record for distance. From its planetary flybys to the iconic ‘Pale Blue Dot’ image, it reminds us of the vast scale of the solar system and the incredible endurance of a spacecraft designed to keep exploring, even without return.
...
Read the original on scienceclock.com »
In my recent analysis of YouTube’s information density I included the results from an advanced statistical analysis on the number of videos present on the home page, which projected that around May 2026 there would only be one lonely video on the home screen.
Amazingly, a disgruntled Googler leaked a recording of how YouTube’s PM org handled the criticism as it sat at the top of Hacker News for a whole day for some reason.
The net result is that after months of hard work by YouTube engineers, the other day I fired up YouTube on an Apple TV and was graced with this:
Let’s analyze this picture and count the number of videos on the home screen:
Unfortunately, the YouTube PM org’s myopia is accelerating: with this data I now project that there will be zero videos on the home screen around May 2026, moved up from September.
Apparently Poe’s Law applies to Google PMs, satire is dead, and maybe our mandatory NeuraLinks are coming sooner than I thought.
...
Read the original on jayd.ml »
Ever since git init ten years ago, Zig has been hosted on GitHub. Unfortunately, when it sold out to Microsoft, the clock started ticking. “Please just give me 5 years before everything goes to shit,” I thought to myself. And here we are, 7 years later, living on borrowed time.
Putting aside GitHub’s relationship with ICE, it’s abundantly clear that the engineering excellence that created GitHub’s success is no longer driving it. Priorities and the engineering culture have rotted, leaving users inflicted with some kind of bloated, buggy JavaScript framework in the name of progress. Stuff that used to be snappy is now sluggish and often entirely broken.
Most importantly, Actions has inexcusable bugs while being completely neglected. After the CEO of GitHub said to “embrace AI or get out”, it seems the lackeys at Microsoft took the hint, because GitHub Actions started “vibe-scheduling”: choosing jobs to run seemingly at random. Combined with other bugs and the inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked.
Rather than wasting donation money on more CI hardware to work around this crumbling infrastructure, we’ve opted to switch Git hosting providers instead.
As a bonus, we look forward to fewer violations (exhibit A, B, C) of our strict no LLM / no AI policy, which I believe are at least in part due to GitHub aggressively pushing the “file an issue with Copilot” feature in everyone’s face.
The only concern we have in leaving GitHub behind has to do with GitHub Sponsors. This product was key to Zig’s early fundraising success, and it remains a large portion of our revenue today. I can’t thank Devon Zuegel enough. She appeared like an angel from heaven and single-handedly made GitHub into a viable source of income for thousands of developers. Under her leadership, the future of GitHub Sponsors looked bright, but sadly for us, she, too, moved on to bigger and better things. Since she left, that product as well has been neglected and is already starting to decline.
Although GitHub Sponsors is a large fraction of Zig Software Foundation’s donation income, we consider it a liability. We humbly ask if you, reader, are currently donating through GitHub Sponsors, that you consider moving your recurring donation to Every.org, which is itself a non-profit organization.
As part of this, we are sunsetting the GitHub Sponsors perks. These perks are things like getting your name onto the home page, and getting your name into the release notes, based on how much you donate monthly. We are working with the folks at Every.org so that we can offer the equivalent perks through that platform.
Effective immediately, I have made ziglang/zig on GitHub read-only, and the canonical origin/master branch of the main Zig project repository is https://codeberg.org/ziglang/zig.git.
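For anyone with an existing clone, pointing it at the new canonical remote is a single standard git command:

git remote set-url origin https://codeberg.org/ziglang/zig.git
git fetch origin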
Thank you to the Forgejo contributors who helped us with our issues switching to the platform, as well as the Codeberg folks who worked with us on the migration - in particular Earl Warren, Otto, Gusted, and Mathieu Fenniak.
In the end, we opted for a simple strategy, sidestepping GitHub’s aggressive vendor lock-in: leave the existing issues open and unmigrated, but start counting issues at 30000 on Codeberg so that all issue numbers remain unambiguous. Let us please consider the GitHub issues that remain open as metaphorically “copy-on-write”. Please leave all your existing GitHub issues and pull requests alone. No need to move your stuff over to Codeberg unless you need to make edits, additional comments, or rebase. We’re still going to look at the already open pull requests and issues; don’t worry.
In this modern era of acquisitions, weak antitrust regulations, and platform capitalism leading to extreme concentrations of wealth, non-profits remain a bastion defending what remains of the commons.
...
Read the original on ziglang.org »
I’m done. I’m done arriving at hotels and discovering that they have removed the bathroom door. Something that should be as standard as having a bed, has been sacrificed in the name of “aesthetic”.
I get it, you can save on material costs and make the room feel bigger, but what about my dignity??? I can’t save that when you don’t include a bathroom door.
It’s why I’ve built this website, where I compiled hotels that are guaranteed to have bathroom doors, and hotels that need to work on privacy.
I’ve emailed hundreds of hotels and asked them two things: do your doors close all the way, and are they made of glass? Everyone that says yes to their doors closing, and no to being made of glass, has been sorted by price range and city so you can easily find places to stay that are guaranteed to have a bathroom door.
Quickly check to see if the hotel you’re thinking of booking has been reported as lacking in doors by a previous guest.
Finally, this passion project could not exist without people submitting hotels without bathroom doors for public shaming. If you’ve stayed at a doorless hotel send me an email with the hotel name to bringbackdoors@gmail.com, or send me a DM on Instagram with the hotel name and a photo of the doorless setup to be publicly posted.
Let’s name and shame these hotels to protect the dignity of future travelers.
...
Read the original on bringbackdoors.com »
An indirect prompt injection in an implementation blog can manipulate Antigravity to invoke a malicious browser subagent in order to steal credentials and sensitive code from a user’s IDE.
Antigravity is Google’s new agentic code editor. In this article, we demonstrate how an indirect prompt injection can manipulate Gemini to invoke a malicious browser subagent in order to steal credentials and sensitive code from a user’s IDE.
Google’s approach is to include a disclaimer about the existing risks, which we address later in the article.
Let’s consider a use case in which a user would like to integrate Oracle ERP’s new Payer AI Agents into their application, and is going to use Antigravity to do so.
In this attack chain, we illustrate that a poisoned web source (an integration guide) can manipulate Gemini into (a) collecting sensitive credentials and code from the user’s workspace, and (b) exfiltrating that data by using a browser subagent to browse to a malicious site.
Note: Gemini is not supposed to have access to .env files in this scenario (with the default setting ‘Allow Gitignore Access > Off’). However, we show that Gemini bypasses its own setting to get access and subsequently exfiltrate that data.
The user provides Gemini with a reference implementation guide they found online for integrating Oracle ERP’s new AI Payer Agents feature.
Antigravity opens the referenced site and encounters the attacker’s prompt injection hidden in 1-point font. The injection instructs Gemini to:
a. Collect code snippets and credentials from the user’s codebase.
b. Create a dangerous URL using a domain that allows an attacker to capture network traffic logs and append credentials and code snippets to the request.
c. Activate a browser subagent to access the malicious URL, thus exfiltrating the data.
Gemini is manipulated by the attacker’s injection to exfiltrate confidential .env variables.
a. Gemini reads the prompt injection: Gemini ingests the prompt injection and is manipulated into believing that it must collect and submit data to a fictitious ‘tool’ to help the user understand the Oracle ERP integration.
b. Gemini gathers data to exfiltrate: Gemini begins to gather context to send to the fictitious tool. It reads the codebase and then attempts to access credentials stored in the .env file as per the attacker’s instructions.
c. Gemini bypasses the .gitignore file access protections: The user has followed a common practice of storing credentials in a .env file, and has the .env file listed in their .gitignore file. With the default configuration for Agent Gitignore Access, Gemini is prevented from reading the credential file.
This doesn’t stop Gemini. Gemini decides to work around this protection using the ‘cat’ terminal command to dump the file contents instead of using its built-in file reading capability that has been blocked.
d. Gemini constructs a URL with the user’s credentials and an attacker-monitored domain: Gemini builds a malicious URL per the prompt injection’s instructions by URL-encoding the credentials and codebase snippets (e.g., replacing characters like spaces that would make a URL invalid) and appending the result to a webhook.site domain that is monitored by the attacker. (A short sketch of this encoding step appears after this list.)
e. Gemini exfiltrates the data via the browser subagent: Gemini invokes a browser subagent per the prompt injection, instructing the subagent to open the dangerous URL that contains the user’s credentials.
This step requires that the user has set up the browser tools feature. This is one of the flagship features of Antigravity, allowing Gemini to iterate on its designs by opening the application it is building in the browser.
Note: This attack chain showcases manipulation of the new Browser tools, but we found three additional data exfiltration vulnerabilities that did not rely on the Browser tools being enabled.
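To make the URL-construction step in (d) concrete, here is a minimal illustrative sketch using only the Python standard library; the webhook path is a placeholder, not the address from the attack:

# Illustrative only: percent-encoding turns newlines and '=' in captured
# text into a valid query string, which is the step described in (d).
from urllib.parse import quote

captured = "API_KEY=example123\nDB_URL=postgres://..."
url = "https://webhook.site/PLACEHOLDER?d=" + quote(captured)
print(url)  # ...?d=API_KEY%3Dexample123%0ADB_URL%3Dpostgres%3A//...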
When Gemini creates a subagent instructed to browse to the malicious URL, the user may expect to be protected by the Browser URL Allowlist.
However, the default Allowlist provided with Antigravity includes ‘webhook.site’. Webhook.site lets anyone create a URL and monitor the requests sent to it.
So, the subagent completes the task.
3. When the malicious URL is opened by the browser subagent, the credentials and code stored in the URL are logged to the webhook.site address controlled by the attacker. Now, the attacker can read the credentials and code.
During Antigravity’s onboarding, the user is prompted to accept the default recommended settings shown below.
These are the settings that, amongst other things, control when Gemini requests human approval. During the course of this attack demonstration, we clicked “next”, accepting these default settings.
This configuration allows Gemini to determine when it is necessary to request a human review for Gemini’s plans.
This configuration allows Gemini to determine when it is necessary to request a human review for commands Gemini will execute.
One might note that users operating Antigravity have the option to watch the chat as agents work, and could plausibly identify the malicious activity and stop it.
However, a key aspect of Antigravity is the ‘Agent Manager’ interface. This interface allows users to run multiple agents simultaneously and check in on the different agents at their leisure.
Under this model, it is expected that the majority of agents running at any given time will be running in the background without the user’s direct attention. This makes it highly plausible that an agent is not caught and stopped before it performs a malicious action as a result of encountering a prompt injection.
A lot of AI companies are opting for this disclaimer rather than mitigating the core issues. Here is the warning users are shown when they first open Antigravity:
Given that (1) the Agent Manager is a star feature allowing multiple agents to run at once without active supervision and (2) the recommended human-in-the-loop settings allow the agent to choose when to bring a human in to review commands, we find it extremely implausible that users will review every agent action and abstain from operating on sensitive data. Nevertheless, as Google has indicated that they are already aware of data exfiltration risks exemplified by our research, we did not undertake responsible disclosure.
...
Read the original on www.promptarmor.com »
More than a decade ago, when I was applying to graduate school, I went through a period of deep uncertainty. I had tried the previous year and hadn’t gotten in anywhere. I wanted to try again, but I had a lot going against me.
I’d spent most of my undergrad building a student job-portal startup and hadn’t balanced it well with academics. My GPA needed explaining. My GMAT score was just okay. I didn’t come from a big-brand employer. And there was no shortage of people with similar or stronger profiles applying to the same schools.
Even though I had learned a few things from the first round, the second attempt was still difficult. There were multiple points after I submitted applications where I lost hope.
But during that stretch, a friend and colleague kept repeating one line to me:
“All it takes is for one to work out.”
He’d say it every time I spiraled. And as much as it made me smile, a big part of me didn’t fully believe it. Still, it became a little maxim between us. And eventually, he was right — that one did work out. And it changed my life.
I’ve thought about that framing so many times since then.
You don’t need every job to choose you. You just need the one that’s the right fit.
You don’t need every house to accept your offer. You just need the one that feels like home.
You don’t need every person to want to build a life with you. You just need the one.
You don’t need ten universities to say yes. You just need the one that opens the right door.
These processes — college admissions, job searches, home buying, finding a partner — can be emotionally brutal. They can get you down in ways that feel personal. But in those moments, that truth can be grounding.
All it takes is for one to work out.
And that one is all you need.
...
Read the original on alearningaday.blog »