10 interesting stories served every morning and every evening.
Hi! I’m Eric Wastl. I make Advent of Code. I hope you like it! I also make lots of other things. I’m on Bluesky, Mastodon, and GitHub.
Advent of Code is an Advent calendar of small programming puzzles for a variety of skill levels that can be solved in any programming language you like. People use them as interview prep, company training, university coursework, practice problems, a speed contest, or to challenge each other.
You don’t need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far. Nor do you need a fancy computer; every problem has a solution that completes in at most 15 seconds on ten-year-old hardware.
If you’d like to support Advent of Code, you can do so indirectly by helping to share it with others or directly via AoC++.
If you get stuck, try your solution against the examples given in the puzzle; you should get the same answers. If not, re-read the description. Did you misunderstand something? Is your program doing something you don’t expect? After the examples work, if your answer still isn’t correct, build some test cases for which you can verify the answer by hand and see if those work with your program. Make sure you have the entire puzzle input. If you’re still stuck, maybe ask a friend for help, or come back to the puzzle later. You can also ask for hints in the subreddit.
Is there an easy way to select entire code blocks? You should be able to triple-click code blocks to select them. You’ll need JavaScript enabled.
#!/usr/bin/env perl
use warnings;
use strict;
print "You can test it out by ";
print "triple-clicking this code.\n";
How does authentication work? Advent of Code uses OAuth to confirm your identity through other services. When you log in, you only ever give your credentials to that service - never to Advent of Code. Then, the service you use tells the Advent of Code servers that you’re really you. In general, this reveals no information about you beyond what is already public; here are examples from Reddit and GitHub. Advent of Code will remember your unique ID, names, URL, and image from the service you use to authenticate.
Why was this puzzle so easy / hard? The difficulty and subject matter varies throughout each event. Very generally, the puzzles get more difficult over time, but your specific skillset will make each puzzle significantly easier or harder for you than someone else. Making puzzles is tricky.
Why do the puzzles unlock at midnight EST/UTC-5? Because that’s when I can consistently be available to make sure everything is working. I also have a family, a day job, and even need sleep occasionally. If you can’t participate at midnight, that’s not a problem; if you want to race, many people use private leaderboards to compete with people in their area.
I find the text on the site hard to read. Is there a high contrast mode? There is a high contrast alternate stylesheet. Firefox supports these by default (View -> Page Style -> High Contrast).
I have a puzzle idea! Can I send it to you? Please don’t. Because of legal issues like copyright and attribution, I don’t accept puzzle ideas, and I won’t even read your email if it looks like one just in case I use parts of it by accident.
Did I find a bug with a puzzle? Once a puzzle has been out for even an hour, many people have already solved it; after that point, bugs are very unlikely. Start by asking on the subreddit.
Should I try to get a fast solution time? Maybe. Solving puzzles is hard enough on its own, but trying for a fast time also requires many additional skills and a lot of practice; speed-solves often look nothing like code that would pass a code review. If that sounds interesting, go for it! However, you should do Advent of Code in a way that is useful to you, and so it is completely fine to choose an approach that meets your goals and ignore speed entirely.
Why did the number of days per event change? It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).
What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn’t compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I’ve made it so you can share a read-only view of your private leaderboard. Please don’t use this feature or data to create a “new” global leaderboard.)
While trying to get a fast time on a private leaderboard, may I use AI / watch streamers / check the solution threads / ask a friend for help / etc? If you are a member of any private leaderboards, you should ask the people that run them what their expectations are of their members. If you don’t agree with those expectations, you should find a new private leaderboard or start your own! Private leaderboards might have rules like maximum runtime, allowed programming language, what time you can first open the puzzle, what tools you can use, or whether you have to wear a silly hat while working.
Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
Can I copy/redistribute part of Advent of Code? Please don’t. Advent of Code is free to use, not free to copy. If you’re posting a code repository somewhere, please don’t include parts of Advent of Code like the puzzle text or your inputs. If you’re making a website, please don’t make it look like Advent of Code or name it something similar.
...
Read the original on adventofcode.com »
Note: this post is also applicable to AGENTS.md, the open-source equivalent of CLAUDE.md for agents and harnesses like OpenCode, Zed, Cursor and Codex.
LLMs are stateless functions. Their weights are frozen by the time they’re used for inference, so they don’t learn over time. The only thing that the model knows about your codebase is the tokens you put into it.
Similarly, coding agent harnesses such as Claude Code usually require you to manage agents’ memory explicitly. CLAUDE.md (or AGENTS.md) is the only file that by default goes into every single conversation you have with the agent.
This has three important implications:
Coding agents know absolutely nothing about your codebase at the beginning of each session.
The agent must be told anything that’s important to know about your codebase each time you start a session.
CLAUDE.md is the preferred way of doing this.
Since Claude doesn’t know anything about your codebase at the beginning of each session, you should use CLAUDE.md to onboard Claude into your codebase. At a high level, this means it should cover:
WHAT: tell Claude about the tech, your stack, and the project structure. Give Claude a map of the codebase. This is especially important in monorepos! Tell Claude what the apps are, what the shared packages are, and what everything is for, so that it knows where to look for things.
WHY: tell Claude the purpose of the project and what everything in the repository is doing. What is the purpose and function of each part of the project?
HOW: tell Claude how it should work on the project. For example, do you use bun instead of node? Include all the information it needs to actually do meaningful work on the project. How can Claude verify its changes? How can it run tests, typechecks, and compilation steps?
But the way you do this is important! Don’t try to stuff every command Claude could possibly need to run in your CLAUDE.md file - you will get sub-optimal results.
Regardless of which model you’re using, you may notice that Claude frequently ignores your CLAUDE.md file’s contents.
You can investigate this yourself by putting a logging proxy between the Claude Code CLI and the Anthropic API using ANTHROPIC_BASE_URL. Claude Code injects the following system reminder along with your CLAUDE.md file in the user message to the agent:
IMPORTANT: this context may or may not be relevant to your tasks.
You should not respond to this context unless it is highly relevant to your task.
As a result, Claude will ignore the contents of your CLAUDE.md if it decides that it is not relevant to its current task. The more information you have in the file that’s not universally applicable to the tasks you have it working on, the more likely it is that Claude will ignore your instructions in the file.
Why did Anthropic add this? It’s hard to say for sure, but we can speculate a bit. Most CLAUDE.md files we come across are full of instructions that aren’t broadly applicable: many users treat the file as a way to add “hotfixes” for behavior they didn’t like, appending instructions that only apply to a handful of tasks.
We can only assume that the Claude Code team found that by telling Claude to ignore the bad instructions, the harness actually produced better results.
The following section provides a number of recommendations on how to write a good CLAUDE.md file following context engineering best practices.
Your mileage may vary. Not all of these rules are necessarily optimal for every setup. Like anything else, feel free to break the rules once you understand when and why it’s okay to break them, and you have a good reason to do so.
### Less (instructions) is more
It can be tempting to try to stuff every single command that Claude could possibly need to run, as well as your code standards and style guidelines, into CLAUDE.md. We recommend against this.
Though the topic hasn’t been investigated in an incredibly rigorous manner, some research has been done which indicates the following:
Frontier thinking LLMs can follow ~ 150-200 instructions with reasonable consistency. Smaller models can attend to fewer instructions than larger models, and non-thinking models can attend to fewer instructions than thinking models.
Smaller models get MUCH worse, MUCH more quickly. Specifically, smaller models tend to exhibit an exponential decay in instruction-following performance as the number of instructions increases, whereas larger frontier thinking models exhibit a linear decay (see below). For this reason, we recommend against using smaller models for multi-step tasks or complicated implementation plans.
LLMs bias towards instructions that are on the peripheries of the prompt: at the very beginning (the Claude Code system message and CLAUDE.md), and at the very end (the most-recent user messages)
As instruction count increases, instruction-following quality decreases uniformly. This means that as you give the LLM more instructions, it doesn’t simply ignore the newer (“further down in the file”) instructions - it begins to ignore all of them uniformly
Our analysis of the Claude Code harness indicates that Claude Code’s system prompt contains ~50 individual instructions. Depending on the model you’re using, that’s nearly a third of the instructions your agent can reliably follow already - and that’s before rules, plugins, skills, or user messages.
This implies that your CLAUDE.md file should contain as few instructions as possible - ideally only ones which are universally applicable to your task.
All else being equal, an LLM will perform better on a task when its context window is full of focused, relevant context (examples, related files, tool calls, and tool results) than when its context window contains a lot of irrelevant context.
Since CLAUDE.md goes into every single session, you should ensure that its contents are as universally applicable as possible.
For example, avoid including instructions about how to structure a new database schema - when you’re working on something unrelated, this won’t matter and will only distract the model!
Length-wise, the less is more principle applies as well. While Anthropic does not have an official recommendation on how long your CLAUDE.md file should be, general consensus is that < 300 lines is best, and shorter is even better.
At HumanLayer, our root CLAUDE.md file is less than sixty lines.
Writing a concise CLAUDE.md file that covers everything you want Claude to know can be challenging, especially in larger projects.
To address this, we can leverage the principle of Progressive Disclosure to ensure that Claude only sees task- or project-specific instructions when it needs them.
Instead of including all your different instructions about building your project, running tests, code conventions, or other important context in your CLAUDE.md file, we recommend keeping task-specific instructions in separate markdown files with self-descriptive names somewhere in your project.
agent_docs/
|- building_the_project.md
|- running_tests.md
|- code_conventions.md
|- service_architecture.md
|- database_schema.md
|- service_communication_patterns.md
Then, in your CLAUDE.md file, you can include a list of these files with a brief description of each, and instruct Claude to decide which (if any) are relevant and to read them before it starts working. Or, ask Claude to present you with the files it wants to read for approval before reading them.
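For illustration, the relevant part of such a CLAUDE.md might look something like this (a sketch only; the descriptions are hypothetical and match the example agent_docs/ layout above):

Project documentation index - read the relevant files before starting work:
- agent_docs/building_the_project.md: how to build each app and shared package
- agent_docs/running_tests.md: how to run tests, typechecks, and lints
- agent_docs/code_conventions.md: naming and module conventions
- agent_docs/service_architecture.md: how the services fit together
- agent_docs/database_schema.md: tables, migrations, and how to change them
- agent_docs/service_communication_patterns.md: how services talk to each other
Decide which (if any) of these are relevant to the current task, tell me which ones you plan to read, and read them before making changes.

This keeps CLAUDE.md down to a map; the detailed instructions stay out of the context window until a task actually needs them.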
Prefer pointers to copies. Don’t include code snippets in these files if possible - they will become out-of-date quickly. Instead, include file:line references to point Claude to the authoritative context.
Conceptually, this is very similar to how Claude Skills are intended to work, although skills are more focused on tool use than instructions.
### Claude is (not) an expensive linter
One of the most common things that we see people put in their CLAUDE.md file is code style guidelines. Never send an LLM to do a linter’s job. LLMs are expensive and incredibly slow compared to traditional linters and formatters. We think you should always use deterministic tools whenever you can.
Code style guidelines will inevitably add a bunch of instructions and mostly-irrelevant code snippets into your context window, degrading your LLM’s performance and instruction-following and eating up your context window.
LLMs are in-context learners! If your code follows a certain set of style guidelines or patterns, you should find that, armed with a few searches of your codebase (or a good research document!), your agent tends to follow existing code patterns and conventions without being told to.
If you feel very strongly about this, you might even consider setting up a Claude Code Stop hook that runs your formatter & linter and presents errors to Claude for it to fix. Don’t make Claude find the formatting issues itself.
Bonus points: use a linter that can automatically fix issues (we like Biome), and carefully tune your rules about what can safely be auto-fixed for maximum (safe) coverage.
You could also create a Slash Command that includes your code guidelines and points Claude at the changes in version control, or at your git status, or similar. This way, you can handle implementation and formatting separately, and you will see better results with both.
### Don’t use /init or auto-generate your CLAUDE.md
Both Claude Code and other harnesses such as OpenCode come with ways to auto-generate your CLAUDE.md file (or AGENTS.md).
Because CLAUDE.md goes into every single session with Claude Code, it is one of the highest-leverage points of the harness - for better or for worse, depending on how you use it.
A bad line of code is a bad line of code. A bad line of an implementation plan has the potential to create a lot of bad lines of code. A bad line of a research document that misunderstands how the system works has the potential to produce a lot of bad lines in the plan, and therefore a lot more bad lines of code.
But the CLAUDE.md file affects every single phase of your workflow and every single artifact produced by it. As a result, we think you should spend some time thinking very carefully about every single line that goes into it:
CLAUDE.md is for onboarding Claude into your codebase. It should define your project’s WHY, WHAT, and HOW.
Less (instructions) is more. While you shouldn’t omit necessary instructions, you should include as few instructions as reasonably possible in the file.
Keep the contents of your CLAUDE.md concise and universally applicable.
Use Progressive Disclosure - don’t tell Claude all the information you could possibly want it to know. Rather, tell it how to find important information so that it can look it up when it needs it, avoiding bloat in your context window and instruction count.
Claude is not a linter. Use linters and code formatters, and use other features like Hooks and Slash Commands as necessary.
CLAUDE.md is the highest leverage point of the harness, so avoid auto-generating it. You should carefully craft its contents for best results.
...
Read the original on www.humanlayer.dev »
A browser extension for avoiding AI slop.
Download it for Chrome or Firefox.
This is a search tool that will only return content created before ChatGPT’s first public release on November 30, 2022.
Since the public release of ChatGPT and other large language models, the internet has been increasingly polluted by AI-generated text, images, and video. This browser extension uses the Google search API to only return content published before Nov 30, 2022, so you can be sure that it was written or produced by a human hand.
...
Read the original on tegabrain.com »
On its own, the title of this post is just a true piece of trivia, verifiable with the built-in subst tool (among other methods).
Here’s an example creating the drive +:\ as an alias for a directory at C:\foo:
subst +: C:\foo
The +:\ drive then works as normal (at least in cmd.exe, this will be discussed more later):
> cd /D +:\
+:\> tree .
Folder PATH listing
Volume serial number is 00000001 12AB:23BC
└───bar
However, understanding why it’s true elucidates a lot about how Windows works under the hood, and turns up a few curious behaviors.
The paths that most people are familiar with are Win32 namespace paths, e.g. something like C:\foo which is a drive-absolute Win32 path. However, the high-level APIs that take Win32 paths like CreateFileW ultimately will convert a path like C:\foo into a NT namespace path before calling into a lower level API within ntdll.dll like NtCreateFile.
This can be confirmed with NtTrace, where a call to CreateFileW with C:\foo ultimately leads to a call of NtCreateFile with \??\C:\foo:
NtCreateFile( FileHandle=0x40c07ff640 [0xb8], DesiredAccess=SYNCHRONIZE|GENERIC_READ|0x80, ObjectAttributes="\??\C:\foo", IoStatusBlock=0x40c07ff648 [0/1], AllocationSize=null, FileAttributes=0, ShareAccess=7, CreateDisposition=1, CreateOptions=0x4000, EaBuffer=null, EaLength=0 ) => 0
NtClose( Handle=0xb8 ) => 0
That \??\C:\foo is a NT namespace path, which is what NtCreateFile expects. To understand this path, though, we need to talk about the Object Manager, which is responsible for handling NT paths.
The Object Manager is responsible for keeping track of named objects, which we can explore using the WinObj tool. The \?? part of the \??\C:\foo path is actually a special virtual folder within the Object Manager that combines the \GLOBAL?? folder and a per-user DosDevices folder together.
For me, the object C: is within \GLOBAL??, and is actually a symbolic link to \Device\HarddiskVolume4:
So, \??\C:\foo ultimately resolves to \Device\HarddiskVolume4\foo, and then it’s up to the actual device to deal with the foo part of the path.
The important thing here, though, is that \??\C:\foo is just one way of referring to the device path \Device\HarddiskVolume4\foo. For example, volumes will also get a named object created using their GUID with the format Volume{18123456-abcd-efab-cdef-1234abcdabcd} that is also a symlink to something like \Device\HarddiskVolume4, so a path like \??\Volume{18123456-abcd-efab-cdef-1234abcdabcd}\foo is effectively equivalent to \??\C:\foo.
All this is to say that there’s nothing innately special about the named object C:; the Object Manager treats it just like any other symbolic link and resolves it accordingly.
As I see it, drive letters are essentially just a convention borne out of the conversion of a Win32 path into a NT path. In particular, that comes down to the implementation of RtlDosPathNameToNtPathName_U.
In other words, since RtlDosPathNameToNtPathName_U converts C:\foo to \??\C:\foo, then an object named C: will behave like a drive letter. To give an example of what I mean by that: in an alternate universe, RtlDosPathNameToNtPathName_U could convert the path FOO:\bar to \??\FOO:\bar and then FOO: could behave like a drive letter.
So, getting back to the title, how does RtlDosPathNameToNtPathName_U treat something like +:\foo? Well, exactly the same as C:\foo:
> paths.exe C:\foo
path type: .DriveAbsolute
nt path: \??\C:\foo
> paths.exe +:\foo
path type: .DriveAbsolute
nt path: \??\+:\foo
Therefore, if an object with the name +: is within the virtual folder \??, we can expect the Win32 path +:\ to behave like any other drive-absolute path, which is exactly what we see.
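As an aside (not from the original post): subst is commonly described as a thin wrapper over the Win32 API DefineDosDeviceW, which creates exactly this kind of symbolic link under \??. Assuming that, a minimal Rust sketch that creates and then removes the +: mapping without subst could look like the following:

#[link(name = "kernel32")]
extern "system" {
    // BOOL DefineDosDeviceW(DWORD dwFlags, LPCWSTR lpDeviceName, LPCWSTR lpTargetPath)
    fn DefineDosDeviceW(flags: u32, device_name: *const u16, target_path: *const u16) -> i32;
}

const DDD_REMOVE_DEFINITION: u32 = 0x2;

// NUL-terminate a &str as a UTF-16 buffer for the W-suffixed API.
fn wide(s: &str) -> Vec<u16> {
    s.encode_utf16().chain(std::iter::once(0)).collect()
}

fn main() {
    let device = wide("+:");
    let target = wide("C:\\foo");
    // Create the +: -> C:\foo mapping (what subst appears to do under the hood).
    let ok = unsafe { DefineDosDeviceW(0, device.as_ptr(), target.as_ptr()) };
    println!("create +: mapping: {}", if ok != 0 { "ok" } else { "failed" });
    // Remove the mapping again.
    let ok = unsafe { DefineDosDeviceW(DDD_REMOVE_DEFINITION, device.as_ptr(), target.as_ptr()) };
    println!("remove +: mapping: {}", if ok != 0 { "ok" } else { "failed" });
}

If that assumption holds, +:\ behaves exactly like in the subst example above for as long as the mapping exists.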
This section only focuses on a few things that were relevant to what I was working on. I encourage others to investigate the implications of this further if they feel so inclined.
Drives with a drive-letter other than A-Z do not appear in File Explorer, and cannot be navigated to in File Explorer.
For the “do not appear” part, my guess as to what’s happening is that explorer.exe is walking \?? and looking specifically for objects named A: through Z:. For the “cannot be navigated to” part, that’s a bit more mysterious, but my guess is that explorer.exe has a lot of special logic around handling paths typed into the location bar, and part of that restricts drive letters to A-Z (i.e. it’s short-circuiting before it ever tries to actually open the path).
PowerShell seems to reject non-A-Z drives as well:
PS C:\> cd +:\
cd : Cannot find drive. A drive with the name '+' does not exist.
At line:1 char:1
+ cd +:\
+ CategoryInfo : ObjectNotFound: (+:String) [Set-Location], DriveNotFoundException
+ FullyQualifiedErrorId : DriveNotFound,Microsoft.PowerShell.Commands.SetLocationCommand
Drive letters don’t have to be within the ASCII range at all; they can also be non-ASCII characters.
> subst €: C:\foo
> cd /D €:\
€:\> tree .
Folder PATH listing
Volume serial number is 000000DE 12AB:23BC
└───bar
Non-ASCII drive letters are even case-insensitive like A-Z are:
> subst Λ: C:\foo
> cd /D λ:\
λ:\> tree .
Folder PATH listing
Volume serial number is 000000DE 12AB:23BC
λ:\
└───bar
However, drive-letters cannot be arbitrary Unicode graphemes or even arbitrary code points; they are restricted to a single WTF-16 code unit (a u16, so U+FFFF). The tool that we’ve been using so far (subst.exe) errors with Invalid parameter if you try to use a drive letter with a code point larger than U+FFFF, but you can get around that by going through the MountPointManager directly:
However, having the symlink in place doesn’t solve anything on its own:
> cd /D 𤭢:\
The filename, directory name, or volume label syntax is incorrect.
This is because there’s no way to get the drive-absolute Win32 path 𤭢:\ to end up as the relevant NT path. As mentioned earlier, the behavior of RtlDosPathNameToNtPathName_U is what matters, and we can verify that it will not convert a drive-absolute path with a drive letter bigger than U+FFFF to the relevant NT path:
C:\foo> paths.exe 𤭢:\foo
path type: .Relative
nt path: \??\C:\foo\𤭢:\foo
It’s very common for path-related functions to be written without the use of system-specific APIs, which means that there’s high potential for a mismatch between how RtlDosPathNameToNtPathName_U treats a file path and how something like a particular implementation of path.isAbsolute treats a file path.
As a random example, Rust only considers paths with A-Z drive letters as absolute:
use std::path::Path;
fn main() {
    println!("C:\\ {}", Path::new("C:\\foo").is_absolute());
    println!("+:\\ {}", Path::new("+:\\foo").is_absolute());
    println!("€:\\ {}", Path::new("€:\\foo").is_absolute());
}
> rustc test.rs
> test.exe
C:\ true
+:\ false
€:\ false
Whether or not this represents a problem worth fixing is left as an exercise for the reader (I genuinely don’t know if it is a problem), but there’s a second wrinkle (hinted at previously) involving text encoding that can make something like an isAbsolute implementation return different results for the same path. This wrinkle is the reason I looked into this whole thing in the first place: when I was doing some work on Zig’s path-related functions recently, I realized that looking at path[0], path[1], and path[2] for a pattern like C:\ will look at different parts of the path depending on the encoding. That is, for something like €:\ (which is made up of the code points U+20AC, U+003A, U+005C):
* Encoded as WTF-16 where U+20AC can be encoded as the single u16 code unit 0x20AC, that’d mean path[0] will be 0x20AC, path[1] will be 0x3A (:), and path[2] will be 0x5C (\), which looks like a drive-absolute path
* Encoded as WTF-8 where U+20AC is encoded as three u8 code units (0xE2 0x82 0xAC), that’d mean path[0] will be 0xE2, path[1] will be 0x82, and path[2] will be 0xAC, meaning it will look nothing like a drive-absolute path
So, to write an implementation that treats paths the same regardless of encoding, some decision has to be made:
* If strict compatibility with RtlDetermineDosPathNameType_U/RtlDosPathNameToNtPathName_U is desired, decode the first code point and check that it fits in a single WTF-16 code unit (i.e. is <= U+FFFF) when dealing with WTF-8 (this is the option I went with for the Zig standard library, but I’m not super happy about it)
* If you want to be able to always check path[0]/path[1]/path[2] and don’t care about non-ASCII drive letters, check that path[0] is ASCII regardless of encoding
* If you don’t care about anything other than the standard A-Z drive letters, then check for that explicitly (this is what Rust does; see the sketch below)
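To make the trade-off concrete, here is a minimal Rust sketch (mine, not the author’s) of the first and last options; the function names are made up for this example:

// Last option: only ASCII drive letters count, so a plain byte check is enough
// (any non-ASCII code point encodes to bytes >= 0x80 in WTF-8/UTF-8).
fn is_drive_absolute_ascii(path: &[u8]) -> bool {
    path.len() >= 3
        && path[0].is_ascii_alphabetic()
        && path[1] == b':'
        && (path[2] == b'\\' || path[2] == b'/')
}

// First option: accept any first code point that fits in a single WTF-16 code
// unit (<= U+FFFF), which is closer to what RtlDosPathNameToNtPathName_U accepts.
fn is_drive_absolute_lenient(path: &str) -> bool {
    let mut it = path.chars();
    match (it.next(), it.next(), it.next()) {
        (Some(drive), Some(':'), Some('\\' | '/')) => (drive as u32) <= 0xFFFF,
        _ => false,
    }
}

fn main() {
    for p in ["C:\\foo", "+:\\foo", "€:\\foo", "𤭢:\\foo", "foo\\bar"] {
        println!(
            "{p}: ascii-only={} lenient={}",
            is_drive_absolute_ascii(p.as_bytes()),
            is_drive_absolute_lenient(p),
        );
    }
}

On these inputs, the ASCII-only check accepts only C:\foo (matching the Rust is_absolute output above), while the lenient check also accepts +:\foo and €:\foo but rejects 𤭢:\foo (matching the paths.exe behavior shown earlier).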
Something bizarre that I found with this whole thing is that the kernel32.dll API SetVolumeMountPointW has its own unique quirk when dealing with non-ASCII drive letters. Specifically, this code (attempting to create the drive €:\) will succeed:
const std = @import("std");
const windows = std.os.windows;
const L = std.unicode.wtf8ToWtf16LeStringLiteral;

extern "kernel32" fn SetVolumeMountPointW(
    VolumeMountPoint: windows.LPCWSTR,
    VolumeName: windows.LPCWSTR,
) callconv(.winapi) windows.BOOL;

pub fn main() !void {
    const volume_name = L("\\\\?\\Volume{18123456-abcd-efab-cdef-1234abcdabcd}\\");
    const mount_point = L("€:\\");
    if (SetVolumeMountPointW(mount_point, volume_name) == 0) {
        const err = windows.GetLastError();
        std.debug.print("{any}\n", .{err});
        return error.Failed;
    }
}
However, when we look at the Object Manager, the €: symlink won’t exist… but ¬: will:
...
Read the original on www.ryanliptak.com »
AI is being done wrong.
It’s being pushed down our throats. It’s in our search bars, our operating systems, and even our creative tools, whether we asked for it or not. It feels less like an upgrade and more like a force-feeding.
It doesn’t need to be this way. Technology can be adopted slowly. Organically. One piece at a time.
Right now, the frantic pace of deployment isn’t about utility; it’s about liquidity. It’s being shoved down our throats because some billionaires need to make some more billions before they die.
We don’t owe them anything.
It is time to do AI the right way. The honeymoon phase of the hype cycle is over. We now know the limitations. We see the hallucinations. We see the errors. Let’s pick the things which work and slowly integrate them into our lives. We don’t need to do it this quarter just because some startup has to do an earnings call. We will do it if it helps us.
And let’s be clear: We don’t need AGI (Artificial General Intelligence). We don’t need a digital god. We just need software that works.
If the current models don’t work? No problem. Let the researchers go back to the lab and do their jobs. We will continue doing ours. We might even generate more data for them in the process—but this time, we do it correctly. We will work with the creators, writers, and artists, instead of ripping off their life’s work to feed the model.
I hear the complaints from the tech giants already: “But we bought too many GPUs! We spent billions on infrastructure! They have to be put to work!”
I will use what creates value for me. I will not buy anything that is of no use to me.
There are plenty of legitimate use cases for AI and enough places to make money without force-feeding the market. But I will not allow AI to be pushed down my throat just to justify your bad investment.
...
Read the original on gpt3experiments.substack.com »
I would like to migrate the Dillo project away from GitHub into a new home which is friendlier to use with Dillo and solves some of its problems. This page summarizes the current situation with GitHub and why I decided to move away from it into a self-hosted server with multiple mirrors in other forges.
Before we dive into the details, I would like to briefly mention what happened with the old site. The original Dillo website was at dillo.org, which also had the source code of Dillo in a Mercurial repository at hg.dillo.org. But it also included the mail server used to reach the developers, a bug tracker, and archives for the mailing list. However, in 2022 the domain was lost and someone else decided to buy it to put up a similar site, but plagued with AI-generated ads. The original developers are no longer active, but luckily I had a copy of the Mercurial repository, and with some help I was able to recover a lot of material from the original server (some parts are still missing to this day).
I want to avoid this situation as much as possible, so we cannot rely on a single site that can go down and the whole project become lost. Initially, I uploaded the Dillo source and website to git repositories on GitHub, but I no longer think this is a good idea.
GitHub has been useful to store all repositories of the Dillo project, as well as to run the CI workflows for platforms in which I don’t have a machine available (like Windows, Mac OS or some BSDs).
However, it has several problems that make it less suitable to develop Dillo anymore. The most annoying problem is that the frontend barely works without JavaScript, so we cannot open issues, pull requests, source code or CI logs in Dillo itself, despite them being mostly plain HTML, which I don’t think is acceptable. In the past, it used to gracefully degrade without enforcing JavaScript, but now it doesn’t. Additionally, the page is very resource hungry, which I don’t think is needed to render mostly static text.
Another big problem is that it is a single point of failure. I don’t mean that GitHub is stored in a single machine, but that it is controlled by a single entity which can unilaterally ban our repository or account, and we would lose the ability to announce at that URL what happened. This can cause data loss if we don’t have a local copy of all the data.
On the usability side, the platform has become more and more slow over time, which is affecting the development process. It also requires you to have a fast Internet connection at all times, which is not the case for me sometimes. Additionally, GitHub seems to encourage a “push model” in which you are notified when a new event occurs in your project(s), but I don’t want to work with that model. Instead, I prefer it to work as a “pull model”, so I only get updates when I specifically look for them. This model would also allow me to easily work offline. Unfortunately, I see that the same push model has been copied to alternative forges.
On the social side, I feel that it doesn’t have the right tools to moderate users, especially for projects where the ratio of non-technical users to developers is high. This is especially problematic when active issues with developer notes begin to fill with comments from users who have never contributed to the project and usually do more harm than good. This situation ends up causing burnout in developers.
Lastly, GitHub seems to follow the current trend of over-focusing on LLMs and generative AI, which are destroying the open web (or what remains of it), among other problems. It has a direct impact on us because sites protect themselves with a JavaScript wall (or worse, browser fingerprinting) to prevent aggressive LLM crawler bots from overloading the site, but these walls also leave Dillo users out. So I would prefer not to encourage this trend. Despite my intentions, moving Dillo away won’t change much their capability to train their models with our code, but at least I won’t be actively helping.
After researching the available options, it seems that none of the current forges would allow us to have a redundant system that can prevent the forge from becoming a single point of failure and solve the rest of the problems with GitHub. Therefore, I decided to self-host Dillo myself, move all important data to git repositories and keep them synchronized in multiple git mirrors.
I decided to buy the dillo-browser.org domain name and setup a very small VPS. Initially, I was very skeptical that it would be able to survive on today’s web, but it seems to be doing an acceptable job at handling it (mostly AI bot traffic masquerading as users). The Dillo website is available here:
I researched which git frontends may suit our needs, and I discovered that most options are very complicated to self-host and require a lot of server resources and JavaScript on the frontend. I ended up testing cgit, which is written in C and it seems to be very lightweight both on RAM and CPU usage. Furthermore, the web frontend doesn’t require JS, so I can use it from Dillo (I modified cgit CSS slightly to work well on Dillo). It is available on this URL:
Regarding the bug tracker, I also took a look at the available options. They are all too complicated for what I would like to have and they seem to centralize the data into a database that can get lost. This is precisely the case that happened with the old dillo bug tracker and we are still unable to recover the original bug entries.
To avoid this problem, I created my own bug tracker software, buggy, which is a very simple C tool that parses plain Markdown files and creates a single HTML page for each bug. All bugs are stored in a git repository, and a git hook regenerates the bug pages and the index on each new commit. As it is simply plain text, I can edit the bugs locally and only push them to the remote when I have Internet access again, so it works nicely offline. Also, as the output is just a static HTML site, I don’t need to worry about having any vulnerabilities in my code, as it will only run at build time. You can see it live here, with the exported issues from GitHub:
The mailing list archives are stored by three independent external services, but I might include a copy with our own archives in the future.
As all the important data is now stored in git repositories, we can mirror them in any forge, without having to rely on their custom storage format for the issues or other data. If a forge goes down (or goes rogue) we can simply switch to another site with low switching cost. To this end, I have created git mirrors in Codeberg and Sourcehut that are synced with our git server:
However, we still have a single point of failure: the DNS entry of the dillo-browser.org domain. If we lose the DNS entry (like with dillo.org) it would cause a problem as all services will be unreachable. We could recover from such situation by relying on alternative ways to reach users, by the mailing list, fediverse or IRC, as well as updating the mirrors to reflect the current situation. It is not ideal, but I don’t think it would cause a catastrophic data loss (like it happened before) as all the data is now stored in git and replicated across independent locations.
In order for this page to have some authority, the HTML file is signed with my GPG key (32E65EC501A1B6FDF8190D293EE6BA977EB2A253), which is the same one that I use to sign the latest releases of Dillo and is also listed in my GitHub user. The signature is available here and is linked to the page with a link tag using the rel=signature relation. You can find more information and how to verify the signature in the Dillo RFC-006.
Using OpenPGP signatures is robust against losing the DNS entry, as the authority is not given by the TLS certificate chain but by the trust in the OpenPGP signature, so we could move the site elsewhere and still claim that is owned by us. Additionally, as we can store the signatures inside all git mirrors, they are also resilient against data loss.
Keep in mind that the migration process requires several moving parts and it will take a while for it to stabilize (switching costs). The GitHub repositories won’t be removed at any point in time, and they will continue to be updated until we finish the migration. When the migration process is completed, I will mark the Dillo repositories as archived and properly communicate it on our site. It is important that we don’t remove any commit or tarball release, to avoid breaking downstream builds that still rely on the GitHub URL.
Lastly, I’m glad that we can have our own fully independent and self-hosted site with relatively low expenses and very little energy cost (which is good for the environment, but probably not even noticeable at large scale). With the current DNS and server costs and our current donations I consider that it is likely that we can continue covering the expenses for at least the next 3 years in the worst case scenario. If you are interested in keeping us afloat, you can help via Liberapay.
...
Read the original on dillo-browser.org »
I’m still the new person here, learning your ways, stumbling over the occasional quirk, smiling when I find the small touches that make you different. You remind me of what computing felt like before the noise. Before hype cycles and performance theatre. Before every tool needed a plugin system and a logo. You are coherent. You are deliberate. You are the kind of system that doesn’t have to shout to belong.
You carry the quiet strength of the greats, like a mainframe humming in a locked room, not chasing attention, just doing its work, year after year. Your base system feels like it was built by people who cared about the whole picture, not just the pieces. Your boot environments are like an old IBM i’s “side A / side B” IPL, a built-in escape hatch that says, we’ve thought ahead for you. You could be, you should be, the open-source mainframe: aligned with hardware lifecycles of three to five years or more, built for long-term trust, a platform people bet their uptime on. Your core design reminds me of Solaris in its best days: a stable base that commercial and community software could rely on without fear of shifting foundations.
And make uptime a design goal: a thousand-day uptime shouldn’t be folklore, it should be normal. Not a party trick, not a screenshot to boast about, but simply the natural consequence of a system built to endure. Mainframes never apologised for uptime measured in years, and neither should you. Apply updates without fear, reboot only when the kernel truly demands it, and let administrators see longevity as a feature, not a gamble.
I know you are reaching further into the desktop now. I understand why, and I can see how it might widen your reach. But here I find myself wondering: how do you keep the heartbeat of a rock-solid server while also embracing the quicker pulse of a modern desktop? I don’t pretend to have all the answers, I’m too new to you for that, but my first instinct is to lean on what you already have: the natural separation between CURRENT and RELEASE. Let those worlds move at their own pace, without asking one to carry the other’s compromises.
And now, with pkgbase in play, the stability of packages matters as much as the base system itself. The base must remain untouchable in its reliability, but I dream of a world where the package ecosystem is available in clear stability channels: from a rock-solid “production tier” you can stake a business on, to faster-moving streams where new features can flow without fear of breaking mission-critical systems. Too many times in the past, packages vanished or broke unexpectedly. I understand the core is sacred, but I wouldn’t mind if some of the wider ecosystem inherited that same level of care.
Culture matters too. One reason I stepped away from Linux was the noise, the debates that drowned out the joy of building. Please keep FreeBSD the kind of place where thoughtful engineering is welcome without ego battles, where enterprise focus and technical curiosity can sit at the same table. That spirit, the calm, shared purpose that carried Unix from the PDP-11 labs to the backbone of the Internet, is worth protecting.
There’s also the practical side: keep the doors open with hardware vendors like Dell and HPE, so FreeBSD remains a first-class citizen. Give me the tools to flash firmware without having to borrow Linux or Windows. Make hardware lifecycle alignment part of your story, major releases paced with the real world, point releases treated as refinement rather than disruption.
My hope is simple: that you stay different. Not in the way that shouts for attention, but in the way that earns trust. If someone wants hype or the latest shiny thing every month, they have Linux. If they want a platform that feels like it could simply run, and keep running, the way the best of Unix always did, they should know they can find it here. And I still dream of a future where a purpose-built “open-source mainframe” exists: a modern, reliable hardware system running FreeBSD with the same quiet presence as Sun’s Enterprise 10k once did.
And maybe, one day, someone will walk past a rack of servers, hear the steady, unhurried rhythm of a FreeBSD system still running, and smile, knowing that in a world that burns through trends, there is still something built to last.
With gratitude,
and with the wish to stay for the long run,
A newcomer who finally feels at home.
...
Read the original on www.tara.sh »
Norway’s $2 trillion wealth fund said on Sunday it would vote for a shareholder proposal at the upcoming Microsoft annual general meeting calling for a report on the risks of operating in countries with significant human rights concerns.
Microsoft management had recommended shareholders vote against the motion.
The fund also said it would vote against the re-appointment of CEO Satya Nadella as chair of the board, as well as against his pay package.
The fund owned a 1.35% stake worth $50 billion in the company as of June 30, according to fund data, making it the fund’s second-largest equity holding overall, after Nvidia.
It is Microsoft’s eighth-largest shareholder, according to LSEG data.
Investors in the U.S. tech company will decide whether to ratify the proposed motions at the AGM on Dec. 5.
...
Read the original on www.cnbc.com »
In which I talk about the process involved in switching forges, and how well that went.
Spoiler alert: this very site that you’re reading this on is not served from GitHub Pages anymore! At this point, I’d call my migration successful. But it took more than clicking a single button, so let’s talk about the steps involved, at least for me. I’m hoping that it can help be an example for other people, and show that it’s actually not that complicated.
First, I took an hour or so to set up my profile picture, email address(es), SSH keys…
This wasn’t difficult, because Forgejo (the forge software that powers Codeberg) offers a “migrate from GitHub” functionality. You need to generate a PAT on GitHub to import things like issues (which is awesome!), and as a bonus it also speeds up the process.
It was, however, tedious, because the process was entirely manual (perhaps there’s a way to automate it, like by using some Forgejo CLI tool, but I didn’t bother looking into that). And, due to GitHub API rate limits, whenever I tried importing two repos at the same time, one or both would fail. (It wasn’t too bad, though, since I could fill out the migration page for the next repo while one was in progress; and generally, it took me roughly as long to fill it out as it took Codeberg to perform the import.)
I’m really happy that issues, PRs, wikis, and releases can be imported flawlessly: this makes it possible to not have to refer to GitHub anymore!
Of course I don’t control all links that point to my stuff, but I could at least run rg -F github.com/ISSOtm in my home directory, to catch those within my own repos. It’s possible to automate the replacing process:
$ sed --in-place --regexp-extended 's,github.com/ISSOtm,codeberg.org/ISSOtm,'
…and if you’re feeling like bulk-replacing all files in a directory:
$ find . -type f -exec sed -Ei 's,github.com/ISSOtm,codeberg.org/ISSOtm,' {} +
Repositories, however, may still be pointing to GitHub:
$ git remote -v
origin  git@github.com:ISSOtm/rsgbds.git (fetch)
origin  git@github.com:ISSOtm/rsgbds.git (push)
You can either manually git remote set-url origin git@codeberg.org:ISSOtm/rsgbds.git (or the equivalent if you’re using HTTPS), or use one of the replace commands above, since remote URLs are stored textually:
# Within a single repo:
$ find .git -name config -exec sed -Ei 's,github.com:ISSOtm,codeberg.org:ISSOtm,' {} +  # Replace the colons with slashes if you're using HTTPS!
# For all repos within the current directory: (requires `shopt -s globstar` if using Bash)
$ find **/.git -name config -exec sed -Ei 's,github.com:ISSOtm,codeberg.org:ISSOtm,' {} +  # Ditto the above.
…then it’s a matter of pushing the changes to all of the repos.
I also wanted to make it clear that my repos were now living on Codeberg; so, I created a little script in an empty directory:
#!/bin/bash
set -euo pipefail

git remote set-url origin git@github.com:ISSOtm/$1
cat > README.md <<EOF
# Moved to https://codeberg.org/ISSOtm/$1
[See my blog](http://eldred.fr/blog/codeberg) as to why.
EOF
git add README.md
git commit --amend --message 'Add move notice'
git push --force
gh repo edit --description "Moved to https://codeberg.org/ISSOtm/$1" --homepage "https://codeberg.org/ISSOtm/$1"
gh repo archive --yes
Then, to run it:
$ chmod +x stub_out.sh
$ git init
$ git remote add origin ''
$ ./stub_out.sh rsgbds
$ ./stub_out.sh fortISSimO
# …etc.
The automation made it not painful, so this went pretty well.
Now, onto the harder stuff :)
The first interesting thing that I noticed is this section of Codeberg’s CI documentation:
Running CI/CD pipelines can use significant amounts of energy. As much as it is tempting to have green checkmarks everywhere, running the jobs costs real money and has environmental costs. Unlike other giant platforms, we do not encourage you to write “heavy” pipelines and charge you for the cost later. We expect you to carefully consider the costs and benefits from your pipelines and reduce CI/CD usage to a minimum amount necessary to guarantee consistent quality for your projects.
That got me to think about which projects of mine really need CI, and ultimately, I decided that I would only need CI for publishing my website, and the documentation of gb-starter-kit and fortISSimO; the rest of my projects don’t get contributions anyway, so I can live without CI on them, at least for now.
Anyway, Codeberg actually has two different CI solutions: Woodpecker, and Forgejo Actions; the former seems to be more powerful, but you need to apply for access, and the latter is very close to GitHub Actions, which should facilitate the migration. So I picked Forgejo Actions, even though it’s marked as being in beta.
It’s not very difficult to port a YAML file from GHA to Forgejo Actions; for example, look at the commit porting gb-starter-kit’s publishing CI. (This doesn’t really appear as a diff, since I’ve moved the file; but it’s small, so it’s easy to compare manually.)
Here are some salient points:
* Actions are normally just referred to as owner/repo, but Forgejo supports cloning any Git repo, especially across forges. It’s actually recommended to use full URLs always, so you don’t rely on the default prefix, which is configurable by the instance admin and thus not necessarily portable.
* I could have kept the files in .github/workflows, since Forgejo picks up that directory automatically if .forgejo/workflows doesn’t exist; however, I think it’s more convenient to keep un-migrated scripts in .github and migrated ones in .forgejo.
* Most Actions (the individual steps, not the workflow files) actually work out of the box on Forgejo Actions. Nice!
* Codeberg’s runners differ from GitHub’s significantly: they have way less software installed by default, fewer resources, and only Linux runners are provided (Ubuntu by default, but you can use any Docker container image). macOS and Windows being non-free OSes, Codeberg has no plans to offer either of those! For both philosophical and financial reasons. If this is a deal-breaker for you, consider cross-compiling, or bringing your own runner.
* Unless low latency is crucial, consider using the lazy runners for better load balancing and possibly greener energy consumption. In practice I haven’t seen delays beyond a few minutes, which is acceptable to me.
I actually spent some extra time trying to use less compute to perform my CI jobs, somewhat motivated by the small size of the runners, and because I’m guessing that the smaller the runner you’re picking, the faster your job will be able to be scheduled. Here is one such commit; note in particular line 50, where I tried using a Docker image with LaTeX preinstalled, which saves the time taken by apt install and requires fewer writes to the filesystem, freeing up RAM.
All of the previous steps were done within the span of a few days; however, since my website (this very website) was hosted using GitHub Pages, I couldn’t migrate its repos (yes, plural: you can configure individual repos to be published separately, which is how e.g. https://eldred.fr/fortISSimO is published, despite not being in the website’s main repo).
Nominally, Codeberg has an equivalent, Codeberg Pages; however, as mentioned on that page, the software behind this feature is currently in maintenance mode, because of complexity and performance issues. So I left it at that for roughly a month, hoping there’ll eventually be an update. Also, subprojects are published as subdomains instead of subdirectories, which would have broken links (e.g. http://eldred.fr/fortISSimO would have become http://fortISSimO.eldred.fr). Meh…
And then (by chance lol) I discovered git-pages and its public instance Grebedoc! It functions much like GitHub Pages, though with a bit more setup since it’s not integrated within the forge itself.
git-pages actually has several niceties:
* My website had zero downtime during the entire migration, as git-pages supports uploading your website before updating your DNS records!
* It also supports server-side redirects, which lets me redirect people who still go to http://eldred.fr/gb-asm-tutorial/* to its new home, for example. People have been getting 404s because of incomplete client-side coverage on my side, but no more!
* It also supports custom headers; I’m not particularly interested in CORS, but I’ve used that file to pay my respects.
Oh, and also, Codeberg’s November 2025 newsletter mentions that Codeberg is planning to gradually migrate to [git-pages]. Exciting!
I’m actually much happier using this than GitHub Pages; so, I’ve joined Catherine’s Patreon, because I want to see this go far.
Steps 1 through 3 (migrating the repos) took me the better part of an afternoon; step 4 (porting CI) took me another afternoon, mostly to learn the new CI system; and step 5 (the website) took me… well, it should have taken an afternoon, but I used the opportunity to also pay down some tech debt (merging my slides repo into my main website), which took a few days due to required rearchitecting.
All in all, even with 45 repos migrated, this basically took a weekend. And I didn’t find it annoying!
Since the task seemed really daunting, my anxiety caused me to procrastinate this a lot, but in the end it was little work. One of the reasons I’m writing this is to let other people know that, so they can overcome their own anxiety. Maybe. :P
All in all, I’m very happy with this migration! As far as I can tell, nothing on this website has broken, and I’ve tried reasonably containing the breakage over on GitHub: I have truncated the master branches, but all other branches and tags remain in place (mostly due to laziness lol), permalinks (e.g. https://github.com/ISSOtm/gb-bootroms/blob/c8ed9e106e0ab1193a57071820e46358006c79d0/src/dmg.asm) still work, only non-perma links (e.g. https://github.com/ISSOtm/gb-bootroms/blob/master/src/dmg.asm) are broken, but those are unreliable in the first place anyway.
Since that means that all of my code is still on GitHub, I want to delete my repos; but that would be a bad idea at this point, due to leaving no redirects or anything. I’ll consider that again in… idk, a year or something. I would also like to delete my GitHub account (like I have deleted my Twitter account when… *gestures vaguely*), but not only do I need my repos to be up, I also need my account to contribute to projects that are still on GitHub.
One downside of this migration is that since I’m moving off of The Main Forge, my projects are likely to get fewer contributions… But I wasn’t getting many in the first place, and some people have already made accounts on Codeberg to keep contributing to my stuff. Likewise, I’m not really worried about discoverability. We’ll see I guess lol 🤷♂️
Lastly, I’m writing this after the migration, and I haven’t really taken notes during it; so, if I’ve forgotten any steps, feel free to let me know in the comments below or by opening an issue, and I’ll edit this article.
...
Read the original on eldred.fr »
While driving to a new restaurant, your car’s satellite navigation system tracks your location and guides you to the destination. Onboard cameras constantly track your face and eye movements. When another car veers into your path, forcing you to slam on the brakes, sensors are assisting and recording. Waiting at a stoplight, the car notices when you unbuckle your seat belt to grab your sunglasses in the backseat.
Modern cars are computers on wheels that are becoming increasingly connected, enabling innovative new features that make driving safer and more convenient. But these systems are also collecting reams of data on our driving habits and other personal information, raising concerns about data privacy.
Here is what to know about how your car spies on you and how you can minimize it:
It’s hard to figure out exactly how much data a modern car is collecting on you, according to the Mozilla Foundation, which analyzed privacy practices at 25 auto brands in 2023. It declared that cars were the worst product category that the group had ever reviewed for privacy.
The data points include all your normal interactions with the car — such as turning the steering wheel or unlocking doors — but also data from connected onboard services, like satellite radio, GPS navigation systems, connected devices, telematics systems as well as data from sensors or cameras.
Vehicle telematics systems started to become commonplace about a decade ago, and the practice of automotive data collection took off about five years ago.
The problem is not just that data is being collected but who it’s provided to, including insurers, marketing companies and shadowy data brokers. The issue surfaced earlier this year when General Motors was banned for five years from disclosing data collected from drivers to consumer reporting agencies.
The Federal Trade Commission accused GM of not getting consent before sharing the data, which included every instance when a driver was speeding or driving late at night. It was ultimately provided to insurance companies that used it to set their rates.
The first thing drivers should do is be aware of what data their car is collecting, said Andrea Amico, founder of Privacy4Cars, an automotive privacy company.
In an ideal world, drivers would read through the instruction manuals and documentation that comes with their cars, and quiz the dealership about what’s being collected.
But it’s not always practical to do this, and manufacturers don’t always make it easy to find out, while dealership staff aren’t always the best informed, Amico said.
Privacy4Cars offers a free auto privacy labeling service at vehicleprivacyreport.com that can summarize what your car could be tracking.
Owners can punch in their car’s Vehicle Identification Number, which then pulls up the automaker’s data privacy practices, such as whether the car collects location data and whether it’s given to insurers, data brokers or law enforcement.
Data collection and tracking start as soon as you drive a new car off the dealership lot, with drivers unwittingly consenting when they’re confronted with warning menus on dashboard touch screens.
Experts say that while some of the data collection is baked into the system, you can revoke your consent by going back into the menus.
“There are permissions in your settings that you can make choices about,” said Lauren Hendry Parsons of Mozilla. “Go through on a granular level and look at those settings where you can.”
For example, Toyota says on its website that drivers can decline what it calls “Master Data Consent” through the Toyota app. Ford says owners can opt to stop sharing vehicle data with the company by going through the dashboard settings menu or on the FordPass app.
BMW says privacy settings can be adjusted through the infotainment system, “on a spectrum between” allowing all services including analysis data and none at all.
Drivers in the U.S. can ask carmakers to restrict what they do with their data.
Under state privacy laws, some carmakers allow owners across the United States to submit requests to limit the use of their personal data, opt out of sharing it, or delete it, Consumer Reports says. Other auto companies limit the requests to people in states with applicable privacy laws, the publication says.
You can file a request either through an online form or the carmaker’s mobile app.
You can also go through Privacy4Cars, which provides a free online service that streamlines the process. It can either point car owners to their automaker’s request portal or file a submission on behalf of owners in the U.S., Canada, the European Union, Britain and Australia.
Experts warn that there’s usually a trade-off if you decide to switch off data collection.
Most people, for example, have switched to satellite navigation systems over paper maps because it’s “worth the convenience of being able to get from point A to point B really easily,” said Hendry Parsons.
Turning off location tracking could also halt features like roadside assistance or disable smartphone app features like remote door locking, Consumer Reports says.
BMW advises that if an owner opts to have no data shared at all, “their vehicle will behave like a smartphone in flight mode and will not transmit any data to the BMW back end.”
When the time comes to sell your car or trade it in for a newer model, it’s no longer as simple as handing over the keys and signing over some paperwork.
If you’ve got a newer car, experts say you should always do a factory reset to wipe all the data, which will also include removing any smartphone connections.
And don’t forget to notify the manufacturer about the change of ownership.
Amico said that’s important because if you trade in your vehicle, you don’t want insurers to associate it with your profile if the dealer is letting customers take it for test drives.
“Now your record may be affected by somebody else’s driving — a complete stranger that you have no relationship with.”
Is there a tech topic that you think needs explaining? Write to us at [email protected] with your suggestions for future editions of One Tech Tip.
This story has been corrected to show that the Mozilla representative’s first name is Lauren, not Laura.
...
Read the original on apnews.com »