10 interesting stories served every morning and every evening.




1 1,014 shares, 43 trendiness

Advent of Code

Hi! I’m Eric Wastl. I make Advent of Code. I hope you like it! I also make lots of other things. I’m on Bluesky, Mastodon, and GitHub.

Advent of Code is an Advent calendar of small programming puzzles for a variety of skill levels that can be solved in any programming language you like. People use them as interview prep, company training, university coursework, practice problems, a speed contest, or to challenge each other.

You don’t need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far. Nor do you need a fancy computer; every problem has a solution that completes in at most 15 seconds on ten-year-old hardware.

If you’d like to support Advent of Code, you can do so indirectly by helping to [Share] it with others or directly via AoC++.

If you get stuck, try your solution against the examples given in the puzzle; you should get the same answers. If not, re-read the description. Did you misunderstand something? Is your program doing something you don’t expect? After the examples work, if your answer still isn’t correct, build some test cases for which you can verify the answer by hand and see if those work with your program. Make sure you have the entire puzzle input. If you’re still stuck, maybe ask a friend for help, or come back to the puzzle later. You can also ask for hints in the subreddit.

Is there an easy way to select entire code blocks? You should be able to triple-click code blocks to select them. You’ll need JavaScript enabled.

#!/usr/bin/env perl
use warnings;
use strict;
print "You can test it out by ";
print "triple-clicking this code.\n";

How does authentication work? Advent of Code uses OAuth to confirm your identity through other services. When you log in, you only ever give your credentials to that service - never to Advent of Code. Then, the service you use tells the Advent of Code servers that you’re really you. In general, this reveals no information about you beyond what is already public; here are examples from Reddit and GitHub. Advent of Code will remember your unique ID, names, URL, and image from the service you use to authenticate.

Why was this puzzle so easy / hard? The difficulty and subject matter vary throughout each event. Very generally, the puzzles get more difficult over time, but your specific skillset will make each puzzle significantly easier or harder for you than for someone else. Making puzzles is tricky.

Why do the puzzles unlock at midnight EST/UTC-5? Because that’s when I can consistently be available to make sure everything is working. I also have a family, a day job, and even need sleep occasionally. If you can’t participate at midnight, that’s not a problem; if you want to race, many people use private leaderboards to compete with people in their area.

I find the text on the site hard to read. Is there a high contrast mode? There is a high contrast alternate stylesheet. Firefox supports these by default (View -> Page Style -> High Contrast).

I have a puzzle idea! Can I send it to you? Please don’t. Because of legal issues like copyright and attribution, I don’t accept puzzle ideas, and I won’t even read your email if it looks like one, just in case I use parts of it by accident.

Did I find a bug with a puzzle? Once a puzzle has been out for even an hour, many people have already solved it; after that point, bugs are very unlikely. Start by asking on the subreddit.

Should I try to get a fast solution time? Maybe. Solving puzzles is hard enough on its own, but trying for a fast time also requires many additional skills and a lot of practice; speed-solves often look nothing like code that would pass a code review. If that sounds interesting, go for it! However, you should do Advent of Code in a way that is useful to you, so it is completely fine to choose an approach that meets your goals and ignore speed entirely.

Why did the number of days per event change? It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).

What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn’t compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I’ve made it so you can share a read-only view of your private leaderboard. Please don’t use this feature or data to create a “new” global leaderboard.)

While trying to get a fast time on a private leaderboard, may I use AI / watch streamers / check the solution threads / ask a friend for help / etc? If you are a member of any private leaderboards, you should ask the people that run them what their expectations are of their members. If you don’t agree with those expectations, you should find a new private leaderboard or start your own! Private leaderboards might have rules like maximum runtime, allowed programming language, what time you can first open the puzzle, what tools you can use, or whether you have to wear a silly hat while working.

Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.

Can I copy/redistribute part of Advent of Code? Please don’t. Advent of Code is free to use, not free to copy. If you’re posting a code repository somewhere, please don’t include parts of Advent of Code like the puzzle text or your inputs. If you’re making a website, please don’t make it look like Advent of Code or name it something similar.

...

Read the original on adventofcode.com »

2 560 shares, 33 trendiness

Writing a good CLAUDE.md

Note: this post is also applicable to AGENTS.md, the open-source equivalent of CLAUDE.md for agents and harnesses like OpenCode, Zed, Cursor and Codex.

LLMs are stateless functions. Their weights are frozen by the time they’re used for inference, so they don’t learn over time. The only thing that the model knows about your codebase is the tokens you put into it.

Similarly, coding agent harnesses such as Claude Code usually require you to manage agents’ memory explicitly. CLAUDE.md (or AGENTS.md) is the only file that by default goes into every single conversation you have with the agent.

This has three important implications:

Coding agents know absolutely nothing about your codebase at the beginning of each session.

The agent must be told anything that’s important to know about your codebase each time you start a session.

CLAUDE.md is the preferred way of doing this.

Since Claude doesn’t know anything about your codebase at the beginning of each session, you should use CLAUDE.md to onboard Claude into your codebase. At a high level, this means it should cover:

WHAT: tell Claude about the tech, your stack, the project structure. Give Claude a map of the codebase. This is especially important in monorepos! Tell Claude what the apps are, what the shared packages are, and what everything is for, so that it knows where to look for things.

WHY: tell Claude the purpose of the project and what everything is doing in the repository. What are the purpose and function of the different parts of the project?

HOW: tell Claude how it should work on the project. For example, do you use bun instead of node? You want to include all the information it needs to actually do meaningful work on the project. How can Claude verify its changes? How can it run tests, typechecks, and compilation steps?

But the way you do this is important! Don’t try to stuff every command Claude could possibly need to run into your CLAUDE.md file - you will get sub-optimal results.

Regardless of which model you’re using, you may notice that Claude frequently ignores your CLAUDE.md file’s contents.

You can investigate this yourself by putting a logging proxy between the claude code CLI and the Anthropic API using ANTHROPIC_BASE_URL. Claude Code injects the following system reminder with your CLAUDE.md file in the user message to the agent:

IMPORTANT: this context may or may not be relevant to your tasks.

You should not respond to this context unless it is highly relevant to your task.

As a result, Claude will ignore the contents of your CLAUDE.md if it decides that it is not relevant to its current task. The more information you have in the file that’s not universally applicable to the tasks you have it working on, the more likely it is that Claude will ignore your instructions in the file.

Why did Anthropic add this? It’s hard to say for sure, but we can speculate a bit. Most CLAUDE.md files we come across include a bunch of instructions that aren’t broadly applicable. Many users treat the file as a way to add “hotfixes” for behavior they didn’t like by appending lots of narrow, one-off instructions.

The following section provides a number of recommendations on how to write a good CLAUDE.md file following context engineering best practices.

Your mileage may vary. Not all of these rules are necessarily optimal for every setup. Like anything else, feel free to break the rules once:

* you understand when & why it’s okay to break them

* you have a good reason to do so

### Less (instructions) is more

It can be tempting to try to stuff every single command that Claude could possibly need to run, as well as your code standards and style guidelines, into CLAUDE.md. We recommend against this.

Though the topic hasn’t been investigated in an incredibly rigorous manner, some research has been done which indicates the following:

Frontier thinking LLMs can follow ~150-200 instructions with reasonable consistency. Smaller models can attend to fewer instructions than larger models, and non-thinking models can attend to fewer instructions than thinking models.

Smaller models get MUCH worse, MUCH more quickly. Specifically, smaller models tend to exhibit an exponential decay in instruction-following performance as the number of instructions increases, whereas larger frontier thinking models exhibit a linear decay (see below). For this reason, we recommend against using smaller models for multi-step tasks or complicated implementation plans.

LLMs bias towards instructions that are on the peripheries of the prompt: at the very beginning (the Claude Code system message and CLAUDE.md), and at the very end (the most recent user messages).

As instruction count increases, instruction-following quality decreases uniformly. This means that as you give the LLM more instructions, it doesn’t simply ignore the newer (“further down in the file”) instructions - it begins to ignore all of them uniformly.

Our analysis of the Claude Code harness indicates that Claude Code’s system prompt contains ~50 individual instructions. Depending on the model you’re using, that’s nearly a third of the instructions your agent can reliably follow already - and that’s before rules, plugins, skills, or user messages.

This implies that your CLAUDE.md file should contain as few instructions as possible - ideally only ones which are universally applicable to your tasks.

All else being equal, an LLM will perform better on a task when its context window is full of focused, relevant context - including examples, related files, tool calls, and tool results - than when its context window holds a lot of irrelevant context.

Since CLAUDE.md goes into every single session, you should ensure that its contents are as universally applicable as possible.

For example, avoid including instructions about how to structure a new database schema - this won’t matter and will distract the model when you’re working on something unrelated!

Length-wise, the less-is-more principle applies as well. While Anthropic does not have an official recommendation on how long your CLAUDE.md file should be, the general consensus is that < 300 lines is best, and shorter is even better.

At HumanLayer, our root CLAUDE.md file is less than sixty lines.

Writing a concise CLAUDE.md file that covers everything you want Claude to know can be challenging, especially in larger projects.

To address this, we can leverage the principle of Progressive Disclosure to ensure that Claude only sees task- or project-specific instructions when it needs them.

Instead of including all your different instructions about building your project, running tests, code conventions, or other important context in your CLAUDE.md file, we recommend keeping task-specific instructions in separate markdown files with self-descriptive names somewhere in your project.

agent_docs/
|- building_the_project.md
|- running_tests.md
|- code_conventions.md
|- service_architecture.md
|- database_schema.md
|- service_communication_patterns.md

Then, in your CLAUDE.md file, you can include a list of these files with a brief description of each, and instruct Claude to decide which (if any) are relevant and to read them before it starts working. Or, ask Claude to present you with the files it wants to read for approval before reading them.

Prefer pointers to copies. Don’t include code snippets in these files if possible - they will become out-of-date quickly. Instead, include file:line references to point Claude to the authoritative context.

Conceptually, this is very similar to how Claude Skills are intended to work, although skills are more focused on tool use than instructions.
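As a sketch, the pointer section of such a CLAUDE.md might look like the following (the exact wording and descriptions are illustrative, not prescriptive):

```markdown
## Further documentation

Before starting work, decide which (if any) of these are relevant
to the current task and read them first:

- `agent_docs/building_the_project.md` - build commands and flags
- `agent_docs/running_tests.md` - how to run unit and integration tests
- `agent_docs/code_conventions.md` - naming and module layout rules
- `agent_docs/database_schema.md` - tables and migration workflow
```

This keeps the root file short while still telling the agent where the authoritative instructions live.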

### Claude is (not) an ex­pen­sive lin­ter

One of the most common things that we see people put in their CLAUDE.md file is code style guidelines. Never send an LLM to do a linter’s job. LLMs are comparatively expensive and incredibly slow next to traditional linters and formatters. We think you should always use deterministic tools whenever you can.

Code style guidelines will inevitably add a bunch of instructions and mostly-irrelevant code snippets into your context window, degrading your LLM’s performance and instruction-following and eating up your context window.

LLMs are in-context learners! If your code follows a certain set of style guidelines or patterns, you should find that, armed with a few searches of your codebase (or a good research document!), your agent will tend to follow existing code patterns and conventions without being told to.

If you feel very strongly about this, you might even consider setting up a Claude Code Stop hook that runs your formatter & linter and presents errors to Claude for it to fix. Don’t make Claude find the formatting issues itself.

Bonus points: use a linter that can automatically fix issues (we like Biome), and carefully tune your rules about what can safely be auto-fixed for maximum (safe) coverage.

You could also create a Slash Command that includes your code guidelines and which points Claude at the changes in version control, or at your git status, or similar. This way, you can handle implementation and formatting separately. You will see better results with both as a result.

### Don’t use /init or auto-generate your CLAUDE.md

Both Claude Code and other harnesses like OpenCode come with ways to auto-generate your CLAUDE.md file (or AGENTS.md).

Because CLAUDE.md goes into every single session with Claude Code, it is one of the highest-leverage points of the harness - for better or for worse, depending on how you use it.

A bad line of code is a bad line of code. A bad line of an implementation plan has the potential to create a lot of bad lines of code. A bad line of research that misunderstands how the system works has the potential to result in a lot of bad lines in the plan, and therefore a lot more bad lines of code.

But the CLAUDE.md file affects every single phase of your workflow and every single artifact produced by it. As a result, we think you should spend some time thinking very carefully about every single line that goes into it:

CLAUDE.md is for onboarding Claude into your codebase. It should define your project’s WHY, WHAT, and HOW.

Less (instructions) is more. While you shouldn’t omit necessary instructions, you should include as few instructions as reasonably possible in the file.

Keep the contents of your CLAUDE.md concise and universally applicable.

Use Progressive Disclosure - don’t tell Claude all the information you could possibly want it to know. Rather, tell it how to find important information so that it can read it only when needed, avoiding bloat in your context window and instruction count.

Claude is not a linter. Use linters and code formatters, and use other features like Hooks and Slash Commands as necessary.

CLAUDE.md is the highest-leverage point of the harness, so avoid auto-generating it. You should carefully craft its contents for best results.

...

Read the original on www.humanlayer.dev »

3 505 shares, 65 trendiness

Slop Evader — Tega Brain


A browser extension for avoiding AI slop.

Download it for Chrome or Firefox.

This is a search tool that will only return content created before ChatGPT’s first public release on November 30, 2022.

Since the public release of ChatGPT and other large language models, the internet has been increasingly polluted by AI-generated text, images and video. This browser extension uses the Google search API to only return content published before Nov 30th, 2022, so you can be sure that it was written or produced by a human.

...

Read the original on tegabrain.com »

4 454 shares, 19 trendiness

Windows drive letters are not limited to A-Z

- Programming

- Windows

On its own, the title of this post is just a true piece of trivia, verifiable with the built-in subst tool (among other methods).

Here’s an example creating the drive +:\ as an alias for a directory at C:\foo:

subst +: C:\foo

The +:\ drive then works as normal (at least in cmd.exe; this will be discussed more later):

> cd /D +:\
+:\> tree .
Folder PATH listing
Volume serial number is 00000001 12AB:23BC
+:\
└───bar

However, understanding why it’s true elucidates a lot about how Windows works under the hood, and turns up a few curious behaviors.

The paths that most people are familiar with are Win32 namespace paths, e.g. something like C:\foo, which is a drive-absolute Win32 path. However, the high-level APIs that take Win32 paths, like CreateFileW, ultimately convert a path like C:\foo into an NT namespace path before calling into a lower-level API within ntdll.dll like NtCreateFile.

This can be confirmed with NtTrace, where a call to CreateFileW with C:\foo ultimately leads to a call of NtCreateFile with \??\C:\foo:

NtCreateFile( FileHandle=0x40c07ff640 [0xb8], DesiredAccess=SYNCHRONIZE|GENERIC_READ|0x80, ObjectAttributes="\??\C:\foo", IoStatusBlock=0x40c07ff648 [0/1], AllocationSize=null, FileAttributes=0, ShareAccess=7, CreateDisposition=1, CreateOptions=0x4000, EaBuffer=null, EaLength=0 ) => 0
NtClose( Handle=0xb8 ) => 0

That \??\C:\foo is an NT namespace path, which is what NtCreateFile expects. To understand this path, though, we need to talk about the Object Manager, which is responsible for handling NT paths.

The Object Manager is responsible for keeping track of named objects, which we can explore using the WinObj tool. The \?? part of the \??\C:\foo path is actually a special virtual folder within the Object Manager that combines the \GLOBAL?? folder and a per-user DosDevices folder.

For me, the object C: is within \GLOBAL??, and is actually a symbolic link to \Device\HarddiskVolume4:

So, \??\C:\foo ultimately resolves to \Device\HarddiskVolume4\foo, and then it’s up to the actual device to deal with the foo part of the path.

The important thing here, though, is that \??\C:\foo is just one way of referring to the device path \Device\HarddiskVolume4\foo. For example, volumes will also get a named object created using their GUID with the format Volume{18123456-abcd-efab-cdef-1234abcdabcd} that is also a symlink to something like \Device\HarddiskVolume4, so a path like \??\Volume{18123456-abcd-efab-cdef-1234abcdabcd}\foo is effectively equivalent to \??\C:\foo.

All this is to say that there’s nothing innately special about the named object C:; the Object Manager treats it just like any other symbolic link and resolves it accordingly.

As I see it, drive letters are essentially just a convention born out of the conversion of a Win32 path into an NT path. In particular, that comes down to the implementation of RtlDosPathNameToNtPathName_U.

In other words, since RtlDosPathNameToNtPathName_U converts C:\foo to \??\C:\foo, an object named C: will behave like a drive letter. To give an example of what I mean by that: in an alternate universe, RtlDosPathNameToNtPathName_U could convert the path FOO:\bar to \??\FOO:\bar, and then FOO: would behave like a drive letter.

So, getting back to the title, how does RtlDosPathNameToNtPathName_U treat something like +:\foo? Well, exactly the same as C:\foo:

> paths.exe C:\foo
path type: .DriveAbsolute
nt path: \??\C:\foo

> paths.exe +:\foo
path type: .DriveAbsolute
nt path: \??\+:\foo

Therefore, if an object with the name +: is within the virtual folder \??, we can expect the Win32 path +:\ to behave like any other drive-absolute path, which is exactly what we see.

This section only focuses on a few things that were relevant to what I was working on. I encourage others to investigate the implications of this further if they feel so inclined.

Drives with a drive letter other than A-Z do not appear in File Explorer, and cannot be navigated to in File Explorer.

For the “do not appear” part, my guess as to what’s happening is that explorer.exe is walking \?? and looking specifically for objects named A: through Z:. For the “cannot be navigated to” part, that’s a bit more mysterious, but my guess is that explorer.exe has a lot of special logic around handling paths typed into the location bar, and part of that restricts drive letters to A-Z (i.e. it’s short-circuiting before it ever tries to actually open the path).

PowerShell seems to reject non-A-Z drives as well:

PS C:\> cd +:\
cd : Cannot find drive. A drive with the name '+' does not exist.
At line:1 char:1
+ cd +:\
+ CategoryInfo  : ObjectNotFound: (+:String) [Set-Location], DriveNotFoundException
+ FullyQualifiedErrorId : DriveNotFound,Microsoft.PowerShell.Commands.SetLocationCommand

Drive letters don’t have to be within the ASCII range at all; they can also be non-ASCII characters.

> subst €: C:\foo
> cd /D €:\
€:\> tree .
Folder PATH listing
Volume serial number is 000000DE 12AB:23BC
€:\
└───bar

Non-ASCII drive letters are even case-insensitive, like A-Z:

> subst Λ: C:\foo
> cd /D λ:\
λ:\> tree .
Folder PATH listing
Volume serial number is 000000DE 12AB:23BC
λ:\
└───bar

However, drive letters cannot be arbitrary Unicode graphemes or even arbitrary code points; they are restricted to a single WTF-16 code unit (a u16, so at most U+FFFF). The tool that we’ve been using so far (subst.exe) errors with “Invalid parameter” if you try to use a drive letter with a code point larger than U+FFFF, but you can get around that by going through the MountPointManager directly:

However, having the symlink in place doesn’t solve anything on its own:

> cd /D 𤭢:\
The filename, directory name, or volume label syntax is incorrect.

This is because there’s no way to get the drive-absolute Win32 path 𤭢:\ to end up as the relevant NT path. As mentioned earlier, the behavior of RtlDosPathNameToNtPathName_U is what matters, and we can verify that it will not convert a drive-absolute path with a drive letter bigger than U+FFFF to the relevant NT path:

C:\foo> paths.exe 𤭢:\foo
path type: .Relative
nt path: \??\C:\foo\𤭢:\foo

It’s very common for path-related functions to be written without the use of system-specific APIs, which means that there’s high potential for a mismatch between how RtlDosPathNameToNtPathName_U treats a file path and how something like a particular implementation of path.isAbsolute treats it.

As a random example, Rust only considers paths with A-Z drive letters as absolute:

use std::path::Path;

fn main() {
    println!("C:\\ {}", Path::new("C:\\foo").is_absolute());
    println!("+:\\ {}", Path::new("+:\\foo").is_absolute());
    println!("€:\\ {}", Path::new("€:\\foo").is_absolute());
}

> rustc test.rs
> test.exe
C:\ true
+:\ false
€:\ false

Whether or not this represents a problem worth fixing is left as an exercise for the reader (I genuinely don’t know if it is a problem), but there’s a second wrinkle (hinted at previously) involving text encoding that can make something like an isAbsolute implementation return different results for the same path. This wrinkle is the reason I looked into this whole thing in the first place: when I was doing some work on Zig’s path-related functions recently, I realized that looking at path[0], path[1], and path[2] for a pattern like C:\ will look at different parts of the path depending on the encoding. That is, for something like €:\ (which is made up of the code points U+20AC, U+003A, U+005C):

* Encoded as WTF-16, where U+20AC can be encoded as the single u16 code unit 0x20AC, path[0] will be 0x20AC, path[1] will be 0x3A (:), and path[2] will be 0x5C (\), which looks like a drive-absolute path

* Encoded as WTF-8, where U+20AC is encoded as three u8 code units (0xE2 0x82 0xAC), path[0] will be 0xE2, path[1] will be 0x82, and path[2] will be 0xAC, meaning it will look nothing like a drive-absolute path
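To make the difference concrete, here is a small Rust sketch (my own illustration, not code from the post) printing the first three code units of the same string under each encoding:

```rust
fn main() {
    let path = "€:\\foo";

    // UTF-8 / WTF-8: '€' occupies three bytes, so the first three
    // code units are all part of the euro sign, not a drive pattern.
    let bytes = path.as_bytes();
    assert_eq!(bytes[0], 0xE2);
    assert_eq!(bytes[1], 0x82);
    assert_eq!(bytes[2], 0xAC);

    // UTF-16 / WTF-16: '€' is the single code unit 0x20AC, so the
    // first three code units match the <letter>:<separator> pattern.
    let units: Vec<u16> = path.encode_utf16().collect();
    assert_eq!(units[0], 0x20AC);
    assert_eq!(units[1], b':' as u16);
    assert_eq!(units[2], b'\\' as u16);

    println!("utf8 units: {}, utf16 units: {}", bytes.len(), units.len());
}
```

The same logical string therefore passes a naive `path[0]`/`path[1]`/`path[2]` drive check in one encoding and fails it in the other.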

So, to write an implementation that treats paths the same regardless of encoding, some decision has to be made:

* If strict compatibility with RtlDetermineDosPathNameType_U/RtlDosPathNameToNtPathName_U is desired, decode the first code point and check that it is at most U+FFFF when dealing with WTF-8 (this is the option I went with for the Zig standard library, but I’m not super happy about it)

* If you want to be able to always check path[0]/path[1]/path[2] and don’t care about non-ASCII drive letters, check for an ASCII path[0] regardless of encoding

* If you don’t care about anything other than the standard A-Z drive letters, then check for that explicitly (this is what Rust does)
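As a rough sketch of the first option (a hypothetical helper for illustration, not the actual Zig implementation): decode the first code point, then accept it as a drive letter only if it fits in a single WTF-16 code unit:

```rust
// Sketch of option 1: a path looks drive-absolute if its first code
// point fits in one WTF-16 code unit (<= U+FFFF) and is followed by
// ':' plus a path separator. Hypothetical helper, illustration only.
fn looks_drive_absolute(path: &str) -> bool {
    let mut chars = path.chars();
    match (chars.next(), chars.next(), chars.next()) {
        (Some(drive), Some(':'), Some(sep)) => {
            (drive as u32) <= 0xFFFF && (sep == '\\' || sep == '/')
        }
        _ => false,
    }
}

fn main() {
    assert!(looks_drive_absolute("C:\\foo"));
    assert!(looks_drive_absolute("€:\\foo"));   // U+20AC: one WTF-16 unit
    assert!(!looks_drive_absolute("𤭢:\\foo")); // U+24B62: needs two units
    assert!(!looks_drive_absolute("foo\\bar"));
    println!("all checks passed");
}
```

Decoding the code point first means the check behaves identically whether the underlying storage is WTF-8 or WTF-16.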

Something bizarre that I found with this whole thing is that the kernel32.dll API SetVolumeMountPointW has its own unique quirk when dealing with non-ASCII drive letters. Specifically, this code (attempting to create the drive €:\) will succeed:

const std = @import("std");
const windows = std.os.windows;
const L = std.unicode.wtf8ToWtf16LeStringLiteral;

extern "kernel32" fn SetVolumeMountPointW(
    VolumeMountPoint: windows.LPCWSTR,
    VolumeName: windows.LPCWSTR,
) callconv(.winapi) windows.BOOL;

pub fn main() !void {
    const volume_name = L("\\\\?\\Volume{18123456-abcd-efab-cdef-1234abcdabcd}\\");
    const mount_point = L("€:\\");
    if (SetVolumeMountPointW(mount_point, volume_name) == 0) {
        const err = windows.GetLastError();
        std.debug.print("{any}\n", .{err});
        return error.Failed;
    }
}

However, when we look at the Object Manager, the €: symlink won’t exist… but ¬: will:

...

Read the original on www.ryanliptak.com »

5 407 shares, 20 trendiness

Don't Push AI Down Our Throats

AI is being done wrong.

It’s being pushed down our throats. It’s in our search bars, our operating systems, and even our creative tools, whether we asked for it or not. It feels less like an upgrade and more like a force-feeding.

It doesn’t need to be this way. Technology can be adopted slowly. Organically. One piece at a time.

Right now, the frantic pace of deployment isn’t about utility; it’s about liquidity. It’s being shoved down our throats because some billionaires need to make some more billions before they die.

We don’t owe them anything.

It is time to do AI the right way. The honeymoon phase of the hype cycle is over. We now know the limitations. We see the hallucinations. We see the errors. Let’s pick the things which work and slowly integrate them into our lives. We don’t need to do it this quarter just because some startup has to do an earnings call. We will do it if it helps us.

And let’s be clear: We don’t need AGI (Artificial General Intelligence). We don’t need a digital god. We just need software that works.

If the current models don’t work? No problem. Let the researchers go back to the lab and do their jobs. We will continue doing ours. We might even generate more data for them in the process - but this time, we do it correctly. We will work with the creators, writers, and artists, instead of ripping off their life’s work to feed the model.

I hear the complaints from the tech giants already: “But we bought too many GPUs! We spent billions on infrastructure! They have to be put to work!”

I will use what creates value for me. I will not buy anything that is of no use to me.

There are plenty of legitimate use cases for AI and enough places to make money without force-feeding the market. But I will not allow AI to be pushed down my throat just to justify your bad investment.

...

Read the original on gpt3experiments.substack.com »

6 372 shares, 14 trendiness

Migrating Dillo from GitHub

I would like to mi­grate the Dillo pro­ject away from

GitHub

into a new home which is more friendly to be used with Dillo and solves some of its prob­lems. This page sum­ma­rizes the cur­rent sit­u­a­tion with GitHub and why I de­cided to move away from it into a self-hosted server with mul­ti­ple mir­rors in other forges.

Before we dive into the details, I would like to briefly mention what happened with the old site. The original Dillo website was at dillo.org, which also hosted the source code of Dillo in a Mercurial repository at hg.dillo.org, as well as the mail server used to reach the developers, a bug tracker, and the mailing list archives. However, in 2022 the domain was lost, and someone else bought it to put up a similar site plagued with AI-generated ads. The original developers are no longer active, but luckily I had a copy of the Mercurial repository, and with some help I was able to recover a lot of material from the original server (some parts are still missing to this day).

I want to avoid this situation as much as possible, so we cannot rely on a single site that can go down and take the whole project with it. Initially, I uploaded the Dillo source and website to git repositories on GitHub, but I no longer think this is a good idea.

GitHub has been use­ful to store all repos­i­to­ries of the Dillo pro­ject, as well as to run the CI work­flows for plat­forms in which I don’t have a ma­chine avail­able (like Windows, Mac OS or some BSDs).

However, it has several problems that make it no longer suitable for developing Dillo. The most annoying is that the frontend barely works without JavaScript, so we cannot open issues, pull requests, source code, or CI logs in Dillo itself, despite them being mostly plain HTML, which I don’t find acceptable. In the past, it used to degrade gracefully without requiring JavaScript, but it no longer does. Additionally, the page is very resource-hungry, which shouldn’t be needed to render mostly static text.

Another big problem is that it is a single point of failure. I don’t mean that GitHub is stored on a single machine, but that it is controlled by a single entity which can unilaterally ban our repository or account, and we would lose the ability to announce at that URL what happened. This can cause data loss if we don’t have a local copy of all the data.

On the usability side, the platform has become slower and slower over time, which is affecting the development process. It also requires you to have a fast Internet connection at all times, which is sometimes not the case for me. Additionally, GitHub seems to encourage a “push model” in which you are notified when a new event occurs in your project(s), but I don’t want to work with that model. Instead, I prefer a “pull model”, where I only get updates when I specifically look for them. This model would also allow me to easily work offline. Unfortunately, I see that the same push model has been copied by the alternative forges.

On the social side, I feel that it doesn’t have the right tools to moderate users, especially for projects where the ratio of non-technical users to developers is high. This is especially problematic when active issues with developer notes begin to fill up with comments from users who have never contributed to the project and usually do more harm than good. This situation ends up causing burnout in developers.

Lastly, GitHub seems to follow the current trend of over-focusing on LLMs and generative AI, which are destroying the open web (or what remains of it), among other problems. This has a direct impact on us because sites protect themselves with a JavaScript wall (or worse, browser fingerprinting) to prevent aggressive LLM crawler bots from overloading the site, but that also leaves Dillo users out. So I would prefer not to encourage this trend. Despite my intentions, moving Dillo away won’t change much about their capability to train their models on our code, but at least I won’t be actively helping.

After re­search­ing the avail­able op­tions, it seems that none of the cur­rent forges would al­low us to have a re­dun­dant sys­tem that can pre­vent the forge from be­com­ing a sin­gle point of fail­ure and solve the rest of the prob­lems with GitHub. Therefore, I de­cided to self-host Dillo my­self, move all im­por­tant data to git repos­i­to­ries and keep them syn­chro­nized in mul­ti­ple git mir­rors.

I decided to buy the dillo-browser.org domain name and set up a very small VPS. Initially, I was very skeptical that it would be able to survive on today’s web, but it seems to be doing an acceptable job at handling the traffic (mostly AI bots masquerading as users). The Dillo website is available here:

I researched which git frontends might suit our needs, and discovered that most options are very complicated to self-host and require a lot of server resources and JavaScript on the frontend. I ended up testing cgit, which is written in C and is very lightweight in both RAM and CPU usage. Furthermore, its web frontend doesn’t require JS, so I can use it from Dillo (I modified cgit’s CSS slightly to work well in Dillo). It is available at this URL:

Regarding the bug tracker, I also took a look at the available options. They are all too complicated for what I would like to have, and they seem to centralize the data in a database that can get lost. This is precisely what happened with the old Dillo bug tracker, and we are still unable to recover the original bug entries.

To avoid this problem, I created my own bug tracker software, buggy, which is a very simple C tool that parses plain Markdown files and creates a single HTML page for each bug. All bugs are stored in a git repository, and a git hook regenerates the bug pages and the index on each new commit. As it is simply plain text, I can edit the bugs locally and only push them to the remote when I have Internet again, so it works nicely offline. Also, as the output is just a static HTML site, I don’t need to worry about having any vulnerabilities in my code, as it only runs at build time. You can see it live here, with the exported issues from GitHub:
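The hook mechanism described above can be sketched end to end. This is a hypothetical stand-in: the real setup runs the buggy binary over the Markdown files, which this sketch replaces with a one-line page generator, and all paths are illustrative.

```shell
#!/bin/sh
# Sketch: a bare repository whose post-receive hook regenerates a static
# site on every push. A placeholder stands in for the real buggy binary.
set -e
work=$(mktemp -d)
git init -q --bare "$work/bugs.git"

# Install the hook; the real one would run buggy over the bug files.
cat > "$work/bugs.git/hooks/post-receive" <<'EOF'
#!/bin/sh
out="$(dirname "$0")/../site"
mkdir -p "$out"
echo '<html><body>bug index regenerated</body></html>' > "$out/index.html"
EOF
chmod +x "$work/bugs.git/hooks/post-receive"

# Simulate the offline workflow: edit a bug locally, then push.
git clone -q "$work/bugs.git" "$work/clone"
cd "$work/clone"
echo '# Bug 0001: demo entry' > 0001-demo.md
git add 0001-demo.md
git -c user.name=demo -c user.email=demo@example.org commit -qm 'Add bug 0001'
git push -q origin HEAD

# The hook has now regenerated the static pages on the server side.
test -f "$work/bugs.git/site/index.html"
```

Because the generator only runs inside the hook, the served pages stay purely static, which is the property the author relies on for security.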

The mailing list archives are stored by three independent external services, but I might host a copy in our own archives in the future.

As all the important data is now stored in git repositories, we can mirror them on any forge, without having to rely on its custom storage format for the issues or other data. If a forge goes down (or goes rogue), we can simply switch to another site at a low switching cost. To this end, I have created git mirrors on Codeberg and Sourcehut that are synced with our git server:
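The synchronization itself reduces to a plain git push --mirror from the canonical server. Here is a minimal sketch using a local directory as a stand-in for a forge; the real remotes would be the Codeberg and Sourcehut SSH URLs, which are not shown here.

```shell
#!/bin/sh
# Sketch: replicate every ref of the canonical repo into a mirror.
set -e
top=$(mktemp -d)

# Canonical repository with one commit.
git init -q "$top/dillo"
cd "$top/dillo"
echo 'Dillo sources' > README
git add README
git -c user.name=demo -c user.email=demo@example.org commit -qm 'Initial commit'

# Stand-in for a forge mirror (a real setup would use the forge's SSH URL).
git init -q --bare "$top/mirror.git"
git remote add mirror "$top/mirror.git"

# --mirror pushes all branches and tags, and prunes refs deleted upstream.
git push -q --mirror mirror
git -C "$top/mirror.git" log --oneline
```

Running this from a cron job or a post-receive hook on the canonical server keeps every mirror identical to it, which is what makes switching forges cheap.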

However, we still have a single point of failure: the DNS entry of the dillo-browser.org domain. If we lose the DNS entry (as with dillo.org), all services would become unreachable. We could recover from such a situation by relying on alternative ways to reach users (the mailing list, the fediverse, or IRC), as well as by updating the mirrors to reflect the current situation. It is not ideal, but I don’t think it would cause catastrophic data loss (as happened before), since all the data is now stored in git and replicated across independent locations.

In order for this page to have some authority, the HTML file is signed with my GPG key (32E65EC501A1B6FDF8190D293EE6BA977EB2A253), which is the same one I use to sign the latest releases of Dillo and is also listed on my GitHub user. The signature is available here and is linked to the page using the rel=signature relation. You can find more information on how to verify the signature in the Dillo RFC-006.

Using OpenPGP signatures is robust against losing the DNS entry, as the authority comes not from the TLS certificate chain but from the trust in the OpenPGP signature, so we could move the site elsewhere and still claim that it is owned by us. Additionally, as we can store the signatures inside all git mirrors, they are also resilient against data loss.
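The detached-signature scheme can be illustrated with gpg itself. This is a self-contained sketch using a throwaway key and illustrative file names; the real page is of course signed with the author’s key listed above, not a generated one.

```shell
#!/bin/sh
# Sketch: sign a page with a detached, ASCII-armored signature, then verify it.
# Everything happens in a throwaway directory with a throwaway key.
set -e
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"
cd "$GNUPGHOME"

# Generate a passphrase-less demo key non-interactively.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Demo <demo@example.org>' default default never

echo '<html><body>the page to be signed</body></html>' > index.html
gpg --batch --armor --detach-sign --output index.html.asc index.html  # author side
gpg --verify index.html.asc index.html                                # visitor side
```

Because the signature file travels with the page (and with every git mirror), verification works no matter which host happens to serve the content.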

Keep in mind that the migration process involves several moving parts, and it will take a while for it to stabilize (switching costs). The GitHub repositories won’t be removed at any point in time, and they will continue to be updated until we finish the migration. When the migration process is completed, I will mark the Dillo repositories as archived and properly communicate it on our site. It is important that we don’t remove any commit or tarball release, to avoid breaking downstream builds that still rely on the GitHub URLs.

Lastly, I’m glad that we can have our own fully independent and self-hosted site with relatively low expenses and very little energy cost (which is good for the environment, though probably not even noticeable at large scale). With the current DNS and server costs and our current donations, it is likely that we can continue covering the expenses for at least the next three years, even in the worst-case scenario. If you are interested in keeping us afloat, you can help via Liberapay.

...

Read the original on dillo-browser.org »

7 338 shares, 19 trendiness

A Love Letter to FreeBSD

I’m still the new per­son here, learn­ing your ways, stum­bling over the oc­ca­sional quirk, smil­ing when I find the small touches that make you dif­fer­ent. You re­mind me of what com­put­ing felt like be­fore the noise. Before hype cy­cles and per­for­mance the­atre. Before every tool needed a plu­gin sys­tem and a logo. You are co­her­ent. You are de­lib­er­ate. You are the kind of sys­tem that does­n’t have to shout to be­long.

You carry the quiet strength of the greats, like a mainframe humming in a locked room, not chasing attention, just doing its work, year after year. Your base system feels like it was built by people who cared about the whole picture, not just the pieces. Your boot environments are like an old IBM i’s “side A / side B” IPL, a built-in escape hatch that says, “we’ve thought ahead for you.” You could be, you should be, the open-source mainframe: aligned with hardware lifecycles of three to five years or more, built for long-term trust, a platform people bet their uptime on. Your core design reminds me of Solaris in its best days: a stable base that commercial and community software could rely on without fear of shifting foundations.

And make up­time a de­sign goal: a thou­sand-day up­time should­n’t be folk­lore, it should be nor­mal. Not a party trick, not a screen­shot to boast about, but sim­ply the nat­ural con­se­quence of a sys­tem built to en­dure. Mainframes never apol­o­gised for up­time mea­sured in years, and nei­ther should you. Apply up­dates with­out fear, re­boot only when the ker­nel truly de­mands it, and let ad­min­is­tra­tors see longevity as a fea­ture, not a gam­ble.

I know you are reach­ing fur­ther into the desk­top now. I un­der­stand why, and I can see how it might widen your reach. But here I find my­self won­der­ing: how do you keep the heart­beat of a rock-solid server while also em­brac­ing the quicker pulse of a mod­ern desk­top? I don’t pre­tend to have all the an­swers, I’m too new to you for that, but my first in­stinct is to lean on what you al­ready have: the nat­ural sep­a­ra­tion be­tween CURRENT and RELEASE. Let those worlds move at their own pace, with­out ask­ing one to carry the oth­er’s com­pro­mises.

And now, with pkgbase in play, the stability of packages matters as much as the base system itself. The base must remain untouchable in its reliability, but I dream of a world where the package ecosystem is available in clear stability channels: from a rock-solid “production tier” you can stake a business on, to faster-moving streams where new features can flow without fear of breaking mission-critical systems. Too many times in the past, packages vanished or broke unexpectedly. I understand the core is sacred, but I wouldn’t mind if some of the wider ecosystem inherited that same level of care.

Culture mat­ters too. One rea­son I stepped away from Linux was the noise, the de­bates that drowned out the joy of build­ing. Please keep FreeBSD the kind of place where thought­ful en­gi­neer­ing is wel­come with­out ego bat­tles, where en­ter­prise fo­cus and tech­ni­cal cu­rios­ity can sit at the same table. That spirit, the calm, shared pur­pose that car­ried Unix from the PDP-11 labs to the back­bone of the Internet, is worth pro­tect­ing.

There’s also the prac­ti­cal side: keep the doors open with hard­ware ven­dors like Dell and HPE, so FreeBSD re­mains a first-class cit­i­zen. Give me the tools to flash firmware with­out hav­ing to bor­row Linux or Windows. Make hard­ware life­cy­cle align­ment part of your story, ma­jor re­leases paced with the real world, point re­leases treated as re­fine­ment rather than dis­rup­tion.

My hope is simple: that you stay different. Not in the way that shouts for attention, but in the way that earns trust. If someone wants hype or the latest shiny thing every month, they have Linux. If they want a platform that feels like it could simply run, and keep running, the way the best of Unix always did, they should know they can find it here. And I still dream of a future where a purpose-built “open-source mainframe” exists: a modern, reliable hardware system running FreeBSD with the same quiet presence as Sun’s Enterprise 10k once did.

And maybe, one day, some­one will walk past a rack of servers, hear the steady, un­hur­ried rhythm of a FreeBSD sys­tem still run­ning, and smile, know­ing that in a world that burns through trends, there is still some­thing built to last.

With grat­i­tude,

and with the wish to stay for the long run,

A new­comer who fi­nally feels at home.

...

Read the original on www.tara.sh »

8 326 shares, 14 trendiness

Norway wealth fund to vote for human rights report at Microsoft AGM, against management

Norway’s $2 trillion wealth fund said on Sunday it would vote for a shareholder proposal at the upcoming Microsoft annual general meeting requiring a report on the risks of operating in countries with significant human rights concerns.

Microsoft management had recommended that shareholders vote against the motion.

The fund also said it would vote against the re-ap­point­ment of CEO Satya Nadella as chair of the board, as well as against his pay pack­age.

The fund owned a 1.35% stake worth $50 bil­lion in the com­pany as of June 30, ac­cord­ing to fund data, mak­ing it the fund’s sec­ond-largest eq­uity hold­ing over­all, af­ter Nvidia.

It is Microsoft’s eighth-largest share­holder, ac­cord­ing to LSEG data.

Investors in the U.S. tech company will decide whether to ratify the proposed motions at the AGM on Dec. 5.

...

Read the original on www.cnbc.com »

9 286 shares, 16 trendiness

GitHub → Codeberg

In which I talk about the process in­volved in switch­ing forges, and how well that went.

Spoiler alert: this very site that you’re read­ing this on is not served from GitHub Pages any­more! At this point, I’d call my mi­gra­tion suc­cess­ful. But it took more than click­ing a sin­gle but­ton, so let’s talk about the steps in­volved, at least for me. I’m hop­ing that it can help be an ex­am­ple for other peo­ple, and show that it’s ac­tu­ally not that com­pli­cated.

First, I took an hour or so to set up my pro­file pic­ture, email ad­dress(es), SSH keys…

This wasn’t difficult, because Forgejo (the forge software that powers Codeberg) offers a “migrate from GitHub” functionality. You need to generate a PAT on GitHub to import things like issues (which is awesome!), and as a bonus it also speeds up the process.

It was, however, tedious, because the process was entirely manual (perhaps there’s a way to automate it, like using some Forgejo CLI tool, but I didn’t bother looking into that). And, due to GitHub API rate limits, whenever I tried importing two repos at the same time, one or both would fail. (It wasn’t too bad, though, since I could fill out the migration page for the next repo while one was in progress; and generally, it took me roughly as long to fill it out as it took Codeberg to perform the import.)

I’m re­ally happy that is­sues, PRs, wikis, and re­leases can be im­ported flaw­lessly: this makes it pos­si­ble to not have to re­fer to GitHub any­more!

Of course I don’t control all links that point to my stuff, but I could at least run rg -F github.com/ISSOtm in my home directory, to catch those within my own repos. It’s possible to automate the replacing process:

$ sed --in-place --regexp-extended 's,github.com/ISSOtm,codeberg.org/ISSOtm,'

…and if you’re feeling like bulk-replacing all files in a directory:

$ find

Repositories, however, may still be pointing to GitHub:

$ git remote -v
origin  git@github.com:ISSOtm/rsgbds.git (fetch)
origin  git@github.com:ISSOtm/rsgbds.git (push)

You can either manually git remote set-url origin git@codeberg.org:ISSOtm/rsgbds.git (or the equivalent if you’re using HTTPS), or use one of the replace commands above, since remote URLs are stored textually:

# Within a single repo:
$ find .git -name config -exec sed -Ei 's,github.com:ISSOtm,codeberg.org:ISSOtm,' {} +
# Replace the colons with slashes if you’re using HTTPS!

# For all repos within the current directory:
# (requires `shopt -s globstar` if using Bash)
$ find **/.git -name config -exec sed -Ei 's,github.com:ISSOtm,codeberg.org:ISSOtm,' {} +
# Ditto the above.

…then it’s a mat­ter of push­ing the changes to all of the re­pos.
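That push-everything step can be scripted too. Here is one possible sketch (assuming each repo still has a working origin remote), demonstrated against a local bare repository standing in for the new forge:

```shell
#!/bin/sh
# Sketch: after rewriting remotes, push every repository under the
# current directory. A local bare repo stands in for the new forge.
set -e
top=$(mktemp -d)
cd "$top"
git init -q --bare origin.git
git clone -q origin.git repo1
( cd repo1 \
  && echo hello > file.txt \
  && git add file.txt \
  && git -c user.name=demo -c user.email=demo@example.org commit -qm 'Update links' )

# The actual loop: visit every repo and push its current branch.
for d in */.git; do
  git -C "${d%/.git}" push -q origin HEAD
done

git -C origin.git log --oneline
```

The same `for d in */.git` loop pattern works for any bulk git operation across a directory of checkouts, such as the sed rewrites shown above.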

I also wanted to make it clear that my repos were now living on Codeberg; so, I created a little script in an empty directory:

#!/bin/bash
set -euo pipefail

git remote set-url origin git@github.com:ISSOtm/$1
cat <<EOF >README.md
# Moved to https://codeberg.org/ISSOtm/$1

[See my blog](http://eldred.fr/blog/codeberg) as to why.
EOF
git add README.md
git commit --amend --message 'Add move notice'
git push --force
gh repo edit --description "Moved to https://codeberg.org/ISSOtm/$1" --homepage "https://codeberg.org/ISSOtm/$1"
gh repo archive --yes

Then, to run it:

$ chmod +x stub_out.sh
$ git init
$ git remote add origin '…'
$ ./stub_out.sh rsgbds
$ ./stub_out.sh fortISSimO
# …etc.

The au­toma­tion made it not painful, so this went pretty well.

Now, onto the harder stuff :)

The first interesting thing that I noticed is this section of Codeberg’s CI documentation:

Running CI/CD pipelines can use significant amounts of energy. As much as it is tempting to have green checkmarks everywhere, running the jobs costs real money and has environmental costs. Unlike other giant platforms, we do not encourage you to write “heavy” pipelines and charge you for the cost later. We expect you to carefully consider the costs and benefits from your pipelines and reduce CI/CD usage to a minimum amount necessary to guarantee consistent quality for your projects.

That got me to think about which pro­jects of mine re­ally need CI, and ul­ti­mately, I de­cided that I would only need CI for pub­lish­ing my web­site, and the doc­u­men­ta­tion of gb-starter-kit and for­tIS­SimO; the rest of my pro­jects don’t get con­tri­bu­tions any­way, so I can live with­out CI on them, at least for now.

Anyway, Codeberg actually has two different CI solutions: Woodpecker and Forgejo Actions. The former seems to be more powerful, but you need to apply for access; the latter is very close to GitHub Actions, which should facilitate the migration. So I picked Forgejo Actions, even though it’s marked as being in beta.

It’s not very dif­fi­cult to port a YAML file from GHA to Forgejo Actions; for ex­am­ple, look at the com­mit port­ing gb-starter-kit’s pub­lish­ing CI. (This does­n’t re­ally ap­pear as a diff, since I’ve moved the file; but it’s small, so it’s easy to com­pare man­u­ally.)

Here are some salient points:

- Actions are normally just referred to as owner/repo, but Forgejo supports cloning any Git repo, especially across forges. It’s actually recommended to always use full URLs, so you don’t rely on the default prefix, which is configurable by the instance admin and thus not necessarily portable.
- I could have kept the files in .github/workflows, since Forgejo picks up that directory automatically if .forgejo/workflows doesn’t exist; however, I think it’s more convenient to keep un-migrated scripts in .github and migrated ones in .forgejo.
- Most Actions (the individual steps, not the workflow files) actually work out of the box on Forgejo Actions. Nice!
- Codeberg’s runners differ from GitHub’s significantly: they have way less software installed by default, fewer resources, and only Linux runners are provided (Ubuntu by default, but you can use any Docker container image). macOS and Windows being non-free OSes, Codeberg has no plans to offer either of those! For both philosophical and financial reasons. If this is a deal-breaker for you, consider cross-compiling, or bringing your own runner.
- Unless low latency is crucial, consider using the lazy runners for better load balancing and possibly greener energy consumption. In practice I haven’t seen delays beyond a few minutes, which is acceptable to me.
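To make the shape of such a port concrete, a published workflow might look roughly like the following. This is a hypothetical sketch, not the actual gb-starter-kit file: the file path, job name, runner label, and build step are made up, and the full-URL convention from the first point above is applied to the checkout action.

```yaml
# .forgejo/workflows/publish.yml (hypothetical sketch)
on:
  push:
    branches: [master]

jobs:
  publish:
    runs-on: docker        # an illustrative Codeberg Linux runner label
    steps:
      # Full URL instead of a bare actions/checkout, so the workflow does
      # not depend on the instance's configurable default action prefix.
      - uses: https://code.forgejo.org/actions/checkout@v4
      - run: make site     # whatever build step the project actually uses
```

The overall file layout is close enough to GitHub Actions that most of a port is renaming the directory and fully qualifying the action URLs.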

I actually spent some extra time trying to use less compute to perform my CI jobs, somewhat motivated by the small size of the runners, and because I’m guessing that the smaller the runner you pick, the sooner your job can be scheduled. Here is one such commit; note in particular line 50, where I tried using a Docker image with LaTeX preinstalled, which saves the time taken by apt install and requires fewer writes to the filesystem, freeing up RAM.

All of the pre­vi­ous steps were done within the span of a few days; how­ever, since my web­site (this very web­site) was hosted us­ing GitHub Pages, I could­n’t mi­grate its re­pos (yes, plural: you can con­fig­ure in­di­vid­ual re­pos to be pub­lished sep­a­rately, which is how e.g. https://​el­dred.fr/​for­tIS­SimO is pub­lished, de­spite not be­ing in the web­site’s main repo).

Nominally, Codeberg has an equiv­a­lent, Codeberg Pages; how­ever, as men­tioned on that page, the soft­ware be­hind this fea­ture is cur­rently in main­te­nance mode, be­cause of com­plex­ity and per­for­mance is­sues. So I left it at that for roughly a month, hop­ing there’ll even­tu­ally be an up­date. Also, sub­pro­jects are pub­lished as sub­do­mains in­stead of sub­di­rec­to­ries, which would have bro­ken links (e.g. http://​el­dred.fr/​for­tIS­SimO would have be­come http://​for­tIS­SimO.el­dred.fr). Meh…

And then (by chance lol) I dis­cov­ered git-pages and its pub­lic in­stance Grebedoc! It func­tions much like GitHub Pages, though with a bit more setup since it’s not in­te­grated within the forge it­self.

git-pages actually has several niceties:

- My website had zero downtime during the entire migration, as git-pages supports uploading your website before updating your DNS records!
- It also supports server-side redirects, which lets me redirect people who still go to http://eldred.fr/gb-asm-tutorial/* to its new home, for example. People have been getting 404s because of incomplete client-side coverage on my side, but no more!
- It also supports custom headers; I’m not particularly interested in CORS, but I’ve used that file to pay my respects.

Oh, and also, Codeberg’s November 2025 newsletter mentions that Codeberg is planning to “gradually migrate to [git-pages]”. Exciting!

I’m ac­tu­ally much hap­pier us­ing this than GitHub Pages; so, I’ve joined Catherine’s Patreon, be­cause I want to see this go far.

Steps 1 through 3 (migrating the re­pos) took me the bet­ter part of an af­ter­noon; step 4 (porting CI) took me an­other af­ter­noon, mostly to learn the new CI sys­tem; and step 5 (the web­site) took me… well, it should have taken an af­ter­noon, but I used the op­por­tu­nity to also pay down some tech debt (merging my slides repo into my main web­site), which took a few days due to re­quired rearchi­tect­ing.

All in all, even with 45 re­pos mi­grated, this ba­si­cally took a week­end. And I did­n’t find it an­noy­ing!

Since the task seemed re­ally daunt­ing, my anx­i­ety caused me to pro­cras­ti­nate this a lot, but in the end it was lit­tle work. One of the rea­sons I’m writ­ing this is to let other peo­ple know that, so they can over­come their own anx­i­ety. Maybe. :P

All in all, I’m very happy with this migration! As far as I can tell, nothing on this website has broken, and I’ve tried to reasonably contain the breakage over on GitHub: I have truncated the master branches, but all other branches and tags remain in place (mostly due to laziness lol); permalinks (e.g. https://github.com/ISSOtm/gb-bootroms/blob/c8ed9e106e0ab1193a57071820e46358006c79d0/src/dmg.asm) still work; only non-perma links (e.g. https://github.com/ISSOtm/gb-bootroms/blob/master/src/dmg.asm) are broken, but those are unreliable in the first place anyway.

That means all of my code is still on GitHub, and I do want to delete my repos; but that would be a bad idea at this point, since it would leave no redirects or anything. I’ll consider it again in… idk, a year or something. I would also like to delete my GitHub account (like I deleted my Twitter account when… *gestures vaguely*), but not only do I need my repos to stay up, I also need my account to contribute to projects that are still on GitHub.

One down­side of this mi­gra­tion is that since I’m mov­ing off of The Main Forge, my pro­jects are likely to get fewer con­tri­bu­tions… But I was­n’t get­ting many in the first place, and some peo­ple have al­ready made ac­counts on Codeberg to keep con­tribut­ing to my stuff. Likewise, I’m not re­ally wor­ried about dis­cov­er­abil­ity. We’ll see I guess lol 🤷‍♂️

Lastly, I’m writ­ing this af­ter the mi­gra­tion, and I haven’t re­ally taken notes dur­ing it; so, if I’ve for­got­ten any steps, feel free to let me know in the com­ments be­low or by open­ing an is­sue, and I’ll edit this ar­ti­cle.

...

Read the original on eldred.fr »

10 285 shares, 11 trendiness

Modern cars are spying on you. Here's what you can do about it


While dri­ving to a new restau­rant, your car’s satel­lite nav­i­ga­tion sys­tem tracks your lo­ca­tion and guides you to the des­ti­na­tion. Onboard cam­eras con­stantly track your face and eye move­ments. When an­other car veers into your path, forc­ing you to slam on the brakes, sen­sors are as­sist­ing and record­ing. Waiting at a stop­light, the car no­tices when you un­buckle your seat belt to grab your sun­glasses in the back­seat.

Modern cars are com­put­ers on wheels that are be­com­ing in­creas­ingly con­nected, en­abling in­no­v­a­tive new fea­tures that make dri­ving safer and more con­ve­nient. But these sys­tems are also col­lect­ing reams of data on our dri­ving habits and other per­sonal in­for­ma­tion, rais­ing con­cerns about data pri­vacy.

Here is what to know about how your car spies on you and how you can min­i­mize it:

It’s hard to fig­ure out ex­actly how much data a mod­ern car is col­lect­ing on you, ac­cord­ing to the Mozilla Foundation, which an­a­lyzed pri­vacy prac­tices at 25 auto brands in 2023. It de­clared that cars were the worst prod­uct cat­e­gory that the group had ever re­viewed for pri­vacy.

The data points in­clude all your nor­mal in­ter­ac­tions with the car — such as turn­ing the steer­ing wheel or un­lock­ing doors — but also data from con­nected on­board ser­vices, like satel­lite ra­dio, GPS nav­i­ga­tion sys­tems, con­nected de­vices, telem­at­ics sys­tems as well as data from sen­sors or cam­eras.

Vehicle telem­at­ics sys­tems started to be­come com­mon­place about a decade ago, and the prac­tice of au­to­mo­tive data col­lec­tion took off about five years ago.

The prob­lem is not just that data is be­ing col­lected but who it’s pro­vided to, in­clud­ing in­sur­ers, mar­ket­ing com­pa­nies and shad­owy data bro­kers. The is­sue sur­faced ear­lier this year when General Motors was banned for five years from dis­clos­ing data col­lected from dri­vers to con­sumer re­port­ing agen­cies.

The Federal Trade Commission ac­cused GM of not get­ting con­sent be­fore shar­ing the data, which in­cluded every in­stance when a dri­ver was speed­ing or dri­ving late at night. It was ul­ti­mately pro­vided to in­sur­ance com­pa­nies that used it to set their rates.

The first thing dri­vers should do is be aware of what data their car is col­lect­ing, said Andrea Amico, founder of Privacy4Cars, an au­to­mo­tive pri­vacy com­pany.

In an ideal world, dri­vers would read through the in­struc­tion man­u­als and doc­u­men­ta­tion that comes with their cars, and quiz the deal­er­ship about what’s be­ing col­lected.

But it’s not al­ways prac­ti­cal to do this, and man­u­fac­tur­ers don’t al­ways make it easy to find out, while deal­er­ship staff aren’t al­ways the best in­formed, Amico said.

Privacy4Cars of­fers a free auto pri­vacy la­bel­ing ser­vice at ve­hi­clepri­va­cyre­port.com that can sum­ma­rize what your car could be track­ing.

Owners can punch in their car’s Vehicle Identification Number, which then pulls up the au­tomak­er’s data pri­vacy prac­tices, such as whether the car col­lects lo­ca­tion data and whether it’s given to in­sur­ers, data bro­kers or law en­force­ment.

Data col­lec­tion and track­ing start as soon as you drive a new car off the deal­er­ship lot, with dri­vers un­wit­tingly con­sent­ing when they’re con­fronted with warn­ing menus on dash­board touch screens.

Experts say that while some of the data collection is baked into the system, you can revoke your consent by going back into the menus.

“There are permissions in your settings that you can make choices about,” said Lauren Hendry Parsons of Mozilla. “Go through on a granular level and look at those settings where you can.”

For example, Toyota says on its website that drivers can decline what it calls “Master Data Consent” through the Toyota app. Ford says owners can opt to stop sharing vehicle data with the company by going through the dashboard settings menu or on the FordPass app.

BMW says privacy settings can be adjusted through the infotainment system, on a spectrum between allowing all services, including analysis data, and none at all.

Drivers in the U.S. can ask carmakers to restrict what they do with their data.

Under state pri­vacy laws, some car­mak­ers al­low own­ers across the United States to sub­mit re­quests to limit the use of their per­sonal data, opt out of shar­ing it, or delete it, Consumer Reports says. Other auto com­pa­nies limit the re­quests to peo­ple in states with ap­plic­a­ble pri­vacy laws, the pub­li­ca­tion says.

You can file a re­quest ei­ther through an on­line form or the car­mak­er’s mo­bile app.

You can also go through Privacy4Cars, which provides a free online service that streamlines the process. It can either point car owners to their automaker’s request portal or file a submission on behalf of owners in the U.S., Canada, the European Union, Britain and Australia.

Experts warn that there’s usu­ally a trade-off if you de­cide to switch off data col­lec­tion.

“Most people, for example, have switched to satellite navigation systems over paper maps because it’s worth the convenience of being able to get from point A to point B really easily,” said Hendry Parsons.

Turning off lo­ca­tion track­ing could also halt fea­tures like road­side as­sis­tance or dis­able smart­phone app fea­tures like re­mote door lock­ing, Consumer Reports says.

BMW advises that if an owner opts to have no data shared at all, their vehicle will “behave like a smartphone in flight mode and will not transmit any data to the BMW back end.”

When the time comes to sell your car or trade it in for a newer model, it’s no longer as sim­ple as hand­ing over the keys and sign­ing over some pa­per­work.

If you’ve got a newer car, ex­perts say you should al­ways do a fac­tory re­set to wipe all the data, which will also in­clude re­mov­ing any smart­phone con­nec­tions.

And don’t for­get to no­tify the man­u­fac­turer about the change of own­er­ship.

Amico said that’s im­por­tant be­cause if you trade in your ve­hi­cle, you don’t want in­sur­ers to as­so­ci­ate it with your pro­file if the dealer is let­ting cus­tomers take it for test dri­ves.

“Now your record may be affected by somebody else’s driving — a complete stranger that you have no relationship with.”

Is there a tech topic that you think needs ex­plain­ing? Write to us at [email protected] with your sug­ges­tions for fu­ture edi­tions of One Tech Tip.

This story has been cor­rected to show that the Mozilla rep­re­sen­ta­tive’s first name is Lauren, not Laura.

...

Read the original on apnews.com »
