10 interesting stories served every morning and every evening.




1 711 shares, 59 trendiness

Advent of Code

Hi! I’m Eric Wastl. I make Advent of Code. I hope you like it! I also make lots of other things. I’m on Bluesky, Mastodon, and GitHub.

Advent of Code is an Advent calendar of small programming puzzles for a variety of skill levels that can be solved in any programming language you like. People use them as interview prep, company training, university coursework, practice problems, a speed contest, or to challenge each other.

You don’t need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far. Nor do you need a fancy computer; every problem has a solution that completes in at most 15 seconds on ten-year-old hardware.

If you’d like to support Advent of Code, you can do so indirectly by helping to share it with others or directly via AoC++.

If you get stuck, try your solution against the examples given in the puzzle; you should get the same answers. If not, re-read the description. Did you misunderstand something? Is your program doing something you don’t expect? After the examples work, if your answer still isn’t correct, build some test cases for which you can verify the answer by hand and see if those work with your program. Make sure you have the entire puzzle input. If you’re still stuck, maybe ask a friend for help, or come back to the puzzle later. You can also ask for hints in the subreddit.

Is there an easy way to select entire code blocks? You should be able to triple-click code blocks to select them. You’ll need JavaScript enabled.

#!/usr/bin/env perl

use warnings;

use strict;

print "You can test it out by ";

print "triple-clicking this code.\n";

How does authentication work? Advent of Code uses OAuth to confirm your identity through other services. When you log in, you only ever give your credentials to that service - never to Advent of Code. Then, the service you use tells the Advent of Code servers that you’re really you. In general, this reveals no information about you beyond what is already public; here are examples from Reddit and GitHub. Advent of Code will remember your unique ID, names, URL, and image from the service you use to authenticate.

Why was this puzzle so easy / hard? The difficulty and subject matter vary throughout each event. Very generally, the puzzles get more difficult over time, but your specific skillset will make each puzzle significantly easier or harder for you than for someone else. Making puzzles is tricky.

Why do the puzzles unlock at midnight EST/UTC-5? Because that’s when I can consistently be available to make sure everything is working. I also have a family, a day job, and even need sleep occasionally. If you can’t participate at midnight, that’s not a problem; if you want to race, many people use private leaderboards to compete with people in their area.

I find the text on the site hard to read. Is there a high contrast mode? There is a high contrast alternate stylesheet. Firefox supports these by default (View -> Page Style -> High Contrast).

I have a puzzle idea! Can I send it to you? Please don’t. Because of legal issues like copyright and attribution, I don’t accept puzzle ideas, and I won’t even read your email if it looks like one, just in case I use parts of it by accident.

Did I find a bug with a puzzle? Once a puzzle has been out for even an hour, many people have already solved it; after that point, bugs are very unlikely. Start by asking on the subreddit.

Should I try to get a fast solution time? Maybe. Solving puzzles is hard enough on its own, but trying for a fast time also requires many additional skills and a lot of practice; speed-solves often look nothing like code that would pass a code review. If that sounds interesting, go for it! However, you should do Advent of Code in a way that is useful to you, and so it is completely fine to choose an approach that meets your goals and ignore speed entirely.

Why did the number of days per event change? It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).

What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn’t compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I’ve made it so you can share a read-only view of your private leaderboard. Please don’t use this feature or data to create a “new” global leaderboard.)

While trying to get a fast time on a private leaderboard, may I use AI / watch streamers / check the solution threads / ask a friend for help / etc? If you are a member of any private leaderboards, you should ask the people that run them what their expectations are of their members. If you don’t agree with those expectations, you should find a new private leaderboard or start your own! Private leaderboards might have rules like maximum runtime, allowed programming language, what time you can first open the puzzle, what tools you can use, or whether you have to wear a silly hat while working.

Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.

Can I copy/redistribute part of Advent of Code? Please don’t. Advent of Code is free to use, not free to copy. If you’re posting a code repository somewhere, please don’t include parts of Advent of Code like the puzzle text or your inputs. If you’re making a website, please don’t make it look like Advent of Code or name it something similar.

...

Read the original on adventofcode.com »

2 700 shares, 33 trendiness

Boing

...

Read the original on boing.greg.technology »

3 451 shares, 21 trendiness

Zigbook is Plagiarizing the Zigtools Playground

For those unfamiliar, Zigtools was founded to support the Zig community, especially newcomers, by creating editor tooling such as ZLS, providing building blocks for language servers written in Zig with lsp-kit, working on tools like the Zigtools Playground, and contributing to Zig editor extensions like vscode-zig.

A couple weeks ago, a Zig resource called Zigbook was released with a bold claim of zero AI and an original “project-based” structure.

Unfortunately, even a cursory look at the nonsense chapter structure, book content, examples, generic website, or post-backlash issue-disabled repo reveals that the book is wholly LLM slop and the project itself is structured like some sort of sycophantic psy-op, with botted accounts and fake reactions.

We’re leaving out all direct links to Zigbook to not give them any more SEO traction.

We thought that the broad community backlash would be the end of the project, but Zigbook persevered, releasing just last week a brand new feature, a “high-voltage beta” Zig playground.

As we at Zigtools have our own Zig playground (repo, website), our interest was immediately piqued. The form and functionality looked pretty similar, and Zigbook even integrated (in a non-functional manner) ZLS into their playground to provide all the fancy editor bells-and-whistles, like code completions and goto definition.

Knowing Zigbook’s history of deception, we immediately investigated the WASM blobs. Unfortunately, the WASM blobs are byte-for-byte identical to ours. This cannot be a coincidence given the two blobs (zig.wasm, a lightly modified version of the Zig compiler, and zls.wasm, ZLS with a modified entry point for WASI) are entirely custom-made for the Zigtools Playground.

We archived the WASM files for your convenience, courtesy of the great Internet Archive:

We proceeded to look at the JavaScript code, which we quickly determined was similarly copied, but with LLM distortions, likely to prevent the code from being completely identical. Still, certain sections were copied one-to-one, like the JavaScript worker data-passing structure and logging (original ZLS playground code, plagiarized Zigbook code).

The following code from both files is identical:

try {
  // @ts-ignore
  const exitCode = wasi.start(instance);
  postMessage({
    stderr: `\n\n---\nexit with exit code ${exitCode}\n---\n`,
  });
} catch (err) {
  postMessage({ stderr: `${err}` });
}
postMessage({
  done: true,
});

onmessage = (event) => {
  if (event.data.run) {
    run(event.data.run);
  }
};

The \n\n---\nexit with exit code ${exitCode}\n---\n string is perhaps the most obviously copied one.

Funnily enough, despite copying many parts of our code, Zigbook didn’t copy the most important part of the ZLS integration code, the JavaScript ZLS API designed to work with the ZLS WASM binary’s API. That JavaScript code is absolutely required to interact with the ZLS binary which they did plagiarize. Zigbook either avoided copying that JavaScript code because they knew it would be too glaringly obvious, because they fundamentally do not understand how the Zigtools Playground works, or because they plan to copy more of our code.

To be clear, copying our code and WASM blobs is entirely permissible given that the playground and Zig are MIT licensed. Unfortunately, Zigbook has not complied with the terms of the MIT license at all, and seemingly claims the code and blobs as their own without correctly reproducing the license.

We sent Zigbook a neutral PR correcting the license violations, but they quickly closed it and deleted the description, seemingly to hide their misdeeds.

The original description (also available in the “edits” dropdown of the original PR comment) is reproduced below:

We (@zigtools) noticed you were using code from the Zigtools Playground, including byte-by-byte copies of our WASM blobs and excerpts of our JavaScript source code. This is a violation of the MIT license that the Zigtools Playground is licensed under, alongside a violation of the Zig MIT license (for the zig.wasm blob).

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

We’ve fixed this by adding the licenses in question to your repository. As your repository does not include a direct link to the *.wasm dependencies, we’ve added a license disclaimer on the playground page as well that mentions the licenses.

Zigbook’s aforementioned bad behavior, their continued violation of our license, and their unwillingness to fix the violation motivated us to write this blog post.

It’s sad that our first blog post is about the plagiarism of our coolest subproject. We challenged ourselves by creating a WASM-based client-side playground to enable offline usage, code privacy, and no server costs.

This incident has motivated us to invest more time into our playground and has generated a couple of ideas:

* We’d like to enable multi-file support to allow more complex Zig projects to be run in the browser

* We’d like to collaborate with fellow Ziguanas to integrate the playground into their excellent Zig tutorials, books, and blogposts

  * A perfect example use case would be enabling folks to hop into Ziglings online with the playground

  * The Zig website itself would be a great target as well!

* We’d like to support stack traces using DWARF debug info, which is not yet emitted by the self-hosted Zig compiler

As Zig community members, we advise all other members of the Zig community to steer clear of Zigbook.

If you’re looking to learn Zig, we strongly recommend the official Zig learn page, which contains excellent resources from the previously mentioned Ziglings to Karl Seguin’s Learning Zig.

We’re also using this opportunity to mention that we’re fundraising to keep ZLS sustainable for our only full-time maintainer, Techatrix. We’d be thrilled if you’d be willing to give just $5 a month. You can check out our OpenCollective or GitHub Sponsors.

...

Read the original on zigtools.org »

4 364 shares, 31 trendiness

Windows drive letters are not limited to A-Z

- Programming

- Windows

On its own, the title of this post is just a true piece of trivia, verifiable with the built-in subst tool (among other methods).

Here’s an example creating the drive +:\ as an alias for a directory at C:\foo:

subst +: C:\foo

The +:\ drive then works as normal (at least in cmd.exe; this will be discussed more later):

> cd /D +:\

+:\> tree .

Folder PATH listing

Volume serial number is 00000001 12AB:23BC

└───bar

However, understanding why it’s true elucidates a lot about how Windows works under the hood, and turns up a few curious behaviors.

The paths that most people are familiar with are Win32 namespace paths, e.g. something like C:\foo, which is a drive-absolute Win32 path. However, the high-level APIs that take Win32 paths, like CreateFileW, ultimately will convert a path like C:\foo into an NT namespace path before calling into a lower-level API within ntdll.dll like NtCreateFile.

This can be confirmed with NtTrace, where a call to CreateFileW with C:\foo ultimately leads to a call of NtCreateFile with \??\C:\foo:

NtCreateFile( FileHandle=0x40c07ff640 [0xb8], DesiredAccess=SYNCHRONIZE|GENERIC_READ|0x80, ObjectAttributes="\??\C:\foo", IoStatusBlock=0x40c07ff648 [0/1], AllocationSize=null, FileAttributes=0, ShareAccess=7, CreateDisposition=1, CreateOptions=0x4000, EaBuffer=null, EaLength=0 ) => 0

NtClose( Handle=0xb8 ) => 0

That \??\C:\foo is an NT namespace path, which is what NtCreateFile expects. To understand this path, though, we need to talk about the Object Manager, which is responsible for handling NT paths.

The Object Manager is responsible for keeping track of named objects, which we can explore using the WinObj tool. The \?? part of the \??\C:\foo path is actually a special virtual folder within the Object Manager that combines the \GLOBAL?? folder and a per-user DosDevices folder together.

For me, the object C: is within \GLOBAL??, and is actually a symbolic link to \Device\HarddiskVolume4:

So, \??\C:\foo ultimately resolves to \Device\HarddiskVolume4\foo, and then it’s up to the actual device to deal with the foo part of the path.

The important thing here, though, is that \??\C:\foo is just one way of referring to the device path \Device\HarddiskVolume4\foo. For example, volumes will also get a named object created using their GUID with the format Volume{18123456-abcd-efab-cdef-1234abcdabcd} that is also a symlink to something like \Device\HarddiskVolume4, so a path like \??\Volume{18123456-abcd-efab-cdef-1234abcdabcd}\foo is effectively equivalent to \??\C:\foo.

All this is to say that there’s nothing innately special about the named object C:; the Object Manager treats it just like any other symbolic link and resolves it accordingly.

As I see it, drive letters are essentially just a convention borne out of the conversion of a Win32 path into an NT path. In particular, that comes down to the implementation of RtlDosPathNameToNtPathName_U.

In other words, since RtlDosPathNameToNtPathName_U converts C:\foo to \??\C:\foo, an object named C: will behave like a drive letter. To give an example of what I mean by that: in an alternate universe, RtlDosPathNameToNtPathName_U could convert the path FOO:\bar to \??\FOO:\bar, and then FOO: could behave like a drive letter.

So, getting back to the title, how does RtlDosPathNameToNtPathName_U treat something like +:\foo? Well, exactly the same as C:\foo:

> paths.exe C:\foo

path type: .DriveAbsolute

nt path: \??\C:\foo

> paths.exe +:\foo

path type: .DriveAbsolute

nt path: \??\+:\foo
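The drive-absolute case of this conversion can be sketched in a few lines. This is a deliberate toy model, not the real logic: the actual RtlDosPathNameToNtPathName_U also handles relative paths, UNC paths, \\?\-prefixed paths, and more.

```python
def to_nt_path(path: str) -> str:
    """Toy model of the drive-absolute case only: any single code unit
    followed by ':\\' is treated as a drive letter, so '+:\\foo' converts
    exactly like 'C:\\foo'. Everything else is rejected for simplicity."""
    if len(path) >= 3 and path[1] == ":" and path[2] == "\\":
        return "\\??\\" + path
    raise ValueError(f"not drive-absolute: {path!r}")

print(to_nt_path("C:\\foo"))  # \??\C:\foo
print(to_nt_path("+:\\foo"))  # \??\+:\foo
```

The key point the sketch captures is that nothing in the conversion itself cares whether the character before the colon is a letter.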

Therefore, if an object with the name +: is within the virtual folder \??, we can expect the Win32 path +:\ to behave like any other drive-absolute path, which is exactly what we see.

This section only focuses on a few things that were relevant to what I was working on. I encourage others to investigate the implications of this further if they feel so inclined.

Drives with a drive letter other than A-Z do not appear in File Explorer, and cannot be navigated to in File Explorer.

For the “do not appear” part, my guess as to what’s happening is that explorer.exe is walking \?? and looking specifically for objects named A: through Z:. For the “cannot be navigated to” part, that’s a bit more mysterious, but my guess is that explorer.exe has a lot of special logic around handling paths typed into the location bar, and part of that restricts drive letters to A-Z (i.e. it’s short-circuiting before it ever tries to actually open the path).

PowerShell seems to reject non-A-Z drives as well:

PS C:\> cd +:\

cd : Cannot find drive. A drive with the name '+' does not exist.

At line:1 char:1

+ cd +:\

+ CategoryInfo  : ObjectNotFound: (+:String) [Set-Location], DriveNotFoundException

+ FullyQualifiedErrorId : DriveNotFound,Microsoft.PowerShell.Commands.SetLocationCommand

Drive letters don’t have to be within the ASCII range at all; they can also be non-ASCII characters.

> subst €: C:\foo

> cd /D €:\

€:\> tree .

Folder PATH listing

Volume serial number is 000000DE 12AB:23BC

└───bar

Non-ASCII drive letters are even case-insensitive, just as A-Z are:

> subst Λ: C:\foo

> cd /D λ:\

λ:\> tree .

Folder PATH listing

Volume serial number is 000000DE 12AB:23BC

λ:\

└───bar

However, drive letters cannot be arbitrary Unicode graphemes or even arbitrary code points; they are restricted to a single WTF-16 code unit (a u16, so at most U+FFFF). The tool that we’ve been using so far (subst.exe) errors with “Invalid parameter” if you try to use a drive letter with a code point larger than U+FFFF, but you can get around that by going through the MountPointManager directly.

However, having the symlink in place doesn’t solve anything on its own:

> cd /D 𤭢:\

The filename, directory name, or volume label syntax is incorrect.

This is because there’s no way to get the drive-absolute Win32 path 𤭢:\ to end up as the relevant NT path. As mentioned earlier, the behavior of RtlDosPathNameToNtPathName_U is what matters, and we can verify that it will not convert a drive-absolute path with a drive letter bigger than U+FFFF to the relevant NT path:

C:\foo> paths.exe 𤭢:\foo

path type: .Relative

nt path: \??\C:\foo\𤭢:\foo

It’s very common for path-related functions to be written without the use of system-specific APIs, which means that there’s high potential for a mismatch between how RtlDosPathNameToNtPathName_U treats a file path and how something like a particular implementation of path.isAbsolute treats a file path.

As a random example, Rust only considers paths with A-Z drive letters as absolute:

use std::path::Path;

fn main() {
    println!("C:\\ {}", Path::new("C:\\foo").is_absolute());
    println!("+:\\ {}", Path::new("+:\\foo").is_absolute());
    println!("€:\\ {}", Path::new("€:\\foo").is_absolute());
}

> rustc test.rs

> test.exe

C:\ true

+:\ false

€:\ false

Whether or not this represents a problem worth fixing is left as an exercise for the reader (I genuinely don’t know if it is a problem), but there’s a second wrinkle (hinted at previously) involving text encoding that can make something like an isAbsolute implementation return different results for the same path. This wrinkle is the reason I looked into this whole thing in the first place, as when I was doing some work on Zig’s path-related functions recently, I realized that looking at path[0], path[1], and path[2] for a pattern like C:\ will look at different parts of the path depending on the encoding. That is, for something like €:\ (which is made up of the code points U+20AC, U+003A, U+005C):

* Encoded as WTF-16, where U+20AC can be encoded as the single u16 code unit 0x20AC, that’d mean path[0] will be 0x20AC, path[1] will be 0x3A (:), and path[2] will be 0x5C (\), which looks like a drive-absolute path

* Encoded as WTF-8, where U+20AC is encoded as three u8 code units (0xE2 0x82 0xAC), that’d mean path[0] will be 0xE2, path[1] will be 0x82, and path[2] will be 0xAC, meaning it will look nothing like a drive-absolute path
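The difference is easy to see by encoding the string both ways. A quick sketch (Python’s UTF-16-LE and UTF-8 encoders match WTF-16 and WTF-8 for well-formed text like this):

```python
import struct

path = "€:\\"

# WTF-16: one u16 code unit per character here.
raw = path.encode("utf-16-le")
u16 = list(struct.unpack("<%dH" % (len(raw) // 2), raw))
print([hex(u) for u in u16])  # ['0x20ac', '0x3a', '0x5c'] -> looks drive-absolute

# WTF-8: U+20AC becomes three u8 code units.
u8 = list(path.encode("utf-8"))
print([hex(b) for b in u8])  # ['0xe2', '0x82', '0xac', '0x3a', '0x5c']
```

The first three code units only match the `X:\` pattern in the WTF-16 view.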

So, to write an implementation that treats paths the same regardless of encoding, some decision has to be made:

* If strict compatibility with RtlDetermineDosPathNameType_U/RtlDosPathNameToNtPathName_U is desired, decode the first code point and check for when dealing with WTF-8 (this is the option I went with for the Zig standard library, but I’m not super happy about it)

* If you want to be able to always check path[0]/path[1]/path[2] and don’t care about non-ASCII drive letters, check for path[0] regardless of encoding

* If you don’t care about anything other than the standard A-Z drive letters, then check for that explicitly (this is what Rust does)
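As a rough sketch of the first option, here is a hypothetical helper (not from the article) that decodes the leading code point of WTF-8 input before pattern-matching, assuming well-formed input; real WTF-8 surrogate handling is omitted:

```python
def is_drive_absolute_wtf8(path: bytes) -> bool:
    """Sketch of the 'decode the first code point' option for WTF-8 input.

    Decodes the leading code point, then checks that ':' and '\\' follow,
    mirroring how a WTF-16 implementation would inspect path[0..2]."""
    if not path:
        return False
    b0 = path[0]
    # Length of the leading UTF-8/WTF-8 sequence, from its first byte.
    n = 1 if b0 < 0x80 else 2 if b0 < 0xE0 else 3 if b0 < 0xF0 else 4
    first = path[:n].decode("utf-8", errors="replace")
    # Drive letters are limited to a single u16 code unit (<= U+FFFF).
    if len(first) != 1 or ord(first) > 0xFFFF:
        return False
    return path[n:n + 2] == b":\\"

print(is_drive_absolute_wtf8("C:\\foo".encode()))   # True
print(is_drive_absolute_wtf8("€:\\foo".encode()))   # True
print(is_drive_absolute_wtf8("𤭢:\\foo".encode()))  # False: 𤭢 is > U+FFFF
```

Note how the multi-byte cases make the simple path[0]/path[1]/path[2] check impossible for WTF-8 without decoding first.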

Something bizarre that I found with this whole thing is that the kernel32.dll API SetVolumeMountPointW has its own unique quirk when dealing with non-ASCII drive letters. Specifically, this code (attempting to create the drive €:\) will succeed:

const std = @import("std");
const windows = std.os.windows;
const L = std.unicode.wtf8ToWtf16LeStringLiteral;

extern "kernel32" fn SetVolumeMountPointW(
    VolumeMountPoint: windows.LPCWSTR,
    VolumeName: windows.LPCWSTR,
) callconv(.winapi) windows.BOOL;

pub fn main() !void {
    const volume_name = L("\\\\?\\Volume{18123456-abcd-efab-cdef-1234abcdabcd}\\");
    const mount_point = L("€:\\");

    if (SetVolumeMountPointW(mount_point, volume_name) == 0) {
        const err = windows.GetLastError();
        std.debug.print("{any}\n", .{err});
        return error.Failed;
    }
}

However, when we look at the Object Manager, the €: symlink won’t exist… but ¬: will:

...

Read the original on www.ryanliptak.com »

5 341 shares, 46 trendiness

Don't Push AI Down Our Throats

AI is being done wrong.

It’s being pushed down our throats. It’s in our search bars, our operating systems, and even our creative tools, whether we asked for it or not. It feels less like an upgrade and more like a force-feeding.

It doesn’t need to be this way. Technology can be adopted slowly. Organically. One piece at a time.

Right now, the frantic pace of deployment isn’t about utility; it’s about liquidity. It’s being shoved down our throats because some billionaires need to make some more billions before they die.

We don’t owe them anything.

It is time to do AI the right way. The honeymoon phase of the hype cycle is over. We now know the limitations. We see the hallucinations. We see the errors. Let’s pick the things which work and slowly integrate them into our lives. We don’t need to do it this quarter just because some startup has to do an earnings call. We will do it if it helps us.

And let’s be clear: We don’t need AGI (Artificial General Intelligence). We don’t need a digital god. We just need software that works.

If the current models don’t work? No problem. Let the researchers go back to the lab and do their jobs. We will continue doing ours. We might even generate more data for them in the process, but this time, we do it correctly. We will work with the creators, writers, and artists, instead of ripping off their life’s work to feed the model.

I hear the complaints from the tech giants already: “But we bought too many GPUs! We spent billions on infrastructure! They have to be put to work!”

I will use what creates value for me. I will not buy anything that is of no use to me.

There are plenty of legitimate use cases for AI and enough places to make money without force-feeding the market. But I will not allow AI to be pushed down my throat just to justify your bad investment.

...

Read the original on gpt3experiments.substack.com »

6 305 shares, 26 trendiness

Norway wealth fund to vote for human rights report at Microsoft AGM, against management

Norway’s $2 trillion wealth fund said on Sunday it would vote for a shareholder proposal at the upcoming Microsoft annual general meeting requiring a report on the risks of operating in countries with significant human rights concerns.

Microsoft management had recommended shareholders vote against the motion.

The fund also said it would vote against the re-appointment of CEO Satya Nadella as chair of the board, as well as against his pay package.

The fund owned a 1.35% stake worth $50 billion in the company as of June 30, according to fund data, making it the fund’s second-largest equity holding overall, after Nvidia.

It is Microsoft’s eighth-largest shareholder, according to LSEG data.

Investors in the U.S. tech company will decide whether to ratify the proposed motions at the AGM on Dec. 5.

...

Read the original on www.cnbc.com »

7 288 shares, 51 trendiness

Writing a good CLAUDE.md

Note: this post is also applicable to AGENTS.md, the open-source equivalent of CLAUDE.md for agents and harnesses like OpenCode, Zed, Cursor and Codex.

LLMs are stateless functions. Their weights are frozen by the time they’re used for inference, so they don’t learn over time. The only thing that the model knows about your codebase is the tokens you put into it.

Similarly, coding agent harnesses such as Claude Code usually require you to manage agents’ memory explicitly. CLAUDE.md (or AGENTS.md) is the only file that by default goes into every single conversation you have with the agent.

This has three important implications:

Coding agents know absolutely nothing about your codebase at the beginning of each session.

The agent must be told anything that’s important to know about your codebase each time you start a session.

CLAUDE.md is the preferred way of doing this.

Since Claude doesn’t know anything about your codebase at the beginning of each session, you should use CLAUDE.md to onboard Claude into your codebase. At a high level, this means it should cover:

WHAT: tell Claude about the tech, your stack, the project structure. Give Claude a map of the codebase. This is especially important in monorepos! Tell Claude what the apps are, what the shared packages are, and what everything is for, so that it knows where to look for things.

WHY: tell Claude the purpose of the project and what everything is doing in the repository. What are the purpose and function of the different parts of the project?

HOW: tell Claude how it should work on the project. For example, do you use bun instead of node? You want to include all the information it needs to actually do meaningful work on the project. How can Claude verify its changes? How can it run tests, typechecks, and compilation steps?
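Put together, a minimal CLAUDE.md covering the WHAT, WHY, and HOW might look something like this sketch (the project, stack, paths, and commands are invented for illustration):

```markdown
# Acme Dashboard

WHAT: pnpm monorepo. `apps/web` is the Next.js frontend, `apps/api` the
Fastify backend; shared code lives in `packages/ui` and `packages/db`.

WHY: internal analytics dashboard. `apps/api` aggregates events defined in
`packages/db` and serves them to `apps/web`.

HOW:
- Use `pnpm`, never `npm` or `yarn`.
- Verify changes with `pnpm test` and `pnpm typecheck` from the repo root.
- Build a single app with `pnpm --filter web build`.
```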

But the way you do this is important! Don’t try to stuff every command Claude could possibly need to run into your CLAUDE.md file - you will get sub-optimal results.

Regardless of which model you’re using, you may notice that Claude frequently ignores your CLAUDE.md file’s contents.

You can investigate this yourself by putting a logging proxy between the claude code CLI and the Anthropic API using ANTHROPIC_BASE_URL. Claude Code injects the following system reminder with your CLAUDE.md file in the user message to the agent:

IMPORTANT: this context may or may not be relevant to your tasks.

You should not respond to this context unless it is highly relevant to your task.

As a result, Claude will ignore the contents of your CLAUDE.md if it decides that it is not relevant to its current task. The more information you have in the file that’s not universally applicable to the tasks you have it working on, the more likely it is that Claude will ignore your instructions in the file.

Why did Anthropic add this? It’s hard to say for sure, but we can speculate a bit. Most CLAUDE.md files we come across include a bunch of instructions that aren’t broadly applicable. Many users treat the file as a way to add “hotfixes” for behavior they didn’t like by appending lots of narrowly scoped instructions.

We can only assume that the Claude Code team found that by telling Claude to ignore the bad instructions, the harness actually produced better results.

The following section provides a number of recommendations on how to write a good CLAUDE.md file following context engineering best practices.

Your mileage may vary. Not all of these rules are necessarily optimal for every setup. Like anything else, feel free to break the rules once…

you understand when & why it’s okay to break them

you have a good reason to do so

### Less (instructions) is more

It can be tempt­ing to try and stuff every sin­gle com­mand that claude could pos­si­bly need to run, as well as your code stan­dards and style guide­lines into CLAUDE.md. We rec­om­mend against this.

Though the topic has­n’t been in­ves­ti­gated in an in­cred­i­bly rig­or­ous man­ner, some re­search has been done which in­di­cates the fol­low­ing:

Frontier think­ing LLMs can fol­low ~ 150-200 in­struc­tions with rea­son­able con­sis­tency. Smaller mod­els can at­tend to fewer in­struc­tions than larger mod­els, and non-think­ing mod­els can at­tend to fewer in­struc­tions than think­ing mod­els.

Smaller mod­els get MUCH worse, MUCH more quickly. Specifically, smaller mod­els tend to ex­hibit an ex­po­ten­tial de­cay in in­struc­tion-fol­low­ing per­for­mance as the num­ber of in­struc­tions in­crease, whereas larger fron­tier think­ing mod­els ex­hibit a lin­ear de­cay (see be­low). For this rea­son, we rec­om­mend against us­ing smaller mod­els for multi-step tasks or com­pli­cated im­ple­men­ta­tion plans.

LLMs bias to­wards in­struc­tions that are on the pe­riph­eries of the prompt: at the very be­gin­ning (the Claude Code sys­tem mes­sage and CLAUDE.md), and at the very end (the most-re­cent user mes­sages)

As in­struc­tion count in­creases, in­struc­tion-fol­low­ing qual­ity de­creases uni­formly. This means that as you give the LLM more in­struc­tions, it does­n’t sim­ply ig­nore the newer (“further down in the file”) in­struc­tions - it be­gins to ig­nore all of them uni­formly

Our analysis of the Claude Code harness indicates that Claude Code's system prompt contains ~50 individual instructions. Depending on the model you're using, that's nearly a third of the instructions your agent can reliably follow already - and that's before rules, plugins, skills, or user messages.

This implies that your CLAUDE.md file should contain as few instructions as possible - ideally only ones which are universally applicable to your task.

All else being equal, an LLM will perform better on a task when its context window is full of focused, relevant context - including examples, related files, tool calls, and tool results - than when its context window contains a lot of irrelevant context.

Since CLAUDE.md goes into every single session, you should ensure that its contents are as universally applicable as possible.

For example, avoid including instructions about how to structure a new database schema - this won't matter, and will distract the model, when you're working on something unrelated!

Length-wise, the less-is-more principle applies as well. While Anthropic does not have an official recommendation on how long your CLAUDE.md file should be, the general consensus is that < 300 lines is best, and shorter is even better.

At HumanLayer, our root CLAUDE.md file is less than sixty lines.

Writing a concise CLAUDE.md file that covers everything you want Claude to know can be challenging, especially in larger projects.

To address this, we can leverage the principle of Progressive Disclosure to ensure that Claude only sees task- or project-specific instructions when it needs them.

Instead of including all your different instructions about building your project, running tests, code conventions, or other important context in your CLAUDE.md file, we recommend keeping task-specific instructions in separate markdown files with self-descriptive names somewhere in your project:

```
agent_docs/
|- building_the_project.md
|- running_tests.md
|- code_conventions.md
|- service_architecture.md
|- database_schema.md
|- service_communication_patterns.md
```

Then, in your CLAUDE.md file, you can include a list of these files with a brief description of each, and instruct Claude to decide which (if any) are relevant and to read them before it starts working. Or, ask Claude to present you with the files it wants to read for approval first before reading them.
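As a concrete illustration, such an index section in CLAUDE.md might look like the following (the file names and one-line descriptions are hypothetical, mirroring the agent_docs/ layout above):

```markdown
## Project documentation

Before starting work, decide which (if any) of these are relevant and read them first:

- `agent_docs/building_the_project.md` - how to build each service locally
- `agent_docs/running_tests.md` - test commands and how to interpret failures
- `agent_docs/code_conventions.md` - naming and module-layout conventions
- `agent_docs/service_architecture.md` - how the services fit together
```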

Prefer pointers to copies. Don't include code snippets in these files if possible - they will become out-of-date quickly. Instead, include file:line references to point Claude to the authoritative context.

Conceptually, this is very similar to how Claude Skills are intended to work, although skills are more focused on tool use than instructions.

### Claude is (not) an ex­pen­sive lin­ter

One of the most common things that we see people put in their CLAUDE.md file is code style guidelines. Never send an LLM to do a linter's job. LLMs are comparatively expensive and incredibly slow compared to traditional linters and formatters. We think you should always use deterministic tools whenever you can.

Code style guidelines will inevitably add a bunch of instructions and mostly-irrelevant code snippets into your context window, degrading your LLM's performance and instruction-following and eating up your context window.

LLMs are in-context learners! If your code follows a certain set of style guidelines or patterns, you should find that, armed with a few searches of your codebase (or a good research document!), your agent tends to follow existing code patterns and conventions without being told to.

If you feel very strongly about this, you might even consider setting up a Claude Code Stop hook that runs your formatter & linter and presents errors to Claude for it to fix. Don't make Claude find the formatting issues itself.

Bonus points: use a linter that can automatically fix issues (we like Biome), and carefully tune your rules about what can safely be auto-fixed for maximum (safe) coverage.
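A Stop hook along these lines might be configured in `.claude/settings.json` roughly as follows. This is a sketch under assumptions: the exact hook schema can change between Claude Code versions, and the Biome invocation shown is illustrative, not authoritative.

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "npx biome check --write . || npx biome check ."
          }
        ]
      }
    ]
  }
}
```

The idea: the first invocation auto-fixes what it safely can; if unfixable issues remain it exits non-zero, and the second invocation prints the remaining errors for Claude to address.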

You could also create a Slash Command that includes your code guidelines and points Claude at the changes in version control, or at your git status, or similar. This way, you can handle implementation and formatting separately. You will see better results with both as a result.

### Don’t use /init or auto-gen­er­ate your CLAUDE.md

Both Claude Code and other harnesses like OpenCode come with ways to auto-generate your CLAUDE.md file (or AGENTS.md).

Because CLAUDE.md goes into every single session with Claude Code, it is one of the highest-leverage points of the harness - for better or for worse, depending on how you use it.

A bad line of code is a bad line of code. A bad line of an implementation plan has the potential to create a lot of bad lines of code. A bad line of a research document that misunderstands how the system works has the potential to result in a lot of bad lines in the plan, and therefore a lot more bad lines of code as a result.

But the CLAUDE.md file affects every single phase of your workflow and every single artifact produced by it. As a result, we think you should spend some time thinking very carefully about every single line that goes into it:

- CLAUDE.md is for onboarding Claude into your codebase. It should define your project's WHY, WHAT, and HOW.

- Less (instructions) is more. While you shouldn't omit necessary instructions, you should include as few instructions as reasonably possible in the file.

- Keep the contents of your CLAUDE.md concise and universally applicable.

- Use Progressive Disclosure - don't tell Claude all the information you could possibly want it to know. Rather, tell it how to find important information so that it can find and use it, but only when it needs to, to avoid bloating your context window or instruction count.

- Claude is not a linter. Use linters and code formatters, and use other features like Hooks and Slash Commands as necessary.

- CLAUDE.md is the highest-leverage point of the harness, so avoid auto-generating it. You should carefully craft its contents for best results.

...

Read the original on www.humanlayer.dev »

8 265 shares, 19 trendiness

Migrating Dillo from GitHub

I would like to migrate the Dillo project away from GitHub into a new home which is friendlier to Dillo and solves some of its problems. This page summarizes the current situation with GitHub and why I decided to move away from it to a self-hosted server with multiple mirrors in other forges.

Before we dive into the details, I would like to briefly mention what happened with the old site. The original Dillo website was at dillo.org, which also had the source code of Dillo in a Mercurial repository at hg.dillo.org. But it also included the mail server used to reach the developers, a bug tracker, and archives for the mailing list. However, in 2022 the domain was lost, and someone else decided to buy it to put up a similar site, but one plagued with AI-generated ads. The original developers are no longer active, but luckily I had a copy of the Mercurial repository, and with some help I was able to recover a lot of material from the original server (some parts are still missing to this day).

I want to avoid this situation as much as possible, so we cannot rely on a single site that can go down and cause the whole project to be lost. Initially, I uploaded the Dillo source and website to git repositories on GitHub, but I no longer think this is a good idea.

GitHub has been useful to store all repositories of the Dillo project, as well as to run the CI workflows for platforms for which I don't have a machine available (like Windows, Mac OS or some BSDs).

However, it has several problems that make it less suitable for developing Dillo. The most annoying problem is that the frontend barely works without JavaScript, so we cannot open issues, pull requests, source code or CI logs in Dillo itself, despite them being mostly plain HTML, which I don't think is acceptable. In the past, it used to gracefully degrade without enforcing JavaScript, but now it doesn't. Additionally, the page is very resource hungry, which I don't think is needed to render mostly static text.

Another big problem is that it is a single point of failure. I don't mean that GitHub is stored on a single machine, but that it is controlled by a single entity which can unilaterally ban our repository or account, and we would lose the ability to announce at that URL what happened. This can cause data loss if we don't have a local copy of all the data.

On the usability side, the platform has become slower and slower over time, which is affecting the development process. It also requires you to have a fast Internet connection at all times, which is not always the case for me. Additionally, GitHub seems to encourage a "push model" in which you are notified when a new event occurs in your project(s), but I don't want to work with that model. Instead, I prefer a "pull model", where I only get updates when I specifically look for them. This model would also allow me to easily work offline. Unfortunately, I see that the same push model has been copied by alternative forges.

On the social side, I feel that it doesn't have the right tools to moderate users, especially for projects where the ratio of non-technical users to developers is high. This is especially problematic when active issues with developer notes begin to fill with comments from users who have never contributed to the project and usually do more harm than good. This situation ends up causing burnout in developers.

Lastly, GitHub seems to follow the current trend of over-focusing on LLMs and generative AI, which are destroying the open web (or what remains of it), among other problems. It has a direct impact on us because sites protect themselves with a JavaScript wall (or worse, browser fingerprinting) to prevent aggressive LLM crawler bots from overloading the site, but they also leave Dillo users out. So I would prefer not to encourage this trend. Despite my intentions, moving Dillo away won't change much their capability to train their models with our code, but at least I won't be actively helping.

After researching the available options, it seems that none of the current forges would allow us to have a redundant system that prevents the forge from becoming a single point of failure and solves the rest of the problems with GitHub. Therefore, I decided to self-host Dillo myself, move all important data to git repositories, and keep them synchronized in multiple git mirrors.

I decided to buy the dillo-browser.org domain name and set up a very small VPS. Initially, I was very skeptical that it would be able to survive on today's web, but it seems to be doing an acceptable job at handling it (mostly AI bot traffic masquerading as users). The Dillo website is available here:

I researched which git frontends may suit our needs, and I discovered that most options are very complicated to self-host and require a lot of server resources and JavaScript on the frontend. I ended up testing cgit, which is written in C and seems to be very lightweight in both RAM and CPU usage. Furthermore, the web frontend doesn't require JS, so I can use it from Dillo (I modified cgit's CSS slightly to work well in Dillo). It is available at this URL:

Regarding the bug tracker, I also took a look at the available options. They are all too complicated for what I would like to have, and they seem to centralize the data in a database that can get lost. This is precisely what happened with the old Dillo bug tracker, and we are still unable to recover the original bug entries.

To avoid this problem, I created my own bug tracker software, buggy, which is a very simple C tool that parses plain Markdown files and creates a single HTML page for each bug. All bugs are stored in a git repository, and a git hook regenerates the bug pages and the index on each new commit. As it is simply plain text, I can edit the bugs locally and only push them to the remote when I have Internet access again, so it works nicely offline. Also, as the output is just a static HTML site, I don't need to worry about having any vulnerabilities in my code, as it will only run at build time. You can see it live here, with the exported issues from GitHub:
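The actual buggy tool is written in C; as a rough sketch of the same idea (not the real implementation, and with a deliberately naive Markdown conversion), a hook-triggered generator that turns a directory of Markdown bug files into static HTML pages plus an index could look like this:

```python
from pathlib import Path

def render_bug(md_text: str) -> str:
    # Minimal markdown-to-HTML: the first line "# Title" becomes <h1>,
    # every remaining non-blank line becomes a <p> block.
    lines = md_text.strip().splitlines()
    title = lines[0].lstrip("# ").strip() if lines else "untitled"
    body = "".join(f"<p>{line}</p>" for line in lines[1:] if line.strip())
    return f"<html><body><h1>{title}</h1>{body}</body></html>"

def build_index(bug_dir: Path, out_dir: Path) -> list[str]:
    # Regenerate one HTML page per bug plus an index page, as a
    # post-commit git hook might do on the server.
    out_dir.mkdir(exist_ok=True)
    entries = []
    for md in sorted(bug_dir.glob("*.md")):
        (out_dir / f"{md.stem}.html").write_text(render_bug(md.read_text()))
        entries.append(md.stem)
    links = "".join(f'<li><a href="{e}.html">{e}</a></li>' for e in entries)
    (out_dir / "index.html").write_text(f"<ul>{links}</ul>")
    return entries
```

Because the output is plain static HTML, the generator only runs at build time, which is the property the author highlights above.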

The mailing list archives are stored by three independent external services, but I might include a copy in our own archives in the future.

As all the important data is now stored in git repositories, we can mirror them in any forge, without having to rely on a forge's custom storage format for the issues or other data. If a forge goes down (or goes rogue) we can simply switch to another site at a low switching cost. To this end, I have created git mirrors on Codeberg and Sourcehut that are synced with our git server:

However, we still have a single point of failure: the DNS entry of the dillo-browser.org domain. If we lose the DNS entry (like with dillo.org), all services will be unreachable. We could recover from such a situation by relying on alternative ways to reach users - the mailing list, fediverse, or IRC - as well as updating the mirrors to reflect the current situation. It is not ideal, but I don't think it would cause a catastrophic data loss (like it happened before), as all the data is now stored in git and replicated across independent locations.

In order for this page to have some authority, the HTML file is signed with my GPG key (32E65EC501A1B6FDF8190D293EE6BA977EB2A253), which is the same one that I use to sign the latest releases of Dillo and is also listed in my GitHub user profile. The signature is available here and is linked to the page with the tag using the rel=signature relation. You can find more information on how to verify the signature in Dillo RFC-006.

Using OpenPGP signatures is robust against losing the DNS entry, as the authority is not given by the TLS certificate chain but by the trust in the OpenPGP signature, so we could move the site elsewhere and still claim that it is owned by us. Additionally, as we can store the signatures inside all git mirrors, they are also resilient against data loss.

Keep in mind that the migration process requires several moving parts, and it will take a while for it to stabilize (switching costs). The GitHub repositories won't be removed at any point in time, and they will continue to be updated until we finish the migration. When the migration process is completed, I will mark the Dillo repositories as archived and properly communicate it on our site. It is important that we don't remove any commit or tarball release, to avoid breaking downstream builds that still rely on the GitHub URL.

Lastly, I'm glad that we can have our own fully independent and self-hosted site with relatively low expenses and very little energy cost (which is good for the environment, but probably not even noticeable at large scale). With the current DNS and server costs and our current donations, I consider it likely that we can continue covering the expenses for at least the next 3 years in the worst-case scenario. If you are interested in keeping us afloat, you can help via Liberapay.

...

Read the original on dillo-browser.org »

9 260 shares, 17 trendiness

CachyOS — Blazingly Fast OS based on Arch Linux

CachyOS is designed to deliver lightning-fast speeds and stability, ensuring a smooth and enjoyable computing experience every time you use it. Whether you're a seasoned Linux user or just starting out, CachyOS is the ideal choice for those looking for a powerful, customizable and blazingly fast operating system.

Experience cutting-edge Linux performance with CachyOS - a distribution built on Arch Linux, featuring the optimized linux-cachyos kernel with the advanced BORE Scheduler for unparalleled performance.

CachyOS compiles packages with the x86-64-v3, x86-64-v4 and Zen4 instruction sets and LTO to provide higher performance. Core packages also get PGO or BOLT optimization. CachyOS offers a variety of popular Desktop Environments, Wayland Compositors and X11 Window Managers, including KDE Plasma, GNOME, XFCE, i3, Wayfire, LXQt, Openbox, Cinnamon, COSMIC, UKUI, LXDE, Mate, Budgie, Qtile, Hyprland, Sway and Niri. Select your preferred environment during the online installation process. CachyOS offers a choice of two installers to fit your needs: a user-friendly GUI version based on Calamares, and a CLI-based option for those who prefer a streamlined, non-graphical installation experience.

Power up your computing with robust kernel support: CachyOS utilizes the BORE Scheduler for better interactivity, and offers a variety of scheduler options including EEVDF, sched-ext, ECHO, and RT. All kernels are compiled with optimized x86-64-v3, x86-64-v4 and Zen4 instructions and LTO to be optimized for your CPU.

...

Read the original on cachyos.org »

10 254 shares, 10 trendiness

The HTTP QUERY Method

This note is to be removed before publishing as an RFC.

Discussion of this draft takes place on the HTTP working group mailing list (ietf-http-wg@w3.org), which is archived at https://lists.w3.org/Archives/Public/ietf-http-wg/.

Working Group information can be found at https://httpwg.org/; source code and an issues list for this draft can be found at https://github.com/httpwg/http-extensions/labels/query-method.

The changes in this draft are summarized in Appendix C.14.

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 22 May 2026.

Copyright (c) 2025 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.

This specification defines the HTTP QUERY request method as a means of making a safe, idempotent request (Section 9.2 of [HTTP]) that encloses a representation describing how the request is to be processed by the target resource.

However, when the data conveyed is too voluminous to be encoded in the request's URI, this pattern becomes problematic:

* often size limits are not known ahead of time, because a request can pass through many uncoordinated systems (but note that senders and recipients are recommended to support at least 8000 octets),

* expressing certain kinds of data in the target URI is inefficient because of the overhead of encoding that data into a valid URI,

* request URIs are more likely to be logged than request content, and may also turn up in bookmarks,

* encoding queries directly into the request URI effectively casts every possible combination of query inputs as distinct resources.

As an alternative to using GET, many implementations make use of the HTTP POST method to perform queries, as illustrated in the example below. In this case, the input to the query operation is passed as the request content, as opposed to using the request URI's query component.

A typical use of HTTP POST for requesting a query is:

In this variation, however, it is not readily apparent - absent specific knowledge of the resource and server to which the request is being sent - that a safe, idempotent query is being performed.

The QUERY method provides a solution that spans the gap between the use of GET and POST, with the example above being expressed as:

As with POST, the input to the query operation is passed as the content of the request rather than as part of the request URI. Unlike POST, however, the method is explicitly safe and idempotent, allowing functions like caching and automatic retries to operate.
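To make the comparison concrete, a hypothetical query (illustrative resource name and payload, not the draft's verbatim examples) could be expressed both ways. First with POST:

```http
POST /contacts HTTP/1.1
Host: example.org
Content-Type: application/x-www-form-urlencoded

q=foo&limit=10
```

and equivalently with QUERY, where the same content now travels in an explicitly safe, idempotent request:

```http
QUERY /contacts HTTP/1.1
Host: example.org
Content-Type: application/x-www-form-urlencoded

q=foo&limit=10
```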

Recognizing the design principle that any important resource ought to be identified by a URI, this specification describes how a server can assign URIs to both the query itself and a specific query result, for later use in a GET request.

The QUERY method is used to initiate a server-side query. Unlike the GET method, which requests a representation of the resource identified by the target URI (as defined by Section 7.1 of [HTTP]), the QUERY method is used to ask the target resource to perform a query operation within the scope of that target resource.

The content of the request and its media type define the query. The origin server determines the scope of the operation based on the target resource.

Servers MUST fail the request if the Content-Type request field ([HTTP], Section 8.3) is missing or is inconsistent with the request content.

As for all HTTP methods, the target URI's query part takes part in identifying the resource being queried. Whether and how it directly affects the result of the query is specific to the resource and out of scope for this specification.

QUERY requests are safe with regard to the target resource ([HTTP], Section 9.2.1) - that is, the client does not request or expect any change to the state of the target resource. This does not prevent the server from creating additional HTTP resources through which additional information can be retrieved (see Sections 2.3 and 2.4).

Furthermore, QUERY requests are idempotent ([HTTP], Section 9.2.2) - they can be retried or repeated when needed, for instance after a connection failure.

As per Section 15.3 of [HTTP], a 2xx (Successful) response code signals that the request was successfully received, understood, and accepted.

In particular, a 200 (OK) response indicates that the query was successfully processed and the results of that processing are enclosed as the response content.

The "Accept-Query" response header field can be used by a resource to directly signal support for the QUERY method while identifying the specific query format media type(s) that may be used.

Accept-Query contains a list of media ranges (Section 12.5.1 of [HTTP]) using "Structured Fields" syntax ([STRUCTURED-FIELDS]). Media ranges are represented by a List Structured Header Field of either Tokens or Strings, containing the media range value without parameters.

Media type parameters, if any, are mapped to Structured Field Parameters of type String or Token. The choice of Token vs. String is semantically insignificant. That is, recipients MAY convert Tokens to Strings, but MUST NOT process them differently based on the received type.

Media types do not exactly map to Tokens; for instance, media types allow a leading digit, which Tokens do not. In cases like these, the String format needs to be used.

The only supported uses of wildcards are "*/*", which matches any type, or "xxxx/*", which matches any subtype of the indicated type.

The order of types listed in the field value is not significant.

The value of the Accept-Query field applies to every URI on the server that shares the same path; in other words, the query component is ignored. If requests to the same resource return different Accept-Query values, the most recently received fresh value (per Section 4.2 of [HTTP-CACHING]) is used.

Although the syntax for this field appears to be similar to other fields, such as "Accept" (Section 12.5.1 of [HTTP]), it is a Structured Field and thus MUST be processed as specified in Section 4 of [STRUCTURED-FIELDS].
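For instance, a resource that accepts two query formats might advertise them like this (hypothetical, illustrative field value, with one member written as a Token and one as a String):

```http
Accept-Query: application/sql, "application/jsonpath"
```

Per the rules above, the Token/String distinction here is semantically insignificant; a recipient must treat both members the same way regardless of which form was received.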

The QUERY method is subject to the same general security considerations as all HTTP methods, as described in [HTTP].

It can be used as an alternative to passing request information in the URI (e.g., in the query component). This is preferred in some cases, as the URI is more likely to be logged or otherwise processed by intermediaries than the request content. In other cases, where the query contains sensitive information, the potential for logging of the URI might motivate the use of QUERY over GET.

If a server creates a temporary resource to represent the results of a QUERY request (e.g., for use in the Location or Content-Location field), assigns a URI to that resource, and the request contains sensitive information that cannot be logged, then that URI SHOULD be chosen such that it does not include any sensitive portions of the original request content.

Caches that normalize QUERY content incorrectly, or in ways that are significantly different from how the resource processes the content, can return an incorrect response if normalization results in a false positive.

A QUERY request from user agents implementing CORS (Cross-Origin Resource Sharing) will require a "preflight" request, as QUERY does not belong to the set of CORS-safelisted methods (see "Methods" in [FETCH]).

The examples below are for illustrative purposes only; if one needs to send queries that are actually this short, it is likely better to use GET.

The media type used in most examples is "application/x-www-form-urlencoded" (as used in POST requests from browser user agents, defined in [URL]). The Content-Length fields have been omitted for brevity.

The HTTP Method Registry (http://www.iana.org/assignments/http-methods) already contains three other methods with the properties "safe" and "idempotent": PROPFIND ([RFC4918]), REPORT ([RFC3253]), and SEARCH ([RFC5323]).

It would have been possible to re-use any of these, updating it in a way that matches what this specification defines as the new method QUERY. Indeed, the early stages of this specification used "SEARCH".

The method name QUERY ultimately was chosen because:

* The alternatives use a generic media type for the request content ("application/xml"); the semantics of the request depend solely on the request content.

* Furthermore, they all originate from the WebDAV activity, about which many have mixed feelings.

* QUERY captures the relation with the URI's query component well.

This section is to be removed before publishing as an RFC.

We thank all members of the HTTP Working Group for ideas, reviews, and feedback.

The following individuals deserve special recognition: Carsten Bormann, Mark Nottingham, Martin Thomson, Michael Thornburgh, Roberto Polli, Roy Fielding, and Will Hawkins.

Ashok Malhotra participated in early discussions leading to this specification:

Discussion on this HTTP method was reopened by Asbjørn Ulsberg during the HTTP Workshop in 2019:

...

Read the original on www.ietf.org »
