10 interesting stories served every morning and every evening.
A man takes a train from London to the coast. He’s visiting a town called Wulfleet. It’s small and old, the kind of place with a pub that’s been pouring pints since the Battle of Bosworth Field. He’s going to write about it for his blog. He’s excited.
He arrives. He walks to the cute B&B he’d picked out online and checks in. And he writes it all up like any good travel blogger would: in that breezy LiveJournal style from 25 years ago, perhaps, in his case, trying a little too hard.
But as his post goes on, his language gets older. A hundred years older with each jump. The spelling changes. The grammar changes. Words you know are replaced by unfamiliar words, and his attitude gets older too, as the blogger’s voice is replaced by that of a Georgian diarist, an Elizabethan pamphleteer, a medieval chronicler.
By the middle of his post, he’s writing in what might as well be a foreign language.
But it’s not a foreign language. It’s all English.
None of the story is real: not the blogger, not the town. But the language is real, or at least realistic. I constructed the passages myself, working from what we know about how English was written in each period.
It’s a thousand years of the English language, compressed into a single blog post.
Read it and notice where you start to struggle. Notice where you give up entirely. Then meet me on the other side and I’ll tell you what happened to the language (and the blogger).
You’re reading The Dead Language Society, where 35,000+ readers explore the hidden history of the English language. I’m Colin Gorrie: PhD linguist and your guide through 1,500 years of English being weird.
I publish every Wednesday. Paid subscribers get every issue, the full archive, and the content I’m most proud of: practical guides to reading historical texts yourself, honest takes on how language really works, and live book clubs where we read texts like Beowulf and (up next!) Sir Gawain and the Green Knight.
Well, I finally got to the town everyone has been talking about lately. Wulfleet. And let me tell you, it was not easy to get here. It’s ridiculous how close this place is to London, and yet how hard it is to get here. I took a train to some place whose name I can’t pronounce, and then from there I had to hop on a bus. The whole day was shot just getting here.
Not going to lie though: so far, it’s totally worth it.
Yes, it’s the typical English coastal town: the seagulls, the cobblestone streets, the works. But there’s something about it that just makes me want to dress up in a cape and walk around like I’m in a Gothic novel. Although, let’s be honest, do I really need an excuse to do that? :)
Everyone seems really nice here, although I did have one really weird encounter on the way to the B&B. A guy was following me for a while. It kind of freaked me out. Anyway, if you go to Wulfleet, just watch out for this one weird guy who hangs out near the bus stop. I know, real specific. But anyway, that was just a bit odd.
Speaking of which, the B&B is also… interesting. LOL. It has separate hot and cold taps and everything. I’m about to see how the “bed” portion works. I’ll update you on the “breakfast” tomorrow morning. If I can find an internet cafe around here, that is.
My plans for an untroubled sleep were upset, however, when I woke with a start before dawn. The window had, it seemed, come open in the night, though I was perfectly certain I had fastened it. I sprang up from the bed to see what was the cause, but I could see nothing in the darkness — nothing, that is, that I could satisfactorily account for. I closed the window again but was entirely unable to fall asleep due to the shock. I am not, I hope, an easily frightened man, but I confess the incident left me not a little unsettled.
When dawn finally came, I went downstairs to find a well-appointed dining room in which there was laid out a modest but perfectly adequate meal. After I ate, and thanked the landlady — a respectable woman of the kind one expects to find in charge of such an establishment — I decided to take a stroll around the town. The sea air did something to revive me after the events of the previous day, not to mention the night, although a question still weighed on me. Do windows simply burst open in the night? Or was there something else afoot? I resolved to make enquiries, though of whom I was not yet certain.
After spending the day wandering around the environs of the town, and, finding myself hungry, I sought out an inn, where I might buy some supper. It was not difficult to find one, and, sitting alone, I called for supper from what the publican had to offer. I confess I gave no great thought to the quality of the fare. Hunger, that great leveller, makes philosophers of us all, and renders even the meanest dish agreeable.
The place was adequately charming. The tables were covered with guttering candles, and the local rustics seemed to be amusing themselves with great jollity. Reader, I am not one of those travellers who holds himself above the common people of the places he visits. I saw fit rather to join in with their sport and we whiled away the hours together in good cheer. I found them to be as honest and amiable a company as one could wish for.
The only thing that disturbed my good humour was when I thought, for a brief moment, that I saw the man who accosted me yesterday among the crowd. But it must have been a mere fancy, for whatever I thought I saw vanished as quickly as it had appeared. I chided myself for the weakness of my nerves, and took another draught to steady them.
When, at long last, the entertainment was spent, I undertook to return to my lodgings; however, finding myself quite unable to find my way, a fact which owed something to having imbibed rather immoderately in the hours prior — and here let me caution the reader against the particular hospitality of country innkeepers, which is liberal beyond what prudence would advise — I soon found myself at the harbour’s edge.
When I was firſt come to Wulfleet, I did not see the harbour, for I was weary and would ſooner go to the inn, that I might ſleep. It is a truth well known to travellers, that wearineſs of body breeds a kind of blindneſs to all things, however remarkable, and ſo it was with me. But now that I beheld the ſight of it, I marvelled. In the inky blackneſs I could see not a ſtar, nor even a ſliver of the moon. It was indeed a wonder that I did not ſtumble on my way, and periſh in a gutter, for many a man has come to his end by leſs.
Finally, with my mind much filled with reflection, I found my way through dark ſtreets to a familiar alley. This was a welcome sight, as an ill foreboding was lately come into my mind. I entertained for a moment such unmanly thoughts as are far from my cuſtom, and which I ſhould be aſhamed to ſet down here, were it not that an honeſt account requires it. I felt eſpecially that I was purſued by ſome thing unknown to me. I glanced backwards, to ſee if I might eſpy that man. But there was no one, or at least no one that I could diſcern.
At laſt, I found the doorway of the inn, as much by chance as by deſign, and retired to ſleep with a mind addled half by drink and the other half by a fear for which I could not well account. I commended myſelf to Providence, and reſolved to think no more on it.
That night I was vntroubled by such euents as I had vndergone the night before, for I had barred the door ere I ſlept, and so fortified, that so no force might open it. This town of Wulfleet was paſſing ſtrange, as ſtrange I dare ſay as any place whereof Plinie wrote, or any iland discovered in the voyages of Sir Walter Raleigh. But I was bound to my taſk, and would not flinch from it. I would record the occurrents in Wulfleet, howeuer ſtrange they might ſeem, yea, though they were ſuch things as would make a leſſer man forſake his purpoſe.
But I ſoon forgot my earlier dread, for the morning brought with it ſo fair a ſight as to diſpel all feare. The people of the town had erected ouernight a market of ſuch variety and abundance as I haue not ſeen the like. Animals walked among men, and men among animals, a true maruel!
As I looked on this aſſembled throng, greatly pleaſed and not a little amazed, a man approached me. He ſtartled me, but I quickly saw he was nothing but a farmer come to hawke his wares. “Would you haue a fowl, sir?” ſaid he, “My hens are fat and luſty, and you may haue them cheap.”
I said in reply, “No, I thanke thee.” He was a churliſh fellow, rude of ſpeech and meane of aſpect, and I felt no ſhame at thouing ſuch a man as that.
I went forthe among the people, and as I paſſed throughe the market and the ſtretes of the towne, euer lokyng aboute me with grete care, leſt I ſholde agayn encountre ſome peryl, ther appeared, from oute of the prees that ſame man whom I ſo dredde. And he was passyng foule of vyſage, as it ſemed to me, more foule than ony man I had ſene in al my lyf.
He turned hym towarde me and ſayd, “Straunger, wherefore art thou come hydder?”
And I anſwerd hym nott, for I knewe nott what I ſholde ſaye, ne what answere myght ſerue me beſt in ſuche a caas.
Than hee asked me, “Was it for that thou wouldeſt ſee the Maiſter?”
And verely this name dyd me ſore affright, for who was this Maiſter wherof he ſpake? And what maner of man was he, that his very name ſholde be ſpoken wyth ſuche reuerence and drede. I wolde haue fledde but he purſued me and by myn avys he was the ſwifter, for he caught me full ſoone.
I sayd to him, “What meaneſt thou? Who is the Maiſter?”
And he sayd, “I ſhall brynge the vnto hym, and thou ſhalt ſee for thy ſelf what maner of lorde he is.”
But I wolde not, and cryed out ayenſt hym with grete noyſe, leſt he ſholde take me thyder by violence and ayenſt my wille.
Bot þe man wolde me nat abandone þer, ne suffre me to passen forþ. I miȝt nat flee, for hys companiouns, of whom þer were a gret nombre, beſet me aboute, and heelden me faſt þat I ne scholde nat ascapen. And þei weren stronge menn and wel douȝti, of grymme contenaunce and fiers, and armed wiþ swerdes and wiþ knyues, so þat it were gret foly for eny man to wiþstonden hem.
So þei bounden me hond and foot and ledden me to þe one þei callede Maiſter, of whom I hadde herd so muchel and knewe so litel.
Þe sayde Maiſter, what that hee apperid bifore me, was verely a Deuill, or so me þouȝte, for neuer in al my lyf hadde I beholden so foule a creature. Hee bore a blak clok þat heng to þe grounde, and ſpake neuer a worde. Bot his countenaunce was hidous and so dredful þat my blood wexed colde to loken on hym. For he hadde nat þe visage of a man bot of a beest, wiþ þe teeþ and ſnoute of a wulf, scharpe and crueel. And his eres weren longe eres, as of a wulf, and bihynde him þer heng a gret tayl, as wulf haþ. And hys eyen schon in þe derknesse lyke brennyng coles.
Bot þei maden no answer, neyþer good ne yuel. Þei weren stille as stoon, and stoden about me as men þat wayte on þeir lordes commandement.
Þanne after muchel tyme spak þe Maiſter, and his wordes weren colde as wintres is. His vois was as þe crying of rauenes, scharpe and schille, and al þat herde hym weren adrade and durst nat speken.
“I deme þe to þe deeþ, straunger. Here ſchaltou dyen, fer fram þi kynne and fer fram þine owen londe, and non ſchal knowen þi name, ne non schal þe biwepe.”
And I sayde to hym, wiþ what boldenesse I miȝte gaderen, “Whi fareſt þou wiþ me þus? What treſpaas haue I wrouȝt ayeins þe, þat þou demeſt me so harde a dome?”
“Swie!” quoþ he, and smot me wiþ his honde, so þat I fel to þe erþe. And þe blod ran doun from mi mouþe.
And I swied, for þe grete drede þat was icumen vpon mee was more þan I miȝte beren. Mi herte bicam as stoon, and mi lymes weren heuy as leed, and I ne miȝte namore stonden ne spoken.
Þe euele man louȝ, whan that he sawe my peine, and it was a crueel louȝter, wiþouten merci or pitee as of a man þat haþ no rewþe in his herte.
Allas! I scholde neuer hauen icumen to þis toune of Wuluesfleete! Cursed be þe dai and cursed be þe houre þat I first sette foot þerinne!
Hit is muchel to seggen all þat pinunge hie on me uuroȝten, al þar sor and al þat sorȝe. Ne scal ic nefre hit forȝeten, naht uuhiles ic libbe!
Ac þer com me gret sped, and þat was a uuif, strong and stiþ! Heo com in among þe yuele men and me nerede fram heore honden.
Heo sloȝ þe heþene men þat me pyneden, sloȝ hem and fælde hem to þe grunde. Þer was blod and bale inouȝ And hie feollen leien stille, for hie ne miȝten namore stonden. Ac þe Maister, þe uuraþþe Maister, he flaȝ awei in þe deorcnesse and was iseon namore.
Ic seide hire, “Ic þanke þe, leoue uuif, for þu hauest me ineredd from dæðe and from alle mine ifoan!”
Þæt ƿif me andsƿarode and cƿæð, “Ic eom Ælfgifu gehaten. Þu scalt me to ƿife nimen, þeah þe þu hit ne ƿite gyt, for hit is sƿa gedon þæt nan man ne nan ƿif ne mote heonon faren buten þurh þone dæð þæs Hlafordes.”
“Ac þær is gyt mare to donne her, forþi ƿe nabbaþ þone Hlaford ofslagenne. He is strong and sƿiðe yfel, and manige gode men he hæfð fordone on þisse stoƿe.”
And þæt heo sægde wæs eall soþ. Ic ƿifode on hire, and heo ƿæs ful scyne ƿif, ƿis ond ƿælfæst. Ne gemette ic næfre ær sƿylce ƿifman. Heo ƿæs on gefeohte sƿa beald swa ænig mann, and þeah hƿæþere hire andƿlite wæs ƿynsum and fæger.
Ac ƿe naƿiht freo ne sindon, for þy þe ƿe næfre ne mihton fram Ƿulfesfleote geƿitan, nefne ƿe þone Hlaford finden and hine ofslean. Se Hlaford hæfþ þisne stede mid searocræftum gebunden, þæt nan man ne mæg hine forlætan. Ƿe sindon her sƿa fuglas on nette, swa fixas on ƿere.
The blog ends there. No sign-off, no “thanks for reading.” Just a few sentences in a language that most of us lost the ability to follow somewhere around the thirteenth century.
So, how far did you get?
Let me take you back through it.
Written English has been remarkably stable over the last 300 years. Spelling was standardized in the mid-1700s, and grammar has barely changed at all. This means that, if you can read Harry Potter (1997–2003), you can read Robinson Crusoe (1719), which is good news for fans of the English novel.
What has changed is the voice.
Blog post became diary entry became travel letter. The format changed much faster than the language. Compare the very first line, “Well, I finally got to the town everyone has been talking about lately” with the line from the 1800 section, “Hunger, that great leveller, makes philosophers of us all, and renders even the meanest dish agreeable.”
They’re both performances of a sort: the 2000s protagonist is performing for his blog’s audience, so the tone is chatty and personal. The 1800s protagonist, with the mind of a Georgian diarist, is performing for posterity, so he philosophizes.
The one visible change in the language itself is the appearance, in the 1700 passage, of the long s (ſ). This wasn’t a different letter, just a variant form of s used in certain positions within a word. It disappeared fully from English printing in the early 19th century, although its use was dwindling even before that, which is why it does not appear in the 1800 passage. It’s a typographic change rather than a linguistic one, but it’s the first unmistakable sign that the text is getting older.
This is where the ground starts to move under our feet.
Before the mid 1700s, there was no such thing as standardized spelling. Writers spelled words as they heard them, or as they felt like spelling them, which is why the 1500s and 1600s sections look so alien, even when the words, underneath the surface, are ones you know.
For another difficulty, take the word vntroubled from the 1600 section. This is our familiar untroubled, but the u is replaced by a v, because u and v were not yet considered separate letters. They were variants of the same letter, used to represent both sounds. The convention was to write v at the beginning of words and u in the middle, which gives us spellings like vnto (unto), euents (events), ouernight (overnight), and howeuer (however). It looks weird at first, but once you know the rule, the words become much more readable.
Another new arrival — or, more accurately, late departure — from the language is the letter thorn (þ), which first appears in the 1400 section. Thorn is simply th. That’s it. Wherever you see þ, read th, and the word will usually reveal itself: þe is the, þei is they, þat is that. If you’ve ever seen a pub called “Ye Olde” anything, that ye is actually þe, an attempt by early printers to write a thorn without having to make an expensive new letter.
Thorn’s companion, yogh (ȝ), is more complicated. It represents sounds that modern English spells as gh or y — so miȝt is might, ȝe is ye. The reasons for this are a story unto themselves.
But the most interesting change in this period isn’t a letter. Rather, it’s a pronoun. Notice the moment in the 1600 section where our blogger meets a farmer and says, “No, I thanke thee.” Then he adds, “I felt no ſhame at thouing ſuch a man as that.”
Thouing. To thou someone, or to use thou when talking to them, was, by the 1600s, a deliberate social statement. Thou was the old singular form of you; you was originally the plural. Over the centuries, you came to be used as a polite singular, much as French uses vous. Gradually, you took over entirely. By Shakespeare’s time (1564–1616), thou survived in two main contexts: intimacy (as in prayer) and insult. Our blogger is being a little rude here. He’s looking down on a man he considers beneath him, and his language gives him a way of making his feelings perfectly clear.
Somewhere in this section — and if you’re like most readers, it happened around 1300 or 1200 — the language crossed a boundary. Up to this point, comprehension felt like it was dropping gradually, but now it’s fallen off a cliff. In one section, you could get by by squinting and guessing; in the next you were utterly lost. You have hit the wall.
There are two reasons for this. The first is vocabulary. As you move backwards in time, the French and Latin loanwords that make up an enormous proportion of the Modern English vocabulary grow fewer and fewer. When you pass 1250, they drop off almost altogether. Where a modern writer would say he underwent torture, a 1200-era writer must say that he suffered pinunge instead.
The farther back you go, the more the familiar Latinate layer of English is stripped away, revealing the Germanic core underneath: a language that looks to modern eyes more like German or Icelandic than anything we’d call English.
The second reason for the difficulty is grammar. Old English (450–1100) was an inflected language: it used endings on nouns, adjectives, and verbs to mark their grammatical roles in a sentence, much as Latin or modern German do. Alongside these endings came a greater freedom in word order, which makes sense given that the endings told you who was doing what to whom.
English lost most of these endings over the course of the period linguists call Middle English (1100–1450), and it tightened its word order as if to compensate. When you look at these final sections, if you can make out the words, you will see the effects of this freer word order. For example, in 1200 we read monige gode men he hæfð fordone ‘many good men he has destroyed’, where we’d expect a Modern English order more like and he has destroyed many good men.
To make matters worse, a few unfamiliar letters also appear: wynn (ƿ) is simply w; eth (ð) means the same as thorn (þ), so both represent th; and ash (æ) represents the vowel in cat and hat.
All of these factors combined likely made it difficult, if not impossible, to follow the plot. So let me tell you what happened. In the 1400 section, the blogger was seized. He was dragged before a creature they called the Master, and the Master was no man. He had the teeth and snout of a wolf, as well as a wolf’s long ears and great tail. His eyes glowed like burning coals. Wulfleet was once Wulfesfleot ‘the Bay of the Wolf.’
In the 1300 section, the Master condemned our hero to death. In the 1200 section, a woman appeared and killed his captors. The Master, however, fled into the darkness. In the 1100 section, the woman revealed her name: Ælfgifu ‘gift of the elves.’ She told the blogger — can we still call him that in 1100? — that they would marry, and she shared the terrible truth about Wulfleet: no one leaves until the Master is dead.
In the 1000 section, they are married. She is, he writes, as bold as any man in battle, and yet fair of face. But they are not free. Together, through the dark streets of Wulfleet, they hunt the Master still.
The English in which I write this paragraph is not the English of fifty years ago, and it will not be the English of fifty years in the future.
Go back far enough, and English writing becomes unrecognisable. Go forward far enough and the same thing will happen, though none of us will be around to notice.
Our poor blogger didn’t notice either, even as he and his language travelled back in time through the centuries. He just kept writing even as he was carried off to somewhere he couldn’t come back from. Some say that, far away in Wulfleet, he’s writing still.
...
Read the original on www.deadlanguagesociety.com »
Date: 01 Apr 88 1620 PST
From: Les Earnest
Subject: The “previous account” referred to in RISKS-6.51
Reading a book got me into early trouble–I had an FBI record by age twelve. This bizarre incident caused a problem much later when I needed a security clearance. I learned that I could obtain one only by concealing my sordid past.
A friend named Bob and I read the book “Secret and Urgent,” by Fletcher Pratt [Blue Ribbon Books; Garden City, NY; 1942], which was an early popular account of codes and ciphers. Pratt showed how to use letter frequencies to break ciphers and reported that the most frequently occurring letters in typical English text are e-t-a-o-n-r-i, in that order. (The letter frequency order of the story you are now reading is e-t-a-i-o-n-r. The higher frequency of “i” probably reflects the fact that _I_ use the first person singular a lot.) Pratt’s book also treated more advanced cryptographic schemes.
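To make that concrete, here is a minimal sketch of the kind of letter-frequency tally Pratt describes, written in Python (which of course postdates the story); the story.txt filename is only a placeholder for whatever text you want to analyze.

from collections import Counter

def letter_frequency_order(text):
    # Tally alphabetic characters only, ignoring case.
    counts = Counter(ch for ch in text.lower() if ch.isalpha())
    # Join letters from most to least frequent, e.g. "e-t-a-o-n-r-i-..."
    return "-".join(letter for letter, _ in counts.most_common())

# "story.txt" is a stand-in; any plain-text sample will do.
with open("story.txt") as f:
    print(letter_frequency_order(f.read()))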
Bob and I decided that we needed to have a secure way to communicate with each other, so we put together a rather elaborate jargon code based on the principles described in the book. I don’t remember exactly why we thought we needed it–we spent much of our time outside of school together, so there was ample time to talk privately. Still, you never could tell when you might need to send a secret message!
We made two copies of the code key (a description of how to encrypt and decrypt our messages) in the form of a single typewritten sheet. We each took a copy and carried it on our persons at all times when we were wearing clothes.
I actually didn’t wear clothes much. I spent nearly all my time outside school wearing just a baggy pair of maroon swimming trunks. That wasn’t considered too weird in San Diego.
I had recently been given glasses to wear but generally kept them in a hard case in the pocket of the trousers that I wore to school. I figured that this was a good place to hide my copy of the code key, so I carefully folded it to one-eighth of its original size and stuck it at the bottom of the case, under my glasses.
Every chance I got, I went body surfing at Old Mission Beach. I usually went by streetcar and, since I had to transfer Downtown, I wore clothes. Unfortunately, while I was riding the trolley home from the beach one Saturday, the case carrying my glasses slipped out of my pocket unnoticed. I reported the loss to my mother that night. She chastised me and later called the streetcar company. They said that the glasses hadn’t been turned in.
After a few weeks of waiting in vain for the glasses to turn up, we began to lose hope. My mother didn’t rush getting replacement glasses in view of the fact that I hadn’t worn them much and they cost about $8, a large sum at that time. (To me, $8 represented 40 round trips to the beach by streetcar, or 80 admission fees to the movies.)
Unknown to us, the case had been found by a patriotic citizen who opened it, discovered the code key, recognized that it must belong to a Japanese spy, and turned it over to the FBI. This was in 1943, just after citizens of Japanese descent had been forced off their property and taken away to concentration camps. I remember hearing that a local grocer was secretly a Colonel in the Japanese Army and had hidden his uniform in the back of his store. A lot of people actually believed these things.
About six weeks later, when I happened to be off on another escapade, my mother was visited by a man who identified himself as an investigator from the FBI. (She was a school administrator, but happened to be at home working on her Ph.D. dissertation.) She noticed that there were two more men waiting in a car outside. The agent asked a number of questions about me, including my occupation. He reportedly was quite disappointed when he learned that I was only 12 years old.
He eventually revealed why I was being investigated, showed my mother the glasses and the code key and asked her if she knew where it came from. She didn’t, of course. She asked if we could get the glasses back and he agreed.
My mother told the investigator how glad she was to get them back, considering that they cost $8. He did a slow burn, then said “Lady, this case has cost the government thousands of dollars. It has been the top priority in our office for the last six weeks. We traced the glasses to your son from the prescription by examining the files of nearly every optometrist in San Diego.” It apparently didn’t occur to them that if I were a real Japanese spy, I might have brought the glasses with me from headquarters.
The FBI agent gave back the glasses but kept the code key “for our records.” They apparently were not fully convinced that they were dealing just with kids.
Since our communication scheme had been compromised, Bob and I devised a new key. I started carrying it in my wallet, which I thought was more secure. I don’t remember ever exchanging any cryptographic messages. I was always ready, though.
A few years later when I was in college, I got a summer job at the Naval Electronics Lab, which required a security clearance. One of the questions on the application form was “Have you ever been investigated by the FBI?” Naturally, I checked “Yes.” The next question was, “If so, describe the circumstances.” There was very little space on the form, so I answered simply and honestly, “I was suspected of being a Japanese spy.”
When I handed the form in to the security officer, he scanned it quickly, looked me over slowly, then said, “Explain this”–pointing at the FBI question. I described what had happened. He got very agitated, picked up my form, tore it in pieces, and threw it in the waste basket.
He then got out a blank form and handed it to me, saying “Here, fill it out again and don’t mention that. If you do, I’ll make sure that you never get a security clearance.”
I did as he directed and was shortly granted the clearance. I never again disclosed that incident on security clearance forms.
On another occasion much later, I learned by chance that putting certain provocative information on a security clearance form can greatly speed up the clearance process. But that is another story.
Edited and converted to HTML by Dan Bornstein.
...
Read the original on milk.com »
I’ve been using Claude Code as my primary development tool for about nine months, and the workflow I’ve settled into is radically different from what most people do with AI coding tools. Most developers type a prompt, sometimes use plan mode, fix the errors, repeat. The more terminally online are stitching together ralph loops, mcps, gas towns (remember those?), etc. The results in both cases are a mess that completely falls apart for anything non-trivial.
The workflow I’m going to describe has one core principle: never let Claude write code until you’ve reviewed and approved a written plan. This separation of planning and execution is the single most important thing I do. It prevents wasted effort, keeps me in control of architecture decisions, and produces significantly better results with minimal token usage than jumping straight to code.
flowchart LR
R[Research] --> P[Plan]
P --> A[Annotate]
A -->|repeat 1-6x| A
A --> T[Todo List]
T --> I[Implement]
I --> F[Feedback & Iterate]
Every meaningful task starts with a deep-read directive. I ask Claude to thoroughly understand the relevant part of the codebase before doing anything else. And I always require the findings to be written into a persistent markdown file, never just a verbal summary in the chat.
read this folder in depth, understand how it works deeply, what it does and all its specificities. when that’s done, write a detailed report of your learnings and findings in research.md
study the notification system in great details, understand the intricacies of it and write a detailed research.md document with everything there is to know about how notifications work
go through the task scheduling flow, understand it deeply and look for potential bugs. there definitely are bugs in the system as it sometimes runs tasks that should have been cancelled. keep researching the flow until you find all the bugs, don’t stop until all the bugs are found. when you’re done, write a detailed report of your findings in research.md
Notice the language: “deeply”, “in great details”, “intricacies”, “go through everything”. This isn’t fluff. Without these words, Claude will skim. It’ll read a file, see what a function does at the signature level, and move on. You need to signal that surface-level reading is not acceptable.
The written artifact (research.md) is critical. It’s not about making Claude do homework. It’s my review surface. I can read it, verify Claude actually understood the system, and correct misunderstandings before any planning happens. If the research is wrong, the plan will be wrong, and the implementation will be wrong. Garbage in, garbage out.
This is the most expensive failure mode with AI-assisted coding, and it’s not wrong syntax or bad logic. It’s implementations that work in isolation but break the surrounding system. A function that ignores an existing caching layer. A migration that doesn’t account for the ORM’s conventions. An API endpoint that duplicates logic that already exists elsewhere. The research phase prevents all of this.
Once I’ve reviewed the research, I ask for a detailed implementation plan in a separate markdown file.
I want to build a new feature that extends the system to perform [X]. write a detailed plan.md document outlining how to implement this. include code snippets
the list endpoint should support cursor-based pagination instead of offset. write a detailed plan.md for how to achieve this. read source files before suggesting changes, base the plan on the actual codebase
The generated plan always includes a detailed explanation of the approach, code snippets showing the actual changes, file paths that will be modified, and considerations and trade-offs.
I use my own .md plan files rather than Claude Code’s built-in plan mode. The built-in plan mode sucks. My markdown file gives me full control. I can edit it in my editor, add inline notes, and it persists as a real artifact in the project.
One trick I use constantly: for well-contained features where I’ve seen a good implementation in an open source repo, I’ll share that code as a reference alongside the plan request. If I want to add sortable IDs, I paste the ID generation code from a project that does it well and say “this is how they do sortable IDs, write a plan.md explaining how we can adopt a similar approach.” Claude works dramatically better when it has a concrete reference implementation to work from rather than designing from scratch.
But the plan document itself isn’t the interesting part. The interesting part is what happens next.
This is the most distinctive part of my workflow, and the part where I add the most value.
flowchart TD
W[Claude writes plan.md] --> R[I review in my editor]
R --> N[I add inline notes]
N --> S[Send Claude back to the document]
S --> U[Claude updates plan]
U --> D{Satisfied?}
D -->|No| R
D -->|Yes| T[Request todo list]
After Claude writes the plan, I open it in my editor and add inline notes directly into the document. These notes correct assumptions, reject approaches, add constraints, or provide domain knowledge that Claude doesn’t have.
The notes vary wildly in length. Sometimes a note is two words: “not optional” next to a parameter Claude marked as optional. Other times it’s a paragraph explaining a business constraint or pasting a code snippet showing the data shape I expect.
“use drizzle:generate for migrations, not raw SQL” — domain knowledge Claude doesn’t have
“no — this should be a PATCH, not a PUT” — correcting a wrong assumption
“remove this section entirely, we don’t need caching here” — rejecting a proposed approach
“the queue consumer already handles retries, so this retry logic is redundant. remove it and just let it fail” — explaining why something should change
“this is wrong, the visibility field needs to be on the list itself, not on individual items. when a list is public, all items are public. restructure the schema section accordingly” — redirecting an entire section of the plan
Then I send Claude back to the document:
I added a few notes to the document, address all the notes and update the document accordingly. don’t implement yet
This cycle repeats 1 to 6 times. The explicit “don’t implement yet” guard is essential. Without it, Claude will jump to code the moment it thinks the plan is good enough. It’s not good enough until I say it is.
Why This Works So Well
The markdown file acts as shared mutable state between me and Claude. I can think at my own pace, annotate precisely where something is wrong, and re-engage without losing context. I’m not trying to explain everything in a chat message. I’m pointing at the exact spot in the document where the issue is and writing my correction right there.
This is fundamentally different from trying to steer implementation through chat messages. The plan is a structured, complete specification I can review holistically. A chat conversation is something I’d have to scroll through to reconstruct decisions. The plan wins every time.
Three rounds of “I added notes, update the plan” can transform a generic implementation plan into one that fits perfectly into the existing system. Claude is excellent at understanding code, proposing solutions, and writing implementations. But it doesn’t know my product priorities, my users’ pain points, or the engineering trade-offs I’m willing to make. The annotation cycle is how I inject that judgement.
add a detailed todo list to the plan, with all the phases and individual tasks necessary to complete the plan - don’t implement yet
This creates a checklist that serves as a progress tracker during implementation. Claude marks items as completed as it goes, so I can glance at the plan at any point and see exactly where things stand. Especially valuable in sessions that run for hours.
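For a sense of the shape this takes, the todo section of a plan.md might look something like the excerpt below; the phases and task names are invented for illustration, not taken from a real plan.

## Todo

### Phase 1: Schema
- [x] Add a visibility column to the lists table
- [x] Generate the migration with drizzle:generate (no raw SQL)

### Phase 2: API
- [ ] Switch the list endpoint to cursor-based pagination
- [ ] Update callers; existing function signatures stay unchanged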
When the plan is ready, I issue the implementation command. I’ve refined this into a standard prompt I reuse across sessions:
implement it all. when you’re done with a task or phase, mark it as completed in the plan document. do not stop until all tasks and phases are completed. do not add unnecessary comments or jsdocs, do not use any or unknown types. continuously run typecheck to make sure you’re not introducing new issues.
This single prompt encodes everything that matters:
“implement it all”: do everything in the plan, don’t cherry-pick
“mark it as completed in the plan document”: the plan is the source of truth for progress
“do not stop until all tasks and phases are completed”: don’t pause for confirmation mid-flow
“do not add unnecessary comments or jsdocs”: keep the code clean
“do not use any or unknown types”: maintain strict typing
“continuously run typecheck”: catch problems early, not at the end
I use this exact phrasing (with minor variations) in virtually every implementation session. By the time I say “implement it all,” every decision has been made and validated. The implementation becomes mechanical, not creative. This is deliberate. I want implementation to be boring. The creative work happened in the annotation cycles. Once the plan is right, execution should be straightforward.
Without the planning phase, what typically happens is Claude makes a reasonable-but-wrong assumption early on, builds on top of it for 15 minutes, and then I have to unwind a chain of changes. The “don’t implement yet” guard eliminates this entirely.
Once Claude is executing the plan, my role shifts from architect to supervisor. My prompts become dramatically shorter.
flowchart LR
I[Claude implements] --> R[I review / test]
R --> C{Correct?}
C -->|No| F[Terse correction]
F --> I
C -->|Yes| N{More tasks?}
N -->|Yes| I
N -->|No| D[Done]
Where a planning note might be a paragraph, an implementation correction is often a single sentence:
“You built the settings page in the main app when it should be in the admin app, move it.”
Claude has the full context of the plan and the ongoing session, so terse corrections are enough.
Frontend work is the most iterative part. I test in the browser and fire off rapid corrections:
For visual issues, I sometimes attach screenshots. A screenshot of a misaligned table communicates the problem faster than describing it.
“this table should look exactly like the users table, same header, same pagination, same row density.”
This is far more precise than describing a design from scratch. Most features in a mature codebase are variations on existing patterns. A new settings page should look like the existing settings pages. Pointing to the reference communicates all the implicit requirements without spelling them out. Claude would typically read the reference file(s) before making the correction.
When something goes in a wrong direction, I don’t try to patch it. I revert and re-scope by discarding the git changes:
“I reverted everything. Now all I want is to make the list view more minimal — nothing else.”
Narrowing scope after a revert almost always produces better results than trying to incrementally fix a bad approach.
Even though I delegate execution to Claude, I never give it total autonomy over what gets built. I do the vast majority of the active steering in the plan.md documents.
This matters because Claude will sometimes propose solutions that are technically correct but wrong for the project. Maybe the approach is over-engineered, or it changes a public API signature that other parts of the system depend on, or it picks a more complex option when a simpler one would do. I have context about the broader system, the product direction, and the engineering culture that Claude doesn’t.
flowchart TD
P[Claude proposes changes] --> E[I evaluate each item]
E --> A[Accept as-is]
E --> M[Modify approach]
E --> S[Skip / remove]
E --> O[Override technical choice]
A & M & S & O --> R[Refined implementation scope]
Cherry-picking from proposals: When Claude identifies multiple issues, I go through them one by one: “for the first one, just use Promise.all, don’t make it overly complicated; for the third one, extract it into a separate function for readability; ignore the fourth and fifth ones, they’re not worth the complexity.” I’m making item-level decisions based on my knowledge of what matters right now.
Trimming scope: When the plan includes nice-to-haves, I actively cut them. “remove the download feature from the plan, I don’t want to implement this now.” This prevents scope creep.
Protecting existing interfaces: I set hard constraints when I know something shouldn’t change: “the signatures of these three functions should not change, the caller should adapt, not the library.”
Overriding technical choices: Sometimes I have a specific preference Claude wouldn’t know about: “use this model instead of that one” or “use this library’s built-in method instead of writing a custom one.” Fast, direct overrides.
Claude handles the mechanical execution, while I make the judgement calls. The plan captures the big decisions upfront, and selective guidance handles the smaller ones that emerge during implementation.
I run research, planning, and implementation in a single long session rather than splitting them across separate sessions. A single session might start with deep-reading a folder, go through three rounds of plan annotation, then run the full implementation, all in one continuous conversation.
I am not seeing the performance degradation everyone talks about once the context window passes 50%. Actually, by the time I say “implement it all,” Claude has spent the entire session building understanding: reading files during research, refining its mental model during annotation cycles, absorbing my domain knowledge corrections.
When the context window fills up, Claude’s auto-compaction maintains enough context to keep going. And the plan document, the persistent artifact, survives compaction in full fidelity. I can point Claude to it at any point in time.
The Workflow in One Sentence
Read deeply, write a plan, annotate the plan until it’s right, then let Claude execute the whole thing without stopping, checking types along the way.
That’s it. No magic prompts, no elaborate system instructions, no clever hacks. Just a disciplined pipeline that separates thinking from typing. The research prevents Claude from making ignorant changes. The plan prevents it from making wrong changes. The annotation cycle injects my judgement. And the implementation command lets it run without interruption once every decision has been made.
Try my workflow, and you’ll wonder how you ever shipped anything with coding agents without an annotated plan document sitting between you and the code.
...
Read the original on boristane.com »
The state of coding agents can be summed up by this fact
Claude spent $20k on an agent swarm implementing (kinda) a C-compiler in Rust, but desktop Claude is an Electron app.
If you’re unfamiliar, Electron is a coding framework for building desktop applications using web tech, specifically HTML, CSS, and JS. What’s great about Electron is it allows you to build one desktop app that supports Windows, Mac, and Linux. Plus it lets developers use existing web app code to get started. It’s great for teams big and small. Many apps you probably use every day are built with Electron: Slack, Discord, VS Code, Teams, Notion, and more.
There are downsides though. Electron apps are bloated; each runs its own Chromium engine. The minimum app size is usually a couple hundred megabytes. They are often laggy or unresponsive. They don’t integrate well with OS features.
But these downsides are dramatically outweighed by the ability to build and maintain one app, shipping it everywhere.
But now we have coding agents! And one thing coding agents are proving to be pretty good at is cross-platform, cross-language implementations given a well-defined spec and test suite.
On the surface, this ability should render Electron’s benefits obsolete! Rather than write one web app and ship it to each platform, we should write one spec and test suite and use coding agents to ship native code to each platform. If this ability is real and adopted, users get snappy, performant, native apps from small, focused teams serving a broad market.
But we’re still leaning on Electron. Even Anthropic, one of the leaders in AI coding tools, which keeps publishing flashy agentic coding achievements, still uses Electron in the Claude desktop app. And it’s a slow, buggy, and bloated app.
So why are we still using Electron and not embracing the agent-powered, spec driven development future?
For one thing, coding agents are really good at the first 90% of dev. But that last bit — nailing down all the edge cases and continuing support once it meets the real world — remains hard, tedious, and requires plenty of agent hand-holding.
Anthropic’s Rust-based C compiler slammed into this wall, after screaming through the bulk of the tests:
The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
The resulting compiler is impressive, given the time it took to deliver it and the number of people who worked on it, but it is largely unusable. That last mile is hard.
And this gets even worse once a program meets the real world. Messy, unexpected scenarios stack up and development never really ends. Agents make it easier, sure, but hard product decisions keep coming up and still require human judgment.
Further, with 3 different apps produced (Mac, Windows, and Linux) the surface area for bugs and support increases 3-fold. Sure, there are local quirks with Electron apps, but most of it is mitigated by the common wrapper. Not so with native!
A good test suite and spec could enable the Claude team to ship a Claude desktop app native to each platform. But the resulting overhead of that last 10% of dev and the increased support and maintenance burden will remain.
For now, Electron still makes sense. Coding agents are amazing. But the last mile of dev and the support surface area remains a real concern.
...
Read the original on www.dbreunig.com »
Andrej Karpathy talks about “Claws”. Andrej Karpathy tweeted a mini-essay about buying a Mac Mini (“The apple store person told me they are selling like hotcakes and everyone is confused”) to tinker with Claws:
I’m definitely a bit sus’d to run OpenClaw specifically […] But I do love the concept and I think that just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls and a kind of persistence to a next level.
Looking around, and given that the high level idea is clear, there are a lot of smaller Claws starting to pop out. For example, on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code (fits into both my head and that of AI agents, so it feels manageable, auditable, flexible, etc.) and runs everything in containers by default. […]
Anyway there are many others - e.g. nanobot, zeroclaw, ironclaw, picoclaw (lol @ prefixes). […]
Not 100% sure what my setup ends up looking like just yet but Claws are an awesome, exciting new layer of the AI stack.
...
Read the original on simonwillison.net »
You can click here to subscribe to this list automatically. This link works only if you have uBlock Origin installed.
Alternatively, import the following URL as a 3rd party list in uBlock Origin.
While browsing, I often come across websites whose text is written by generative AI. These websites provide no useful information, have mediocre content, and are filled with ads and referral links to earn money. So when I find this kind of website, I put it here.
The key idea is simple: if I wanted my question to be answered by AI, I would ask AI. If I’m searching online, it means that I want an answer written by a person. A person has experience, opinions, ideas, creativity, and a lot more information that they might want to share with the web. AI doesn’t.
What’s more, AI content can also be dangerous: articles in content farms are not checked by anyone before being published, since they are massively generated. An AI might hallucinate when writing about dangerous topics: it might tell you to do something that short-circuits your circuit board. Or to execute dangerous commands on your PC such as rm -rf /. Or to mix bleach and ammonia (DON’T DO THESE THINGS). AI-generated content is not reliable, and if no one is checking what is being published, it needs to be blocked.
As I said, I add pages as I browse, so each entry is added manually. I’m not considering automated tools, simply because it’s hard for an algorithm to tell whether a page is AI generated, especially given the guidelines I describe below. One could argue that this list is useless because it’s too short. However, since these websites use SEO to appear first on search engines, you will meet the same website more than once, especially for related searches. I’ve found this list to be blocking websites since the very first day I began writing it, even with very few entries.
However, there is indeed some bias in my entries. For example, as I am an Italian citizen, you will find a lot of Italian websites. This is another reason why pull requests are welcome.
If you’re not a technical user and don’t know how GitHub works, simply report your suspects by creating an issue by clicking here.
If you want to create a pull request, here’s how to add a website to the list. First, try to find the scope of the AI spammer. Usually, it will be a domain, but I’ve also found a lot of Medium or dev.to blogs. These platforms should not be blocked as a whole, only the blog that’s spamming.
Say you want to add the entry example.com/@slopUser: simply add a line to the file list.txt as follows:
||example.com/@slopUser^$doc
The whole example.com domain hosts AI garbage, you say? Add only the domain:
||example.com^$doc
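If you’re unsure what the parts of these filters mean, here is a quick breakdown of the standard uBlock Origin syntax used above, written as filter-list comments:

! ||    matches the domain (and any of its subdomains), regardless of scheme
! ^     is a separator placeholder marking the end of the matched prefix
! $doc  applies the filter to the top-level document, so the page itself is blocked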
If you really hate AI and have a lot of time to spend, you are welcome to do some research about the website you have found. Most of these content farms are built by people or organizations who sell SEO and digital marketing. If you find the source, you might also find other content farms they have created. If you do, add them at the bottom of the file.
Content farms have some patterns that make them recognizable. Here are the ones I’ve identified. Of course, these are not strict rules.
Unnecessary introduction and conclusion: this is probably the easiest way to spot content farms. Often, the intro is also annoyingly baroque. For example, you click on some guide on how to use a specific feature in Flutter. The article would begin giving an introduction about Flutter itself like the following:
In today’s fast-moving digital landscape, users expect apps to be fast, beautiful, and consistent across every device they touch.
Probably, no human would write such an introduction to explain a specific feature. It might fit a general article about Flutter (still too baroque, though), but not a super-specific guide that only experienced developers would understand. If I’m writing such an article, I can assume that the reader already knows what Flutter is!
[topic]: A Comprehensive Guide/A Step-by-Step Guide/Ultimate Guide: LLMs love these catchphrases for titles of tutorials and guides.
No sources and references: same as above, but for sources about facts. This is very important to check, especially for articles about (pseudo)science, politics and content that could spread misinformation.
Referral links everywhere: content farms are there just to make money. I’ve personally seen sites shamelessly putting purchase advice in navbars or footers.
Reference to a company’s product: the website is owned by a company that sells some product or service. The page will probably be like “How to solve [problem]” -> buy our product.
Blog with hundreds of thousands of articles: especially when published in a very short time span. A lot of AI slop blogs post tens or hundreds of articles a day, mostly by the same author.
Date after November 2022: the AI hype began with the release of ChatGPT in November 2022. A weak guideline for sure, but it adds up with all the others. Dates can be easily faked, though.
No/few images, videos, non-text media: these pages are automatically generated and published, and it’s hard to generate other kinds of content to put into the page.
AI generated images, logos: usually it’s the banner of the article or the blog logo.
Not-rendered Markdown characters: text has no formatting and has Markdown syntax.
Long post, with unnecessary or out-of-context content: at some point the article can start talking about another topic, which can be related to the original one but irrelevant.
Always on top of search engines result: they are of course abusing SEO.
Know-it-all blog: the same blog appears in the search engine for completely different topics.
Vague content: lots of headings, each with a short content that provides no useful information.
Unprofessional or missing contact information: they have bought a domain, why are they using Gmail for contact?
AI enthusiastic content: if someone loves ChatGPT they will use it for everything. So if you stumble across an AI-devoted blog, it’s for sure 100% AI generated.
Because AI users are dumb enough to copy-paste LLM responses without reading them, the LLM will sometimes reveal itself, for example in the intro of the content. The following are some Google Dorks to find 100% AI generated pages. Such pages will have their whole domain put into the uBlock list. Dorks can be easily generated by asking an LLM to generate naive content, for example by prompting Generate an article for my blog about dogs. The LLM will answer with an intro like Sure! Here’s an article about. Take this phrase and search it surrounded by quotes.
* uBlockOrigin & uBlacklist Huge AI Blocklist: this project hides every AI-related result from search engines, including websites that I believe are legit tools (e.g. ChatGPT). What I want instead is to block access only to garbage AI content farms.
...
Read the original on github.com »
Fifteen years ago, we started work on the Dark Sky weather app.
Over the years it went through numerous iterations — including more than one major redesign — as we learned what makes a great weather app. Eventually it was acquired by Apple, where the forecast and some core features were incorporated into Apple Weather.
We enjoyed our time at Apple. So why did we leave to start another weather company?
It’s simple: when looking at the landscape of the countless weather apps out there, many of them lovely, we found ourselves feeling unsatisfied. The more we spoke to friends and family, the more we heard that many of them did too. And, of course, we missed those days as a small scrappy shop.
So let’s try this again…
Our biggest pet peeve with most weather apps is how they deal (or rather, don’t deal) with forecast uncertainty. It is a simple fact that no weather forecast will ever be 100% reliable: the weather is moody, fickle, and chaotic. Forecasts are often wrong.
Understanding this uncertainty is crucial for planning your day. Most weather apps will give you their single best guess, leaving you to wonder how sure they actually are, and what else might happen instead. Will it actually start raining at 9am, or might it end up pushed off until noon? Will there be rain or snow? How sure are you? You can’t plan your day if you don’t know how much you can trust the forecast, or know what other possibilities might arise. Rather than pretending we will always be right, Acme Weather embraces the idea that our forecast will sometimes be wrong. We address this uncertainty in several ways:
Our homegrown forecasts are produced using many different data sources, including numerical weather prediction models, satellite data, ground station observations, and radar data. Most of the time, our forecast will be a reliable source of information (it’s better than the one we had at Dark Sky). But, crucially, we supplement the main forecast with a spread of alternate predictions. These are additional forecast lines that capture a range of alternate possible outcomes:
First, the spread of the lines offers a sort of intuition as to how reliable the forecast is. Take the two forecasts below. In the first, the alternate predictions are tightly focused and the forecast can be considered robust and reliable. In the second, there is a significant spread, which is an indication that something is up and the forecast may be subject to change. It’s a call to action to check other conditions or maps, or come back to the app more frequently:
Over time, you build up an intuitive sense of just how much you can actually trust the forecast. After using this for the past six months, I never want to go back to a single forecast again!
Second, it simply shows what else might plausibly happen. In what time range might the storm arrive? Will the snow hit early, or might it be delayed and turn mostly to rain? When the weather is changing rapidly, predictions can become less reliable. We’ll show you different possible futures, so you can be better informed.
Alternative Forecasts are designed to help you make better hour-to-hour or day-by-day decisions. Community reports are intended to help with real-time weather events: current conditions often evolve quickly during storms, radar is imperfect and can miss light precipitation, snow can flip to freezing rain and back, and so on.
To address this, we’ve built a feature that allows any user to submit a report of the current conditions near them, which can be seen on the map:
You can choose from a pre-selected list of weather condition icons (or even a list of emojis for when it’s feeling particularly 💩 out). There’s nothing more reliable than when a person nearby tells you what’s happening, so if there are recent reports near you we’ll flag it in the app.
We absolutely love maps. They provide the context around the weather. Sure, the forecast may say you’ll get rain, but a map will complete the picture by showing you the full breadth of the storm, where it will hit, and where you are relative to it.
We’ve built a large number of maps, including radar and lightning, rain and snow totals (why do most other weather apps not offer this?), wind, temperature and humidity, cloud cover, hurricane tracks, etc. While you can explore them all in the map tab, we also make sure to embed the most relevant maps directly inside the forecasts to provide a contextual backdrop.
A weather app is only useful if you remember to check it. I’ve lost count of the number of times I’ve gotten stuck in the rain — not because the forecast was wrong, but because I simply didn’t check the app.
The solution? A comprehensive set of weather notifications. Turn them on, and no longer worry about missing important weather events.
Our notifications include everything from down-to-the-minute rain warnings, to government severe weather alerts, nearby lightning, community reports, and even whether or not a rainbow might be visible outside your house.
We also let you create custom notifications, tailored to whatever you care about. Want to know if it’ll be windy, or if the UV index will be high, or if you’ll get heavy rain in the next 24 hours? We’ve got you covered.
A weather app shouldn’t just be about helping you avoid bad weather; it should also be fun! There’s a wide world of meteorological phenomena that we would like to highlight. As such, we’re launching “Acme Labs”, which is a set of experimental tools inside the app to highlight fun and interesting things happening where you are.
We’re starting with a couple of initial features, including: Rainbow alerts, where we bring our hyperlocal rain forecasts to bear to pinpoint where rainbows are occurring right now, and beautiful sunsets, where we’ll let you know if the sunset will be particularly lovely this evening.
We take your privacy very seriously. Please review our privacy policy, but here is our philosophy in a nutshell:
* We won’t collect any information other than what is necessary to provide the service you’re paying for. This information will only be used to provide that service, and for nothing else.
* We won’t save or store any information that isn’t necessary. We don’t want or need your location history, for example, and simply not storing it in the first place means it can’t fall into the wrong hands.
* We will never sell or give your information to third parties, such as advertisers. We make our money directly from our customers.
* We don’t use third-party trackers or analytics services, because we can’t guarantee what they will or won’t do with the information we send them.
Well, that’s Acme Weather. We’ve been making weather apps for 15 years, from Dark Sky to Apple, and this is the culmination (the acme?) of everything we’ve learned along the way. It’s the weather app we’ve always wanted, and always wanted to build.
Acme Weather is in the iOS App Store, and we plan on offering an Android version soon (if you’re an Android developer and would like to help, drop us a line!). The app is a $25/year subscription, with a two-week free trial.
We think you’ll love it. So try it out, and let us know what you think!
...
Read the original on acmeweather.com »
A new law to ensure that batteries are collected, reused and recycled in Europe is entering into force today. The new Batteries Regulation will ensure that, in the future, batteries have a low carbon footprint, use minimal harmful substances, need less raw materials from non-EU countries, and are collected, reused and recycled to a high degree in Europe. This will support the shift to a circular economy, increase security of supply for raw materials and energy, and enhance the EU’s strategic autonomy.
In line with the circularity ambitions of the European Green Deal, the Batteries Regulation is the first piece of European legislation taking a full life-cycle approach in which sourcing, manufacturing, use and recycling are addressed and enshrined in a single law.
Batteries are a key technology to drive the green transition, support sustainable mobility and contribute to climate neutrality by 2050. To that end, starting from 2025, the Regulation will gradually introduce declaration requirements, performance classes and maximum limits on the carbon footprint of batteries for electric vehicles, light means of transport (such as e-bikes and scooters) and rechargeable industrial batteries.
The Batteries Regulation will ensure that batteries placed on the EU single market contain harmful substances only where necessary, and only in restricted amounts. Substances of concern used in batteries will be regularly reviewed.
Targets for recycling efficiency, material recovery and recycled content will be introduced gradually from 2025 onwards. All collected waste batteries will have to be recycled and high levels of recovery will have to be achieved, in particular of critical raw materials such as cobalt, lithium and nickel. This will guarantee that valuable materials are recovered at the end of their useful life and brought back into the economy through stricter targets for recycling efficiency and material recovery over time.
Starting in 2027, consumers will be able to remove and replace the portable batteries in their electronic products at any point in the life cycle. This will extend the life of these products before their final disposal, will encourage re-use and will contribute to the reduction of post-consumer waste.
To help consumers make informed decisions on which batteries to purchase, key data will be provided on a label. A QR code will provide access to a digital passport with detailed information on each battery that will help consumers and especially professionals along the value chain in their efforts to make the circular economy a reality for batteries.
Under the new law’s due diligence obligations, companies must identify, prevent and address social and environmental risks linked to the sourcing, processing and trading of raw materials such as lithium, cobalt, nickel and natural graphite contained in their batteries. The expected massive increase in demand for batteries in the EU should not contribute to an increase of such environmental and social risks.
Work will now focus on the application of the law in the Member States and on the drafting of secondary legislation (implementing and delegated acts) providing more detailed rules.
Since 2006, batteries and waste batteries have been regulated at EU level under the Batteries Directive. The Commission proposed to revise this Directive in December 2020 due to new socioeconomic conditions, technological developments, markets, and battery uses.
Demand for batteries is increasing rapidly. It is set to increase 14-fold globally by 2030 and the EU could account for 17% of that demand. This is mostly driven by the electrification of transport. Such exponential growth in demand for batteries will lead to an equivalent increase in demand for raw materials, hence the need to minimise their environmental impact.
In 2017, the Commission launched the European Battery Alliance to build an innovative, sustainable and globally competitive battery value chain in Europe, and ensure supply of batteries needed for decarbonising the transport and energy sectors.
...
Read the original on environment.ec.europa.eu »
I first took a polygraph when I applied to the CIA and went through the applicant screening process.
To prepare for the test, I read A Tremor in the Blood by David T. Lykken. The book described the use of control versus relevant questions as well as countermeasures such as butt-clenching. I had no desire to use countermeasures. I wasn’t out to “beat” the test: I wanted to understand how it worked. A future colleague at the Agency advised me, “Spill your guts.” I thought it was good advice, and I planned to follow it.
I knew I was taking a risk in applying to the Agency. I worked as a defense contractor on a project for the CIA, so I already held CIA TS/SCI clearances. If I failed the polygraph, I could lose my clearances, and I might lose my job as well.
I flew to Northern Virginia for two days of pre-employment screening. A bus took us from the hotel to a nondescript building in Vienna. The examiner was a young woman. She asked me to sign a consent form and told me not to talk about the polygraph with anyone else.
In the pretest interview, the examiner asked, “Did you make any false statements on your application?” I said, “Yes. Under height and weight, I put 130 lb. I actually weigh 134 lbs.” She laughed. Then she asked if I’d read about polygraphs. I said I’d just finished A Tremor in the Blood. She claimed she’d never heard of it. I was surprised. It’s an important book about her field; I would have thought all polygraphers knew of it.
She wired me up, and the polygraph began. My hand turned purple, which hurt terribly. My body twitched from the exaggerated stillness the test required. Halfway through, the examiner left the room, saying she had to show the charts to her supervisor. I came to think of this as the What will it take to get you to buy the car? part of the test. I waited twenty minutes or so, resisting the urge to press my nose against the one-way mirror and peer into it through cupped hands.
The examiner came back. “You’re having a problem with one of the questions. Do you know which one?” I had no idea. I’d answered all of them truthfully. She said, “How about, ’Have you ever lied to your boss?’” I said I hadn’t. She pressed me until I came up with an occasion when I’d passed my boss in the hall. My boss had said, “How are you?” and I’d said, “Fine.” But I wasn’t fine; I was in the middle of a cancer scare.
I failed the poly and was told to come back the next day. I couldn’t understand why I hadn’t passed. I’d spilled my guts, and I hadn’t used countermeasures.
On the bus back to the hotel, a woman was sobbing, “Do they count something less than $50 as theft?” I felt bad for her because she was crying, but I wondered why a petty thief thought she could get into the Agency.
That evening, the other applicants went to the nearby Tysons Corner mall to go shopping. I didn’t feel festive enough to join them so I withdrew to my room. I ordered from room service but couldn’t eat my dinner. I sat by the window for hours, looking into the darkness. She’d seen something inside me I hadn’t known about. I’d always thought I was a good person. Now I wasn’t sure.
The next morning, I rode back to the same crumbling building where I’d been polygraphed the day before. The examiner said, “Now that you’ve had a chance to think about it, is there anything you’d like to say?” He didn’t need to ask me twice. “You bet there is. I did my part, now I expect you to do yours.” It wasn’t until late that afternoon, when I was waiting for my plane at Dulles, that I realized, “Is there anything you’d like to say?” does not mean, “Please tell us all our faults.”
The examiner wired me up. He began with what he called a calibration test. He took a piece of paper and wrote the numbers one through five in a vertical column. He asked me to pick a number. I picked three. He drew a square around the number three, then taped the paper to the back of a chair where I could see it. I was supposed to lie about having selected the number three.
The test began. He asked, “Is the number one? Is the number two? Is the number three?” I said “No,” and butt-clenched. “Nice strong response!” he said.
I wasn’t hiding anything, so I had no reason to use countermeasures. On the other hand, the analytical part of me enjoyed poking at the test to figure out how it worked. And I was still mad about having failed the previous day, so I was messing with him. Curiosity satisfied, I didn’t try the butt-clench again, not on that day or ever.
During the real test, the examiner said, “Your breathing is unnatural.” He described a kid in little league who kept swinging the bat wrong, and then suddenly got it. If I just kept trying, I could get it too. It took almost four hours, but by the end of the session, he told me I’d passed. I’d just cleared the last hurdle to joining the CIA.
I entered on duty and began a week of orientation. On the first morning, we introduced ourselves and said a few words about what we’d be doing at the Agency. Four guys sitting together at a table up front identified themselves as polygraphers. Everyone else in the room hissed. It wasn’t friendly teasing, either. At lunch, no one would sit with them.
A few years into my Agency career, I took a battery of vocational and aptitude tests including the MMPI, a personality inventory. The MMPI results came back, and they said I fell on the extreme end of the honesty spectrum, or in the words of the Agency psychiatrist, “You’re honest to the point of being naïve.” I was kind of offended. Naïve? Who are you calling naïve? On the other hand, it was nice to have that in my official file.
CIA employees were required to take a polygraph every five years. We all did it, but there was a lot of complaining.
I never heard that anyone worried about losing their job to the poly. It was said that new applicants failed in large numbers, but once you were in, you were in. The re-poly could be unpleasant, though. If you failed, you had to keep taking it. It was said that there was an upper manager who just couldn’t pass, no matter how many times he tried. After something like seven attempts, Polygraph gave up and stopped calling him back. The manager remained in his job.
We weren’t supposed to discuss the polygraph among ourselves, but of course we did. When people came back from a poly, they talked about how it had gone. A woman who’d never seen an illegal drug in her life was accused of being a major drug user. Someone who hated computers so much that she had the secretary print out her emails so she could read them was interrogated for hours about hacking into Agency networks.
A pattern emerged. In a normal polygraph, there was often a gross mismatch between a person and the accusations made against them. I don’t think the officials at Polygraph had any idea how unintentionally humorous this was. Not to the person it happened to, of course, but the rest of us found it hysterically funny.
Once, the examiner got in my face and shouted, “Admit it, you’re deeply in debt. Creditors are pounding on your door!” I said, “You’ve just revealed to me that you haven’t bothered to pull my credit report. Are you lazy, or are you cheap?” I offered to pay the $15 fee myself, but he didn’t take me up on it.
Another time, the examiner accused me of working for a foreign intelligence service and traveling overseas to meet my handler. I rolled my eyes. “Do you want to see my passport? It’s been expired for nine years.” No, he didn’t want to see my passport.
I told my office mates I’d figured out why the accusations were so consistently off-the-wall. Polygraph must have a question of the day. Everyone who went in on Monday would be accused of dealing drugs. Tuesday was espionage day, Wednesday was marital infidelity day, and so on.
Then Aldrich Ames was arrested, and polygraphs became more brutal. People who’d never had trouble passing were being called back two and three times. Thank you, Mr. Ames.
I overheard a fragment of conversation at CIA Headquarters, “I thought I was a good person, but after that last poly, I’m not so sure.” It was hard on people.
Because of Ames, the Agency introduced a policy of random polygraphs. I knew someone who completed a five year poly. A few months later, he was called back for a random one.
I’d been at the Agency for ten years when I went through the reinvestigation poly again.
The test was administered by an inexperienced young woman. In the pretest interview, she asked me a question. I answered truthfully. She asked again as if she was looking for a better answer. Call me a liar. It made me furious.
Well into the session, she said I was failing. I was so frustrated, I started to cry. I knew I could pass it if I just had enough time, but she had to run an errand. She failed me and ended the session early.
I wrote a letter of complaint to the Chief of Polygraph saying I didn’t like the way I’d been treated. He sent me a letter of apology. He said he’d reviewed the tapes and that he was sorry about the abuse. He said the polygrapher had been reprimanded.
I was surprised to get a response of any kind, but a letter of apology was astonishing. Although the cynical part of me might call it a “please don’t sue me” letter.
But I was also puzzled. The apology was for the wrong thing. The Director seemed to think I’d gone through a particularly abusive polygraph. I hadn’t. It was a perfectly ordinary polygraph, no different from any other. I just wrote to say that I didn’t like polygraphs.
I had to take the test again because I’d failed the first one. The second examiner was experienced and had a mild disposition. I passed without difficulty.
Over the course of many years at CIA, I formed an impression that a typical polygraph involves an inexperienced examiner who grills you harshly and then fails you, followed by a re-poly with a more experienced examiner who guides you through it with no fuss. I’ve had two polygraphs in which I passed on the first try. On both occasions, an experienced examiner conducted the test.
I worked at CIA for eleven years. It was a terrific experience, and I count those years as among the happiest in my life. I left only because I got married and had a baby. CIA is many things, but family friendly is not one of them.
I joined a small defense contractor known for its work-life balance. The company supported most of the three-letter agencies, and I settled into doing the same sort of work I’d done before.
My first assignment was on a National Reconnaissance Office (NRO) project. The NRO clearances required a poly, which I agreed to. The test was administered by a woman with many years’ experience. She told me I’d passed. It’s possible to have a polygraph that isn’t confrontational and doesn’t leave you feeling violated. It’s rare, though.
I’d been supporting an FBI project for several years when the phone rang in the kitchen. Someone from the FBI asked me to come in for a routine polygraph, a requirement to keep my clearances up to date.
To prepare, I read the 2002 National Academy of Sciences report. It was an eye-opener, even though it confirmed what I’d already begun to suspect, that the polygraph didn’t work.
I arrived for my polygraph appointment, which would be administered in a building across the street from my children’s orthodontist.
I stood in the marble lobby, waiting for the examiner to come and collect me. The 2002 NAS report made me cynical. In the interview room, I thought, “Don’t look at the man behind the curtain.”
The examiner asked if I’d ever visited the anti-polygraph sites online. I said yes, that’s where I found the 2002 NAS report. He said he’d never heard of it. He also said there was no such thing as a control question. I hate being lied to; it makes me angry.
In the pretest interview, he asked how many polygraphs I’d had before this one. I wasn’t sure, but I thought it was probably six or seven. He asked what medications I took. I listed everything, including a cortisone cream for a patch of eczema on my hand. He went on and on about my skin condition. What does this have to do with national security, and why is it any of your business? Maybe violating people’s boundaries is a way to establish dominance.
The examiner wired me up, and we did the card trick. He drew a vertical column of numbers, told me to pick one, drew a box around it, and pinned it up where I could see it.
It occurred to me that we were playing a guessing game in which the examiner knew the answer before the game began. I’d have been more impressed if he’d had to discover my number using only the polygraph equipment and/or his ability to read people. I was tempted to suggest it, but I didn’t think the idea would be well received.
The test proceeded normally. The examiner left the room. When he came back, he didn’t meet my eyes, and the muscles of his face were tight. “The test shows deception.” He was right. I had been deceptive, but only about one thing. I hadn’t told him I knew the polygraph didn’t work.
The examiner hammered on me to come clean. I kept repeating, “No, I can’t think of anything else.” I was tempted to make something up, just to make it stop, but I’m not comfortable lying.
At the end of the interview, the examiner looked at me, gloating. “You claim you’ve taken seven polygraphs before today, but later, you said it was only six.” That’s all you’ve got on me? I’m underwhelmed.
Being able to recognize interrogation techniques didn’t make me immune to them. Exhausted, I hung my head, feeling like he’d broken me under interrogation. I’ve always found the shame of being broken was the worst thing about being polygraphed.
A few days later, I was still seriously rattled. I didn’t realize how badly the polygraph had affected me until I plowed through a red light and almost hit another car. I’d never run a red light before. I couldn’t think what had gotten into me.
I told a relative I’d had a really hard time with the polygraph. There was an embarrassed silence, followed by a rapid change of subject. What did you do wrong, and why don’t you own up to it? This from someone who’s known me from birth and has always taken my side. It shows how strongly people in our culture believe in the polygraph.
I wrote to my congressman and asked him to support legislation banning the polygraph. I said it was a civil rights issue to subject an entire workforce to a brutal police-style interrogation in the absence of probable cause, especially if they might be harmed by it.
Although I failed the FBI polygraph, I remained on the project.
Much to my surprise, I was granted additional clearances and assigned to more highly classified work than I’d been doing before the polygraph.
The work dried up, and I moved on to something else. Seven months after the failed FBI poly, I was summoned to FBI Headquarters “to talk about your polygraph.”
It took several hours to drive downtown and find a place to park. I found FBI headquarters. Two burly agents met me at the door. I wondered if they had the power to arrest me. My hands shook. It was like a scene from a movie where government officials are trying to be intimidating. It worked. As we walked across the lobby, I thought I was going to faint.
They escorted me upstairs. Neither of them spoke. They took me to a small room. A folder lay open on a desk. Papers spilled from it.
I said, “Does it matter that I was laid off the project four months ago?”
It was like watching method actors break character. “We’re sorry, we didn’t mean to bring you all the way down here for nothing. Maybe you could visit the spy museum? It’s right across the street.”
Years later, I joined a DIA project which required a CI (counterintelligence) polygraph. I liked the work and the people doing it, so I agreed.
I started in the summer. In late September, Polygraph asked me to come in. I was no longer afraid of them. I didn’t doubt the apparatus took accurate measurements of heart rate, blood pressure, respiration, and perspiration, but I’d stopped believing the examiner could infer a person’s emotions or thoughts from the data in the tracings.
To prepare for the test, I read the DoD Polygraph Institute Interview and Interrogation handbook (PDF), which describes techniques like “offer hope, take it away.” That book is evil. I made copies and gave them to my writer friends.
I now knew the examiners worked from a script like the one in the handbook. On page 24, the script calls for empathetic listening to put the subject at ease. On page 53, it says to change gears and confront the subject with evidence of a crime. I thought that since I was familiar with the script, the polygrapher would lose his power over me.
I arrived for the appointment. The examiner asked if I was nervous, and I said no (true). In the pretest interview, he asked if I’d visited the anti-polygraph websites to learn how to beat the test. I said I did read the sites, but I was looking for articles on psychological damage from interrogation and how to recover from it. Beating the polygraph was of no interest to me (true).
The real test began. In my mind, I turned the pages of the script. The examiner excused himself and stepped outside. He returned, and whatever warmth there’d been in his manner had vanished. Oh c**p, we’re on page 53. He accused me of deception. I hunkered down until it was over. I reminded myself, Whatever you do, don’t make admissions. I didn’t have anything to confess, but if the pressure were bad enough, I might be tempted to make things up.
At the end of the test, the examiner told me I’d failed. But, and this is huge, for the first time I didn’t leave the poly broken and weeping. I was annoyed, but I hadn’t been harmed.
Two weeks later, they asked me to come in for a retest. The examiner was different, but the test was the same. Everything went smoothly, and I assumed I’d passed. The examiner said he needed to run the chart by his supervisor, and I’d hear back later.
Weeks went by, and then months. The computer account I would get as soon as I passed the poly was still in limbo, which was not a good sign.
During the quiet time over the Christmas break, I came to believe I’d lost the ability to pass a polygraph. By this time, I’d failed three in a row, two for DIA and the one for FBI.
I wondered if it was because I’d grown cynical. I now thought of the polygraph apparatus as a colander attached to a Xerox machine. Even so, I’d tried to cooperate. I’d answered their questions truthfully, and I hadn’t used countermeasures. But I no longer feared the examiners, and I no longer broke under interrogation. According to the handbook, breaking the subject is an important part of the process.
On the first day back from Christmas break, my department head stopped by to give me my annual salary review. It was my highest raise in five years. She said the people on my project liked my work, and they liked me.
Later the same day, I got a call from Polygraph asking me to come in the next morning to answer the polygraph questions under oath rather than wired up.
I had no problem with that. As a mentor at the Agency said about testifying before Congress, “Under oath or not under oath. Either way you’re telling the truth, so what’s the difference?”
The two polys I’d already taken for DIA had been CI (counterintelligence). But halfway through this interview, the examiner asked, “How often do you look at pornography?” I blinked with surprise. “Excuse me?” That question didn’t belong on a CI poly; that was a Lifestyle question.
When the interview ended, the examiner said, “I believe you” in a voice that lacked conviction. Then he added, “I’m going to try to get you another polygraph.”
“I don’t want another.” I meant it.
Over the next few days, I jumped whenever the phone rang. Weeks went by before I began to relax. And then my long-delayed computer account was approved. As I understood how the system worked, the polygraph had just been adjudicated in my favor.
A few days later, I was sitting at my desk eating lunch when a scheduling clerk from Polygraph called. “You missed your appointment this morning,” he said. Next time, you might consider telling me about it ahead of time. He said, “Don’t even worry about it. Everyone makes mistakes.” My jaw dropped. He just called me a liar.
The clerk wanted me to come in the next day. I was seriously rattled. I’d already decided I was not going to take another polygraph, ever. I stalled. And then I remembered my newly granted computer accesses. I was already cleared. Probably the paperwork hadn’t caught up yet. I said I needed a few days to figure out if the poly was still necessary, and I would get back to him.
I checked with Security. They said, “No, you haven’t been adjudicated yet.”
I couldn’t get out of it, but I could put it off until it was OBE (overtaken by events). In a few months, the project would move to a new location beyond my commuting range. I planned to stay until right before the move and then find something else. I didn’t have to refuse the poly, I just had to conduct a delaying action.
But Polygraph was insistent, and I wasn’t sure I could hold them off much longer. I asked about withdrawing my application for DIA clearances, but was advised to watch and wait.
Or, I could leave the project now and find other work. My TS/SCI from CIA was still active, but I knew that eventually CIA would want me to take a polygraph. I also held a Top Secret clearance from the Air Force. The Air Force TS, a “vanilla TS” (not SCI) was based on an SF-86 background investigation. It did not require a polygraph. And at the very worst, I could do unclassified work. My company had a large number of unclassified projects, and many had work for analysts.
A few days after I’d told the clerk I’d get back to him, I gave a presentation about the work I’d been doing on my task. It was well received, and I stepped down from the podium covered in glory. As I left the meeting, the program manager pulled me aside and said my task had lost its funding. It was an occupational hazard of defense contracting, and not anything sinister.
I wasn’t happy about being cut from the project, but it did solve my polygraph dilemma. If I wasn’t on the project anymore, I didn’t need to apply for DIA clearances, and I didn’t need to take the polygraph.
Around noon, the clerk from Polygraph called again. I told him I didn’t have to take the poly anymore because I’d been laid off the project. In mid-afternoon, he called back. He said he’d made a few phone calls and learned that I hadn’t been laid off from my company, after all. Wonderful, call me a liar.
He pressed me to schedule another poly. I said no. His voice turned whiny. “You have to do it!” I dug in my heels. “I’ve already told you no.” He slammed down the phone.
I’d just refused a polygraph. I felt like Neville Longbottom when he drew the sword of Gryffindor and advanced on Lord Voldemort. I was filled with righteous indignation, and it gave me courage.
For the rest of the day, I was peppered with emails and phone calls summoning me to the offices of upper managers. Some of them were so far up the chain, I’d never heard their names before. They were uniformly kind to me. I hadn’t known it, but I wasn’t the first person in our division to refuse the polygraph. Polygraph conscientious objectors—who knew that there was such a thing?
Polygraph told my management I’d lied about being laid off. It caused a major flap. My project badges were taken from me, even the unclassified ones, and I was debriefed from all my DIA clearances.
To their credit, DIA didn’t tell any other agencies they’d taken my badge and debriefed me. Weeks later, my CIA clearances were still active.
Six weeks went by. I put in for a few days of leave to take the kids to an alumni event at my university. I worked half a day and then went home to pack.
Sometime after 4:00 pm, when I was loading the last suitcase into the car, the department secretary called to say I was late for a 4:00 meeting with my department head. This was the first I’d heard of it. The meeting had been put on the calendar after I’d left for the day. I said I was sorry I couldn’t be there, but I’d be back in the office first thing on Monday.
Just before close of business Monday, I was summoned to another meeting with the department head. When I arrived, my boss was sitting in her office with a woman from Human Resources. As a general rule, if your boss wants to see you and HR has been asked to sit in, you know it’s going to end badly.
My department head put a piece of paper in front of me. It said I’d agreed to take a polygraph as a condition of working on the DIA project, but when they tried to schedule it, I canceled the appointment. As a result, I was cut from the project.
“No, I took two polygraphs. I turned down the third because I wasn’t on the project anymore.” Although by then I thought of myself as a polygraph conscientious objector, and I would have refused whether or not I was still on the project.
The rest of the letter said the company would begin termination proceedings against me. I was eight years from retirement. I wasn’t counting the days until retirement. I liked going to work. Even when I was between assignments and not getting paid, I still came into the office and put in a full day.
I spoke to a lawyer. She said I lived in an “at will” state, which means employees can be dismissed for any (legal) reason, or for no reason at all. I was being terminated for refusing the polygraph, and it was legal.
I decided to resign rather than fight.
...
Read the original on antipolygraph.org »
High-efficiency C++/CUDA LLM inference engine. Runs Llama 70B on a single RTX 3090 (24GB VRAM) by streaming model layers through GPU memory via PCIe, with optional NVMe direct I/O that bypasses the CPU entirely.
3-tier adaptive caching auto-sizes from hardware: VRAM-resident layers (zero I/O) + pinned RAM (H2D only) + NVMe/mmap fallback. Achieves 83x speedup over mmap baseline for 70B on consumer hardware (RTX 3090 + 48 GB RAM).
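As a rough sketch of what “auto-sizes from hardware” could mean in practice (illustrative only; the names and budgets below are made up, not the project's code), layers can be assigned greedily to the fastest tier that still has room:

// Illustrative tier assignment: fill VRAM first, then pinned RAM; the rest falls back to NVMe/mmap.
#include <cstddef>
#include <cstdio>
#include <vector>

enum class Tier { VramResident, PinnedRam, NvmeMmap };

std::vector<Tier> assign_tiers(std::size_t n_layers, std::size_t layer_bytes,
                               std::size_t vram_budget, std::size_t ram_budget) {
    std::vector<Tier> tiers(n_layers, Tier::NvmeMmap);
    std::size_t vram_used = 0, ram_used = 0;
    for (std::size_t i = 0; i < n_layers; ++i) {
        if (vram_used + layer_bytes <= vram_budget) {
            tiers[i] = Tier::VramResident;   // tier A: zero I/O per token
            vram_used += layer_bytes;
        } else if (ram_used + layer_bytes <= ram_budget) {
            tiers[i] = Tier::PinnedRam;      // tier B: H2D copy only
            ram_used += layer_bytes;
        }                                    // tier C: NVMe/mmap fallback
    }
    return tiers;
}

int main() {
    // Example: 80 layers of roughly 670 MB each, with ~20 GB of VRAM and ~40 GB of RAM left for weights.
    auto tiers = assign_tiers(80, 670ull << 20, 20ull << 30, 40ull << 30);
    std::printf("layer 0 resident in VRAM: %d\n", tiers[0] == Tier::VramResident);
}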
Bottleneck is PCIe H2D bandwidth at Gen3 x8 (~6.5 GB/s). Q4_K_M fits 10 more layers in VRAM (36 vs 26), reducing tier B transfers. Layer skip (cosine similarity calibration) eliminates 20/80 layers per token with minimal quality loss.
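And a hedged sketch of the cosine-similarity idea behind layer skip: during calibration, measure how much each layer actually changes the hidden state, then skip layers whose input and output are nearly identical. Again, this is illustrative, not the repository's implementation:

// Illustrative calibration for layer skipping: a layer whose output hidden state is almost
// a copy of its input (cosine similarity above a threshold) is a candidate for skipping.
#include <cmath>
#include <cstddef>
#include <vector>

float cosine_similarity(const std::vector<float>& a, const std::vector<float>& b) {
    double dot = 0, na = 0, nb = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return static_cast<float>(dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12));
}

// Mark layers as skippable if, averaged over calibration tokens, the hidden state barely
// changes across the layer (compare the --skip-threshold flag in the run examples below).
std::vector<bool> calibrate_skips(const std::vector<float>& avg_similarity, float threshold) {
    std::vector<bool> skip(avg_similarity.size());
    for (std::size_t l = 0; l < avg_similarity.size(); ++l)
        skip[l] = avg_similarity[l] >= threshold;
    return skip;
}

int main() {
    std::vector<float> in  = {1.0f, 0.0f, 2.0f};
    std::vector<float> out = {1.1f, 0.0f, 1.9f};
    // A similarity this close to 1.0 would mark the layer as skippable at threshold 0.98.
    return cosine_similarity(in, out) >= 0.98f ? 0 : 1;
}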
* Zero external dependencies beyond CUDA Toolkit (no PyTorch, no cuBLAS)
# Build
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER=gcc-14 \
-DCMAKE_CXX_COMPILER=g++-14 \
-DCMAKE_CUDA_COMPILER=/usr/local/cuda-13.1/bin/nvcc
cmake --build . -j
# Run (resident mode — model fits in VRAM)
./ntransformer -m /path/to/llama-8b-q8_0.gguf -p "Hello" -n 128
# Run (streaming mode — model larger than VRAM)
./ntransformer -m /path/to/llama-70b-q6_k.gguf -p "Hello" -n 32 --streaming
# Run with layer skip (fastest for 70B)
./ntransformer -m /path/to/llama-70b-q4_k_m.gguf -p "Hello" -n 32 --streaming --skip-threshold 0.98
# Self-speculative decoding (VRAM layers as draft, no extra model)
./ntransformer -m /path/to/llama-70b-q6_k.gguf -p "Hello" -n 32 --self-spec --draft-k 3
# Chat mode
./ntransformer -m /path/to/model.gguf --chat
# Benchmark
./ntransformer -m /path/to/model.gguf --benchmark -n 64
Running ntransformer with NVMe direct I/O requires system-level modifications. An automated setup script handles all of them:
# Full first-time setup (interactive, creates backups)
sudo ./scripts/setup_system.sh
# Check current system state (no changes)
sudo ./scripts/setup_system.sh --check
# NVMe-only (run after every reboot)
sudo ./scripts/setup_system.sh --nvme-only
* Above 4G Decoding: ON (required for 64-bit BAR mapping)
* IOMMU: OFF (or leave on — the script adds the kernel parameter)
WARNING: This project performs low-level PCIe operations (GPU MMIO writes to NVMe controller registers, userspace NVMe command submission, VFIO device passthrough). While tested extensively on RTX 3090 + WD SN740, incorrect configuration or hardware incompatibilities could theoretically cause:
* Data loss on the NVMe device used for raw block storage
Never use your boot drive for NVMe direct I/O. Always use a dedicated secondary NVMe. The authors are not responsible for hardware damage or data loss. Use at your own risk.
For models that don’t fit in VRAM, the NVMe backend eliminates the CPU from the data path:
# Build with NVMe support (requires gpu-nvme-direct library)
cmake .. -DCMAKE_BUILD_TYPE=Release -DUSE_GPUNVME=ON \
-DCMAKE_C_COMPILER=gcc-14 -DCMAKE_CXX_COMPILER=g++-14 \
-DCMAKE_CUDA_COMPILER=/usr/local/cuda-13.1/bin/nvcc
cmake --build . -j
# Write GGUF model to NVMe raw device
sudo ./scripts/restore_nvme.sh # ensure kernel driver is bound
sudo dd if=model.gguf of=/dev/nvme0n1 bs=1M oflag=direct status=progress
# Bind NVMe to VFIO for userspace access
sudo ./scripts/setup_nvme.sh # loads VFIO, forces D0, enables BusMaster
# Run with NVMe backend
sudo GPUNVME_PCI_BDF=0000:01:00.0 GPUNVME_GGUF_LBA=0 \
./build/ntransformer -m /path/to/model.gguf -p "Hello" -n 32 --streaming
# Restore NVMe to kernel driver when done
sudo ./scripts/restore_nvme.sh
* The GGUF model file is written to raw NVMe blocks via dd
* During inference, each layer (~670 MB for 70B Q6_K) is read via 670 NVMe commands in ~202 ms
* Data lands in CUDA pinned staging memory, then async DMA to GPU compute buffers (see the sketch below)
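The pinned-staging step is the standard CUDA pattern for fast host-to-device streaming: host memory allocated with cudaMallocHost can be copied with cudaMemcpyAsync on a stream. A minimal sketch of that per-layer copy loop, with placeholder sizes and no real kernels (not the project's code):

// Staged H2D transfer: each streamed layer is read into pinned host memory,
// then copied to the GPU with an asynchronous DMA on a dedicated stream.
#include <cuda_runtime.h>
#include <cstddef>

constexpr std::size_t kLayerBytes = 670ull << 20;  // roughly one 70B Q6_K layer

int main() {
    void* h_staging = nullptr;   // pinned host buffer: required for true async DMA
    void* d_weights = nullptr;   // device-side buffer for the layer currently being computed
    cudaMallocHost(&h_staging, kLayerBytes);
    cudaMalloc(&d_weights, kLayerBytes);

    cudaStream_t copy_stream;
    cudaStreamCreate(&copy_stream);

    for (int layer = 0; layer < 80; ++layer) {
        // (NVMe command submission or mmap read fills h_staging here.)
        cudaMemcpyAsync(d_weights, h_staging, kLayerBytes,
                        cudaMemcpyHostToDevice, copy_stream);
        cudaStreamSynchronize(copy_stream);  // weights resident; the layer's kernels would run next
    }

    cudaFree(d_weights);
    cudaFreeHost(h_staging);
    cudaStreamDestroy(copy_stream);
}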
...
Read the original on github.com »