I’ve been using Claude Code as my primary development tool for approximately nine months, and the workflow I’ve settled into is radically different from what most people do with AI coding tools. Most developers type a prompt, sometimes use plan mode, fix the errors, and repeat. The more terminally online are stitching together Ralph loops, MCPs, gas towns (remember those?), and so on. In both cases the result is a mess that completely falls apart for anything non-trivial.
The workflow I’m going to describe has one core principle: never let Claude write code until you’ve reviewed and approved a written plan. This separation of planning and execution is the single most important thing I do. It prevents wasted effort, keeps me in control of architecture decisions, and produces significantly better results, with far less token usage, than jumping straight to code.
flowchart LR
R[Research] --> P[Plan]
P --> A[Annotate]
A -->|repeat 1-6x| A
A --> T[Todo List]
T --> I[Implement]
I --> F[Feedback & Iterate]
Every meaningful task starts with a deep-read directive. I ask Claude to thoroughly understand the relevant part of the codebase before doing anything else. And I always require the findings to be written into a persistent markdown file, never just a verbal summary in the chat.
read this folder in depth, understand how it works deeply, what it does and all its specificities. when that’s done, write a detailed report of your learnings and findings in research.md
study the notification system in great details, understand the intricacies of it and write a detailed research.md document with everything there is to know about how notifications work
go through the task scheduling flow, understand it deeply and look for potential bugs. there definitely are bugs in the system as it sometimes runs tasks that should have been cancelled. keep researching the flow until you find all the bugs, don’t stop until all the bugs are found. when you’re done, write a detailed report of your findings in research.md
Notice the language: “deeply”, “in great details”, “intricacies”, “go through everything”. This isn’t fluff. Without these words, Claude will skim. It’ll read a file, see what a function does at the signature level, and move on. You need to signal that surface-level reading is not acceptable.
The written artifact (research.md) is critical. It’s not about making Claude do homework. It’s my review surface. I can read it, verify Claude actually understood the system, and correct misunderstandings before any planning happens. If the research is wrong, the plan will be wrong, and the implementation will be wrong. Garbage in, garbage out.
The most expensive failure mode in AI-assisted coding isn’t wrong syntax or bad logic. It’s implementations that work in isolation but break the surrounding system. A function that ignores an existing caching layer. A migration that doesn’t account for the ORM’s conventions. An API endpoint that duplicates logic that already exists elsewhere. The research phase prevents all of this.
Once I’ve reviewed the research, I ask for a detailed implementation plan in a separate markdown file.
I want to build a new feature that extends the system to perform . write a detailed plan.md document outlining how to implement this. include code snippets
the list endpoint should support cursor-based pagination instead of offset. write a detailed plan.md for how to achieve this. read source files before suggesting changes, base the plan on the actual codebase
The generated plan always includes a detailed explanation of the approach, code snippets showing the actual changes, file paths that will be modified, and considerations and trade-offs.
I use my own .md plan files rather than Claude Code’s built-in plan mode. The built-in plan mode sucks. My markdown file gives me full control. I can edit it in my editor, add inline notes, and it persists as a real artifact in the project.
One trick I use constantly: for well-contained features where I’ve seen a good implementation in an open source repo, I’ll share that code as a reference alongside the plan request. If I want to add sortable IDs, I paste the ID generation code from a project that does it well and say “this is how they do sortable IDs, write a plan.md explaining how we can adopt a similar approach.” Claude works dramatically better when it has a concrete reference implementation to work from rather than designing from scratch.
But the plan document itself isn’t the interesting part. The interesting part is what happens next.
This is the most distinctive part of my workflow, and the part where I add the most value.
flowchart TD
W[Claude writes plan.md] --> R[I review in my editor]
R --> N[I add inline notes]
N --> S[Send Claude back to the document]
S --> U[Claude updates plan]
U --> D{Satisfied?}
D -->|No| R
D -->|Yes| T[Request todo list]
After Claude writes the plan, I open it in my editor and add inline notes directly into the document. These notes correct assumptions, reject approaches, add constraints, or provide domain knowledge that Claude doesn’t have.
The notes vary wildly in length. Sometimes a note is two words: “not optional” next to a parameter Claude marked as optional. Other times it’s a paragraph explaining a business constraint or pasting a code snippet showing the data shape I expect.
“use drizzle:generate for migrations, not raw SQL” — domain knowledge Claude doesn’t have
“no — this should be a PATCH, not a PUT” — correcting a wrong assumption
“remove this section entirely, we don’t need caching here” — rejecting a proposed approach
“the queue consumer already handles retries, so this retry logic is redundant. remove it and just let it fail” — explaining why something should change
“this is wrong, the visibility field needs to be on the list itself, not on individual items. when a list is public, all items are public. restructure the schema section accordingly” — redirecting an entire section of the plan
Then I send Claude back to the document:
I added a few notes to the document, address all the notes and update the document accordingly. don’t implement yet
This cycle repeats 1 to 6 times. The explicit “don’t implement yet” guard is essential. Without it, Claude will jump to code the moment it thinks the plan is good enough. It’s not good enough until I say it is.
Why This Works So Well
The markdown file acts as shared mutable state between me and Claude. I can think at my own pace, annotate precisely where something is wrong, and re-engage without losing context. I’m not trying to explain everything in a chat message. I’m pointing at the exact spot in the document where the issue is and writing my correction right there.
This is fundamentally different from trying to steer implementation through chat messages. The plan is a structured, complete specification I can review holistically. A chat conversation is something I’d have to scroll through to reconstruct decisions. The plan wins every time.
Three rounds of “I added notes, update the plan” can transform a generic implementation plan into one that fits perfectly into the existing system. Claude is excellent at understanding code, proposing solutions, and writing implementations. But it doesn’t know my product priorities, my users’ pain points, or the engineering trade-offs I’m willing to make. The annotation cycle is how I inject that judgement.
add a detailed todo list to the plan, with all the phases and individual tasks necessary to complete the plan - don’t implement yet
This creates a checklist that serves as a progress tracker during implementation. Claude marks items as completed as it goes, so I can glance at the plan at any point and see exactly where things stand. Especially valuable in sessions that run for hours.
When the plan is ready, I issue the implementation command. I’ve refined this into a standard prompt I reuse across sessions:
implement it all. when you’re done with a task or phase, mark it as completed in the plan document. do not stop until all tasks and phases are completed. do not add unnecessary comments or jsdocs, do not use any or unknown types. continuously run typecheck to make sure you’re not introducing new issues.
This single prompt encodes everything that matters:
“implement it all”: do everything in the plan, don’t cherry-pick
“mark it as completed in the plan document”: the plan is the source of truth for progress
“do not stop until all tasks and phases are completed”: don’t pause for confirmation mid-flow
“do not add unnecessary comments or jsdocs”: keep the code clean
“do not use any or unknown types”: maintain strict typing
“continuously run typecheck”: catch problems early, not at the end
I use this exact phrasing (with minor variations) in virtually every implementation session. By the time I say “implement it all,” every decision has been made and validated. The implementation becomes mechanical, not creative. This is deliberate. I want implementation to be boring. The creative work happened in the annotation cycles. Once the plan is right, execution should be straightforward.
Without the planning phase, what typically happens is Claude makes a reasonable-but-wrong assumption early on, builds on top of it for 15 minutes, and then I have to unwind a chain of changes. The “don’t implement yet” guard eliminates this entirely.
Once Claude is executing the plan, my role shifts from architect to supervisor. My prompts become dramatically shorter.
flowchart LR
I[Claude implements] --> R[I review / test]
R --> C{Correct?}
C -->|No| F[Terse correction]
F --> I
C -->|Yes| N{More tasks?}
N -->|Yes| I
N -->|No| D[Done]
Where a planning note might be a paragraph, an implementation correction is often a single sentence:
“You built the settings page in the main app when it should be in the admin app, move it.”
Claude has the full context of the plan and the ongoing session, so terse corrections are enough.
Frontend work is the most iterative part. I test in the browser and fire off rapid corrections:
For visual issues, I sometimes attach screenshots. A screenshot of a misaligned table communicates the problem faster than describing it.
“this table should look exactly like the users table, same header, same pagination, same row density.”
This is far more precise than describing a design from scratch. Most features in a mature codebase are variations on existing patterns. A new settings page should look like the existing settings pages. Pointing to the reference communicates all the implicit requirements without spelling them out. Claude would typically read the reference file(s) before making the correction.
When something goes in a wrong direction, I don’t try to patch it. I revert and re-scope by discarding the git changes:
“I reverted everything. Now all I want is to make the list view more minimal — nothing else.”
Narrowing scope after a revert almost always produces better results than trying to incrementally fix a bad approach.
Even though I delegate execution to Claude, I never give it total autonomy over what gets built. I do the vast majority of the active steering in the plan.md documents.
This matters because Claude will sometimes propose solutions that are technically correct but wrong for the project. Maybe the approach is over-engineered, or it changes a public API signature that other parts of the system depend on, or it picks a more complex option when a simpler one would do. I have context about the broader system, the product direction, and the engineering culture that Claude doesn’t.
flowchart TD
P[Claude proposes changes] --> E[I evaluate each item]
E --> A[Accept as-is]
E --> M[Modify approach]
E --> S[Skip / remove]
E --> O[Override technical choice]
A & M & S & O --> R[Refined implementation scope]
Cherry-picking from proposals: When Claude identifies multiple issues, I go through them one by one: “for the first one, just use Promise.all, don’t make it overly complicated; for the third one, extract it into a separate function for readability; ignore the fourth and fifth ones, they’re not worth the complexity.” I’m making item-level decisions based on my knowledge of what matters right now.
Trimming scope: When the plan includes nice-to-haves, I actively cut them. “remove the download feature from the plan, I don’t want to implement this now.” This prevents scope creep.
Protecting existing interfaces: I set hard constraints when I know something shouldn’t change: “the signatures of these three functions should not change, the caller should adapt, not the library.”
Overriding technical choices: Sometimes I have a specific preference Claude wouldn’t know about: “use this model instead of that one” or “use this library’s built-in method instead of writing a custom one.” Fast, direct overrides.
Claude handles the mechanical execution, while I make the judgement calls. The plan captures the big decisions upfront, and selective guidance handles the smaller ones that emerge during implementation.
I run research, planning, and implementation in a single long session rather than splitting them across separate sessions. A single session might start with deep-reading a folder, go through three rounds of plan annotation, then run the full implementation, all in one continuous conversation.
I am not seeing the performance degradation everyone talks about once the context window passes 50%. If anything, by the time I say “implement it all,” Claude has spent the entire session building understanding: reading files during research, refining its mental model during annotation cycles, absorbing my domain-knowledge corrections.
When the context window fills up, Claude’s auto-compaction maintains enough context to keep going. And the plan document, the persistent artifact, survives compaction in full fidelity. I can point Claude to it at any point in time.
The Workflow in One Sentence
Read deeply, write a plan, annotate the plan until it’s right, then let Claude execute the whole thing without stopping, checking types along the way.
That’s it. No magic prompts, no elaborate system instructions, no clever hacks. Just a disciplined pipeline that separates thinking from typing. The research prevents Claude from making ignorant changes. The plan prevents it from making wrong changes. The annotation cycle injects my judgement. And the implementation command lets it run without interruption once every decision has been made.
Try my workflow and you’ll wonder how you ever shipped anything with coding agents without an annotated plan document sitting between you and the code.
...
Read the original on boristane.com »
A man takes a train from London to the coast. He’s visiting a town called Wulfleet. It’s small and old, the kind of place with a pub that’s been pouring pints since the Battle of Bosworth Field. He’s going to write about it for his blog. He’s excited.
He arrives, he checks in. He walks to the cute B&B he’d picked out online. And he writes it all up like any good travel blogger would: in that breezy LiveJournal style from 25 years ago, perhaps, in his case, trying a little too hard.
But as his post goes on, his language gets older. A hundred years older with each jump. The spelling changes. The grammar changes. Words you know are replaced by unfamiliar words, and his attitude gets older too, as the blogger’s voice is replaced by that of a Georgian diarist, an Elizabethan pamphleteer, a medieval chronicler.
By the middle of his post, he’s writing in what might as well be a foreign language.
But it’s not a foreign language. It’s all English.
None of the story is real: not the blogger, not the town. But the language is real, or at least realistic. I constructed the passages myself, working from what we know about how English was written in each period.
It’s a thousand years of the English language, compressed into a single blog post.
Read it and notice where you start to struggle. Notice where you give up entirely. Then meet me on the other side and I’ll tell you what happened to the language (and the blogger).
Well, I finally got to the town everyone has been talking about lately. Wulfleet. And let me tell you, it was not easy to get here. It’s ridiculous how close this place is to London, and yet how hard it is to get here. I took a train to some place whose name I can’t pronounce, and then from there I had to hop on a bus. The whole day was shot just getting here.
Not going to lie though: so far, it’s totally worth it.
Yes, it’s the typical English coastal town: the seagulls, the cobblestone streets, the works. But there’s something about it that just makes me want to dress up in a cape and walk around like I’m in a Gothic novel. Although, let’s be honest, do I really need an excuse to do that? :)
Everyone seems really nice here, although I did have one really weird encounter on the way to the B&B. A guy was following me for a while. It kind of freaked me out. Anyway, if you go to Wulfleet, just watch out for this one weird guy who hangs out near the bus stop. I know, real specific. But anyway, that was just a bit odd.
Speaking of which, the B&B is also… interesting. LOL. It has separate hot and cold taps and everything. I’m about to see how the “bed” portion works. I’ll update you on the “breakfast” tomorrow morning. If I can find an internet cafe around here, that is.
My plans for an untroubled sleep were upset, however, when I woke with a start before dawn. The window had, it seemed, come open in the night, though I was perfectly certain I had fastened it. I sprang up from the bed to see what was the cause, but I could see nothing in the darkness — nothing, that is, that I could satisfactorily account for. I closed the window again but was entirely unable to fall asleep due to the shock. I am not, I hope, an easily frightened man, but I confess the incident left me not a little unsettled.
When dawn finally came, I went downstairs to find a well-appointed dining room in which there was laid out a modest but perfectly adequate meal. After I ate, and thanked the landlady — a respectable woman of the kind one expects to find in charge of such an establishment — I decided to take a stroll around the town. The sea air did something to revive me after the events of the previous day, not to mention the night, although a question still weighed on me. Do windows simply burst open in the night? Or was there something else afoot? I resolved to make enquiries, though of whom I was not yet certain.
After spending the day wandering around the environs of the town, and, finding myself hungry, I sought out an inn, where I might buy some supper. It was not difficult to find one, and, sitting alone, I called for supper from what the publican had to offer. I confess I gave no great thought to the quality of the fare. Hunger, that great leveller, makes philosophers of us all, and renders even the meanest dish agreeable.
The place was adequately charming. The tables were covered with guttering candles, and the local rustics seemed to be amusing themselves with great jollity. Reader, I am not one of those travellers who holds himself above the common people of the places he visits. I saw fit rather to join in with their sport and we whiled away the hours together in good cheer. I found them to be as honest and amiable a company as one could wish for.
The only thing that disturbed my good humour was when I thought, for a brief moment, that I saw the man who accosted me yesterday among the crowd. But it must have been a mere fancy, for whatever I thought I saw vanished as quickly as it had appeared. I chided myself for the weakness of my nerves, and took another draught to steady them.
When, at long last, the entertainment was spent, I undertook to return to my lodgings; however, finding myself quite unable to find my way, a fact which owed something to having imbibed rather immoderately in the hours prior — and here let me caution the reader against the particular hospitality of country innkeepers, which is liberal beyond what prudence would advise — I soon found myself at the harbour’s edge.
When I was firſt come to Wulfleet, I did not see the harbour, for I was weary and would ſooner go to the inn, that I might ſleep. It is a truth well known to travellers, that wearineſs of body breeds a kind of blindneſs to all things, however remarkable, and ſo it was with me. But now that I beheld the ſight of it, I marvelled. In the inky blackneſs I could see not a ſtar, nor even a ſliver of the moon. It was indeed a wonder that I did not ſtumble on my way, and periſh in a gutter, for many a man has come to his end by leſs.
Finally, with my mind much filled with reflection, I found my way through dark ſtreets to a familiar alley. This was a welcome sight, as an ill foreboding was lately come into my mind. I entertained for a moment such unmanly thoughts as are far from my cuſtom, and which I ſhould be aſhamed to ſet down here, were it not that an honeſt account requires it. I felt eſpecially that I was purſued by ſome thing unknown to me. I glanced backwards, to ſee if I might eſpy that man. But there was no one, or at least no one that I could diſcern.
At laſt, I found the doorway of the inn, as much by chance as by deſign, and retired to ſleep with a mind addled half by drink and the other half by a fear for which I could not well account. I commended myſelf to Providence, and reſolved to think no more on it.
That night I was vntroubled by such euents as I had vndergone the night before, for I had barred the door ere I ſlept, and so fortified, that so no force might open it. This town of Wulfleet was paſſing ſtrange, as ſtrange I dare ſay as any place whereof Plinie wrote, or any iland discovered in the voyages of Sir Walter Raleigh. But I was bound to my taſk, and would not flinch from it. I would record the occurrents in Wulfleet, howeuer ſtrange they might ſeem, yea, though they were ſuch things as would make a leſſer man forſake his purpoſe.
But I ſoon forgot my earlier dread, for the morning brought with it ſo fair a ſight as to diſpel all feare. The people of the town had erected ouernight a market of ſuch variety and abundance as I haue not ſeen the like. Animals walked among men, and men among animals, a true maruel!
As I looked on this aſſembled throng, greatly pleaſed and not a little amazed, a man approached me. He ſtartled me, but I quickly saw he was nothing but a farmer come to hawke his wares. “Would you haue a fowl, sir?” ſaid he, “My hens are fat and luſty, and you may haue them cheap.”
I said in reply, “No, I thanke thee,” He was a churliſh fellow, rude of ſpeech and meane of aſpect, and I felt no ſhame at thouing ſuch a man as that.
I went forthe among the people, and as I paſſed throughe the market and the ſtretes of the towne, euer lokyng aboute me with grete care, leſt I ſholde agayn encountre ſome peryl, thee appeared, from oute of the prees that ſame man whom I ſo dredde. And he was passyng foule was of vyſage, as it ſemed to me, more foule than ony man I had ſene in al my lyf.
He turned hym towarde me and ſayd, “Straunger, wherefore art thou come hydder?”
And I anſwerd hym nott, for I knewe nott what I ſholde ſaye, ne what answere myght ſerue me beſt in ſuche a caas.
Than hee asked me, “Was it for that thou wouldeſt ſee the Maiſter?”
And verely this name dyd me ſore affright, for who was this Maiſter wherof he ſpake? And what maner of man was he, that his very name ſholde be ſpoken wyth ſuche reuerence and drede. I wolde haue fledde but he purſued me and by myn avys he was the ſwifter, for he caught me full ſoone.
I sayd to him, “What meaneſt thou? Who is the Maiſter?”
And he sayd, “I ſhall brynge the vnto hym, and thou ſhalt ſee for thy ſelf what maner of lorde he is.”
But I wolde not, and cryed out ayenſt hym with grete noyſe, leſt he ſholde take me thyder by violence and ayenſt my wille.
Bot þe man wolde me nat abandone þer, ne suffre me to passen forþ. I miȝt nat flee, for hys companiouns, of whom þer were a gret nombre, beſet me aboute, and heelden me faſt þat I ne scholde nat ascapen. And þei weren stronge menn and wel douȝti, of grymme contenaunce and fiers, and armed wiþ swerdes and wiþ knyues, so þat it were gret foly for eny man to wiþstonden hem.
So þei bounden me hond and foot and ledden me to þe one þei callede Maiſter, of whom I hadde herd so muchel and knewe so litel.
Þe sayde Maiſter, what that hee apperid bifore me, was verely a Deuill, or so me þouȝte, for neuer in al my lyf hadde I beholden so foule a creature. Hee bore a blak clok þat heng to þe grounde, and ſpake neuer a worde. Bot his countenaunce was hidous and so dredful þat my blood wexed colde to loken on hym. For he hadde nat þe visage of a man bot of a beest, wiþ þe teeþ and ſnoute of a wulf, scharpe and crueel. And his eres weren longe eres, as of a wulf, and bihynde him þer heng a gret tayl, as wulf haþ. And hys eyen schon in þe derknesse lyke brennyng coles.
Bot þei maden no answer, neyþer good ne yuel. Þei weren stille as stoon, and stoden about me as men þat wayte on þeir lordes commandement.
Þanne after muchel tyme spak þe Maiſter, and his wordes weren colde as wintres is. His vois was as þe crying of rauenes, scharpe and schille, and al þat herde hym weren adrade and durst nat speken.
“I deme þe to þe deeþ, straunger. Here ſchaltou dyen, fer fram þi kynne and fer fram þine owen londe, and non ſchal knowen þi name, ne non schal þe biwepe.”
And I sayde to hym, wiþ what boldenesse I miȝte gaderen, “Whi fareſt þou wiþ me þus? What treſpaas haue I wrouȝt ayeins þe, þat þou demeſt me so harde a dome?”
“Swie!” quoþ he, and smot me wiþ his honde, so þat I fel to þe erþe. And þe blod ran doun from mi mouþe.
And I swied, for þe grete drede þat was icumen vpon mee was more þan I miȝte beren. Mi herte bicam as stoon, and mi lymes weren heuy as leed, and I ne miȝte namore stonden ne spoken.
Þe euele man louȝ, whan that he sawe my peine, and it was a crueel louȝter, wiþouten merci or pitee as of a man þat haþ no rewþe in his herte.
Allas! I scholde neuer hauen icumen to þis toune of Wuluesfleete! Cursed be þe dai and cursed be þe houre þat I first sette foot þerinne!
Hit is muchel to seggen all þat pinunge hie on me uuroȝten, al þar sor and al þat sorȝe. Ne scal ic nefre hit forȝeten, naht uuhiles ic libbe!
Ac þer com me gret sped, and þat was a uuif, strong and stiþ! Heo com in among þe yuele men and me nerede fram heore honden.
Heo sloȝ þe heþene men þat me pyneden, sloȝ hem and fælde hem to þe grunde. Þer was blod and bale inouȝ And hie feollen leien stille, for hie ne miȝten namore stonden. Ac þe Maister, þe uuraþþe Maister, he flaȝ awei in þe deorcnesse and was iseon namore.
Ic seide hire, “Ic þanke þe, leoue uuif, for þu hauest me ineredd from dæðe and from alle mine ifoan!”
Þæt ƿif me andsƿarode and cƿæð, “Ic eom Ælfgifu gehaten. Þu scalt me to ƿife nimen, þeah þe þu hit ne ƿite gyt, for hit is sƿa gedon þæt nan man ne nan ƿif ne mote heonon faren buten þurh þone dæð þæs Hlafordes.”
“Ac þær is gyt mare to donne her, forþi ƿe nabbaþ þone Hlaford ofslagenne. He is strong and sƿiðe yfel, and manige gode men he hæfð fordone on þisse stoƿe.”
And þæt heo sægde wæs eall soþ. Ic ƿifode on hire, and heo ƿæs ful scyne ƿif, ƿis ond ƿælfæst. Ne gemette ic næfre ær sƿylce ƿifman. Heo ƿæs on gefeohte sƿa beald swa ænig mann, and þeah hƿæþere hire andƿlite wæs ƿynsum and fæger.
Ac ƿe naƿiht freo ne sindon, for þy þe ƿe næfre ne mihton fram Ƿulfesfleote geƿitan, nefne ƿe þone Hlaford finden and hine ofslean. Se Hlaford hæfþ þisne stede mid searocræftum gebunden, þæt nan man ne mæg hine forlætan. Ƿe sindon her sƿa fuglas on nette, swa fixas on ƿere.
The blog ends there. No sign-off, no “thanks for reading.” Just a few sentences in a language that most of us lost the ability to follow somewhere around the thirteenth century.
So, how far did you get?
Let me take you back through it.
Written English has been remarkably stable over the last 300 years. Spelling was standardized in the mid-1700s, and grammar has barely changed at all. This means that, if you can read Harry Potter (1997–2003), you can read Robinson Crusoe (1719), which is good news to fans of the English novel.
What has changed is the voice.
Blog post became diary entry became travel letter. The format changed much faster than the language. Compare the very first line, “Well, I finally got to the town everyone has been talking about lately” with the line from the 1800 section, “Hunger, that great leveller, makes philosophers of us all, and renders even the meanest dish agreeable.”
They’re both performances of a sort: the 2000s protagonist is performing for his blog’s audience, so the tone is chatty and personal. The 1800s protagonist, with the mind of a Georgian diarist, is performing for posterity, so he philosophizes.
The one visible change in the language itself is the appearance, in the 1700 passage, of the long s (ſ). This wasn’t a different letter, just a variant form of s used in certain positions within a word. It disappeared fully from English printing in the early 19th century, although its use was dwindling even before that, which is why it does not appear in the 1800 passage. It’s a typographic change rather than a linguistic one, but it’s the first unmistakable sign that the text is getting older.
This is where the ground starts to move under our feet.
Before the mid 1700s, there was no such thing as standardized spelling. Writers spelled words as they heard them, or as they felt like spelling them, which is why the 1500s and 1600s sections look so alien, even when the words, underneath the surface, are ones you know.
For another difficulty, take the word vntroubled from the 1600 section. This is our familiar untroubled, but the u is replaced by a v, because u and v were not yet considered separate letters. They were variants of the same letter, used to represent both sounds. The convention was to write v at the beginning of words and u in the middle, which gives us spellings like vnto (unto), euents (events), ouernight (overnight), and howeuer (however). It looks weird at first, but once you know the rule, the words become much more readable.
Another new arrival — or, more accurately, late departure — from the language is the letter thorn (þ), which first appears in the 1400 section. Thorn is simply th. That’s it. Wherever you see þ, read th, and the word will usually reveal itself: þe is the, þei is they, þat is that. If you’ve ever seen a pub called “Ye Olde” anything, that ye is actually þe, an attempt by early printers to write a thorn without having to make an expensive new letter.
Thorn’s companion, yogh (ȝ), is more complicated. It represents sounds that modern English spells as gh or y — so miȝt is might, ȝe is ye. The reasons for this are a story unto themselves.
But the most interesting change in this period isn’t a letter. Rather, it’s a pronoun. Notice the moment in the 1600 section where our blogger meets a farmer and says, “No, I thanke thee.” Then he adds, “I felt no ſhame at thouing ſuch a man as that.”
Thouing. To thou someone, or to use thou when talking to them, was, by the 1600s, a deliberate social statement. Thou was the old singular form of you; you was originally the plural. Over the centuries, you came to be used as a polite singular, much as French uses vous. Gradually, you took over entirely. By Shakespeare’s time (1564–1616), thou survived in two main contexts: intimacy (as in prayer) and insult. Our blogger is being a little rude here. He’s looking down on a man he considers beneath him, and his language gives him a way of making his feelings perfectly clear.
Somewhere in this section — and if you’re like most readers, it happened around 1300 or 1200 — the language crossed a boundary. Up to this point, comprehension felt like it was dropping gradually, but now it’s fallen off a cliff. In one section, you could get by by squinting and guessing; in the next you were utterly lost. You have hit the wall.
There are two reasons for this. The first is vocabulary. As you move backwards in time, the French and Latin loanwords that make up an enormous proportion of the Modern English vocabulary grow fewer and fewer. When you pass 1250, they drop off almost altogether. Where a modern writer would say he underwent torture, a 1200-era writer must say that he suffered pinunge instead.
The farther back you go, the more the familiar Latinate layer of English is stripped away, revealing the Germanic core underneath: a language that looks to modern eyes more like German or Icelandic than anything we’d call English.
The second reason for the difficulty is grammar. Old English (450–1100) was an inflected language: it used endings on nouns, adjectives, and verbs to mark their grammatical roles in a sentence, much as Latin or modern German do. Alongside these endings came a greater freedom in word order, which makes sense given that the endings told you who was doing what to whom.
English lost most of these endings over the course of the period linguists call Middle English (1100–1450), and it tightened its word order as if to compensate. When you look at these final sections, if you can make out the words, you will see the effects of this freer word order. For example, in 1200 we read monige gode men he hæfð fordone ‘many good men he has destroyed’, where we’d expect a Modern English order more like and he has destroyed many good men.
To make matters worse, a few unfamiliar letters also appear: wynn (ƿ) is simply w, eth (ð) means the same as thorn (þ) — both represent th, and ash (æ) represents the vowel in cat and hat.
All of these factors combined likely made it difficult, if not impossible, to follow the plot. So let me tell you what happened. In the 1400 section, the blogger was seized. He was dragged before a creature they called the Master, and the Master was no man. He had the teeth and snout of a wolf, as well as a wolf’s long ears and great tail. His eyes glowed like burning coals. Wulfleet was once Wulfesfleot ‘the Bay of the Wolf.’
In the 1300 section, the Master condemned our hero to death. In the 1200 section, a woman appeared and killed his captor. The Master, however, fled into the darkness. In the 1100 section, the woman revealed her name: Ælfgifu ‘gift of the elves.’ She told the blogger — can we still call him that in 1100? — that they would marry, and she shared the terrible truth about Wulfleet: no one leaves until the Master is dead.
In the 1000 section, they are married. She is, he writes, as bold as any man in battle, and yet fair of face. But they are not free. Together, through the dark streets of Wulfleet, they hunt the Master still.
The English in which I write this paragraph is not the English of fifty years ago, and it will not be the English of fifty years in the future.
Go back far enough, and English writing becomes unrecognisable. Go forward far enough and the same thing will happen, though none of us will be around to notice.
Our poor blogger didn’t notice either, even as he and his language travelled back in time through the centuries. He just kept writing even as he was carried off to somewhere he couldn’t come back from. Some say that, far away in Wulfleet, he’s writing still.
...
Read the original on www.deadlanguagesociety.com »
Date: 01 Apr 88 1620 PST
From: Les Earnest
Subject: The “previous account” referred to in RISKS-6.51
Reading a book got me into early trouble–I had an FBI record by age twelve. This bizarre incident caused a problem much later when I needed a security clearance. I learned that I could obtain one only by concealing my sordid past.
A friend named Bob and I read the book “Secret and Urgent,” by Fletcher Pratt [Blue Ribbon Books; Garden City, NY; 1942] which was an early popular account of codes and ciphers. Pratt showed how to use letter frequencies to break ciphers and reported that the most frequently occurring letters in typical English text are e-t-a-o-n-r-i, in that order. (The letter frequency order of the story you are now reading is e-t-a-i-o-n-r. The higher frequency of “i” probably reflects the fact that _I_ use the first person singular a lot.) Pratt’s book also treated more advanced cryptographic schemes.
Bob and I decided that we needed to have a secure way to communicate with each other, so we put together a rather elaborate jargon code based on the principles described in the book. I don’t remember exactly why we thought we needed it–we spent much of our time outside of school together, so there was ample time to talk privately. Still, you never could tell when you might need to send a secret message!
We made two copies of the code key (a description of how to encrypt and decrypt our messages) in the form of a single typewritten sheet. We each took a copy and carried it on our persons at all times when we were wearing clothes.
I actually didn’t wear clothes much. I spent nearly all my time outside school wearing just a baggy pair of maroon swimming trunks. That wasn’t considered too weird in San Diego.
I had recently been given glasses to wear but generally kept them in a hard case in the pocket of the trousers that I wore to school. I figured that this was a good place to hide my copy of the code key, so I carefully folded it to one-eighth of its original size and stuck it at the bottom of the case, under my glasses.
Every chance I got, I went body surfing at Old Mission Beach. I usually went by streetcar and, since I had to transfer Downtown, I wore clothes. Unfortunately, while I was riding the trolley home from the beach one Saturday, the case carrying my glasses slipped out of my pocket unnoticed. I reported the loss to my mother that night. She chastised me and later called the streetcar company. They said that the glasses hadn’t been turned in.
After a few weeks of waiting in vain for the glasses to turn up, we began to lose hope. My mother didn’t rush getting replacement glasses in view of the fact that I hadn’t worn them much and they cost about $8, a large sum at that time. (To me, $8 represented 40 round trips to the beach by streetcar, or 80 admission fees to the movies.)
Unknown to us, the case had been found by a patriotic citizen who opened it, discovered the code key, recognized that it must belong to a Japanese spy, and turned it over to the FBI. This was in 1943, just after citizens of Japanese descent had been forced off their property and taken away to concentration camps. I remember hearing that a local grocer was secretly a Colonel in the Japanese Army and had hidden his uniform in the back of his store. A lot of people actually believed these things.
About six weeks later, when I happened to be off on another escapade, my mother was visited by a man who identified himself as an investigator from the FBI. (She was a school administrator, but happened to be at home working on her Ph.D. dissertation.) She noticed that there were two more men waiting in a car outside. The agent asked a number of questions about me, including my occupation. He reportedly was quite disappointed when he learned that I was only 12 years old.
He eventually revealed why I was being investigated, showed my mother the glasses and the code key and asked her if she knew where it came from. She didn’t, of course. She asked if we could get the glasses back and he agreed.
My mother told the investigator how glad she was to get them back, considering that they cost $8. He did a slow burn, then said “Lady, this case has cost the government thousands of dollars. It has been the top priority in our office for the last six weeks. We traced the glasses to your son from the prescription by examining the files of nearly every optometrist in San Diego.” It apparently didn’t occur to them that if I were a real Japanese spy, I might have brought the glasses with me from headquarters.
The FBI agent gave back the glasses but kept the code key “for our records.” They apparently were not fully convinced that they were dealing just with kids.
Since our communication scheme had been compromised, Bob and I devised a new key. I started carrying it in my wallet, which I thought was more secure. I don’t remember ever exchanging any cryptographic messages. I was always ready, though.
A few years later when I was in college, I got a summer job at the Naval Electronics Lab, which required a security clearance. One of the questions on the application form was “Have you ever been investigated by the FBI?” Naturally, I checked “Yes.” The next question was, “If so, describe the circumstances.” There was very little space on the form, so I answered simply and honestly, “I was suspected of being a Japanese spy.”
When I handed the form in to the security officer, he scanned it quickly, looked me over slowly, then said, “Explain this”–pointing at the FBI question. I described what had happened. He got very agitated, picked up my form, tore it in pieces, and threw it in the waste basket.
He then got out a blank form and handed it to me, saying “Here, fill it out again and don’t mention that. If you do, I’ll make sure that you never get a security clearance.”
I did as he directed and was shortly granted the clearance. I never again disclosed that incident on security clearance forms.
On another occasion much later, I learned by chance that putting certain provocative information on a security clearance form can greatly speed up the clearance process. But that is another story.
Edited and converted to HTML by Dan Bornstein.
...
Read the original on milk.com »
The state of coding agents can be summed up by this fact:
Claude spent $20k on an agent swarm implementing (kinda) a C compiler in Rust, but desktop Claude is an Electron app.
If you’re unfamiliar, Electron is a coding framework for building desktop applications using web tech, specifically HTML, CSS, and JS. What’s great about Electron is it allows you to build one desktop app that supports Windows, Mac, and Linux. Plus it lets developers use existing web app code to get started. It’s great for teams big and small. Many apps you probably use every day are built with Electron: Slack, Discord, VS Code, Teams, Notion, and more.
There are downsides though. Electron apps are bloated; each runs its own Chromium engine. The minimum app size is usually a couple hundred megabytes. They are often laggy or unresponsive. They don’t integrate well with OS features.
But these downsides are dramatically outweighed by the ability to build and maintain one app, shipping it everywhere.
But now we have coding agents! And one thing coding agents are proving to be pretty good at is cross-platform, cross-language implementations given a well-defined spec and test suite.
On the surface, this ability should render Electron’s benefits obsolete! Rather than write one web app and ship it to each platform, we should write one spec and test suite and use coding agents to ship native code to each platform. If this ability is real and adopted, users get snappy, performant, native apps from small, focused teams serving a broad market.
But we’re still leaning on Electron. Even Anthropic, one of the leaders in AI coding tools, who keeps publishing flashy agentic coding achievements, still uses Electron in the Claude desktop app. And it’s a slow, buggy, bloated app.
So why are we still using Electron and not embracing the agent-powered, spec driven development future?
For one thing, coding agents are really good at the first 90% of dev. But that last bit — nailing down all the edge cases and continuing support once it meets the real world — remains hard, tedious, and requires plenty of agent hand-holding.
Anthropic’s Rust-based C compiler slammed into this wall after screaming through the bulk of the tests:
The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
The resulting compiler is impressive, given the time it took to deliver it and the number of people who worked on it, but it is largely unusable. That last mile is hard.
And this gets even worse once a program meets the real world. Messy, unexpected scenarios stack up and development never really ends. Agents make it easier, sure, but hard product decisions keep surfacing, and those still require a human to make the call.
Further, with 3 different apps produced (Mac, Windows, and Linux), the surface area for bugs and support increases 3-fold. Sure, there are local quirks with Electron apps, but most of them are mitigated by the common wrapper. Not so with native!
A good test suite and spec could enable the Claude team to ship a Claude desktop app native to each platform. But the resulting overhead of that last 10% of dev and the increased support and maintenance burden will remain.
For now, Electron still makes sense. Coding agents are amazing. But the last mile of dev and the support surface area remains a real concern.
...
Read the original on www.dbreunig.com »
High-efficiency C++/CUDA LLM inference engine. Runs Llama 70B on a single RTX 3090 (24GB VRAM) by streaming model layers through GPU memory via PCIe, with optional NVMe direct I/O that bypasses the CPU entirely.
3-tier adaptive caching auto-sizes from hardware: VRAM-resident layers (zero I/O) + pinned RAM (H2D only) + NVMe/mmap fallback. Achieves 83x speedup over mmap baseline for 70B on consumer hardware (RTX 3090 + 48 GB RAM).
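The README doesn’t show the sizing logic, but the tier split is simple enough to sketch. A minimal, hypothetical illustration of how layers might be assigned to tiers from the hardware budgets (names and policy are my assumptions, not the project’s actual code):

// Hypothetical sketch of 3-tier layer placement (not ntransformer's code).
// Tier A: resident in VRAM (zero I/O per token)
// Tier B: pinned host RAM  (one H2D copy per token)
// Tier C: NVMe / mmap      (read from storage, then staged and copied)
#include <cstddef>

enum class Tier { VramResident, PinnedRam, NvmeMmap };

Tier assign_tier(std::size_t layer_index, std::size_t layer_bytes,
                 std::size_t vram_budget, std::size_t pinned_budget) {
    const std::size_t vram_layers   = vram_budget / layer_bytes;
    const std::size_t pinned_layers = pinned_budget / layer_bytes;
    if (layer_index < vram_layers)                 return Tier::VramResident;
    if (layer_index < vram_layers + pinned_layers) return Tier::PinnedRam;
    return Tier::NvmeMmap;
}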
Bottleneck is PCIe H2D bandwidth at Gen3 x8 (~6.5 GB/s). Q4_K_M fits 10 more layers in VRAM (36 vs 26), reducing tier B transfers. Layer skip (cosine similarity calibration) eliminates 20/80 layers per token with minimal quality loss.
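The calibration procedure isn’t spelled out above, but cosine-similarity layer skipping is typically based on how little a layer changes the hidden state on calibration prompts. A minimal sketch under that assumption (not the project’s actual code):

#include <cmath>
#include <cstddef>
#include <vector>

// Cosine similarity between the hidden state entering and leaving a layer.
float cosine_similarity(const std::vector<float>& a, const std::vector<float>& b) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return static_cast<float>(dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12));
}

// A layer whose output is nearly parallel to its input contributes little;
// mark it skippable when similarity exceeds --skip-threshold (e.g. 0.98).
bool layer_is_skippable(const std::vector<float>& hidden_in,
                        const std::vector<float>& hidden_out,
                        float skip_threshold) {
    return cosine_similarity(hidden_in, hidden_out) >= skip_threshold;
}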
* Zero external dependencies beyond CUDA Toolkit (no PyTorch, no cuBLAS)
# Build
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER=gcc-14 \
-DCMAKE_CXX_COMPILER=g++-14 \
-DCMAKE_CUDA_COMPILER=/usr/local/cuda-13.1/bin/nvcc
cmake --build . -j
# Run (resident mode — model fits in VRAM)
./ntransformer -m /path/to/llama-8b-q8_0.gguf -p "Hello" -n 128
# Run (streaming mode — model larger than VRAM)
./ntransformer -m /path/to/llama-70b-q6_k.gguf -p "Hello" -n 32 --streaming
# Run with layer skip (fastest for 70B)
./ntransformer -m /path/to/llama-70b-q4_k_m.gguf -p "Hello" -n 32 --streaming --skip-threshold 0.98
# Self-speculative decoding (VRAM layers as draft, no extra model)
./ntransformer -m /path/to/llama-70b-q6_k.gguf -p "Hello" -n 32 --self-spec --draft-k 3
# Chat mode
./ntransformer -m /path/to/model.gguf --chat
# Benchmark
./ntransformer -m /path/to/model.gguf --benchmark -n 64
Running ntransformer with NVMe direct I/O requires system-level modifications. An automated setup script handles all of them:
# Full first-time setup (interactive, creates backups)
sudo ./scripts/setup_system.sh
# Check current system state (no changes)
sudo ./scripts/setup_system.sh --check
# NVMe-only (run after every reboot)
sudo ./scripts/setup_system.sh --nvme-only
* Above 4G Decoding: ON (required for 64-bit BAR mapping)
* IOMMU: OFF (or leave on — the script adds the kernel parameter)
WARNING: This project performs low-level PCIe operations (GPU MMIO writes to NVMe controller registers, userspace NVMe command submission, VFIO device passthrough). While tested extensively on RTX 3090 + WD SN740, incorrect configuration or hardware incompatibilities could theoretically cause:
* Data loss on the NVMe device used for raw block storage
Never use your boot drive for NVMe direct I/O. Always use a dedicated secondary NVMe. The authors are not responsible for hardware damage or data loss. Use at your own risk.
For models that don’t fit in VRAM, the NVMe backend eliminates the CPU from the data path:
# Build with NVMe support (requires gpu-nvme-direct library)
cmake .. -DCMAKE_BUILD_TYPE=Release -DUSE_GPUNVME=ON \
-DCMAKE_C_COMPILER=gcc-14 -DCMAKE_CXX_COMPILER=g++-14 \
-DCMAKE_CUDA_COMPILER=/usr/local/cuda-13.1/bin/nvcc
cmake --build . -j
# Write GGUF model to NVMe raw device
sudo ./scripts/restore_nvme.sh # ensure kernel driver is bound
sudo dd if=model.gguf of=/dev/nvme0n1 bs=1M oflag=direct status=progress
# Bind NVMe to VFIO for userspace access
sudo ./scripts/setup_nvme.sh # loads VFIO, forces D0, enables BusMaster
# Run with NVMe backend
sudo GPUNVME_PCI_BDF=0000:01:00.0 GPUNVME_GGUF_LBA=0 \
./build/ntransformer -m /path/to/model.gguf -p "Hello" -n 32 --streaming
# Restore NVMe to kernel driver when done
sudo ./scripts/restore_nvme.sh
* The GGUF model file is written to raw NVMe blocks via dd
* During inference, each layer (~670 MB for 70B Q6_K) is read via 670 NVMe commands in ~202 ms
* Data lands in CUDA pinned staging memory, then async DMA to GPU compute buffers (see the sketch below)
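The NVMe DMA itself is handled by the gpu-nvme-direct library, but the pinned-staging half of the path is the standard CUDA pattern. A rough sketch of that half (struct, names, and buffer policy are my assumptions, not the project’s actual code):

#include <cuda_runtime.h>
#include <cstddef>
#include <cstring>

// One staging slot: pinned host memory plus a stream, so the H2D copy of the
// next layer can overlap with compute on the current one.
struct LayerStager {
    void*        h_pinned = nullptr;
    std::size_t  capacity = 0;
    cudaStream_t stream   = nullptr;

    void init(std::size_t layer_bytes) {
        capacity = layer_bytes;
        cudaHostAlloc(&h_pinned, capacity, cudaHostAllocDefault);
        cudaStreamCreate(&stream);
    }

    // src: layer bytes delivered by the NVMe/mmap read; dst: device compute buffer.
    void upload(const void* src, void* dst, std::size_t bytes) {
        std::memcpy(h_pinned, src, bytes);               // land in pinned RAM
        cudaMemcpyAsync(dst, h_pinned, bytes,
                        cudaMemcpyHostToDevice, stream); // async DMA to the GPU
    }

    void wait() { cudaStreamSynchronize(stream); }       // before this layer's kernels run
};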
...
Read the original on github.com »
A startup called Taalas recently released an ASIC chip running Llama 3.1 8B (3/6-bit quant) at an inference rate of 17,000 tokens per second. That’s like writing around 30 A4-sized pages in one second. They claim it’s 10x cheaper in ownership cost than GPU-based inference systems and uses about 10x less electricity. And yeah, it’s about 10x faster than state-of-the-art inference.
I tried to read through their blog, and they’ve literally “hardwired” the model’s weights on the chip. Initially, this didn’t sound intuitive to me. Coming from a software background, with a hobbyist understanding of LLMs, I couldn’t wrap my head around how you just “print” an LLM onto a chip. So I decided to dig into multiple blog posts, LocalLLaMA discussions, and hardware concepts. It was much more interesting than I had thought. Hence this blog post.
Taalas is a 2.5-year-old company, and this is their first chip. Taalas’s chip is a fixed-function ASIC (Application-Specific Integrated Circuit). Kinda like a CD-ROM, a game cartridge, or a printed book, it only holds one model and cannot be rewritten.
LLMs consist of sequential layers. For example, Llama 3.1 8B has 32 layers. The task of each layer is to further refine the input. Each layer is essentially a set of large weight matrices (the model’s ‘knowledge’).
When a user inputs a prompt, it is converted into a vector of numbers, aka embeddings.
On a normal GPU, the input vector enters the compute cores. The GPU then fetches the Layer 1 weights from VRAM/HBM (the GPU’s RAM), does the matrix multiplication, and stores the intermediate results (aka activations) back in VRAM. Then it fetches the Layer 2 weights and the previous result, does the math, and saves it to VRAM again. This cycle continues until the 32nd layer, just to generate a single token. Then, to generate the next token, the GPU repeats this entire 32-layer journey.
So, due to this constant back-and-forth the memory bus induces latency and consumes significant amounts of energy. This is the memory bandwidth bottleneck, sometimes loosely called the Von Neumann bottleneck or the “memory wall.”
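To see why this matters, here is a back-of-the-envelope calculation (the numbers are my assumptions, not Taalas’s): even with infinitely fast compute, streaming all 8B weights across the memory bus once per generated token caps decode speed far below 17,000 tokens per second.

#include <cstdio>

int main() {
    // Assumed numbers, for illustration only.
    const double params          = 8e9;    // Llama 3.1 8B
    const double bytes_per_param = 1.0;    // roughly 8-bit weights
    const double bandwidth_Bps   = 1.0e12; // ~1 TB/s of GPU memory bandwidth

    // Every decoded token has to stream essentially all weights once.
    const double bytes_per_token = params * bytes_per_param;       // ~8 GB
    const double tokens_per_sec  = bandwidth_Bps / bytes_per_token;

    std::printf("Memory-bandwidth ceiling: ~%.0f tokens/s\n", tokens_per_sec);
    return 0;  // prints ~125 tokens/s, two orders of magnitude below 17,000
}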
Taalas sidesteps this wall entirely. They just engraved the 32 layers of Llama 3.1 sequentially on a chip. Essentially, the model’s weights are physical transistors etched into the silicon.
Importantly, they also claim to have invented a hardware scheme where they can store a 4-bit value and perform the multiplication involving it using a single transistor. I will refer to it as their ‘magic multiplier’.
Now, when the user’s input arrives, it gets converted into a vector and flows into the physical transistors making up Layer 1. The multiplication happens via their ‘magic multiplier’, and instead of the result being saved in VRAM, the electrical signal simply flows down physical wires into the Layer 2 transistors (via pipeline registers, from what I understand). The data streams continuously through the silicon until the final output token is generated.
They don’t use external DRAM/HBM, but they do use a small amount of on-chip SRAM. Why SRAM? Due to cost and complexity, manufacturers don’t mix DRAM and logic on the same die; that’s why GPUs have separate VRAM. (Also, SRAM isn’t facing a supply chain crisis; DRAM is.)
Taalas uses this on-chip SRAM for the KV cache (the temporary memory/context window of an ongoing conversation) and to hold LoRA adapters for fine-tuning.
Doesn’t hardwiring the weights mean building a brand-new chip for every model? Technically yes, and I read lots of comments saying that. But Taalas designed a base chip with a massive, generic grid of logic gates and transistors. To map a specific model onto the chip, they only need to customize the top two layers/masks. It’s still slow, but much faster than building a chip from the ground up.
It took them two months to develop the chip for Llama 3.1 8B. In the AI world, where one week feels like a year, that’s super slow. But in the world of custom chips, this is supposed to be insanely fast.
As someone stuck running local models on a laptop without a massive GPU, I am keeping my fingers crossed for this type of hardware to be mass-produced soon.
...
Read the original on www.anuragk.com »
I first took a polygraph when I applied to the CIA and went through the applicant screening process.
To prepare for the test, I read A Tremor in the Blood by David T. Lykken. The book described the use of control versus relevant questions as well as countermeasures such as butt-clenching. I had no desire to use countermeasures. I wasn’t out to “beat” the test: I wanted to understand how it worked. A future colleague at the Agency advised me, “Spill your guts.” I thought it was good advice, and I planned to follow it.
I knew I was taking a risk in applying to the Agency. I worked as a defense contractor on a project for the CIA, so I already held CIA TS/SCI clearances. If I failed the polygraph, I could lose my clearances, and I might lose my job as well.
I flew to Northern Virginia for two days of pre-employment screening. A bus took us from the hotel to a nondescript building in Vienna. The examiner was a young woman. She asked me to sign a consent form and told me not to talk about the polygraph with anyone else.
In the pretest interview, the examiner asked, “Did you make any false statements on your application?” I said, “Yes. Under height and weight, I put 130 lb. I actually weigh 134 lbs.” She laughed. Then she asked if I’d read about polygraphs. I said I’d just finished A Tremor in the Blood. She claimed she’d never heard of it. I was surprised. It’s an important book about her field, I would have thought all polygraphers knew of it.
She wired me up, and the polygraph began. My hand turned purple, which hurt terribly. My body twitched from the exaggerated stillness the test required. Halfway through, the examiner left the room, saying she had to show the charts to her supervisor. I came to think of this as the “What will it take to get you to buy the car?” part of the test. I waited twenty minutes or so, resisting the urge to press my nose against the one-way mirror and peer into it through cupped hands.
The examiner came back. “You’re having a problem with one of the questions. Do you know which one?” I had no idea. I’d answered all of them truthfully. She said, “How about, ’Have you ever lied to your boss?’” I said I hadn’t. She pressed me until I came up with an occasion when I’d passed my boss in the hall. She said, “How are you?” and I said, “Fine.” But I wasn’t fine, I was in the middle of a cancer scare.
I failed the poly and was told to come back the next day. I couldn’t understand why I hadn’t passed. I’d spilled my guts, and I hadn’t used countermeasures.
On the bus back to the hotel, a woman was sobbing, “Do they count something less than $50 as theft?” I felt bad for her because she was crying, but I wondered why a petty thief thought she could get into the Agency.
That evening, the other applicants went to the nearby Tysons Corner mall to go shopping. I didn’t feel festive enough to join them so I withdrew to my room. I ordered from room service but couldn’t eat my dinner. I sat by the window for hours, looking into the darkness. She’d seen something inside me I hadn’t known about. I’d always thought I was a good person. Now I wasn’t sure.
The next morning, I rode back to the same crumbling building where I’d been polygraphed the day before. The examiner said, “Now that you’ve had a chance to think about it, is there anything you’d like to say?” He didn’t need to ask me twice. “You bet there is. I did my part, now I expect you to do yours.” It wasn’t until late that afternoon, when I was waiting for my plane at Dulles, that I realized, “Is there anything you’d like to say?” does not mean, “Please tell us all our faults.”
The examiner wired me up. He began with what he called a calibration test. He took a piece of paper and wrote the numbers one through five in a vertical column. He asked me to pick a number. I picked three. He drew a square around the number three, then taped the paper to the back of a chair where I could see it. I was supposed to lie about having selected the number three.
The test began. He asked, “Is the number one? Is the number two? Is the number three?” I said “No,” and butt-clenched. “Nice strong response!” he said.
I wasn’t hiding anything, so I had no reason to use countermeasures. On the other hand, the analytical part of me enjoyed poking at the test to figure out how it worked. And I was still mad about having failed the previous day, so I was messing with him. Curiosity satisfied, I didn’t try the butt-clench again, not on that day or ever.
During the real test, the examiner said, “Your breathing is unnatural.” He described a kid in little league who kept swinging the bat wrong, and then suddenly got it. If I just kept trying, I could get it too. It took almost four hours, but by the end of the session, he told me I’d passed. I’d just cleared the last hurdle to joining the CIA.
I entered on duty and began a week of orientation. On the first morning, we introduced ourselves and said a few words about what we’d be doing at the Agency. Four guys sitting together at a table up front identified themselves as polygraphers. Everyone else in the room hissed. It wasn’t friendly teasing, either. At lunch, no one would sit with them.
A few years into my Agency career, I took a battery of vocational and aptitude tests including the MMPI, a personality inventory. The MMPI results came back, and they said I fell on the extreme end of the honesty spectrum, or in the words of the Agency psychiatrist, “You’re honest to the point of being naïve.” I was kind of offended. Naïve? Who are you calling naïve? On the other hand, it was nice to have that in my official file.
CIA employees were required to take a polygraph every five years. We all did it, but there was a lot of complaining.
I never heard that anyone worried about losing their job to the poly. It was said that new applicants failed in large numbers, but once you were in, you were in. The re-poly could be unpleasant, though. If you failed, you had to keep taking it. It was said that there was an upper manager who just couldn’t pass, no matter how many times he tried. After something like seven attempts, Polygraph gave up and stopped calling him back. The manager remained in his job.
We weren’t supposed to discuss the polygraph among ourselves, but of course we did. When people came back from a poly, they talked about how it had gone. A woman who’d never seen an illegal drug in her life was accused of being a major drug user. Someone who hated computers so much that she had the secretary print out her emails so she could read them was interrogated for hours about hacking into Agency networks.
A pattern emerged. In a normal polygraph, there was often a gross mismatch between a person and the accusations made against them. I don’t think the officials at Polygraph had any idea how unintentionally humorous this was. Not to the person it happened to, of course, but the rest of us found it hysterically funny.
Once, the examiner got in my face and shouted, “Admit it, you’re deeply in debt. Creditors are pounding on your door!” I said, “You’ve just revealed to me that you haven’t bothered to pull my credit report. Are you lazy, or are you cheap?” I offered to pay the $15 fee myself, but he didn’t take me up on it.
Another time, the examiner accused me of working for a foreign intelligence service and traveling overseas to meet my handler. I rolled my eyes. “Do you want to see my passport? It’s been expired for nine years.” No, he didn’t want to see my passport.
I told my office mates I’d figured out why the accusations were so consistently off-the-wall. Polygraph must have a question of the day. Everyone who went in on Monday would be accused of dealing drugs. Tuesday was espionage day, Wednesday was marital infidelity day, and so on.
Then Aldrich Ames was arrested, and polygraphs became more brutal. People who’d never had trouble passing were being called back two and three times. Thank you, Mr. Ames.
I overheard a fragment of conversation at CIA Headquarters, “I thought I was a good person, but after that last poly, I’m not so sure.” It was hard on people.
Because of Ames, the Agency introduced a policy of random polygraphs. I knew someone who completed a five-year poly. A few months later, he was called back for a random one.
I’d been at the Agency for ten years when I went through the reinvestigation poly again.
The test was administered by an inexperienced young woman. In the pretest interview, she asked me a question. I answered truthfully. She asked again as if she was looking for a better answer. Call me a liar. It made me furious.
Well into the session, she said I was failing. I was so frustrated, I started to cry. I knew I could pass it if I just had enough time, but she had to run an errand. She failed me and ended the session early.
I wrote a letter of complaint to the Chief of Polygraph saying I didn’t like the way I’d been treated. He sent me a letter of apology. He said he’d reviewed the tapes and that he was sorry about the abuse. He said the polygrapher had been reprimanded.
I was surprised to get a response of any kind, but a letter of apology was astonishing. Although the cynical part of me might call it a “please don’t sue me” letter.
But I was also puzzled. The apology was for the wrong thing. The Director seemed to think I’d gone through a particularly abusive polygraph. I hadn’t. It was a perfectly ordinary polygraph, no different from any other. I just wrote to say that I didn’t like polygraphs.
I had to take the test again because I’d failed the first one. The second examiner was experienced and had a mild disposition. I passed without difficulty.
Over the course of many years at CIA, I formed an impression that a typical polygraph involves an inexperienced examiner who grills you harshly and then fails you, followed by a re-poly with a more experienced examiner who guides you through it with no fuss. I’ve had two polygraphs in which I passed on the first try. On both occasions, an experienced examiner conducted the test.
I worked at CIA for eleven years. It was a terrific experience, and I count those years as among the happiest in my life. I left only because I got married and had a baby. CIA is many things, but family friendly is not one of them.
I joined a small defense contractor known for its work-life balance. The company supported most of the three-letter agencies, and I settled into doing the same sort of work I’d done before.
My first assignment was on a National Reconnaissance Office (NRO) project. The NRO clearances required a poly, which I agreed to. The test was administered by a woman with many years’ experience. She told me I’d passed. It’s possible to have a polygraph that isn’t confrontational and doesn’t leave you feeling violated. It’s rare, though.
I’d been supporting an FBI project for several years when the phone rang in the kitchen. Someone from the FBI asked me to come in for a routine polygraph, a requirement to keep my clearances up to date.
To prepare, I read the 2002 National Academy of Sciences report. It was an eye-opener, even though it confirmed what I’d already begun to suspect, that the polygraph didn’t work.
I arrived for my polygraph appointment, which would be administered in a building across the street from my children’s orthodontist.
I stood in the marble lobby, waiting for the examiner to come and collect me. The 2002 NAS report made me cynical. In the interview room, I thought, “Don’t look at the man behind the curtain.”
The examiner asked if I’d ever visited the anti-polygraph sites online. I said yes, that’s where I found the 2002 NAS report. He said he’d never heard of it. He also said there was no such thing as a control question. I hate being lied to; it makes me angry.
In the pretest interview, he asked how many polygraphs I’d had before this one. I wasn’t sure, but I thought it was probably six or seven. He asked what medications I took. I listed everything, including a cortisone cream for a patch of eczema on my hand. He went on and on about my skin condition. What does this have to do with national security, and why is it any of your business? Maybe violating people’s boundaries is a way to establish dominance.
The examiner wired me up, and we did the card trick. He drew a vertical column of numbers, told me to pick one, drew a box around it, and pinned it up where I could see it.
It occurred to me that we were playing a guessing game in which the examiner knew the answer before the game began. I’d have been more impressed if he’d had to discover my number using only the polygraph equipment and/or his ability to read people. I was tempted to suggest it, but I didn’t think the idea would be well received.
The test proceeded normally. The examiner left the room. When he came back, he didn’t meet my eyes, and the muscles of his face were tight. “The test shows deception.” He was right. I had been deceptive, but only about one thing. I hadn’t told him I knew the polygraph didn’t work.
The examiner hammered on me to come clean. I kept repeating, “No, I can’t think of anything else.” I was tempted to make something up, just to make it stop, but I’m not comfortable lying.
At the end of the interview, the examiner looked at me, gloating. “You claim you’ve taken seven polygraphs before today, but later, you said it was only six.” That’s all you’ve got on me? I’m underwhelmed.
Being able to recognize interrogation techniques didn’t make me immune to them. Exhausted, I hung my head, feeling like he’d broken me under interrogation. I’ve always found the shame of being broken was the worst thing about being polygraphed.
A few days later, I was still seriously rattled. I didn’t realize how badly the polygraph had affected me until I plowed through a red light and almost hit another car. I’d never run a red light before. I couldn’t think what had gotten into me.
I told a relative I’d had a really hard time with the polygraph. There was an embarrassed silence, followed by a rapid change of subject. What did you do wrong, and why don’t you own up to it? This from someone who’s known me from birth and has always taken my side. It shows how strongly people in our culture believe in the polygraph.
I wrote to my congressman and asked him to support legislation banning the polygraph. I said it was a civil rights issue to subject an entire workforce to a brutal police-style interrogation in the absence of probable cause, especially if they might be harmed by it.
Although I failed the FBI polygraph, I remained on the project.
Much to my surprise, I was granted additional clearances and assigned to more highly classified work than I’d been doing before the polygraph.
The work dried up, and I moved on to something else. Seven months after the failed FBI poly, I was summoned to FBI Headquarters “to talk about your polygraph.”
It took several hours to drive downtown and find a place to park. I found FBI headquarters. Two burly agents met me at the door. I wondered if they had the power to arrest me. My hands shook. It was like a scene from a movie where government officials are trying to be intimidating. It worked. As we walked across the lobby, I thought I was going to faint.
They escorted me upstairs. Neither of them spoke. They took me to a small room. A folder lay open on a desk. Papers spilled from it.
I said, “Does it matter that I was laid off the project four months ago?”
It was like watching method actors break character. “We’re sorry, we didn’t mean to bring you all the way down here for nothing. Maybe you could visit the spy museum? It’s right across the street.”
Years later, I joined a DIA project which required a CI (counterintelligence) polygraph. I liked the work and the people doing it, so I agreed.
I started in the summer. In late September, Polygraph asked me to come in. I was no longer afraid of them. I didn’t doubt the apparatus took accurate measurements of heart rate, blood pressure, respiration, and perspiration, but I’d stopped believing the examiner could infer a person’s emotions or thoughts from the data in the tracings.
To prepare for the test, I read the DoD Polygraph Institute Interview and Interrogation handbook (PDF), which describes techniques like “offer hope, take it away.” That book is evil. I made copies and gave them to my writer friends.
I now knew the examiners worked from a script like the one in the handbook. On page 24, the script calls for empathetic listening to put the subject at ease. On page 53, it says to change gears and confront the subject with evidence of a crime. I thought that since I was familiar with the script, the polygrapher would lose his power over me.
I arrived for the appointment. The examiner asked if I was nervous, and I said no (true). In the pretest interview, he asked if I’d visited the anti-polygraph websites to learn how to beat the test. I said I did read the sites, but I was looking for articles on psychological damage from interrogation and how to recover from it. Beating the polygraph was of no interest to me (true).
The real test began. In my mind, I turned the pages of the script. The examiner excused himself and stepped outside. He returned, and whatever warmth there’d been in his manner had vanished. Oh c**p, we’re on page 53. He accused me of deception. I hunkered down until it was over. I reminded myself, Whatever you do, don’t make admissions. I didn’t have anything to confess, but if the pressure were bad enough, I might be tempted to make things up.
At the end of the test, the examiner told me I’d failed. But, and this is huge, for the first time I didn’t leave the poly broken and weeping. I was annoyed, but I hadn’t been harmed.
Two weeks later, they asked me to come in for a retest. The examiner was different, but the test was the same. Everything went smoothly, and I assumed I’d passed. The examiner said he needed to run the chart by his supervisor, and I’d hear back later.
Weeks went by, and then months. The computer account I would get as soon as I passed the poly was still in limbo, which was not a good sign.
During the quiet time over the Christmas break, I came to believe I’d lost the ability to pass a polygraph. By this time, I’d failed three in a row, two for DIA and the one for FBI.
I wondered if it was because I’d grown cynical. I now thought of the polygraph apparatus as a colander attached to a Xerox machine. Even so, I’d tried to cooperate. I’d answered their questions truthfully, and I hadn’t used countermeasures. But I no longer feared the examiners, and I no longer broke under interrogation. According to the handbook, breaking the subject is an important part of the process.
On the first day back from Christmas break, my department head stopped by to give me my annual salary review. It was my highest raise in five years. She said the people on my project liked my work, and they liked me.
Later the same day, I got a call from Polygraph asking me to come in the next morning to answer the polygraph questions under oath rather than wired up.
I had no problem with that. As a mentor at the Agency said about testifying before Congress, “Under oath or not under oath. Either way you’re telling the truth, so what’s the difference?”
The two polys I’d already taken for DIA had been CI (counterintelligence). But halfway through the interview, the examiner asked, “How often do you look at pornography?” I blinked with surprise. “Excuse me?” That question didn’t belong on a CI poly; that was a Lifestyle question.
When the interview ended, the examiner said, “I believe you” in a voice that lacked conviction. Then he added, “I’m going to try to get you another polygraph.”
“I don’t want another.” I meant it.
Over the next few days, I jumped whenever the phone rang. Weeks went by before I began to relax. And then my long-delayed computer account was approved. As I understood how the system worked, the polygraph had just been adjudicated in my favor.
A few days later, I was sitting at my desk eating lunch when a scheduling clerk from Polygraph called. “You missed your appointment this morning,” he said. Next time, you might consider telling me about it ahead of time. He said, “Don’t even worry about it. Everyone makes mistakes.” My jaw dropped. He just called me a liar.
The clerk wanted me to come in the next day. I was seriously rattled. I’d already decided I was not going to take another polygraph, ever. I stalled. And then I remembered my newly granted computer accesses. I was already cleared. Probably the paperwork hadn’t caught up yet. I said I needed a few days to figure out if the poly was still necessary, and I would get back to him.
I checked with Security. They said, “No, you haven’t been adjudicated yet.”
I couldn’t get out of it, but I could put it off until it was OBE (overtaken by events). In a few months, the project would move to a new location beyond my commuting range. I planned to stay until right before the move and then find something else. I didn’t have to refuse the poly, I just had to conduct a delaying action.
But Polygraph was insistent, and I wasn’t sure I could hold them off much longer. I asked about withdrawing my application for DIA clearances, but was advised to watch and wait.
Or, I could leave the project now and find other work. My TS/SCI from CIA was still active, but I knew that eventually CIA would want me to take a polygraph. I also held a Top Secret clearance from the Air Force. The Air Force TS, a “vanilla TS” (not SCI) was based on an SF-86 background investigation. It did not require a polygraph. And at the very worst, I could do unclassified work. My company had a large number of unclassified projects, and many had work for analysts.
A few days after I’d told the clerk I’d get back to him, I gave a presentation about the work I’d been doing on my task. It was well received, and I stepped down from the podium covered in glory. As I left the meeting, the program manager pulled me aside and said my task had lost its funding. It was an occupational hazard of defense contracting, and not anything sinister.
I wasn’t happy about being cut from the project, but it did solve my polygraph dilemma. If I wasn’t on the project anymore, I didn’t need to apply for DIA clearances, and I didn’t need to take the polygraph.
Around noon, the clerk from Polygraph called again. I told him I didn’t have to take the poly anymore because I’d been laid off the project. In mid-afternoon, he called back. He said he’d made a few phone calls and learned that I hadn’t been laid off from my company, after all. Wonderful, call me a liar.
He pressed me to schedule another poly. I said no. His voice turned whiny. “You have to do it!” I dug in my heels. “I’ve already told you no.” He slammed down the phone.
I’d just refused a polygraph. I felt like Neville Longbottom when he drew the sword of Gryffindor and advanced on Lord Voldemort. I was filled with righteous indignation, and it gave me courage.
For the rest of the day, I was peppered with emails and phone calls summoning me to the offices of upper managers. Some of them were so far up the chain, I’d never heard their names before. They were uniformly kind to me. I hadn’t known it, but I wasn’t the first person in our division to refuse the polygraph. Polygraph conscientious objectors—who knew that there was such a thing?
Polygraph told my management I’d lied about being laid off. It caused a major flap. My project badges were taken from me, even the unclassified ones, and I was debriefed from all my DIA clearances.
To their credit, DIA didn’t tell any other agencies they’d taken my badge and debriefed me. Weeks later, my CIA clearances were still active.
Six weeks went by. I put in for a few days of leave to take the kids to an alumni event at my university. I worked half a day and then went home to pack.
Sometime after 4:00 pm, when I was loading the last suitcase into the car, the department secretary called to say I was late for a 4:00 meeting with my department head. This was the first I’d heard of it. The meeting had been put on the calendar after I’d left for the day. I said I was sorry I couldn’t be there, but I’d be back in the office first thing on Monday.
Just before close of business Monday, I was summoned to another meeting with the department head. When I arrived, my boss was sitting in her office with a woman from Human Resources. As a general rule, if your boss wants to see you and HR has been asked to sit in, you know it’s going to end badly.
My department head put a piece of paper in front of me. It said I’d agreed to take a polygraph as a condition of working on the DIA project, but when they tried to schedule it, I canceled the appointment. As a result, I was cut from the project.
“No, I took two polygraphs. I turned down the third because I wasn’t on the project anymore.” Although I now thought of myself as a polygraph conscientious objector, and I would have refused whether or not I was still on the project.
The rest of the letter said the company would begin termination proceedings against me. I was eight years from retirement. I wasn’t counting the days until retirement. I liked going to work. Even when I was between assignments and not getting paid, I still came into the office and put in a full day.
I spoke to a lawyer. She said I lived in an “at will” state, which means employees can be dismissed for any (legal) reason, or for no reason at all. I was being terminated for refusing the polygraph, and it was legal.
I decided to resign rather than fight.
...
Read the original on antipolygraph.org »
CXMT halves DDR4 prices as YMTC gains ground in NAND, raising concerns over Korea’s legacy exposure
Samsung Electronics and SK hynix are locked in a race to mass-produce sixth-generation high-bandwidth memory, but Chinese rivals are making gains elsewhere — flooding the legacy DRAM market with chips priced at roughly half the going rate.
According to industry sources on Friday, China’s top DRAM manufacturer CXMT has been offering older-generation DDR4 chips at about half the prevailing market rate. The move comes as global supply shortages have driven prices sharply higher, allowing the company to aggressively push legacy products for mobile devices and PCs in a bid to boost market share.
DDR4 remains a mainstay component in devices such as PCs and TVs, and has risen in price recently.
Data from DRAMeXchange showed that as of end-January, the average fixed contract price of PC DRAM DDR4 8Gb stood at $11.50, up 23.7 percent from $9.30 a month ago. Compared with $1.35 a year earlier, the price has jumped more than eightfold. DRAM prices have climbed for 10 consecutive months, marking the highest level since the market tracker began compiling data in June 2016.
Against this backdrop, cut-price Chinese chips are proving tempting. US hardware firms HP and Dell are reportedly conducting quality tests on CXMT’s DRAM, while Taiwan’s Asus and Acer have sought cooperation with Chinese partners. Signs are emerging that aggressive pricing is translating into demand.
“Chinese firms are waging a volume-based strategy starting with general-purpose memory, backed by state subsidies and domestic demand from AI servers and locally developed GPUs,” said an industry source who requested anonymity. “As Korean companies concentrate on HBM4, there are visible cracks emerging in (their hold on) the legacy market.”
The challenge for Korean chipmakers is that the legacy segment still accounts for a significant portion of their earnings. More than half of the total DRAM production capacity at both Samsung and SK hynix is understood to be allocated to general-purpose products. Even if they maintain leadership in HBM4, a deepening erosion of the mainstream market could eventually weigh on profitability.
Chinese players, meanwhile, are not limiting their push to low-cost volume sales. The cash and know-how gained from legacy chips is funding a push into higher-end products.
CXMT is in the process of converting wafer capacity equivalent to about 20 percent of its total DRAM output — some 60,000 wafers per month — at its Shanghai plant to the fourth-generation HBM3 chip production. The possibility of expanding into post-HBM3E products is also being discussed.
The Shanghai facility is believed to have production capacity two to three times larger than the company’s headquarters plant in Hefei. Equipment installation is expected to be completed in the second half of this year, with mass production slated for next year. Although HBM3 and the fifth-generation HBM3E chips trail HBM4 in performance, they remain widely used in AI data centers.
China’s advance is not confined to DRAM. YMTC has been gaining traction in the NAND flash sector as well, capitalizing on competitively priced mobile products. The company recorded a 10 percent share of the global NAND market for the first time last year, and momentum is widely expected to continue.
YMTC is currently building a third fabrication plant in Wuhan, targeting operations next year. Half of the facility’s production capacity is to be allocated to DRAM. It will initially focus on legacy DRAM products, with the possibility of expanding into HBM production in partnership with local assembly firms. Industry sources say the pattern is familiar — build scale in legacy DRAM, then move up the value chain.
“At this stage, Chinese manufacturers are relying on aggressive pricing to build scale in legacy DRAM,” the anonymous source said. “But over time, the technology gap may narrow more quickly than expected. Even if Korean firms maintain leadership in HBM, neglecting the mainstream segment could weigh on profitability in the longer run.”
...
Read the original on www.koreaherald.com »
In the Rust Programming Language Community Server, there’s a tag named -parse-dont-validate which links to an article about the concept of avoiding validation functions and encoding invariants at the type level instead. I usually recommend it to Rust beginners and intermediates who are struggling with designing APIs.
The only problem is that it uses Haskell to explain its concepts.
Yeah, it’s fine, but for beginners unfamiliar with the functional paradigm, it might not be so approachable. And so I wanted to write a blog post about this pattern, but in a rather Rust-centric way. So let’s start!
One basic example I can give is a function that divides a number by another number. That’s fine, but unfortunately it can panic when `b` has the value of zero:

```
   Compiling playground v0.0.1 (/playground)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 1.28s
     Running `target/debug/playground`
thread 'main' (41) panicked at src/main.rs:2:5:
attempt to divide by zero
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```

That’s fine and dandy if we want erroneous values to fail loudly at runtime, but what if we want stronger guarantees? This is especially important when some operations don’t fail loudly, like float division by zero: there’s no error! But do we want that?

We could add an `assert!` in the `divide_floats` function to emulate typical integer division behavior.

```rust
fn divide_floats(a: f32, b: f32) -> f32 {
    assert_ne!(b, 0.0, "Division by zero is not allowed.");
    a / b
}
```

```
   Compiling playground v0.0.1 (/playground)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.65s
     Running `target/debug/playground`
thread 'main' (32) panicked at src/main.rs:2:5:
assertion `left != right` failed: Division by zero is not allowed.
  left: 0.0
 right: 0.0
```

Cute! But there’s still the problem of running into panics only at runtime. My beef with Python (or any other dynamic language, for that matter) is that a lot of errors only arise when you run the program. That’s why they’re adding typechecking to these languages: people want to bubble some mistakes up to compile time (or typecheck time, whatever). We can use Rust’s rich type system to communicate these errors at build time.

One way, which I think is the more common way as people are more familiar with it, is the idea of fallible functions, which return either an `Option` or a `Result`. This is a fine way to do things, as it communicates that (1) the function can fail, and (2) you can handle the failing case afterwards. († Of course, `catch_unwind` exists, but I’m pretending that it doesn’t.)

To me, the function’s invariant (`b` must not be zero) is encoded after the fact, aka in the return type `Option`. This implies to me that the invariant could be encoded before the fact, aka in the function parameters. But what would that look like?

Say, let’s have a type that is something like `f32`, but is guaranteed to never be zero. We’ll name it `NonZeroF32`. This struct only contains a single `f32` field. The semantics of the type, understood from the name, are that it’s just like a normal `f32` but does not allow the value of zero. How do we guarantee this? Since Rust does encapsulation at the module level, we make this type public while keeping its field private. Then, the only way to construct this type is via a fallible constructor function. We also implement the usual operators:

```rust
impl Add<NonZeroF32> for NonZeroF32 { /* … */ }
impl Add<f32> for NonZeroF32 { /* … */ }
impl Add<NonZeroF32> for f32 { /* … */ }
// and a bunch of other operators…
```

We can then use this in our `divide_floats` function.

There is an interesting implication in this pattern. In the second version of `divide_floats`, we changed the return type from `f32` to `Option<f32>` just to avoid the panics. As described in the original article by Alexis King, this is a weakening of the return type, and of the function’s promise. We temper the caller’s expectation by saying that yes, this function can fail in some way, and you have to account for that. And that weakening is described in the type system via the `Option` enum.

In the third iteration of `divide_floats`, we change our perspective and ask ourselves: “instead of weakening the return type, what if we strengthen the function parameters?” We communicate that by accepting a `NonZeroF32`. Instead of having the validation code in our functions, we push that responsibility to the caller. The validation now happens before the function executes.

To see the advantage of pushing the validation forward to the user, let’s say we have another function like so:

```rust
// The quadratic formula!
fn roots(a: f32, b: f32, c: f32) -> [f32; 2] {
    // For the sake of demonstration we will be ignoring complex roots
    let discriminant = b * b - 4.0 * a * c;
    [
        (-b + discriminant.sqrt()) / (2.0 * a),
        (-b - discriminant.sqrt()) / (2.0 * a),
    ]
}
```

This function can fail if the discriminant is negative (which we will be ignoring in this contrived example), and if `a` is zero. The two ways of going about this can be written as follows:

```rust
fn try_roots(a: f32, b: f32, c: f32) -> Option<[f32; 2]> {
    if a == 0.0 {
        return None;
    }
    // …
}
```
```rust
fn newtyped_roots(a: NonZeroF32, b: f32, c: f32) -> [f32; 2] {
    // unchanged
}
```

The `Option` version has me duplicating the conditional for at least two different functions, which might be icky if you are a DRY-hard. Also, not only does the function have to validate that the float isn’t zero, the caller must then validate again by matching on the returned `Option`. That seems redundant. It would be ideal if we only needed to check once.

```rust
let roots = try_roots(5.0, 4.0, 7.0); // `try_roots` does a validation check

// and then we validate it again by matching on the result
match roots {
    Some(result) => do_something(),
    None => {
        handle_error();
        return;
    }
}
```

The `NonZeroF32` version can help with that, as validation happens before, and happens once instead of twice.

```rust
// Handle the special case once
let Some(a) = NonZeroF32::new(5.0) else {
    handle_error();
    return;
};

// `newtyped_roots` does not need to handle it again,
// indicated by the function not needing to return
// an `Option` and us handling the result directly.
let [root1, root2] = newtyped_roots(a, 4.0, 7.0);
```

Moving away from `divide_floats`, let’s now use an example from the original blog post, converted to Rust:

```rust
use std::{error::Error, path::PathBuf};

fn get_cfg_dirs() -> Result<Vec<PathBuf>, Box<dyn Error>> {
    let cfg_dirs_string = std::env::var("CONFIG_DIRS")?;

    let cfg_dirs_list = cfg_dirs_string
        .split(',')
        .map(PathBuf::from)
        .collect::<Vec<_>>();

    if cfg_dirs_list.is_empty() {
        return Err("CONFIG_DIRS cannot be empty".into());
    }

    Ok(cfg_dirs_list)
}

fn main() -> Result<(), Box<dyn Error>> {
    let cfg_dirs = get_cfg_dirs()?;
    match cfg_dirs.first() {
        Some(cache_dir) => init_cache(cache_dir),
        None => unreachable!("should never happen; already checked configDirs is non-empty"),
    }
    Ok(())
}
```
We checked if `cfg_dirs_list` is empty in the `get_cfg_dirs` function. Then we still had to “check” it again in the `main` function by matching on `cfg_dirs.first()`. The `Vec` was known to be nonempty; do we have to check it again? Consequently, doesn’t this have an impact on performance, especially if we have to check it again and again and again?

The original post raised a good point about resilience to refactors. If the `is_empty` check gets refactored out for some reason, and the programmer forgot to update `main`, then the `unreachable!` branch might actually get reached and explode your computer or whatever.

If we instead had a special `NonEmptyVec` newtype (well, not exactly special) whose existence guarantees that the `Vec` is never empty, we could do:

```rust
struct NonEmptyVec<T>(T, Vec<T>);

impl<T> NonEmptyVec<T> {
    // Notice that we don’t need to return an `Option`
    fn first(&self) -> &T { /* … */ }
}

fn get_cfg_dirs() -> Result<NonEmptyVec<PathBuf>, Box<dyn Error>> {
    let cfg_dirs_string = std::env::var("CONFIG_DIRS")?;

    let cfg_dirs_list = cfg_dirs_string
        .split(',')
        .map(PathBuf::from)
        .collect::<Vec<_>>();

    // We parse the `Vec` into a more structured type
    let cfg_dirs_list = NonEmptyVec::try_from(cfg_dirs_list)?;

    Ok(cfg_dirs_list)
}

fn main() -> Result<(), Box<dyn Error>> {
    let cfg_dirs = get_cfg_dirs()?;
    // Notice that we don’t have to check again if the `Vec`
    // was empty, since we guarantee that via the `NonEmptyVec` type
    init_cache(cfg_dirs.first());
    Ok(())
}
```

In this context, we can call `NonZeroF32::new` and `NonEmptyVec::try_from` parsing functions, since they validate and convert the less semantic type into a type with more meaning imbued into it. That is, the nonzeroness of a float and the nonemptiness of a `Vec` are now encoded into a type. You can just see the word `NonZeroF32` and therefore understand that, going forward, it will always be an `f32` that is never zero.

Validation and checking functions, on the other hand, just validate the value and leave the type as it is. If I have an `is_nonzero(f32) -> bool` function, then there’s not really much of a readable difference between an `f32` that has had `is_nonzero` called on it and an `f32` that hasn’t.

By taking advantage of a nominative type system, we can communicate that this `f32` is not zero by parsing it into a new type, as opposed to just validating it. If you only validate it, then you still can’t tell whether the `f32` was nonzero unless you dig through the code. However, if you parsed it, you can say it will always be nonzero when you see `NonZeroF32` in your code.
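For concreteness, here is a minimal sketch of what such a `NonZeroF32` newtype could look like; the `new`/`get` names and the exact shape of the strengthened `divide_floats` are just one possible choice, not the only way to write it.

```rust
/// An `f32` that is guaranteed to be nonzero.
/// The field is private, so the only way to obtain one is through `new`.
pub struct NonZeroF32(f32);

impl NonZeroF32 {
    /// The fallible constructor: parsing happens here, exactly once.
    pub fn new(value: f32) -> Option<Self> {
        if value == 0.0 { None } else { Some(Self(value)) }
    }

    /// Read the inner value back out.
    pub fn get(&self) -> f32 {
        self.0
    }
}

/// No `assert!`, no `Option`: the parameter type rules out the bad input.
fn divide_floats(a: f32, b: NonZeroF32) -> f32 {
    a / b.get()
}
```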
Of course, the above examples are very much contrived, but is there an instance where creating newtypes is helpful? Yes. In fact, most people have used one. It’s called `String`. If we dig into the internals, `String` is just a newtype over `Vec<u8>`. Its parsing function is `String::from_utf8`, which contains the validation code for checking that the byte vector is valid UTF-8. So instead of passing a `Vec<u8>` around and validating all over the place, just parse it into a `String` and you can be assured of having a type-safe `String` with all the convenience functions you can get.

Another example is serde_json. In Python, `json.loads` simply gives you a dictionary. This is fine, especially if the data is sufficiently arbitrary, but if you have a schema and a type system, it’s better to let the type system do the work of parsing JSON.

In our terminology, validation looks like this:

```rust
use serde_json::{from_str, Value};

const SAMPLE_JSON: &str = r#"{ "foo": 1, "bar": [1, 2, 3] }"#;

let json = from_str::<Value>(SAMPLE_JSON).unwrap();
let bar = json.get("bar").unwrap(); // panics if `bar` is missing
```

That’s two unwraps! One for checking if the string is valid JSON, and the other for checking if the `bar` field exists. Now consider this example, where we use the parsing mechanic instead, via types and the `Deserialize` derive macro.

```rust
#[derive(Deserialize)]
struct Sample {
    foo: i32,
    bar: [i32; 3],
}

impl Sample {
    fn first_elem(&self) -> i32 {
        self.bar[0] // does not panic, by definition
    }
}

let json = from_str::<Sample>(SAMPLE_JSON).unwrap();
```

Since we deserialized the JSON into an actual type, we can safely make these guarantees:

The `foo` and `bar` fields always exist in the JSON string we parse.

`foo` always has an integer value.

`bar` is always an array of three integers.

`first_elem` will never panic, since all elements of an array are always initialized, and indexing into the first element of a nonzero-length array will always be successful.

The only point of failure here is pushed up front, where the `from_str` happens. After that point, there’s not really much error handling to be done, since the validation is now represented at the type level instead of at the function level.
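A small illustration of what that buys you: once a `Sample` exists, downstream helpers like this (hypothetical) one never need to touch `Option` or `Result` again, because the invariants were proven at parse time.

```rust
fn report(sample: &Sample) -> String {
    // No error handling needed here: `foo` and `bar` are guaranteed to exist
    // and have the right shape, because `Sample` could not have been built otherwise.
    format!("foo = {}, first bar element = {}", sample.foo, sample.first_elem())
}
```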
With that said, what lessons can we learn from here? Turns out, most functional language programmers have already learned several lessons, and Rust is not much different in terms of applying such FP concepts to the language.

The first lesson we can learn is that we should make illegal states unrepresentable. To refer back to the `NonZeroF32` and `NonEmptyVec` examples, we say the state of being zero is illegal for `NonZeroF32` and the state of being empty is illegal for `NonEmptyVec`. And as illegal states, they cannot be represented in those types. That’s why the only constructors available for these types are fallible; the value either parsed successfully, or it failed and you don’t get the new type.

If we only do validation, like checking if an `f32` is nonzero for example, then the illegal state can still be represented. There’s a small possibility that the value is zero, especially after some refactors when the conditional checks are accidentally or intentionally removed in some places. This reminds me of how other languages use integers as sentinel values. Consider this code snippet from Wikipedia:

```c
int find(int arr[], size_t len, int val) {
    for (int i = 0; i < len; i++) {
        if (arr[i] == val) {
            return i;
        }
    }
    return -1; // not found
}
```

The error is returned as -1, since indexing arrays is only valid for nonnegative integers. That seems weird because (1) the numbers -2 and below can exist but are not actually valid, and (2) treating certain values as special seems too error-prone, as in the future a negative number could become semantically valid.

The second lesson we can learn is that proving invariants should be done as early as possible. There’s a concept called shotgun parsing, which the linked paper describes as follows:
Shotgun Parsing: Shotgun parsing is a programming antipattern whereby parsing and input-validating code is mixed with and spread across processing code—throwing a cloud of checks at the input, and hoping, without any systematic justification, that one or another would catch all the “bad” cases.
Essentially, it describes the problem of using data without first validating it in its entirety. You could act on a part of the data that was validated beforehand, only to discover that another part of the data is invalid.

The paper mentions CVE-2016-0752, a bug that allows attackers to read arbitrary files because you can use .. in the input. The paper argues that treating validation as emergent rather than deliberate can lead to security bugs like these. If we treat validation as deliberate, then it should happen as early as possible and be as comprehensive as possible. By parsing first, every invariant can be proven before executing on said data.

I remember this video about lambda calculus. It concludes that types can be represented as propositions in logic, and terms as proofs. I recommend watching the video, as it was eye-opening to me and maybe it can help you realize some things too. Fundamentally, if your program typechecks properly, then you can say that the proof is correct. Thank you, Curry-Howard correspondence. There are proof assistant programming languages that can help with this, like Lean and Agda, but you can emulate this in Rust anyway. That’s how some weird libraries like the typenum crate work.

This is a simple program in Rust where I check if 3 + 4 is equal to 8. Obviously this is not correct, and so it will appropriately give you a compile error.

```
   Compiling playground v0.0.1 (/playground)
error[E0277]: the trait bound `PInt<UInt<UInt<UInt<UTerm, B1>, B1>, B1>>: Same<PInt<UInt<UInt<UInt<UInt<UTerm, B1>, B0>, B0>, B0>>>` is not satisfied
  --> src/lib.rs:11:5
   |
11 |     Result:
   |     ^^^^^^ unsatisfied trait bound
   |
   = help: the trait `typenum::Same<PInt<UInt<UInt<UInt<UInt<UTerm, B1>, B0>, B0>, B0>>>` is not implemented for `PInt<UInt<UInt<UInt<UTerm, B1>, B1>, B1>>`
   = note: the full name for the type has been written to '/playground/target/debug/deps/playground-e4f34f6f1769e3b6.long-type-6323804316620900.txt'
   = note: consider using `--verbose` to print the full type name to the console

For more information about this error, try `rustc --explain E0277`.
error: could not compile `playground` (lib) due to 1 previous error
```

So sad that the error message is dogshit. Such is life.
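For what it’s worth, the program behind an error like that is only a few lines of typenum. The following is my rough reconstruction of the general shape, not necessarily the exact original; it deliberately fails to compile, since type-level 3 + 4 is 7, not 8.

```rust
use typenum::{Same, Sum, P3, P4, P8};

// Type-level 3 + 4, which evaluates to the type-level integer 7.
type Result = Sum<P3, P4>;

// This function only typechecks if `Result` is the same type as `P8`.
// It isn't, so the compiler rejects the program with error E0277.
fn three_plus_four_is_eight()
where
    Result: Same<P8>,
{
}
```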
What can we do?

There are some recommendations I usually give to people on the RPLCS discord server, adapted from the original blog post.

First, just because a function accepts a type doesn’t mean you have to use it in your structs, nor perpetually represent it as that type. For example, say a third-party library function hands you a lightbulb state as a plain bool. You don’t have to store that bool in your App/Context struct like `App { lightbulb_state: bool }`. That’s confusing. I’d rather have you define a separate enum with more semantics imbued into it, like:

```rust
enum LightBulbState {
    Off,
    On,
}

impl From<bool> for LightBulbState { /* … */ }
```

Yeah, people can say it gets more verbose, but I care more about correctness instead. Sorry.

Second, I sometimes get suspicious about certain kinds of APIs. If I see that a function’s body does not do anything side-effectful, then it’s probable that parsing can help here by turning the `Thing` it checks into a more structured datatype. And even for side-effectful stuff, there are some types that better represent certain situations, like a function that loops forever representing its return type as `Result<Infallible, Error>` or `Result<!, Error>`.

I love creating more types. Five million types for everyone, please. I think it’s interesting that there are a lot of instances where types drive the design of Rust programs. Like how `Vec` has four layers of newtypes plus an additional field. sqlx generates anonymous structs in its query! macros. bon is a macro crate that converts functions into compile-time builders via types.

Of course, not everything is solvable via types. But personally I think pushing your verification code into types can help your code become clearer and more robust. Let the type system handle the validation for you. It exists, so you might as well use it to its fullest extent.

I’d like to thank Alexis King for the article where I first encountered this idea. I’d love to follow up on this topic with an extension on its sequel, and maybe recontextualizing it in Rust via the unsafe keyword would be helpful.

Of course, newtyping is not the answer to all problems. Due to a lack of ergonomic features to allow newtyping—like delegation—many people are somewhat averse to using the pattern. Nevertheless, if someone made a good enough RFC I’d be happy to see it happen.

Using the type system as a compile-time checker because I want the compiler to help me write my programs is very nice. You should take advantage of the type system too, not many languages have it as good as Rust :)
...
Read the original on www.harudagondi.space »
sandbox-exec is a built-in macOS command-line utility that enables users to execute applications within a sandboxed environment. In essence, it creates a secure, isolated space where applications can run with limited access to system resources — only accessing what you explicitly permit.
The concept behind sandboxing is fundamental to modern security: by restricting what an application can access, you minimize the potential damage from malicious code or unintended behavior. Think of it as putting an application in a secure room where it can only interact with specific objects you’ve placed there.
Before diving into usage, let’s understand why sandboxing matters:
Protection from malicious code: If you’re testing an unfamiliar application or script, sandboxing can prevent it from accessing sensitive files or sending data across the network.
Damage limitation: Even trusted applications can have vulnerabilities. Sandboxing limits the potential impact if an application is compromised.
Privacy control: You can explicitly deny applications access to personal directories like Documents, Photos, or Contacts.
Testing environment: Developers can test how applications function with limited permissions before implementing formal App Sandbox entitlements.
Resource restriction: Beyond security, sandboxing can limit an application’s resource consumption or network access.
Using sandbox-exec requires creating a sandbox profile (configuration file) that defines the rules for your secure environment. The basic syntax is:
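In its simplest form, you point it at a profile file followed by the command you want to run under it:

```
sandbox-exec -f profile.sb command_to_run [arguments]
```

There is also -p for passing an inline profile string and -n for using one of the built-in named profiles, but the -f form above is the most common starting point.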
Where profile.sb contains the rules defining what the sandboxed application can and cannot do, and command_to_run is the application you want to run within those constraints.
Sandbox profiles use a Scheme-like syntax (a LISP dialect) with parentheses grouping expressions. The basic structure includes:
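A minimal profile, just to show the shape (the paths here are placeholders, not recommendations): it starts with a version line, sets a default policy, and then lists more specific allow or deny rules.

```
(version 1)                                ; required header
(deny default)                             ; default policy: deny everything
(allow file-read*                          ; then carve out what is needed
    (subpath "/usr/lib")
    (subpath "/System/Library"))
(allow process-exec (literal "/bin/ls"))   ; allow launching a specific binary
```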
See the Appendix for a more complete list of available rules.
There are two primary philosophies when creating sandbox profiles:
This approach starts by denying everything and explicitly allowing only required operations:
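A sketch of that style, with placeholder paths; everything is denied unless a rule explicitly allows it:

```
(version 1)
(deny default)

; the bare minimum this particular program needs
(allow file-read* (subpath "/usr/lib") (subpath "/System"))
(allow file-write* (subpath "/private/tmp/scratch"))
(allow process-exec (literal "/usr/bin/python3"))
```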
This is the most secure approach, ideal for running untrusted code, but requires careful configuration to make applications functional.
Alternatively, you can allow everything except specific operations:
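For example, a profile that permits everything except network access and reads of a sensitive folder might look like this (the folder path is a placeholder):

```
(version 1)
(allow default)

; carve out what should be off-limits
(deny network*)
(deny file-read* (subpath "/Users/me/Documents"))
```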
This approach is easier to implement but less secure, as you must anticipate every potential risky operation.
Let’s explore some real-world examples to demonstrate the power of custom sandboxing.
Create a sandboxed terminal session that can’t access the network:
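One way to do it (the profile name and home-directory paths are placeholders; adjust them for your account and shell):

```
; no-net-shell.sb
(version 1)
(allow default)
(deny network*)
(deny file-read* (subpath "/Users/me/Documents"))
(deny file-read* (subpath "/Users/me/Desktop"))
```

Then start a shell under it:

```
sandbox-exec -f no-net-shell.sb /bin/zsh
```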
This creates a terminal session that functions normally but cannot access the network or read from your personal directories.
macOS also ships a set of pre-built profiles in /System/Library/Sandbox/Profiles. These system profiles provide configurations for common restriction scenarios and applications. Some of them have quite good comments, so you can use them as a basis for your own profiles.
When applications fail in a sandbox, determining the cause can be challenging. Here are effective debugging techniques:
Search the system logs for “sandbox” and your application name
Look for lines containing “deny” to identify blocked operations
These logs show exactly which operations are being denied, helping you refine your sandbox profile.
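On recent macOS versions, one way to watch denials from the command line is the unified log; a predicate along these lines usually works, though the exact message format can vary between releases:

```
# stream sandbox denial messages as they happen
log stream --style syslog --predicate 'eventMessage CONTAINS "Sandbox" AND eventMessage CONTAINS "deny"'
```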
For frequent sandboxing, add an alias to your shell configuration:
# Add to ~/.zshrc or ~/.bash_profile
alias sandbox-no-network='sandbox-exec -p "(version 1)(allow default)(deny network*)"'
# Then use it as:
sandbox-no-network curl -v https://google.com
but when I did the same for UI applications it didn’t work for some reason (I can still open Google.com):
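For instance, reusing the alias on a GUI app looks like this; a likely reason it has no effect is that open only asks LaunchServices to launch the application, so the browser ends up running outside the sandboxed process tree and never inherits the profile:

```
# GUI apps launched via `open` do not inherit the sandbox
sandbox-no-network open -a Safari
```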
You can import and extend existing profiles:
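The import directive pulls in another profile’s rules so you can layer your own on top; for example (whether a bare file name or a full path is needed can depend on the macOS version):

```
(version 1)
(import "/System/Library/Sandbox/Profiles/bsd.sb")

; add your own restrictions on top of the imported baseline
(deny network*)
```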
Despite its power, sandbox-exec has some limitations to consider:
Deprecation status: While functional, Apple discourages its direct use in favor of App Sandbox for developers.
Complex applications: Modern applications often have complex requirements that make comprehensive sandboxing challenging without extensive testing.
Trial and error: Creating effective sandbox profiles often requires iterative testing to identify all necessary permissions.
No GUI: Unlike App Sandbox in Xcode, sandbox-exec has no graphical interface for configuration.
System updates: Major macOS updates might change how sandbox-exec works or what rules are effective.
While Apple has moved toward more user-friendly security models, sandbox-exec remains a powerful tool for those willing to invest time in learning its intricacies. It offers a level of control and customization that GUI-based solutions simply cannot match.
For security-conscious users, developers testing applications, or anyone working with potentially untrusted code, sandbox-exec provides a native macOS solution for creating finely-tuned security environments. Though it takes effort to learn its possibilities given the lack of documentation, the security benefits make it well worth it.
The most powerful aspect of sandbox-exec is its flexibility — you can create custom security profiles tailored to specific applications and use cases, going far beyond the one-size-fits-all approach of most security tools.
If you’re interested in learning more about macOS security tools and techniques, check out Apple’s official documentation on App Sandbox or explore the pre-built sandbox profiles in /System/Library/Sandbox/Profiles to see how Apple implements sandboxing for system services.
...
Read the original on igorstechnoclub.com »