10 interesting stories served every morning and every evening.





Boris Tane

The Workflow in One Sentence

I've been using Claude Code as my primary development tool for approximately nine months, and the workflow I've settled into is radically different from what most people do with AI coding tools. Most developers type a prompt, sometimes use plan mode, fix the errors, and repeat. The more terminally online are stitching together Ralph loops, MCPs, gas towns (remember those?), and the like. The results in both cases are a mess that completely falls apart for anything non-trivial.

The workflow I'm going to describe has one core principle: never let Claude write code until you've reviewed and approved a written plan. This separation of planning and execution is the single most important thing I do. It prevents wasted effort, keeps me in control of architecture decisions, and produces significantly better results, with far less token usage, than jumping straight to code.

flowchart LR
    R[Research] --> P[Plan]
    P --> A[Annotate]
    A -->|repeat 1-6x| A
    A --> T[Todo List]
    T --> I[Implement]
    I --> F[Feedback & Iterate]

Every meaningful task starts with a deep-read directive. I ask Claude to thoroughly understand the relevant part of the codebase before doing anything else. And I always require the findings to be written into a persistent markdown file, never just a verbal summary in the chat.

read this folder in depth, understand how it works deeply, what it does and all its specificities. when that's done, write a detailed report of your learnings and findings in research.md

study the notification system in great details, understand the intricacies of it and write a detailed research.md document with everything there is to know about how notifications work

go through the task scheduling flow, understand it deeply and look for potential bugs. there definitely are bugs in the system as it sometimes runs tasks that should have been cancelled. keep researching the flow until you find all the bugs, don't stop until all the bugs are found. when you're done, write a detailed report of your findings in research.md

Notice the language: “deeply”, “in great details”, “intricacies”, “go through everything”. This isn't fluff. Without these words, Claude will skim. It'll read a file, see what a function does at the signature level, and move on. You need to signal that surface-level reading is not acceptable.

The written artifact (research.md) is critical. It's not about making Claude do homework. It's my review surface. I can read it, verify Claude actually understood the system, and correct misunderstandings before any planning happens. If the research is wrong, the plan will be wrong, and the implementation will be wrong. Garbage in, garbage out.

The most expensive failure mode in AI-assisted coding isn't wrong syntax or bad logic. It's implementations that work in isolation but break the surrounding system. A function that ignores an existing caching layer. A migration that doesn't account for the ORM's conventions. An API endpoint that duplicates logic that already exists elsewhere. The research phase prevents all of this.

Once I've reviewed the research, I ask for a detailed implementation plan in a separate markdown file.

I want to build a new feature that extends the system to perform . write a detailed plan.md document outlining how to implement this. include code snippets

the list endpoint should support cursor-based pagination instead of offset. write a detailed plan.md for how to achieve this. read source files before suggesting changes, base the plan on the actual codebase

The generated plan always includes a detailed explanation of the approach, code snippets showing the actual changes, file paths that will be modified, and considerations and trade-offs.
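For a sense of shape, a plan.md for the pagination request above might come back looking something like this. This is a hypothetical sketch, not the author's actual output; the file paths and section names are invented:

```markdown
# Plan: cursor-based pagination for the list endpoint

## Approach
Replace `offset`/`limit` with an opaque `cursor` that encodes the last seen
(createdAt, id) pair, so pages stay stable while new rows are inserted.

## Changes
- `src/routes/list.ts` — accept a `cursor` query param, return `nextCursor`
- `src/db/queries.ts` — order by (createdAt, id) and filter rows after the cursor

## Code snippets
(actual diffs for each file go here)

## Considerations and trade-offs
- Clients lose random page access; only next-page navigation is supported
```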

I use my own .md plan files rather than Claude Code's built-in plan mode. The built-in plan mode sucks. My markdown file gives me full control: I can edit it in my editor, add inline notes, and it persists as a real artifact in the project.

One trick I use constantly: for well-contained features where I've seen a good implementation in an open source repo, I'll share that code as a reference alongside the plan request. If I want to add sortable IDs, I paste the ID generation code from a project that does it well and say “this is how they do sortable IDs, write a plan.md explaining how we can adopt a similar approach.” Claude works dramatically better when it has a concrete reference implementation to work from rather than designing from scratch.

But the plan document itself isn't the interesting part. The interesting part is what happens next.

This is the most distinctive part of my workflow, and the part where I add the most value.

flowchart TD
    W[Claude writes plan.md] --> R[I review in my editor]
    R --> N[I add inline notes]
    N --> S[Send Claude back to the document]
    S --> U[Claude updates plan]
    U --> D{Satisfied?}
    D -->|No| R
    D -->|Yes| T[Request todo list]

After Claude writes the plan, I open it in my editor and add inline notes directly into the document. These notes correct assumptions, reject approaches, add constraints, or provide domain knowledge that Claude doesn't have.

The notes vary wildly in length. Sometimes a note is two words: “not optional” next to a parameter Claude marked as optional. Other times it's a paragraph explaining a business constraint or pasting a code snippet showing the data shape I expect.

“use drizzle:generate for migrations, not raw SQL” — domain knowledge Claude doesn't have

“no — this should be a PATCH, not a PUT” — correcting a wrong assumption

“remove this section entirely, we don't need caching here” — rejecting a proposed approach

“the queue consumer already handles retries, so this retry logic is redundant. remove it and just let it fail” — explaining why something should change

“this is wrong, the visibility field needs to be on the list itself, not on individual items. when a list is public, all items are public. restructure the schema section accordingly” — redirecting an entire section of the plan

Then I send Claude back to the document:

I added a few notes to the document, address all the notes and update the document accordingly. don't implement yet

This cycle repeats 1 to 6 times. The explicit “don't implement yet” guard is essential. Without it, Claude will jump to code the moment it thinks the plan is good enough. It's not good enough until I say it is.

Why This Works So Well

The markdown file acts as shared mutable state between me and Claude. I can think at my own pace, annotate precisely where something is wrong, and re-engage without losing context. I'm not trying to explain everything in a chat message. I'm pointing at the exact spot in the document where the issue is and writing my correction right there.

This is fundamentally different from trying to steer implementation through chat messages. The plan is a structured, complete specification I can review holistically. A chat conversation is something I'd have to scroll through to reconstruct decisions. The plan wins every time.

Three rounds of “I added notes, update the plan” can transform a generic implementation plan into one that fits perfectly into the existing system. Claude is excellent at understanding code, proposing solutions, and writing implementations. But it doesn't know my product priorities, my users' pain points, or the engineering trade-offs I'm willing to make. The annotation cycle is how I inject that judgement.

add a detailed todo list to the plan, with all the phases and individual tasks necessary to complete the plan - don't implement yet

This creates a checklist that serves as a progress tracker during implementation. Claude marks items as completed as it goes, so I can glance at the plan at any point and see exactly where things stand. This is especially valuable in sessions that run for hours.
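Concretely, the todo section is just a markdown checklist appended to the plan. A hypothetical sketch (the phases here are invented, echoing the article's earlier schema and PATCH examples):

```markdown
## Todo

### Phase 1: Schema
- [x] Move the `visibility` field to the list table
- [x] Generate and run the migration

### Phase 2: API
- [ ] Change the update endpoint from PUT to PATCH
- [ ] Make the list query respect visibility
```

Claude checks items off as it completes them, so the file doubles as a live progress report.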

When the plan is ready, I issue the implementation command. I've refined this into a standard prompt I reuse across sessions:

implement it all. when you're done with a task or phase, mark it as completed in the plan document. do not stop until all tasks and phases are completed. do not add unnecessary comments or jsdocs, do not use any or unknown types. continuously run typecheck to make sure you're not introducing new issues.

This single prompt encodes everything that matters:

“implement it all”: do everything in the plan, don't cherry-pick

“mark it as completed in the plan document”: the plan is the source of truth for progress

“do not stop until all tasks and phases are completed”: don't pause for confirmation mid-flow

“do not add unnecessary comments or jsdocs”: keep the code clean

“do not use any or unknown types”: maintain strict typing

“continuously run typecheck”: catch problems early, not at the end

I use this exact phrasing (with minor variations) in virtually every implementation session. By the time I say “implement it all,” every decision has been made and validated. The implementation becomes mechanical, not creative. This is deliberate. I want implementation to be boring. The creative work happened in the annotation cycles. Once the plan is right, execution should be straightforward.
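In a TypeScript project, which the strict-typing rules above imply, “continuously run typecheck” typically resolves to a script Claude can rerun between tasks. A minimal sketch of an assumed package.json entry (not quoted from the article):

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit"
  }
}
```

`tsc --noEmit` type-checks the whole project without writing build output, so it is cheap enough to run after every phase.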

Without the planning phase, what typically happens is that Claude makes a reasonable-but-wrong assumption early on, builds on top of it for fifteen minutes, and then I have to unwind a chain of changes. The “don't implement yet” guard eliminates this entirely.

Once Claude is executing the plan, my role shifts from architect to supervisor. My prompts become dramatically shorter.

flowchart LR
    I[Claude implements] --> R[I review / test]
    R --> C{Correct?}
    C -->|No| F[Terse correction]
    F --> I
    C -->|Yes| N{More tasks?}
    N -->|Yes| I
    N -->|No| D[Done]

Where a planning note might be a paragraph, an implementation correction is often a single sentence:

“You built the settings page in the main app when it should be in the admin app, move it.”

Claude has the full context of the plan and the ongoing session, so terse corrections are enough.

Frontend work is the most iterative part. I test in the browser and fire off rapid corrections:

For visual issues, I sometimes attach screenshots. A screenshot of a misaligned table communicates the problem faster than describing it.

“this table should look exactly like the users table, same header, same pagination, same row density.”

This is far more precise than describing a design from scratch. Most features in a mature codebase are variations on existing patterns. A new settings page should look like the existing settings pages. Pointing to the reference communicates all the implicit requirements without spelling them out. Claude would typically read the reference file(s) before making the correction.

When something goes in the wrong direction, I don't try to patch it. I revert and re-scope by discarding the git changes:

“I reverted everything. Now all I want is to make the list view more minimal — nothing else.”

Narrowing scope after a revert almost always produces better results than trying to incrementally fix a bad approach.
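Mechanically, “discarding the git changes” is the usual restore/clean pair. A sketch run in a throwaway repository so it is safe to execute anywhere; the file names are invented for illustration:

```shell
# Set up a disposable repo with one committed file.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
echo "minimal list view" > list-view.ts
git add list-view.ts
git -c user.email=demo@example.com -c user.name=demo commit -qm "baseline"

# Simulate an implementation attempt that went in the wrong direction.
echo "overcomplicated rewrite" > list-view.ts   # edited tracked file
echo "stray helper" > helper.ts                 # new untracked file

# Revert everything before re-scoping the request.
git restore .    # discard edits to tracked files
git clean -fdq   # delete untracked files and directories
```

After this, list-view.ts is back at the committed baseline and helper.ts is gone.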

Even though I delegate execution to Claude, I never give it total autonomy over what gets built. I do the vast majority of the active steering in the plan.md documents.

This matters because Claude will sometimes propose solutions that are technically correct but wrong for the project. Maybe the approach is over-engineered, or it changes a public API signature that other parts of the system depend on, or it picks a more complex option when a simpler one would do. I have context about the broader system, the product direction, and the engineering culture that Claude doesn't.

flowchart TD
    P[Claude proposes changes] --> E[I evaluate each item]
    E --> A[Accept as-is]
    E --> M[Modify approach]
    E --> S[Skip / remove]
    E --> O[Override technical choice]
    A & M & S & O --> R[Refined implementation scope]

Cherry-picking from proposals: When Claude identifies multiple issues, I go through them one by one: “for the first one, just use Promise.all, don't make it overly complicated; for the third one, extract it into a separate function for readability; ignore the fourth and fifth ones, they're not worth the complexity.” I'm making item-level decisions based on my knowledge of what matters right now.

Trimming scope: When the plan includes nice-to-haves, I actively cut them. “remove the download feature from the plan, I don't want to implement this now.” This prevents scope creep.

Protecting existing interfaces: I set hard constraints when I know something shouldn't change: “the signatures of these three functions should not change, the caller should adapt, not the library.”

Overriding technical choices: Sometimes I have a specific preference Claude wouldn't know about: “use this model instead of that one” or “use this library's built-in method instead of writing a custom one.” These are fast, direct overrides.
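As an illustration of the first of those item-level calls, “just use Promise.all” usually means collapsing a proposed queue or chain of sequential awaits into the simple concurrent form. A sketch with invented stand-in functions (fetchUser and fetchPostCount are not from the article):

```typescript
// Stand-ins for whatever independent async calls the plan proposed to sequence.
async function fetchUser(id: number): Promise<string> {
  return `user-${id}`;
}

async function fetchPostCount(id: number): Promise<number> {
  return id * 2;
}

// The uncomplicated version: the calls don't depend on each other,
// so start both and await them together.
async function loadProfile(id: number): Promise<{ user: string; posts: number }> {
  const [user, posts] = await Promise.all([fetchUser(id), fetchPostCount(id)]);
  return { user, posts };
}
```

The correction works because Promise.all preserves result order while letting the independent calls overlap, with none of the ceremony of a worker queue.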

Claude handles the mechanical execution, while I make the judgement calls. The plan captures the big decisions upfront, and selective guidance handles the smaller ones that emerge during implementation.

I run research, planning, and implementation in a single long session rather than splitting them across separate sessions. A single session might start with deep-reading a folder, go through three rounds of plan annotation, then run the full implementation, all in one continuous conversation.

I am not seeing the performance degradation everyone talks about past 50% of the context window. On the contrary: by the time I say “implement it all,” Claude has spent the entire session building understanding: reading files during research, refining its mental model during annotation cycles, absorbing my domain-knowledge corrections.

When the context window fills up, Claude's auto-compaction maintains enough context to keep going. And the plan document, the persistent artifact, survives compaction in full fidelity. I can point Claude to it at any point in time.

The Workflow in One Sentence

Read deeply, write a plan, annotate the plan until it's right, then let Claude execute the whole thing without stopping, checking types along the way.

That's it. No magic prompts, no elaborate system instructions, no clever hacks. Just a disciplined pipeline that separates thinking from typing. The research prevents Claude from making ignorant changes. The plan prevents it from making wrong changes. The annotation cycle injects my judgement. And the implementation command lets it run without interruption once every decision has been made.

Try my workflow, and you'll wonder how you ever shipped anything with coding agents without an annotated plan document sitting between you and the code.


Read the original on boristane.com »


How far back in time can you understand English?

A man takes a train from London to the coast. He's visiting a town called Wulfleet. It's small and old, the kind of place with a pub that's been pouring pints since the Battle of Bosworth Field. He's going to write about it for his blog. He's excited.

He arrives, he checks in. He walks to the cute B&B he'd picked out online. And he writes it all up like any good travel blogger would: in that breezy LiveJournal style from 25 years ago, perhaps, in his case, trying a little too hard.

But as his post goes on, his language gets older. A hundred years older with each jump. The spelling changes. The grammar changes. Words you know are replaced by unfamiliar words, and his attitude gets older too, as the blogger's voice is replaced by that of a Georgian diarist, an Elizabethan pamphleteer, a medieval chronicler.

By the middle of his post, he's writing in what might as well be a foreign language.

But it's not a foreign language. It's all English.

None of the story is real: not the blogger, not the town. But the language is real, or at least realistic. I constructed the passages myself, working from what we know about how English was written in each period.

It's a thousand years of the English language, compressed into a single blog post.

Read it and notice where you start to struggle. Notice where you give up entirely. Then meet me on the other side and I'll tell you what happened to the language (and the blogger).

You’re read­ing The Dead Language Society, where 35,000+ read­ers ex­plore the hid­den his­tory of the English lan­guage. I’m Colin Gorrie: PhD lin­guist and your guide through 1,500 years of English be­ing weird.

I pub­lish every Wednesday. Paid sub­scribers get every is­sue, the full archive, and the con­tent I’m most proud of: prac­ti­cal guides to read­ing his­tor­i­cal texts your­self, hon­est takes on how lan­guage re­ally works, and live book clubs where we read texts like Beowulf and (up next!) Sir Gawain and the Green Knight.

Well, I finally got to the town everyone has been talking about lately. Wulfleet. And let me tell you, it was not easy to get here. It's ridiculous how close this place is to London, and yet how hard it is to get here. I took a train to some place whose name I can't pronounce, and then from there I had to hop on a bus. The whole day was shot just getting here.

Not going to lie though: so far, it's totally worth it.

Yes, it's the typical English coastal town: the seagulls, the cobblestone streets, the works. But there's something about it that just makes me want to dress up in a cape and walk around like I'm in a Gothic novel. Although, let's be honest, do I really need an excuse to do that? :)

Everyone seems really nice here, although I did have one really weird encounter on the way to the B&B. A guy was following me for a while. It kind of freaked me out. Anyway, if you go to Wulfleet, just watch out for this one weird guy who hangs out near the bus stop. I know, real specific. But anyway, that was just a bit odd.

Speaking of which, the B&B is also… interesting. LOL. It has separate hot and cold taps and everything. I'm about to see how the “bed” portion works. I'll update you on the “breakfast” tomorrow morning. If I can find an internet cafe around here, that is.

My plans for an untroubled sleep were upset, however, when I woke with a start before dawn. The window had, it seemed, come open in the night, though I was perfectly certain I had fastened it. I sprang up from the bed to see what was the cause, but I could see nothing in the darkness — nothing, that is, that I could satisfactorily account for. I closed the window again but was entirely unable to fall asleep due to the shock. I am not, I hope, an easily frightened man, but I confess the incident left me not a little unsettled.

When dawn finally came, I went downstairs to find a well-appointed dining room in which there was laid out a modest but perfectly adequate meal. After I ate, and thanked the landlady — a respectable woman of the kind one expects to find in charge of such an establishment — I decided to take a stroll around the town. The sea air did something to revive me after the events of the previous day, not to mention the night, although a question still weighed on me. Do windows simply burst open in the night? Or was there something else afoot? I resolved to make enquiries, though of whom I was not yet certain.

After spending the day wandering around the environs of the town, and, finding myself hungry, I sought out an inn, where I might buy some supper. It was not difficult to find one, and, sitting alone, I called for supper from what the publican had to offer. I confess I gave no great thought to the quality of the fare. Hunger, that great leveller, makes philosophers of us all, and renders even the meanest dish agreeable.

The place was adequately charming. The tables were covered with guttering candles, and the local rustics seemed to be amusing themselves with great jollity. Reader, I am not one of those travellers who holds himself above the common people of the places he visits. I saw fit rather to join in with their sport and we whiled away the hours together in good cheer. I found them to be as honest and amiable a company as one could wish for.

The only thing that disturbed my good humour was when I thought, for a brief moment, that I saw the man who accosted me yesterday among the crowd. But it must have been a mere fancy, for whatever I thought I saw vanished as quickly as it had appeared. I chided myself for the weakness of my nerves, and took another draught to steady them.

When, at long last, the entertainment was spent, I undertook to return to my lodgings; however, finding myself quite unable to find my way, a fact which owed something to having imbibed rather immoderately in the hours prior — and here let me caution the reader against the particular hospitality of country innkeepers, which is liberal beyond what prudence would advise — I soon found myself at the harbour's edge.

When I was firſt come to Wulfleet, I did not see the harbour, for I was weary and would ſooner go to the inn, that I might ſleep. It is a truth well known to travellers, that wearineſs of body breeds a kind of blindneſs to all things, however remarkable, and ſo it was with me. But now that I beheld the ſight of it, I marvelled. In the inky blackneſs I could see not a ſtar, nor even a ſliver of the moon. It was indeed a wonder that I did not ſtumble on my way, and periſh in a gutter, for many a man has come to his end by leſs.

Finally, with my mind much filled with reflection, I found my way through dark ſtreets to a familiar alley. This was a welcome sight, as an ill foreboding was lately come into my mind. I entertained for a moment such unmanly thoughts as are far from my cuſtom, and which I ſhould be aſhamed to ſet down here, were it not that an honeſt account requires it. I felt eſpecially that I was purſued by ſome thing unknown to me. I glanced backwards, to ſee if I might eſpy that man. But there was no one, or at least no one that I could diſcern.

At laſt, I found the doorway of the inn, as much by chance as by deſign, and retired to ſleep with a mind addled half by drink and the other half by a fear for which I could not well account. I commended myſelf to Providence, and reſolved to think no more on it.

That night I was vntroubled by such euents as I had vndergone the night before, for I had barred the door ere I ſlept, and so fortified, that so no force might open it. This town of Wulfleet was paſſing ſtrange, as ſtrange I dare ſay as any place whereof Plinie wrote, or any iland discovered in the voyages of Sir Walter Raleigh. But I was bound to my taſk, and would not flinch from it. I would record the occurrents in Wulfleet, howeuer ſtrange they might ſeem, yea, though they were ſuch things as would make a leſſer man forſake his purpoſe.

But I ſoon forgot my earlier dread, for the morning brought with it ſo fair a ſight as to diſpel all feare. The people of the town had erected ouernight a market of ſuch variety and abundance as I haue not ſeen the like. Animals walked among men, and men among animals, a true maruel!

As I looked on this aſſembled throng, greatly pleaſed and not a little amazed, a man approached me. He ſtartled me, but I quickly saw he was nothing but a farmer come to hawke his wares. “Would you haue a fowl, sir?” ſaid he, “My hens are fat and luſty, and you may haue them cheap.”

I said in reply, “No, I thanke thee.” He was a churliſh fellow, rude of ſpeech and meane of aſpect, and I felt no ſhame at thouing ſuch a man as that.

I went forthe among the people, and as I paſſed throughe the market and the ſtretes of the towne, euer lokyng aboute me with grete care, leſt I ſholde agayn encountre ſome peryl, ther appeared, from oute of the prees that ſame man whom I ſo dredde. And he was passyng foule of vyſage, as it ſemed to me, more foule than ony man I had ſene in al my lyf.

He turned hym towarde me and ſayd, “Straunger, wherefore art thou come hydder?”

And I anſwerd hym nott, for I knewe nott what I ſholde ſaye, ne what answere myght ſerue me beſt in ſuche a caas.

Than hee asked me, “Was it for that thou wouldeſt ſee the Maiſter?”

And verely this name dyd me ſore affright, for who was this Maiſter wherof he ſpake? And what maner of man was he, that his very name ſholde be ſpoken wyth ſuche reuerence and drede. I wolde haue fledde but he purſued me and by myn avys he was the ſwifter, for he caught me full ſoone.

I sayd to him, “What meaneſt thou? Who is the Maiſter?”

And he sayd, “I ſhall brynge the vnto hym, and thou ſhalt ſee for thy ſelf what maner of lorde he is.”

But I wolde not, and cryed out ayenſt hym with grete noyſe, leſt he ſholde take me thyder by violence and ayenſt my wille.

Bot þe man wolde me nat abandone þer, ne suffre me to passen forþ. I miȝt nat flee, for hys companiouns, of whom þer were a gret nombre, beſet me aboute, and heelden me faſt þat I ne scholde nat ascapen. And þei weren stronge menn and wel douȝti, of grymme contenaunce and fiers, and armed wiþ swerdes and wiþ knyues, so þat it were gret foly for eny man to wiþstonden hem.

So þei bounden me hond and foot and ledden me to þe one þei callede Maiſter, of whom I hadde herd so muchel and knewe so litel.

Þe sayde Maiſter, what that hee apperid bifore me, was verely a Deuill, or so me þouȝte, for neuer in al my lyf hadde I beholden so foule a creature. Hee bore a blak clok þat heng to þe grounde, and ſpake neuer a worde. Bot his countenaunce was hidous and so dredful þat my blood wexed colde to loken on hym. For he hadde nat þe visage of a man bot of a beest, wiþ þe teeþ and ſnoute of a wulf, scharpe and crueel. And his eres weren longe eres, as of a wulf, and bihynde him þer heng a gret tayl, as wulf haþ. And hys eyen schon in þe derknesse lyke brennyng coles.

Bot þei maden no answer, neyþer good ne yuel. Þei weren stille as stoon, and stoden about me as men þat wayte on þeir lordes commandement.

Þanne after muchel tyme spak þe Maiſter, and his wordes weren colde as wintres is. His vois was as þe crying of rauenes, scharpe and schille, and al þat herde hym weren adrade and durst nat speken.

“I deme þe to þe deeþ, straunger. Here ſchaltou dyen, fer fram þi kynne and fer fram þine owen londe, and non ſchal knowen þi name, ne non schal þe biwepe.”

And I sayde to hym, wiþ what boldenesse I miȝte gaderen, “Whi fareſt þou wiþ me þus? What treſpaas haue I wrouȝt ayeins þe, þat þou demeſt me so harde a dome?”

“Swie!” quoþ he, and smot me wiþ his honde, so þat I fel to þe erþe. And þe blod ran doun from mi mouþe.

And I swied, for þe grete drede þat was icumen vpon mee was more þan I miȝte beren. Mi herte bicam as stoon, and mi lymes weren heuy as leed, and I ne miȝte namore stonden ne spoken.

Þe euele man louȝ, whan that he sawe my peine, and it was a crueel louȝter, wiþouten merci or pitee as of a man þat haþ no rewþe in his herte.

Allas! I scholde neuer hauen icumen to þis toune of Wuluesfleete! Cursed be þe dai and cursed be þe houre þat I first sette foot þerinne!

Hit is muchel to seggen all þat pinunge hie on me uuroȝten, al þar sor and al þat sorȝe. Ne scal ic nefre hit forȝeten, naht uuhiles ic libbe!

Ac þer com me gret sped, and þat was a uuif, strong and stiþ! Heo com in among þe yuele men and me nerede fram heore honden.

Heo sloȝ þe heþene men þat me pyneden, sloȝ hem and fælde hem to þe grunde. Þer was blod and bale inouȝ. And hie feollen leien stille, for hie ne miȝten namore stonden. Ac þe Maister, þe uuraþþe Maister, he flaȝ awei in þe deorcnesse and was iseon namore.

Ic seide hire, “Ic þanke þe, leoue uuif, for þu hauest me ineredd from dæðe and from alle mine ifoan!”

Þæt ƿif me andsƿarode and cƿæð, “Ic eom Ælfgifu gehaten. Þu scalt me to ƿife nimen, þeah þe þu hit ne ƿite gyt, for hit is sƿa gedon þæt nan man ne nan ƿif ne mote heonon faren buten þurh þone dæð þæs Hlafordes.”

“Ac þær is gyt mare to donne her, forþi ƿe nabbaþ þone Hlaford ofslagenne. He is strong and sƿiðe yfel, and manige gode men he hæfð fordone on þisse stoƿe.”

And þæt heo sægde wæs eall soþ. Ic ƿifode on hire, and heo ƿæs ful scyne ƿif, ƿis ond ƿælfæst. Ne gemette ic næfre ær sƿylce ƿifman. Heo ƿæs on gefeohte sƿa beald swa ænig mann, and þeah hƿæþere hire andƿlite wæs ƿynsum and fæger.

Ac ƿe naƿiht freo ne sindon, for þy þe ƿe næfre ne mihton fram Ƿulfesfleote geƿitan, nefne ƿe þone Hlaford finden and hine ofslean. Se Hlaford hæfþ þisne stede mid searocræftum gebunden, þæt nan man ne mæg hine forlætan. Ƿe sindon her sƿa fuglas on nette, swa fixas on ƿere.

The blog ends there. No sign-off, no “thanks for reading.” Just a few sentences in a language that most of us lost the ability to follow somewhere around the thirteenth century.

So, how far did you get?

Let me take you back through it.

Written English has been remarkably stable over the last 300 years. Spelling was standardized in the mid-1700s, and grammar has barely changed at all. This means that, if you can read Harry Potter (1997–2003), you can read Robinson Crusoe (1719), which is good news for fans of the English novel.

What has changed is the voice.

Blog post became diary entry became travel letter. The format changed much faster than the language. Compare the very first line, “Well, I finally got to the town everyone has been talking about lately”, with the line from the 1800 section, “Hunger, that great leveller, makes philosophers of us all, and renders even the meanest dish agreeable.”

They're both performances of a sort: the 2000s protagonist is performing for his blog's audience, so the tone is chatty and personal. The 1800s protagonist, with the mind of a Georgian diarist, is performing for posterity, so he philosophizes.

The one visible change in the language itself is the appearance, in the 1700 passage, of the long s (ſ). This wasn't a different letter, just a variant form of s used in certain positions within a word. It disappeared fully from English printing in the early 19th century, although its use was dwindling even before that, which is why it does not appear in the 1800 passage. It's a typographic change rather than a linguistic one, but it's the first unmistakable sign that the text is getting older.

This is where the ground starts to move un­der our feet.

Before the mid 1700s, there was no such thing as stan­dard­ized spelling. Writers spelled words as they heard them, or as they felt like spelling them, which is why the 1500s and 1600s sec­tions look so alien, even when the words, un­der­neath the sur­face, are ones you know.

For another difficulty, take the word vntroubled from the 1600 section. This is our familiar untroubled, but the u is replaced by a v, because u and v were not yet considered separate letters. They were variants of the same letter, used to represent both sounds. The convention was to write v at the beginning of words and u in the middle, which gives us spellings like vnto (unto), euents (events), ouernight (overnight), and howeuer (however). It looks weird at first, but once you know the rule, the words become much more readable.

Another new arrival — or, more accurately, late departure — from the language is the letter thorn (þ), which first appears in the 1400 section. Thorn is simply th. That’s it. Wherever you see þ, read th, and the word will usually reveal itself: þe is the, þei is they, þat is that. If you’ve ever seen a pub called “Ye Olde” anything, that ye is actually þe, an attempt by early printers to write a thorn without having to make an expensive new letter.

Thorn’s com­pan­ion, yogh (ȝ), is more com­pli­cated. It rep­re­sents sounds that mod­ern English spells as gh or y — so miȝt is might, ȝe is ye. The rea­sons for this are a story unto them­selves.

But the most interesting change in this period isn’t a letter. Rather, it’s a pronoun. Notice the moment in the 1600 section where our blogger meets a farmer and says, “No, I thanke thee.” Then he adds, “I felt no ſhame at thouing ſuch a man as that.”

Thouing. To thou some­one, or to use thou when talk­ing to them, was, by the 1600s, a de­lib­er­ate so­cial state­ment. Thou was the old sin­gu­lar form of you; you was orig­i­nally the plural. Over the cen­turies, you came to be used as a po­lite sin­gu­lar, much as French uses vous. Gradually, you took over en­tirely. By Shakespeare’s time (1564–1616), thou sur­vived in two main con­texts: in­ti­macy (as in prayer) and in­sult. Our blog­ger is be­ing a lit­tle rude here. He’s look­ing down on a man he con­sid­ers be­neath him, and his lan­guage gives him a way of mak­ing his feel­ings per­fectly clear.

Somewhere in this sec­tion — and if you’re like most read­ers, it hap­pened around 1300 or 1200 — the lan­guage crossed a bound­ary. Up to this point, com­pre­hen­sion felt like it was drop­ping grad­u­ally, but now it’s fallen off a cliff. In one sec­tion, you could get by by squint­ing and guess­ing; in the next you were ut­terly lost. You have hit the wall.

There are two rea­sons for this. The first is vo­cab­u­lary. As you move back­wards in time, the French and Latin loan­words that make up an enor­mous pro­por­tion of the Modern English vo­cab­u­lary grow fewer and fewer. When you pass 1250, they drop off al­most al­to­gether. Where a mod­ern writer would say he un­der­went tor­ture, a 1200-era writer must say that he suf­fered pi­n­unge in­stead.

The far­ther back you go, the more the fa­mil­iar Latinate layer of English is stripped away, re­veal­ing the Germanic core un­der­neath: a lan­guage that looks to mod­ern eyes more like German or Icelandic than any­thing we’d call English.

The sec­ond rea­son for the dif­fi­culty is gram­mar. Old English (450–1100) was an in­flected lan­guage: it used end­ings on nouns, ad­jec­tives, and verbs to mark their gram­mat­i­cal roles in a sen­tence, much as Latin or mod­ern German do. Alongside these end­ings came a greater free­dom in word or­der, which makes sense given that the end­ings told you who was do­ing what to whom.

English lost most of these endings over the course of the period linguists call Middle English (1100–1450), and it tightened its word order as if to compensate. When you look at these final sections, if you can make out the words, you will see the effects of this freer word order. For example, in 1200 we read monige gode men he hæfð fordone ‘many good men he has destroyed’, where we’d expect a Modern English order more like “and he has destroyed many good men.”

To make matters worse, a few unfamiliar letters also appear: wynn (ƿ) is simply w, eth (ð) means the same as thorn (þ) — both represent th — and ash (æ) represents the vowel in cat and hat.

All of these factors combined likely made it difficult, if not impossible, to follow the plot. So let me tell you what happened. In the 1400 section, the blogger was seized. He was dragged before a creature they called the Master, and the Master was no man. He had the teeth and snout of a wolf, as well as a wolf’s long ears and great tail. His eyes glowed like burning coals. Wulfleet was once Wulfesfleot, ‘the Bay of the Wolf.’

In the 1300 section, the Master condemned our hero to death. In the 1200 section, a woman appeared and killed his captor. The Master, however, fled into the darkness. In the 1100 section, the woman revealed her name: Ælfgifu, ‘gift of the elves.’ She told the blogger — can we still call him that in 1100? — that they would marry, and she shared the terrible truth about Wulfleet: no one leaves until the Master is dead.

In the 1000 sec­tion, they are mar­ried. She is, he writes, as bold as any man in bat­tle, and yet fair of face. But they are not free. Together, through the dark streets of Wulfleet, they hunt the Master still.

The English in which I write this para­graph is not the English of fifty years ago, and it will not be the English of fifty years in the fu­ture.

Go back far enough, and English writ­ing be­comes un­recog­nis­able. Go for­ward far enough and the same thing will hap­pen, though none of us will be around to no­tice.

Our poor blog­ger did­n’t no­tice ei­ther, even as he and his lan­guage trav­elled back in time through the cen­turies. He just kept writ­ing even as he was car­ried off to some­where he could­n’t come back from. Some say that, far away in Wulfleet, he’s writ­ing still.

...

Read the original on www.deadlanguagesociety.com »

3 461 shares, 16 trendiness

What Not To Write On Your Security Clearance Form

Date: 01 Apr 88 1620 PST

From: Les Earnest

Subject: The previous ac­count” re­ferred to in RISKS-6.51

Reading a book got me into early trouble--I had an FBI record by age twelve. This bizarre incident caused a problem much later when I needed a security clearance. I learned that I could obtain one only by concealing my sordid past.

A friend named Bob and I read the book ``Secret and Urgent,'' by Fletcher Pratt [Blue Ribbon Books; Garden City, NY; 1942], which was an early popular account of codes and ciphers. Pratt showed how to use letter frequencies to break ciphers and reported that the most frequently occurring letters in typical English text are e-t-a-o-n-r-i, in that order. (The letter frequency order of the story you are now reading is e-t-a-i-o-n-r. The higher frequency of ``i'' probably reflects the fact that _I_ use the first person singular a lot.) Pratt’s book also treated more advanced cryptographic schemes.
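The tally Pratt describes is easy to reproduce. Here is a minimal Python sketch (mine, not from the original letter) that orders the letters of a text from most to least frequent:

```python
from collections import Counter

def letter_frequency_order(text: str) -> str:
    """Order the letters of `text` from most to least frequent."""
    counts = Counter(ch for ch in text.lower() if ch.isalpha())
    return "".join(letter for letter, _ in counts.most_common())
```

Run it over a large enough sample of ordinary English and the front of the ordering settles near Pratt's e-t-a-o-n-r-i, which is exactly the leverage a frequency attack on a simple cipher exploits.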

Bob and I de­cided that we needed to have a se­cure way to com­mu­ni­cate with each other, so we put to­gether a rather elab­o­rate jar­gon code based on the prin­ci­ples de­scribed in the book. I don’t re­mem­ber ex­actly why we thought we needed it–we spent much of our time out­side of school to­gether, so there was am­ple time to talk pri­vately. Still, you never could tell when you might need to send a se­cret mes­sage!

We made two copies of the code key (a de­scrip­tion of how to en­crypt and de­crypt our mes­sages) in the form of a sin­gle type­writ­ten sheet. We each took a copy and car­ried it on our per­sons at all times when we were wear­ing clothes.

I ac­tu­ally did­n’t wear clothes much. I spent nearly all my time out­side school wear­ing just a baggy pair of ma­roon swim­ming trunks. That was­n’t con­sid­ered too weird in San Diego.

I had re­cently been given glasses to wear but gen­er­ally kept them in a hard case in the pocket of the trousers that I wore to school. I fig­ured that this was a good place to hide my copy of the code key, so I care­fully folded it to one-eighth of its orig­i­nal size and stuck it at the bot­tom of the case, un­der my glasses.

Every chance I got, I went body surf­ing at Old Mission Beach. I usu­ally went by street­car and, since I had to trans­fer Downtown, I wore clothes. Unfortunately, while I was rid­ing the trol­ley home from the beach one Saturday, the case car­ry­ing my glasses slipped out of my pocket un­no­ticed. I re­ported the loss to my mother that night. She chas­tised me and later called the street­car com­pany. They said that the glasses had­n’t been turned in.

After a few weeks of wait­ing in vain for the glasses to turn up, we be­gan to lose hope. My mother did­n’t rush get­ting re­place­ment glasses in view of the fact that I had­n’t worn them much and they cost about $8, a large sum at that time. (To me, $8 rep­re­sented 40 round trips to the beach by street­car, or 80 ad­mis­sion fees to the movies.)

Unknown to us, the case had been found by a patriotic citizen who opened it, discovered the code key, recognized that it must belong to a Japanese spy, and turned it over to the FBI. This was in 1943, just after citizens of Japanese descent had been forced off their property and taken away to concentration camps. I remember hearing that a local grocer was secretly a Colonel in the Japanese Army and had hidden his uniform in the back of his store. A lot of people actually believed these things.

About six weeks later, when I happened to be off on another escapade, my mother was visited by a man who identified himself as an investigator from the FBI. (She was a school administrator, but happened to be at home working on her Ph.D. dissertation.) She noticed that there were two more men waiting in a car outside. The agent asked a number of questions about me, including my occupation. He reportedly was quite disappointed when he learned that I was only 12 years old.

He even­tu­ally re­vealed why I was be­ing in­ves­ti­gated, showed my mother the glasses and the code key and asked her if she knew where it came from. She did­n’t, of course. She asked if we could get the glasses back and he agreed.

My mother told the investigator how glad she was to get them back, considering that they cost $8. He did a slow burn, then said ``Lady, this case has cost the government thousands of dollars. It has been the top priority in our office for the last six weeks. We traced the glasses to your son from the prescription by examining the files of nearly every optometrist in San Diego.'' It apparently didn’t occur to them that if I were a real Japanese spy, I might have brought the glasses with me from headquarters.

The FBI agent gave back the glasses but kept the code key ``for our records.'' They apparently were not fully convinced that they were dealing just with kids.

Since our com­mu­ni­ca­tion scheme had been com­pro­mised, Bob and I de­vised a new key. I started car­ry­ing it in my wal­let, which I thought was more se­cure. I don’t re­mem­ber ever ex­chang­ing any cryp­to­graphic mes­sages. I was al­ways ready, though.

A few years later when I was in college, I got a summer job at the Naval Electronics Lab, which required a security clearance. One of the questions on the application form was ``Have you ever been investigated by the FBI?'' Naturally, I checked ``Yes.'' The next question was, ``If so, describe the circumstances.'' There was very little space on the form, so I answered simply and honestly, ``I was suspected of being a Japanese spy.''

When I handed the form in to the security officer, he scanned it quickly, looked me over slowly, then said, ``Explain this''--pointing at the FBI question. I described what had happened. He got very agitated, picked up my form, tore it in pieces, and threw it in the waste basket.

He then got out a blank form and handed it to me, saying ``Here, fill it out again and don’t mention that. If you do, I’ll make sure that you never get a security clearance.''

I did as he di­rected and was shortly granted the clear­ance. I never again dis­closed that in­ci­dent on se­cu­rity clear­ance forms.

On an­other oc­ca­sion much later, I learned by chance that putting cer­tain provoca­tive in­for­ma­tion on a se­cu­rity clear­ance form can greatly speed up the clear­ance process. But that is an­other story.

Edited and con­verted to HTML by Dan Bornstein.

...

Read the original on milk.com »

4 389 shares, 15 trendiness

Why is Claude an Electron App?

The state of coding agents can be summed up by this fact:

Anthropic spent $20k on an agent swarm implementing (kinda) a C compiler in Rust, but desktop Claude is an Electron app.

If you’re un­fa­mil­iar, Electron is a cod­ing frame­work for build­ing desk­top ap­pli­ca­tions us­ing web tech, specif­i­cally HTML, CSS, and JS. What’s great about Electron is it al­lows you to build one desk­top app that sup­ports Windows, Mac, and Linux. Plus it lets de­vel­op­ers use ex­ist­ing web app code to get started. It’s great for teams big and small. Many apps you prob­a­bly use every day are built with Electron: Slack, Discord, VS Code, Teams, Notion, and more.

There are down­sides though. Electron apps are bloated; each runs its own Chromium en­gine. The min­i­mum app size is usu­ally a cou­ple hun­dred megabytes. They are of­ten laggy or un­re­spon­sive. They don’t in­te­grate well with OS fea­tures.

But these down­sides are dra­mat­i­cally out­weighed by the abil­ity to build and main­tain one app, ship­ping it every­where.

But now we have cod­ing agents! And one thing cod­ing agents are prov­ing to be pretty good at is cross-plat­form, cross-lan­guage im­ple­men­ta­tions given a well-de­fined spec and test suite.

On the sur­face, this abil­ity should ren­der Electron’s ben­e­fits ob­so­lete! Rather than write one web app and ship it to each plat­form, we should write one spec and test suite and use cod­ing agents to ship na­tive code to each plat­form. If this abil­ity is real and adopted, users get snappy, per­for­mant, na­tive apps from small, fo­cused teams serv­ing a broad mar­ket.

But we’re still leaning on Electron. Even Anthropic, one of the leaders in AI coding tools, who keeps publishing flashy agentic coding achievements, still uses Electron in the Claude desktop app. And it’s a slow, buggy, bloated app.

So why are we still using Electron and not embracing the agent-powered, spec-driven development future?

For one thing, cod­ing agents are re­ally good at the first 90% of dev. But that last bit — nail­ing down all the edge cases and con­tin­u­ing sup­port once it meets the real world — re­mains hard, te­dious, and re­quires plenty of agent hand-hold­ing.

Anthropic’s Rust-based C compiler slammed into this wall after screaming through the bulk of the tests:

The re­sult­ing com­piler has nearly reached the lim­its of Opus’s abil­i­ties. I tried (hard!) to fix sev­eral of the above lim­i­ta­tions but was­n’t fully suc­cess­ful. New fea­tures and bug­fixes fre­quently broke ex­ist­ing func­tion­al­ity.

The re­sult­ing com­piler is im­pres­sive, given the time it took to de­liver it and the num­ber of peo­ple who worked on it, but it is largely un­us­able. That last mile is hard.

And this gets even worse once a program meets the real world. Messy, unexpected scenarios stack up, and development never really ends. Agents make it easier, sure, but hard product decisions still arise and require human judgment.

Further, with 3 different apps produced (Mac, Windows, and Linux), the surface area for bugs and support increases 3-fold. Sure, there are local quirks with Electron apps, but most of that is mitigated by the common wrapper. Not so with native!

A good test suite and spec could en­able the Claude team to ship a Claude desk­top app na­tive to each plat­form. But the re­sult­ing over­head of that last 10% of dev and the in­creased sup­port and main­te­nance bur­den will re­main.

For now, Electron still makes sense. Coding agents are amaz­ing. But the last mile of dev and the sup­port sur­face area re­mains a real con­cern.

...

Read the original on www.dbreunig.com »

5 294 shares, 18 trendiness

xaskasdf/ntransformer: High-efficiency LLM inference engine in C++/CUDA. Run Llama 70B on RTX 3090.

High-efficiency C++/CUDA LLM in­fer­ence en­gine. Runs Llama 70B on a sin­gle RTX 3090 (24GB VRAM) by stream­ing model lay­ers through GPU mem­ory via PCIe, with op­tional NVMe di­rect I/O that by­passes the CPU en­tirely.

3-tier adap­tive caching auto-sizes from hard­ware: VRAM-resident lay­ers (zero I/O) + pinned RAM (H2D only) + NVMe/mmap fall­back. Achieves 83x speedup over mmap base­line for 70B on con­sumer hard­ware (RTX 3090 + 48 GB RAM).

Bottleneck is PCIe H2D band­width at Gen3 x8 (~6.5 GB/s). Q4_K_M fits 10 more lay­ers in VRAM (36 vs 26), re­duc­ing tier B trans­fers. Layer skip (cosine sim­i­lar­ity cal­i­bra­tion) elim­i­nates 20/80 lay­ers per to­ken with min­i­mal qual­ity loss.

* Zero ex­ter­nal de­pen­den­cies be­yond CUDA Toolkit (no PyTorch, no cuBLAS)
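The 3-tier auto-sizing described above can be sketched as a greedy partition. The Python below is illustrative only, not part of this repo; the 18 GB usable-VRAM and 40 GB pinned-RAM budgets are assumptions chosen to be consistent with the 26-layer Q6_K figure quoted above:

```python
def partition_layers(n_layers: int, layer_mb: int,
                     vram_budget_mb: int, pinned_budget_mb: int) -> dict:
    """Greedily place layers: VRAM first (zero I/O), then pinned RAM
    (H2D copy only), and whatever is left falls back to NVMe/mmap."""
    vram = min(n_layers, vram_budget_mb // layer_mb)
    pinned = min(n_layers - vram, pinned_budget_mb // layer_mb)
    return {"vram": vram, "pinned": pinned, "nvme": n_layers - vram - pinned}
```

For example, `partition_layers(80, 670, 18_000, 40_000)` (80 layers of ~670 MB each for 70B Q6_K, assuming ~18 GB of the 24 GB VRAM is usable for weights) places 26 layers in the VRAM-resident tier.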

# Build

mkdir build && cd build

cmake .. -DCMAKE_BUILD_TYPE=Release \

-DCMAKE_C_COMPILER=gcc-14 \

-DCMAKE_CXX_COMPILER=g++-14 \

-DCMAKE_CUDA_COMPILER=/usr/local/cuda-13.1/bin/nvcc

cmake --build . -j

# Run (resident mode — model fits in VRAM)

./ntransformer -m /path/to/llama-8b-q8_0.gguf -p "Hello" -n 128

# Run (streaming mode — model larger than VRAM)

./ntransformer -m /path/to/llama-70b-q6_k.gguf -p "Hello" -n 32 --streaming

# Run with layer skip (fastest for 70B)

./ntransformer -m /path/to/llama-70b-q4_k_m.gguf -p "Hello" -n 32 --streaming --skip-threshold 0.98

# Self-speculative decoding (VRAM layers as draft, no extra model)

./ntransformer -m /path/to/llama-70b-q6_k.gguf -p "Hello" -n 32 --self-spec --draft-k 3

# Chat mode

./ntransformer -m /path/to/model.gguf --chat

# Benchmark

./ntransformer -m /path/to/model.gguf --benchmark -n 64

Running ntrans­former with NVMe di­rect I/O re­quires sys­tem-level mod­i­fi­ca­tions. An au­to­mated setup script han­dles all of them:

# Full first-time setup (interactive, cre­ates back­ups)

sudo ./scripts/setup_system.sh

# Check cur­rent sys­tem state (no changes)

sudo ./scripts/setup_system.sh --check

# NVMe-only (run af­ter every re­boot)

sudo ./scripts/setup_system.sh --nvme-only

* Above 4G Decoding: ON (required for 64-bit BAR map­ping)

* IOMMU: OFF (or leave on — the script adds the ker­nel pa­ra­me­ter)

WARNING: This pro­ject per­forms low-level PCIe op­er­a­tions (GPU MMIO writes to NVMe con­troller reg­is­ters, user­space NVMe com­mand sub­mis­sion, VFIO de­vice passthrough). While tested ex­ten­sively on RTX 3090 + WD SN740, in­cor­rect con­fig­u­ra­tion or hard­ware in­com­pat­i­bil­i­ties could the­o­ret­i­cally cause:

Data loss on the NVMe de­vice used for raw block stor­age

Never use your boot drive for NVMe di­rect I/O. Always use a ded­i­cated sec­ondary NVMe. The au­thors are not re­spon­si­ble for hard­ware dam­age or data loss. Use at your own risk.

For mod­els that don’t fit in VRAM, the NVMe back­end elim­i­nates the CPU from the data path:

# Build with NVMe sup­port (requires gpu-nvme-di­rect li­brary)

cmake .. -DCMAKE_BUILD_TYPE=Release -DUSE_GPUNVME=ON \

-DCMAKE_C_COMPILER=gcc-14 -DCMAKE_CXX_COMPILER=g++-14 \

-DCMAKE_CUDA_COMPILER=/usr/local/cuda-13.1/bin/nvcc

cmake --build . -j

# Write GGUF model to NVMe raw device

sudo ./scripts/restore_nvme.sh # ensure kernel driver is bound

sudo dd if=model.gguf of=/dev/nvme0n1 bs=1M oflag=direct status=progress

# Bind NVMe to VFIO for userspace access

sudo ./scripts/setup_nvme.sh # loads VFIO, forces D0, enables BusMaster

# Run with NVMe backend

sudo GPUNVME_PCI_BDF=0000:01:00.0 GPUNVME_GGUF_LBA=0 \

./build/ntransformer -m /path/to/model.gguf -p "Hello" -n 32 --streaming

# Restore NVMe to kernel driver when done

sudo ./scripts/restore_nvme.sh

The GGUF model file is writ­ten to raw NVMe blocks via dd

During in­fer­ence, each layer (~670 MB for 70B Q6_K) is read via 670 NVMe com­mands in ~202 ms

Data lands in CUDA pinned stag­ing mem­ory, then async DMA to GPU com­pute buffers
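Those two figures imply the effective NVMe read throughput. A quick back-of-the-envelope check in Python, using only the values quoted in the steps above:

```python
layer_mb = 670        # one 70B Q6_K layer, per the README
read_s = 0.202        # ~202 ms to read one layer
n_commands = 670      # NVMe commands issued per layer

throughput_gb_s = layer_mb / 1000 / read_s  # effective read bandwidth
mb_per_command = layer_mb / n_commands      # payload per NVMe command
```

That works out to roughly 3.3 GB/s at ~1 MB per command, about half the ~6.5 GB/s Gen3 x8 H2D ceiling quoted earlier, which is why the NVMe tier is the fallback rather than the fast path.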

...

Read the original on github.com »

6 245 shares, 26 trendiness

How Taalas "prints" LLM onto a chip?

A startup called Taalas recently released an ASIC chip running Llama 3.1 8B (3/6-bit quant) at an inference rate of 17,000 tokens per second. That’s like writing around 30 A4-sized pages in one second. They claim it’s 10x cheaper in ownership cost than GPU-based inference systems and uses 10x less electricity. And yeah, about 10x faster than state-of-the-art inference.

I tried to read through their blog, and they’ve literally “hardwired” the model’s weights on chip. Initially, this didn’t sound intuitive to me. Coming from a software background, with a hobbyist understanding of LLMs, I couldn’t wrap my head around how you just “print” an LLM onto a chip. So, I decided to dig into multiple blogposts, LocalLLaMA discussions, and hardware concepts. It was much more interesting than I had thought. Hence this blogpost.

Taalas is a 2.5-year-old company, and this is their first chip. Taalas’s chip is a fixed-function ASIC (Application-Specific Integrated Circuit). Kinda like a CD-ROM, a game cartridge, or a printed book, it only holds one model and cannot be rewritten.

LLMs consist of sequential layers. For example, Llama 3.1 8B has 32 layers. The task of each layer is to further refine the input. Each layer is essentially a set of large weight matrices (the model’s ‘knowledge’).

When a user inputs a prompt, it is converted into a vector of numbers, aka embeddings.

On a normal GPU, the input vector enters the compute cores. Then the GPU fetches the Layer 1 weights from VRAM/HBM (the GPU’s memory), does matrix multiplication, and stores the intermediate results (aka activations) back in VRAM. Then it fetches the Layer 2 weights and the previous result, does the math, and saves it to VRAM again. This cycle continues through the 32nd layer just to generate a single token. Then, to generate the next token, the GPU repeats this entire 32-layer journey.
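The weight traffic implied by that loop is easy to tally. A toy Python model (an illustration of the arithmetic, not real GPU code):

```python
def weight_traffic_bytes(n_layers: int, layer_bytes: int, n_tokens: int) -> int:
    """Each generated token walks every layer, and each layer's weights
    must be re-fetched from VRAM, so traffic grows linearly in both."""
    traffic = 0
    for _ in range(n_tokens):      # one full pass per generated token
        for _ in range(n_layers):  # fetch weights, multiply, write activations
            traffic += layer_bytes
    return traffic
```

For an 8 GB model, generating 100 tokens re-reads roughly 800 GB of weights; at about 1 TB/s of HBM bandwidth, that alone costs close to a second, which is the wall the next paragraph names.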

So, due to this constant back-and-forth, the memory bus induces latency and consumes significant amounts of energy. This is the memory bandwidth bottleneck, sometimes loosely called the Von Neumann bottleneck or the “memory wall.”

Taalas side­steps this wall en­tirely. They just en­graved the 32 lay­ers of Llama 3.1 se­quen­tially on a chip. Essentially, the mod­el’s weights are phys­i­cal tran­sis­tors etched into the sil­i­con.

Importantly, they also claim to have invented a hardware scheme where they can store 4-bit data and perform the multiplication related to it using a single transistor. I will refer to it as their “magic multiplier.”

Now, when the user’s input arrives, it gets converted into a vector and flows into the physical transistors making up Layer 1. It does multiplication via their “magic multiplier” and, instead of the result being saved in VRAM, the electrical signal simply flows down physical wires into the Layer 2 transistors (via pipeline registers, from what I understand). The data streams continuously through the silicon until the final output token is generated.

They don’t use external DRAM/HBM, but they do use a small amount of on-chip SRAM. Why SRAM? Due to cost and complexity, manufacturers don’t mix DRAM and logic gates. That’s why GPUs have separate VRAM. (Also, SRAM isn’t facing a supply chain crisis; DRAM is.)

Taalas uses this on-chip SRAM for the KV Cache (the tem­po­rary mem­ory/​con­text win­dow of an on­go­ing con­ver­sa­tion) and to hold LoRA adapters for fine tun­ing.
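The KV-cache budget that SRAM must cover is simple arithmetic. A Python sketch using Llama 3.1 8B's published shape (32 layers, 8 KV heads, head dimension 128); the fp16-per-entry assumption is mine, since Taalas hasn't disclosed its KV format:

```python
def kv_bytes_per_token(n_layers: int, n_kv_heads: int,
                       head_dim: int, bytes_per_elt: int) -> int:
    """One key vector and one value vector per layer, hence the factor of 2."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elt

per_token = kv_bytes_per_token(32, 8, 128, 2)  # 131072 bytes = 128 KiB per token
```

At 128 KiB per token, an 8K-token context would need about 1 GiB at these settings, so lower-precision KV entries or shorter contexts are the obvious levers for fitting it in on-chip SRAM.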

Technically yes, I read lots of comments saying that. But Taalas designed a base chip with a massive, generic grid of logic gates and transistors. To map a specific model onto the chip, they only need to customize the top two layers/masks. While still slow, this is much faster than building chips from the ground up.

It took them two months to develop the chip for Llama 3.1 8B. In the AI world, where one week is a year, that’s super slow. But in the world of custom chips, this is supposed to be insanely fast.

As some­one stuck run­ning lo­cal mod­els on a lap­top with­out a mas­sive GPU, I am keep­ing my fin­gers crossed for this type of hard­ware to be mass-pro­duced soon.

...

Read the original on www.anuragk.com »

7 225 shares, 10 trendiness

A CIA Analyst Shares Her Polygraph Experience

I first took a poly­graph when I ap­plied to the CIA and went through the ap­pli­cant screen­ing process.

To prepare for the test, I read A Tremor in the Blood by David T. Lykken. The book described the use of control versus relevant questions as well as countermeasures such as butt-clenching. I had no desire to use countermeasures. I wasn’t out to “beat” the test: I wanted to understand how it worked. A future colleague at the Agency advised me, “Spill your guts.” I thought it was good advice, and I planned to follow it.

I knew I was tak­ing a risk in ap­ply­ing to the Agency. I worked as a de­fense con­trac­tor on a pro­ject for the CIA, so I al­ready held CIA TS/SCI clear­ances. If I failed the poly­graph, I could lose my clear­ances, and I might lose my job as well.

I flew to Northern Virginia for two days of pre-em­ploy­ment screen­ing. A bus took us from the ho­tel to a non­de­script build­ing in Vienna. The ex­am­iner was a young woman. She asked me to sign a con­sent form and told me not to talk about the poly­graph with any­one else.

In the pretest interview, the examiner asked, “Did you make any false statements on your application?” I said, “Yes. Under height and weight, I put 130 lb. I actually weigh 134 lbs.” She laughed. Then she asked if I’d read about polygraphs. I said I’d just finished A Tremor in the Blood. She claimed she’d never heard of it. I was surprised. It’s an important book about her field; I would have thought all polygraphers knew of it.

She wired me up, and the polygraph began. My hand turned purple, which hurt terribly. My body twitched from the exaggerated stillness the test required. Halfway through, the examiner left the room, saying she had to show the charts to her supervisor. I came to think of this as the “What will it take to get you to buy the car?” part of the test. I waited twenty minutes or so, resisting the urge to press my nose against the one-way mirror and peer into it through cupped hands.

The examiner came back. “You’re having a problem with one of the questions. Do you know which one?” I had no idea. I’d answered all of them truthfully. She said, “How about, ‘Have you ever lied to your boss?’” I said I hadn’t. She pressed me until I came up with an occasion when I’d passed my boss in the hall. She said, “How are you?” and I said, “Fine.” But I wasn’t fine; I was in the middle of a cancer scare.

I failed the poly and was told to come back the next day. I could­n’t un­der­stand why I had­n’t passed. I’d spilled my guts, and I had­n’t used coun­ter­mea­sures.

On the bus back to the hotel, a woman was sobbing, “Do they count something less than $50 as theft?” I felt bad for her because she was crying, but I wondered why a petty thief thought she could get into the Agency.

That evening, the other ap­pli­cants went to the nearby Tysons Corner mall to go shop­ping. I did­n’t feel fes­tive enough to join them so I with­drew to my room. I or­dered from room ser­vice but could­n’t eat my din­ner. I sat by the win­dow for hours, look­ing into the dark­ness. She’d seen some­thing in­side me I had­n’t known about. I’d al­ways thought I was a good per­son. Now I was­n’t sure.

The next morning, I rode back to the same crumbling building where I’d been polygraphed the day before. The examiner said, “Now that you’ve had a chance to think about it, is there anything you’d like to say?” He didn’t need to ask me twice. “You bet there is. I did my part; now I expect you to do yours.” It wasn’t until late that afternoon, when I was waiting for my plane at Dulles, that I realized “Is there anything you’d like to say?” does not mean, “Please tell us all our faults.”

The ex­am­iner wired me up. He be­gan with what he called a cal­i­bra­tion test. He took a piece of pa­per and wrote the num­bers one through five in a ver­ti­cal col­umn. He asked me to pick a num­ber. I picked three. He drew a square around the num­ber three, then taped the pa­per to the back of a chair where I could see it. I was sup­posed to lie about hav­ing se­lected the num­ber three.

The test began. He asked, “Is the number one? Is the number two? Is the number three?” I said “No,” and butt-clenched. “Nice strong response!” he said.

I was­n’t hid­ing any­thing, so I had no rea­son to use coun­ter­mea­sures. On the other hand, the an­a­lyt­i­cal part of me en­joyed pok­ing at the test to fig­ure out how it worked. And I was still mad about hav­ing failed the pre­vi­ous day, so I was mess­ing with him. Curiosity sat­is­fied, I did­n’t try the butt-clench again, not on that day or ever.

During the real test, the examiner said, “Your breathing is unnatural.” He described a kid in little league who kept swinging the bat wrong, and then suddenly got it. If I just kept trying, I could get it too. It took almost four hours, but by the end of the session, he told me I’d passed. I’d just cleared the last hurdle to joining the CIA.

I en­tered on duty and be­gan a week of ori­en­ta­tion. On the first morn­ing, we in­tro­duced our­selves and said a few words about what we’d be do­ing at the Agency. Four guys sit­ting to­gether at a table up front iden­ti­fied them­selves as poly­g­ra­phers. Everyone else in the room hissed. It was­n’t friendly teas­ing, ei­ther. At lunch, no one would sit with them.

A few years into my Agency career, I took a battery of vocational and aptitude tests including the MMPI, a personality inventory. The MMPI results came back, and they said I fell on the extreme end of the honesty spectrum, or in the words of the Agency psychiatrist, “You’re honest to the point of being naïve.” I was kind of offended. Naïve? Who are you calling naïve? On the other hand, it was nice to have that in my official file.

CIA em­ploy­ees were re­quired to take a poly­graph every five years. We all did it, but there was a lot of com­plain­ing.

I never heard that any­one wor­ried about los­ing their job to the poly. It was said that new ap­pli­cants failed in large num­bers, but once you were in, you were in. The re-poly could be un­pleas­ant, though. If you failed, you had to keep tak­ing it. It was said that there was an up­per man­ager who just could­n’t pass, no mat­ter how many times he tried. After some­thing like seven at­tempts, Polygraph gave up and stopped call­ing him back. The man­ager re­mained in his job.

We weren’t sup­posed to dis­cuss the poly­graph among our­selves, but of course we did. When peo­ple came back from a poly, they talked about how it had gone. A woman who’d never seen an il­le­gal drug in her life was ac­cused of be­ing a ma­jor drug user. Someone who hated com­put­ers so much that she had the sec­re­tary print out her emails so she could read them was in­ter­ro­gated for hours about hack­ing into Agency net­works.

A pat­tern emerged. In a nor­mal poly­graph, there was of­ten a gross mis­match be­tween a per­son and the ac­cu­sa­tions made against them. I don’t think the of­fi­cials at Polygraph had any idea how un­in­ten­tion­ally hu­mor­ous this was. Not to the per­son it hap­pened to, of course, but the rest of us found it hys­ter­i­cally funny.

Once, the examiner got in my face and shouted, “Admit it, you’re deeply in debt. Creditors are pounding on your door!” I said, “You’ve just revealed to me that you haven’t bothered to pull my credit report. Are you lazy, or are you cheap?” I offered to pay the $15 fee myself, but he didn’t take me up on it.

Another time, the examiner accused me of working for a foreign intelligence service and traveling overseas to meet my handler. I rolled my eyes. “Do you want to see my passport? It’s been expired for nine years.” No, he didn’t want to see my passport.

I told my of­fice mates I’d fig­ured out why the ac­cu­sa­tions were so con­sis­tently off-the-wall. Polygraph must have a ques­tion of the day. Everyone who went in on Monday would be ac­cused of deal­ing drugs. Tuesday was es­pi­onage day, Wednesday was mar­i­tal in­fi­delity day, and so on.

Then Aldrich Ames was ar­rested, and poly­graphs be­came more bru­tal. People who’d never had trou­ble pass­ing were be­ing called back two and three times. Thank you, Mr. Ames.

I overheard a fragment of conversation at CIA Headquarters: “I thought I was a good person, but after that last poly, I’m not so sure.” It was hard on people.

Because of Ames, the Agency introduced a policy of random polygraphs. I knew someone who completed a five-year poly. A few months later, he was called back for a random one.

I’d been at the Agency for ten years when I went through the rein­ves­ti­ga­tion poly again.

The test was administered by an inexperienced young woman. In the pretest interview, she asked me a question. I answered truthfully. She asked again, as if she were looking for a better answer. Call me a liar. It made me furious.

Well into the ses­sion, she said I was fail­ing. I was so frus­trated, I started to cry. I knew I could pass it if I just had enough time, but she had to run an er­rand. She failed me and ended the ses­sion early.

I wrote a let­ter of com­plaint to the Chief of Polygraph say­ing I did­n’t like the way I’d been treated. He sent me a let­ter of apol­ogy. He said he’d re­viewed the tapes and that he was sorry about the abuse. He said the poly­g­ra­pher had been rep­ri­manded.

I was surprised to get a response of any kind, but a letter of apology was astonishing. Although the cynical part of me might call it a “please don’t sue me” letter.

But I was also puz­zled. The apol­ogy was for the wrong thing. The Director seemed to think I’d gone through a par­tic­u­larly abu­sive poly­graph. I had­n’t. It was a per­fectly or­di­nary poly­graph, no dif­fer­ent from any other. I just wrote to say that I did­n’t like poly­graphs.

I had to take the test again be­cause I’d failed the first one. The sec­ond ex­am­iner was ex­pe­ri­enced and had a mild dis­po­si­tion. I passed with­out dif­fi­culty.

Over the course of many years at CIA, I formed an im­pres­sion that a typ­i­cal poly­graph in­volves an in­ex­pe­ri­enced ex­am­iner who grills you harshly and then fails you, fol­lowed by a re-poly with a more ex­pe­ri­enced ex­am­iner who guides you through it with no fuss. I’ve had two poly­graphs in which I passed on the first try. On both oc­ca­sions, an ex­pe­ri­enced ex­am­iner con­ducted the test.

I worked at CIA for eleven years. It was a ter­rific ex­pe­ri­ence, and I count those years as among the hap­pi­est in my life. I left only be­cause I got mar­ried and had a baby. CIA is many things, but fam­ily friendly is not one of them.

I joined a small de­fense con­trac­tor known for its work-life bal­ance. The com­pany sup­ported most of the three-let­ter agen­cies, and I set­tled into do­ing the same sort of work I’d done be­fore.

My first as­sign­ment was on a National Reconnaissance Office (NRO) pro­ject. The NRO clear­ances re­quired a poly, which I agreed to. The test was ad­min­is­tered by a woman with many years’ ex­pe­ri­ence. She told me I’d passed. It’s pos­si­ble to have a poly­graph that is­n’t con­fronta­tional and does­n’t leave you feel­ing vi­o­lated. It’s rare, though.

I’d been sup­port­ing an FBI pro­ject for sev­eral years when the phone rang in the kitchen. Someone from the FBI asked me to come in for a rou­tine poly­graph, a re­quire­ment to keep my clear­ances up to date.

To pre­pare, I read the 2002 National Academy of Sciences re­port. It was an eye-opener, even though it con­firmed what I’d al­ready be­gun to sus­pect, that the poly­graph did­n’t work.

I ar­rived for my poly­graph ap­point­ment, which would be ad­min­is­tered in a build­ing across the street from my chil­dren’s or­tho­don­tist.

I stood in the marble lobby, waiting for the examiner to come and collect me. The 2002 NAS report made me cynical. In the interview room, I thought, “Don’t look at the man behind the curtain.”

The examiner asked if I’d ever visited the anti-polygraph sites online. I said yes, that’s where I found the 2002 NAS report. He said he’d never heard of it. He also said there was no such thing as a control question. I hate being lied to; it makes me angry.

In the pretest in­ter­view, he asked how many poly­graphs I’d had be­fore this one. I was­n’t sure, but I thought it was prob­a­bly six or seven. He asked what med­ica­tions I took. I listed every­thing, in­clud­ing a cor­ti­sone cream for a patch of eczema on my hand. He went on and on about my skin con­di­tion. What does this have to do with na­tional se­cu­rity, and why is it any of your busi­ness? Maybe vi­o­lat­ing peo­ple’s bound­aries is a way to es­tab­lish dom­i­nance.

The ex­am­iner wired me up, and we did the card trick. He drew a ver­ti­cal col­umn of num­bers, told me to pick one, drew a box around it, and pinned it up where I could see it.

It oc­curred to me that we were play­ing a guess­ing game in which the ex­am­iner knew the an­swer be­fore the game be­gan. I’d have been more im­pressed if he’d had to dis­cover my num­ber us­ing only the poly­graph equip­ment and/​or his abil­ity to read peo­ple. I was tempted to sug­gest it, but I did­n’t think the idea would be well re­ceived.

The test proceeded normally. The examiner left the room. When he came back, he didn’t meet my eyes, and the muscles of his face were tight. “The test shows deception.” He was right. I had been deceptive, but only about one thing. I hadn’t told him I knew the polygraph didn’t work.

The examiner hammered on me to come clean. I kept repeating, “No, I can’t think of anything else.” I was tempted to make something up, just to make it stop, but I’m not comfortable lying.

At the end of the interview, the examiner looked at me, gloating. “You claim you’ve taken seven polygraphs before today, but later, you said it was only six.” That’s all you’ve got on me? I’m underwhelmed.

Being able to rec­og­nize in­ter­ro­ga­tion tech­niques did­n’t make me im­mune to them. Exhausted, I hung my head, feel­ing like he’d bro­ken me un­der in­ter­ro­ga­tion. I’ve al­ways found the shame of be­ing bro­ken was the worst thing about be­ing poly­graphed.

A few days later, I was still se­ri­ously rat­tled. I did­n’t re­al­ize how badly the poly­graph had af­fected me un­til I plowed through a red light and al­most hit an­other car. I’d never run a red light be­fore. I could­n’t think what had got­ten into me.

I told a rel­a­tive I’d had a re­ally hard time with the poly­graph. There was an em­bar­rassed si­lence, fol­lowed by a rapid change of sub­ject. What did you do wrong, and why don’t you own up to it? This from some­one who’s known me from birth and has al­ways taken my side. It shows how strongly peo­ple in our cul­ture be­lieve in the poly­graph.

I wrote to my con­gress­man and asked him to sup­port leg­is­la­tion ban­ning the poly­graph. I said it was a civil rights is­sue to sub­ject an en­tire work­force to a bru­tal po­lice-style in­ter­ro­ga­tion in the ab­sence of prob­a­ble cause, es­pe­cially if they might be harmed by it.

Although I failed the FBI poly­graph, I re­mained on the pro­ject.

Much to my sur­prise, I was granted ad­di­tional clear­ances and as­signed to more highly clas­si­fied work than I’d been do­ing be­fore the poly­graph.

The work dried up, and I moved on to something else. Seven months after the failed FBI poly, I was summoned to FBI Headquarters to talk “about your polygraph.”

It took sev­eral hours to drive down­town and find a place to park. I found FBI head­quar­ters. Two burly agents met me at the door. I won­dered if they had the power to ar­rest me. My hands shook. It was like a scene from a movie where gov­ern­ment of­fi­cials are try­ing to be in­tim­i­dat­ing. It worked. As we walked across the lobby, I thought I was go­ing to faint.

They es­corted me up­stairs. Neither of them spoke. They took me to a small room. A folder lay open on a desk. Papers spilled from it.

I said, “Does it matter that I was laid off the project four months ago?”

It was like watching method actors break character. “We’re sorry, we didn’t mean to bring you all the way down here for nothing. Maybe you could visit the spy museum? It’s right across the street.”

Years later, I joined a DIA pro­ject which re­quired a CI (counterintelligence) poly­graph. I liked the work and the peo­ple do­ing it, so I agreed.

I started in the sum­mer. In late September, Polygraph asked me to come in. I was no longer afraid of them. I did­n’t doubt the ap­pa­ra­tus took ac­cu­rate mea­sure­ments of heart rate, blood pres­sure, res­pi­ra­tion, and per­spi­ra­tion, but I’d stopped be­liev­ing the ex­am­iner could in­fer a per­son’s emo­tions or thoughts from the data in the trac­ings.

To prepare for the test, I read the DoD Polygraph Institute Interview and Interrogation handbook (PDF), which describes techniques like “offer hope, take it away.” That book is evil. I made copies and gave them to my writer friends.

I now knew the ex­am­in­ers worked from a script like the one in the hand­book. On page 24, the script calls for em­pa­thetic lis­ten­ing to put the sub­ject at ease. On page 53, it says to change gears and con­front the sub­ject with ev­i­dence of a crime. I thought that since I was fa­mil­iar with the script, the poly­g­ra­pher would lose his power over me.

I ar­rived for the ap­point­ment. The ex­am­iner asked if I was ner­vous, and I said no (true). In the pretest in­ter­view, he asked if I’d vis­ited the anti-poly­graph web­sites to learn how to beat the test. I said I did read the sites, but I was look­ing for ar­ti­cles on psy­cho­log­i­cal dam­age from in­ter­ro­ga­tion and how to re­cover from it. Beating the poly­graph was of no in­ter­est to me (true).

The real test began. In my mind, I turned the pages of the script. The examiner excused himself and stepped outside. He returned, and whatever warmth there’d been in his manner had vanished. Oh c**p, we’re on page 53. He accused me of deception. I hunkered down until it was over. I reminded myself, “Whatever you do, don’t make admissions.” I didn’t have anything to confess, but if the pressure were bad enough, I might be tempted to make things up.

At the end of the test, the ex­am­iner told me I’d failed. But, and this is huge, for the first time I did­n’t leave the poly bro­ken and weep­ing. I was an­noyed, but I had­n’t been harmed.

Two weeks later, they asked me to come in for a retest. The ex­am­iner was dif­fer­ent, but the test was the same. Everything went smoothly, and I as­sumed I’d passed. The ex­am­iner said he needed to run the chart by his su­per­vi­sor, and I’d hear back later.

Weeks went by, and then months. The com­puter ac­count I would get as soon as I passed the poly was still in limbo, which was not a good sign.

During the quiet time over the Christmas break, I came to be­lieve I’d lost the abil­ity to pass a poly­graph. By this time, I’d failed three in a row, two for DIA and the one for FBI.

I won­dered if it was be­cause I’d grown cyn­i­cal. I now thought of the poly­graph ap­pa­ra­tus as a colan­der at­tached to a Xerox ma­chine. Even so, I’d tried to co­op­er­ate. I’d an­swered their ques­tions truth­fully, and I had­n’t used coun­ter­mea­sures. But I no longer feared the ex­am­in­ers, and I no longer broke un­der in­ter­ro­ga­tion. According to the hand­book, break­ing the sub­ject is an im­por­tant part of the process.

On the first day back from Christmas break, my de­part­ment head stopped by to give me my an­nual salary re­view. It was my high­est raise in five years. She said the peo­ple on my pro­ject liked my work, and they liked me.

Later the same day, I got a call from Polygraph ask­ing me to come in the next morn­ing to an­swer the poly­graph ques­tions un­der oath rather than wired up.

I had no problem with that. As a mentor at the Agency said about testifying before Congress, “Under oath or not under oath. Either way you’re telling the truth, so what’s the difference?”

The two polys I’d already taken for DIA had been CI (counterintelligence). But halfway through, the examiner asked, “How often do you look at pornography?” I blinked with surprise. “Excuse me?” That question didn’t belong on a CI poly; that was a Lifestyle question.

When the interview ended, the examiner said, “I believe you” in a voice that lacked conviction. Then he added, “I’m going to try to get you another polygraph.”

“I don’t want another.” I meant it.

Over the next few days, I jumped when­ever the phone rang. Weeks went by be­fore I be­gan to re­lax. And then my long-de­layed com­puter ac­count was ap­proved. As I un­der­stood how the sys­tem worked, the poly­graph had just been ad­ju­di­cated in my fa­vor.

A few days later, I was sitting at my desk eating lunch when a scheduling clerk from Polygraph called. “You missed your appointment this morning,” he said. Next time, you might consider telling me about it ahead of time. He said, “Don’t even worry about it. Everyone makes mistakes.” My jaw dropped. He just called me a liar.

The clerk wanted me to come in the next day. I was se­ri­ously rat­tled. I’d al­ready de­cided I was not go­ing to take an­other poly­graph, ever. I stalled. And then I re­mem­bered my newly granted com­puter ac­cesses. I was al­ready cleared. Probably the pa­per­work had­n’t caught up yet. I said I needed a few days to fig­ure out if the poly was still nec­es­sary, and I would get back to him.

I checked with Security. They said, “No, you haven’t been adjudicated yet.”

I could­n’t get out of it, but I could put it off un­til it was OBE (overtaken by events). In a few months, the pro­ject would move to a new lo­ca­tion be­yond my com­mut­ing range. I planned to stay un­til right be­fore the move and then find some­thing else. I did­n’t have to refuse the poly, I just had to con­duct a de­lay­ing ac­tion.

But Polygraph was in­sis­tent, and I was­n’t sure I could hold them off much longer. I asked about with­draw­ing my ap­pli­ca­tion for DIA clear­ances, but was ad­vised to watch and wait.

Or, I could leave the pro­ject now and find other work. My TS/SCI from CIA was still ac­tive, but I knew that even­tu­ally CIA would want me to take a poly­graph. I also held a Top Secret clear­ance from the Air Force. The Air Force TS, a vanilla TS (not SCI) was based on an SF-86 back­ground in­ves­ti­ga­tion. It did not re­quire a poly­graph. And at the very worst, I could do un­clas­si­fied work. My com­pany had a large num­ber of un­clas­si­fied pro­jects, and many had work for an­a­lysts.

A few days af­ter I’d told the clerk I’d get back to him, I gave a pre­sen­ta­tion about the work I’d been do­ing on my task. It was well re­ceived, and I stepped down from the podium cov­ered in glory. As I left the meet­ing, the pro­gram man­ager pulled me aside and said my task had lost its fund­ing. It was an oc­cu­pa­tional haz­ard of de­fense con­tract­ing, and not any­thing sin­is­ter.

I wasn’t happy about being cut from the project, but it did solve my polygraph dilemma. If I wasn’t on the project anymore, I didn’t need to apply for DIA clearances, and I didn’t need to take the polygraph.

Around noon, the clerk from Polygraph called again. I told him I did­n’t have to take the poly any­more be­cause I’d been laid off the pro­ject. In mid-af­ter­noon, he called back. He said he’d made a few phone calls and learned that I had­n’t been laid off from my com­pany, af­ter all. Wonderful, call me a liar.

He pressed me to schedule another poly. I said no. His voice turned whiny. “You have to do it!” I dug in my heels. “I’ve already told you no.” He slammed down the phone.

I’d just re­fused a poly­graph. I felt like Neville Longbottom when he drew the sword of Gryffindor and ad­vanced on Lord Voldemort. I was filled with right­eous in­dig­na­tion, and it gave me courage.

For the rest of the day, I was pep­pered with emails and phone calls sum­mon­ing me to the of­fices of up­per man­agers. Some of them were so far up the chain, I’d never heard their names be­fore. They were uni­formly kind to me. I had­n’t known it, but I was­n’t the first per­son in our di­vi­sion to refuse the poly­graph. Polygraph con­sci­en­tious ob­jec­tors—who knew that there was such a thing?

Polygraph told my man­age­ment I’d lied about be­ing laid off. It caused a ma­jor flap. My pro­ject badges were taken from me, even the un­clas­si­fied ones, and I was de­briefed from all my DIA clear­ances.

To their credit, DIA did­n’t tell any other agen­cies they’d taken my badge and de­briefed me. Weeks later, my CIA clear­ances were still ac­tive.

Six weeks went by. I put in for a few days of leave to take the kids to an alumni event at my uni­ver­sity. I worked half a day and then went home to pack.

Sometime af­ter 4:00 pm, when I was load­ing the last suit­case into the car, the de­part­ment sec­re­tary called to say I was late for a 4:00 meet­ing with my de­part­ment head. This was the first I’d heard of it. The meet­ing had been put on the cal­en­dar af­ter I’d left for the day. I said I was sorry I could­n’t be there, but I’d be back in the of­fice first thing on Monday.

Just before close of business Monday, I was summoned to another meeting with the department head. When I arrived, my boss was sitting in her office with a woman from Human Resources. As a general rule, if your boss wants to see you and HR has been asked to sit in, you know it’s going to end badly.

My de­part­ment head put a piece of pa­per in front of me. It said I’d agreed to take a poly­graph as a con­di­tion of work­ing on the DIA pro­ject, but when they tried to sched­ule it, I can­celed the ap­point­ment. As a re­sult, I was cut from the pro­ject.

“No, I took two polygraphs. I turned down the third because I wasn’t on the project anymore.” Although by then I thought of myself as a polygraph conscientious objector, and I would have refused whether or not I was still on the project.

The rest of the let­ter said the com­pany would be­gin ter­mi­na­tion pro­ceed­ings against me. I was eight years from re­tire­ment. I was­n’t count­ing the days un­til re­tire­ment. I liked go­ing to work. Even when I was be­tween as­sign­ments and not get­ting paid, I still came into the of­fice and put in a full day.

I spoke to a lawyer. She said I lived in an “at will” state, which means employees can be dismissed for any (legal) reason, or for no reason at all. I was being terminated for refusing the polygraph, and it was legal.

I de­cided to re­sign rather than fight.

...

Read the original on antipolygraph.org »

8 222 shares, 9 trendiness

China’s cut-rate DRAM tests Samsung, SK in HBM4 race

CXMT halves DDR4 prices as YMTC gains ground in NAND, rais­ing con­cerns over Korea’s legacy ex­po­sure

Samsung Electronics and SK hynix are locked in a race to mass-pro­duce sixth-gen­er­a­tion high-band­width mem­ory, but Chinese ri­vals are mak­ing gains else­where — flood­ing the legacy DRAM mar­ket with chips priced at roughly half the go­ing rate.

According to in­dus­try sources on Friday, China’s top DRAM man­u­fac­turer CXMT has been of­fer­ing older-gen­er­a­tion DDR4 chips at about half the pre­vail­ing mar­ket rate. The move comes as global sup­ply short­ages have dri­ven prices sharply higher, al­low­ing the com­pany to ag­gres­sively push legacy prod­ucts for mo­bile de­vices and PCs in a bid to boost mar­ket share.

DDR4 remains a mainstay component in devices such as PCs and TVs, and has risen in price recently.

Data from DRAMeXchange showed that as of end-Jan­u­ary, the av­er­age fixed con­tract price of PC DRAM DDR4 8Gb stood at $11.50, up 23.7 per­cent from $9.30 a month ago. Compared with $1.35 a year ear­lier, the price has jumped more than eight­fold. DRAM prices have climbed for 10 con­sec­u­tive months, mark­ing the high­est level since the mar­ket tracker be­gan com­pil­ing data in June 2016.

Against this backdrop, cut-price Chinese chips are proving tempting. US hardware firms HP and Dell are reportedly conducting quality tests on CXMT’s DRAM, while Taiwan’s Asus and Acer have sought cooperation with Chinese partners. Signs are emerging that aggressive pricing is translating into demand.

“Chinese firms are waging a volume-based strategy starting with general-purpose memory, backed by state subsidies and domestic demand from AI servers and locally developed GPUs,” said an industry source who requested anonymity. “As Korean companies concentrate on HBM4, there are visible cracks emerging in (their hold on) the legacy market.”

The chal­lenge for Korean chip­mak­ers is that the legacy seg­ment still ac­counts for a sig­nif­i­cant por­tion of their earn­ings. More than half of the to­tal DRAM pro­duc­tion ca­pac­ity at both Samsung and SK hynix is un­der­stood to be al­lo­cated to gen­eral-pur­pose prod­ucts. Even if they main­tain lead­er­ship in HBM4, a deep­en­ing ero­sion of the main­stream mar­ket could even­tu­ally weigh on prof­itabil­ity.

Chinese play­ers, mean­while, are not lim­it­ing their push to low-cost vol­ume sales. The cash and know-how gained from legacy chips is fund­ing a push into higher-end prod­ucts.

CXMT is in the process of con­vert­ing wafer ca­pac­ity equiv­a­lent to about 20 per­cent of its to­tal DRAM out­put — some 60,000 wafers per month — at its Shanghai plant to the fourth-gen­er­a­tion HBM3 chip pro­duc­tion. The pos­si­bil­ity of ex­pand­ing into post-HB­M3E prod­ucts is also be­ing dis­cussed.

The Shanghai fa­cil­ity is be­lieved to have pro­duc­tion ca­pac­ity two to three times larger than the com­pa­ny’s head­quar­ters plant in Hefei. Equipment in­stal­la­tion is ex­pected to be com­pleted in the sec­ond half of this year, with mass pro­duc­tion slated for next year. Although HBM3 and the fifth-gen­er­a­tion HBM3E chips trail HBM4 in per­for­mance, they re­main widely used in AI data cen­ters.

China’s ad­vance is not con­fined to DRAM. YMTC has been gain­ing trac­tion in the NAND flash sec­tor as well, cap­i­tal­iz­ing on com­pet­i­tively priced mo­bile prod­ucts. The com­pany recorded a 10 per­cent share of the global NAND mar­ket for the first time last year, and mo­men­tum is widely ex­pected to con­tinue.

YMTC is cur­rently build­ing a third fab­ri­ca­tion plant in Wuhan, tar­get­ing op­er­a­tions next year. Half of the fa­cil­i­ty’s pro­duc­tion ca­pac­ity is to be al­lo­cated to DRAM. It will ini­tially fo­cus on legacy DRAM prod­ucts, with the pos­si­bil­ity of ex­pand­ing into HBM pro­duc­tion in part­ner­ship with lo­cal as­sem­bly firms. Industry sources say the pat­tern is fa­mil­iar — build scale in legacy DRAM, then move up the value chain.

“At this stage, Chinese manufacturers are relying on aggressive pricing to build scale in legacy DRAM,” the anonymous source said. “But over time, the technology gap may narrow more quickly than expected. Even if Korean firms maintain leadership in HBM, neglecting the mainstream segment could weigh on profitability in the longer run.”

yeeun@her­ald­corp.com

...

Read the original on www.koreaherald.com »

9 220 shares, 10 trendiness

Parse, don't Validate and Type-Driven Design in Rust — ramblings of @harudagondi


In the Rust Programming Language Community Server, there’s a tag named -parse-dont-validate which links to an article about the concept of avoiding validation functions and encoding invariants at the type level instead. I usually recommend it to beginners/intermediates in Rust who are struggling with designing APIs.

The only prob­lem is that it uses Haskell to ex­plain its con­cepts.

Yeah, it’s fine, but for beginners unfamiliar with the functional paradigm, it might not be so approachable. And so I wanted to write a blog post about this pattern, but in a rather Rust-centric way. So let’s start!

One basic example I can give is a function that divides a number by another number. This is fine, but unfortunately it can panic when `b` has the value of zero:

```
   Compiling playground v0.0.1 (/playground)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 1.28s
     Running `target/debug/playground`
thread 'main' (41) panicked at src/main.rs:2:5:
attempt to divide by zero
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```

That’s fine and dandy if we want erroneous values to fail loudly at runtime, but what if we want stronger guarantees? This is especially important when some operations don’t fail loudly: floating-point division by zero, for example, silently produces infinity. There’s no error! But do we want that?

We could add an `assert!` in the `divide_floats` function to emulate typical integer division behavior.

```rust
fn divide_floats(a: f32, b: f32) -> f32 {
    assert_ne!(b, 0.0, "Division by zero is not allowed.");
    a / b
}
```

```
   Compiling playground v0.0.1 (/playground)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.65s
     Running `target/debug/playground`
thread 'main' (32) panicked at src/main.rs:2:5:
assertion `left != right` failed: Division by zero is not allowed.
  left: 0.0
 right: 0.0
```

Cute! But there’s still the problem of running into panics only at runtime. My beef with Python (or any other dynamic language, for that matter) is that a lot of errors only arise when you run the program. That’s why they’re adding typechecking to these languages: people want to bubble some mistakes up to compile time (or typecheck time, whatever). We can use Rust’s rich type system to communicate these errors at build time.

One way, which I think is the more common way as people are more familiar with it, is the idea of fallible functions, which return either an `Option` or a `Result`. This is a fine way to do things, as it communicates that (1) the function can fail, and (2) you can handle the failing case after.† († Of course, `catch_unwind` exists, but I’m pretending that it doesn’t.)

To me, the function’s invariants (`b` must not be zero) are encoded after-the-fact, aka in the return type `Option<f32>`. This implies to me that the invariants could be encoded before-the-fact, aka in the function parameters. But what would that look like?

Say, let’s have a type that is something like `f32`, but guaranteed to never be zero. We’ll name it `NonZeroF32`:

```rust
pub struct NonZeroF32(f32);
```

This struct only contains a single `f32` field. The semantics understood from the name are that it’s just like a normal `f32`, but does not allow the value of zero. How do we guarantee this?
Since Rust does encapsulation at the module level, we make this type public while keeping its field private. Then, the only way to construct this type is via a fallible constructor function, and we implement the arithmetic operators we need:

```rust
impl Add<NonZeroF32> for NonZeroF32 { /* … */ }
impl Add<f32> for NonZeroF32 { /* … */ }
impl Add<NonZeroF32> for f32 { /* … */ }
// and a bunch of other operators…
```

We can then use this in our `divide_floats` function.

There is an interesting implication in this pattern. In the second version of `divide_floats`, we changed the return type from `f32` to `Option<f32>` just to avoid the panics. As described in the original article by Alexis King, this is a weakening of the return type, and of the function’s promise. We temper the caller’s expectations by saying that yes, this function can fail in some way, and you have to account for that. And that weakening is described in the type system via the `Option` enum.

In the third iteration of `divide_floats`, we change our perspective and ask ourselves, “instead of weakening the return type, what if we strengthen the function parameters?” We communicate that by accepting a `NonZeroF32`. Instead of having the validation code in our functions, we push that responsibility to the caller. The validation now happens before the function executes.

To see the advantage of pushing the validation forward to the user, let’s say we have another function like so:

```rust
// The quadratic formula!
fn roots(a: f32, b: f32, c: f32) -> [f32; 2] {
    // For the sake of demonstration we will be ignoring complex roots
    let discriminant = b * b - 4.0 * a * c;
    [
        (-b + discriminant.sqrt()) / (2.0 * a),
        (-b - discriminant.sqrt()) / (2.0 * a),
    ]
}
```

This function can fail if the discriminant is negative (which we will be ignoring in this contrived example), and if `a` is zero. The two ways of going about this can be written as follows:

```rust
fn try_roots(a: f32, b: f32, c: f32) -> Option<[f32; 2]> {
    if a == 0.0 {
        return None;
    }
    // …
}
```

```rust
fn newtyped_roots(a: NonZeroF32, b: f32, c: f32) -> [f32; 2] {
    // unchanged
}
```

The `Option` version has me duplicating the conditional for at least two different functions, which might be icky if you are a DRY-hard. Also, not only does the function have to validate whether the float is zero, the caller must then validate again by matching on the returned `Option`. That seems redundant. It would be ideal if we only needed to check once.

```rust
let roots = try_roots(5.0, 4.0, 7.0); // `try_roots` does a validation check
// and then we validate it again by matching on the result
match roots {
    Some(result) => do_something(),
    None => {
        handle_error();
        return;
    }
}
```

The `NonZeroF32` version can help with that, as validation happens before, and happens once, instead of twice.

```rust
// Handle the special case once
let Some(a) = NonZeroF32::new(5.0) else {
    handle_error();
    return;
};
```
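Putting the pieces together, here is a minimal, self-contained sketch of the whole pattern. The constructor body, the `get` accessor, and the exact `divide_floats` signature are my assumptions (the post elides those listings), but they line up with the `NonZeroF32::new` calls shown above:

```rust
// Public type, private field: the module boundary enforces the invariant.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct NonZeroF32(f32);

impl NonZeroF32 {
    // Fallible "parsing" constructor: the only way to obtain a NonZeroF32.
    pub fn new(value: f32) -> Option<NonZeroF32> {
        if value == 0.0 { None } else { Some(NonZeroF32(value)) }
    }

    // Read the inner value back out.
    pub fn get(self) -> f32 {
        self.0
    }
}

// The parameter type is strengthened, so the function itself cannot fail.
fn divide_floats(a: f32, b: NonZeroF32) -> f32 {
    a / b.get()
}

fn main() {
    // Validation happens once, at the boundary…
    let b = NonZeroF32::new(4.0).expect("4.0 is nonzero");
    // …and never again inside divide_floats.
    println!("{}", divide_floats(8.0, b)); // prints 2
    assert!(NonZeroF32::new(0.0).is_none());
}
```

(The standard library uses the same trick for integers with `std::num::NonZeroU32` and friends.)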

```rust
// `newtyped_roots` does not need to handle it again,
// indicated by the function not needing to return
// an `Option` and us handling the result.
let [root1, root2] = newtyped_roots(a, 4.0, 7.0);
```

Moving away from `divide_floats`, let’s now use an example from the original blog post, converted to Rust:

```rust
fn get_cfg_dirs() -> Result<Vec<PathBuf>, Box<dyn Error>> {
    let cfg_dirs_string = std::env::var("CONFIG_DIRS")?;

    let cfg_dirs_list = cfg_dirs_string
        .split(',')
        .map(PathBuf::from)
        .collect::<Vec<_>>();

    if cfg_dirs_list.is_empty() {
        return Err("CONFIG_DIRS cannot be empty".into());
    }

    Ok(cfg_dirs_list)
}

fn main() -> Result<(), Box<dyn Error>> {
    let cfg_dirs = get_cfg_dirs()?;
    match cfg_dirs.first() {
        Some(cache_dir) => init_cache(cache_dir),
        None => unreachable!("should never happen; already checked configDirs is non-empty"),
    }
}
```

We checked if `cfg_dirs_list` is empty in the `get_cfg_dirs` function. Then, we still had to “check” it again in the `main` function by matching on `cfg_dirs.first()`. The `Vec` was already known to be nonempty, so why do we have to check it again? Consequently, doesn’t this have an impact on performance, especially if we have to check it again and again and again?

The original post raised a good point about resilience to refactors. If the `is_empty` check gets refactored out for some reason, and the programmer forgot to update `main`, then the `unreachable!` branch might actually get reached and explode your computer or whatever.

If we instead had a special NonEmptyVec newtype (well, not exactly special) where its existence guarantees that the Vec is never empty, we could do:

```rust
use std::error::Error;
use std::path::PathBuf;

struct NonEmptyVec<T>(T, Vec<T>);

impl<T> NonEmptyVec<T> {
    // Notice that we don't need to return an `Option`
    fn first(&self) -> &T { /* ... */ }
}

fn get_cfg_dirs() -> Result<NonEmptyVec<PathBuf>, Box<dyn Error>> {
    let cfg_dirs_string = std::env::var("CONFIG_DIRS")?;

    let cfg_dirs_list = cfg_dirs_string
        .split(',')
        .map(PathBuf::from)
        .collect::<Vec<_>>();

    // We parse the `Vec` into a more structured type
    let cfg_dirs_list = NonEmptyVec::try_from(cfg_dirs_list)?;

    Ok(cfg_dirs_list)
}

fn main() -> Result<(), Box<dyn Error>> {
    let cfg_dirs = get_cfg_dirs()?;
    // Notice that we don't have to check again if the `Vec`
    // was empty, since we guarantee that via the `NonEmptyVec` type
    init_cache(cfg_dirs.first());
    Ok(())
}
```

In this context, we can call NonZeroF32::new and NonEmptyVec::try_from parsing functions, since they validate and convert the less semantic type to a type with more meaning imbued into it. That is, the nonzeroness of a float and the nonemptiness of a Vec are now encoded into a type. You can just see the word NonZeroF32 and therefore understand that going forward it is always an f32 that is never zero.

Validation and checking functions, on the other hand, just validate the value and leave the type as is. If I have an is_nonzero(f32) -> bool function, then there's not really much of a readable difference between an f32 that has had is_nonzero called on it versus an f32 that hasn't.

By taking advantage of the existence of a nominative type system, we can communicate that this f32 is not zero by parsing it to a new type, as opposed to just validating it. If you only validate it, then you still can't tell whether the f32 was nonzero unless you dig through the code. However, if you parsed it, you can say it's always nonzero if you see NonZeroF32 in your code.
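To make the idea concrete, here is a minimal, self-contained sketch of what such a NonEmptyVec could look like. The TryFrom impl and the len helper are my own assumptions to make the example runnable, not code from the original post:

```rust
use std::convert::TryFrom;

// The first element is stored separately from the rest,
// so `first` can be infallible by construction.
struct NonEmptyVec<T>(T, Vec<T>);

impl<T> TryFrom<Vec<T>> for NonEmptyVec<T> {
    type Error = &'static str;

    fn try_from(mut v: Vec<T>) -> Result<Self, Self::Error> {
        if v.is_empty() {
            Err("vector cannot be empty")
        } else {
            // Move the head out; the tail stays in the Vec.
            let head = v.remove(0);
            Ok(NonEmptyVec(head, v))
        }
    }
}

impl<T> NonEmptyVec<T> {
    // No `Option` needed: the type's existence proves nonemptiness.
    fn first(&self) -> &T {
        &self.0
    }

    fn len(&self) -> usize {
        1 + self.1.len()
    }
}

fn main() {
    let ok = NonEmptyVec::try_from(vec![10, 20, 30]).unwrap();
    println!("{} {}", ok.first(), ok.len()); // prints "10 3"

    let err: Result<NonEmptyVec<i32>, _> = NonEmptyVec::try_from(Vec::new());
    println!("{}", err.is_err()); // prints "true"
}
```

The parsing happens exactly once, in try_from; every later call site gets an infallible first for free.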

Of course, the above examples are very much contrived, but is there an instance where creating newtypes is helpful? Yes. In fact, most people have used one. It's called a String. If we dig into the internals, String is just a newtype over the Vec<u8> type. Its parsing function is String::from_utf8, which contains the validation code for checking if the byte vector is valid UTF-8. So instead of passing a Vec<u8> around and validating it all over the place, just parse it into a String and you can be assured of having a type-safe String with all the convenience functions you can get.

Another example is serde_json. In Python, json.loads simply gives you a dictionary. This is fine, especially if the data is sufficiently arbitrary, but if you have a schema and a type system, it's better to let the type system do the work of parsing JSON. In our terminology, validation looks like this:

```rust
use serde_json::{from_str, Value};

const SAMPLE_JSON: &str = r#"{ "foo": 1, "bar": [1, 2, 3] }"#;

let json = from_str::<Value>(SAMPLE_JSON).unwrap();
let bar = json.get("bar").unwrap();
```

That's two unwraps! One for checking if the string is valid JSON, and the other for checking if the bar field exists. Now consider this example where we use the parsing mechanic instead, via types and the Deserialize derive macro.

```rust
#[derive(Deserialize)]
struct Sample {
    foo: i32,
    bar: [i32; 3],
}

impl Sample {
    fn first_elem(&self) -> i32 {
        self.bar[0] // does not panic, by definition
    }
}

let json = from_str::<Sample>(SAMPLE_JSON).unwrap();
```

Since we deserialized the JSON into an actual type, we can safely make these guarantees:

The foo and bar fields always exist in the JSON string we parse.

foo always has an integer value.

bar is always an array of three integers.

first_elem will never panic, since all elements of an array are always initialized, and indexing into the first element of a nonzero-length array will always succeed.

The only point of failure here is pushed upfront, where the from_str happens. After that point, there's not really much error handling to be done, since the validation is now represented at the type level instead of at the function level.

With that said, what lessons can we learn here? Turns out, most functional language programmers have already learned several of them, and Rust is not much different in applying such FP concepts. The first lesson is that we should make illegal states unrepresentable.

To refer back to the NonZeroF32 and NonEmptyVec examples: the state of being zero is illegal for NonZeroF32, and the state of being empty is illegal for NonEmptyVec. As illegal states, they cannot be represented in such types. That's why the only constructors available for these types are fallible; the value either parsed successfully, or it failed and the new type is never produced.

If we only do validation, like checking if an f32 is nonzero, then the illegal state can still be represented. There's a small possibility that the value is zero, especially after some refactors when the conditional checks are accidentally or intentionally removed in some places. This reminds me of how other languages use integers as sentinel values. Take this code snippet from Wikipedia:

```c
int find(int arr[], size_t len, int val) {
    for (int i = 0; i < len; i++) {
        if (arr[i] == val) {
            return i;
        }
    }
    return -1; // not found
}
```

The error is returned as -1, since indexing arrays is only valid for nonnegative integers. This seems weird because (1) the numbers -2 and below can exist but are not actually valid, and (2) treating certain values as special seems error-prone, as in the future negative numbers could become semantically valid.

The second lesson is that proving invariants should be done as early as possible. There's a concept called shotgun parsing, which the linked paper describes as follows:
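As a contrast in Rust (my own illustration, not from the original post), the sentinel disappears into the type: "not found" becomes a distinct Option variant rather than a specially treated integer.

```rust
// "Not found" is Option::None, not -1: an illegal index simply
// cannot be represented in a `usize` result.
fn find(arr: &[i32], val: i32) -> Option<usize> {
    arr.iter().position(|&x| x == val)
}

fn main() {
    println!("{:?}", find(&[3, 1, 4], 4)); // prints "Some(2)"
    println!("{:?}", find(&[3, 1, 4], 9)); // prints "None"
}
```

No value of the index space is sacrificed, and the compiler forces every caller to handle the None case.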

Shotgun Parsing: Shotgun parsing is a programming antipattern whereby parsing and input-validating code is mixed with and spread across processing code—throwing a cloud of checks at the input, and hoping, without any systematic justification, that one or another would catch all the "bad" cases.

Essentially, it describes the problem of using data without first validating it in its entirety. You could act on a part of the data that was validated beforehand, only to discover that another part of the data is invalid.

The paper mentions CVE-2016-0752, a bug that allows attackers to read arbitrary files because you can use .. in the input. The paper argues that treating validation as emergent rather than deliberate can lead to security bugs like these. If we treat validation as deliberate, then it should happen as early as possible and as comprehensively as possible. By parsing first, every invariant can be proven before executing on said data.

I remember this video about lambda calculus. It concludes that types can be represented as propositions in logic, and terms as proofs. I recommend watching the video, as it was eye-opening to me and maybe it can help you realize some things too. Fundamentally, if your program typechecks properly, then you can say that the proof is correct. Thank you, Curry-Howard correspondence. There are proof assistant programming languages that can help with this, like Lean and Agda, but you can emulate this in Rust anyway. That's how some weird libraries like the typenum crate work.

This is a simple program in Rust where I check if 3 + 4 is equal to 8. Obviously this is not correct, and so it appropriately gives you a compile error.
```
   Compiling playground v0.0.1 (/playground)
error[E0277]: the trait bound `PInt, B1>, B1>>: Same>>` is not satisfied
  --> src/lib.rs:11:5
   |
11 |     Result:
   |     ^^^^^^ unsatisfied trait bound
   |
   = help: the trait `typenum::Same, B0>, B0>, B0>>>` is not implemented for `PInt, B1>, B1>>`
   = note: the full name for the type has been written to '/playground/target/debug/deps/playground-e4f34f6f1769e3b6.long-type-6323804316620900.txt'
   = note: consider using `--verbose` to print the full type name to the console

For more information about this error, try `rustc --explain E0277`.
error: could not compile `playground` (lib) due to 1 previous error
```

So sad that the error message is dogshit. Such is life.

What can we do?

There are some recommendations I usually give to people on the RPLCS discord server, adapted from the original blog post. First, just because a function accepts a type doesn't mean you have to use it in your structs, nor perpetually represent it as that type. For example, say a third party library function hands you a bool. You don't have to store that bool in your App/Context struct like App { lightbulb_state: bool }. That's confusing. I'd rather have you define a separate enum with more semantics imbued into it, like:

```rust
enum LightBulbState {
    Off,
    On,
}

impl From<bool> for LightBulbState { /* ... */ }
```

Yeah, people can say it gets more verbose, but I care more about correctness. Sorry.

Second, I sometimes get suspicious about these kinds of APIs: if I see the function body does not do anything side-effectful, then it's probable that parsing can help, turning Thing into a more structured datatype. And even for side-effectful stuff, there are some types that better represent certain situations, like infinite-loop functions representing their return types as a Result whose success type can never be constructed.
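Filled in, the sketch might run like this (the From impl body is my assumption of what the elided code did):

```rust
// Wrap the third-party bool in a semantic enum so call sites
// read as On/Off instead of true/false.
#[derive(Debug, PartialEq)]
enum LightBulbState {
    Off,
    On,
}

impl From<bool> for LightBulbState {
    fn from(b: bool) -> Self {
        if b { LightBulbState::On } else { LightBulbState::Off }
    }
}

fn main() {
    // Pretend `true` came from the third-party bool-returning API.
    let state = LightBulbState::from(true);
    println!("{:?}", state); // prints "On"
}
```

The conversion happens once at the boundary; everywhere else the code speaks in LightBulbState, not in raw booleans.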

I love creating more types. Five million types for everyone, please. I think it's interesting that there are a lot of instances where types drive the design of Rust programs. Like how Vec has four layers of newtypes plus an additional field. sqlx generates anonymous structs in its query! macros. bon is a macro crate that converts functions into compile-time builders via types.

Of course, not everything is solvable via types. But personally I think pushing your verification code into types can help your code become clearer and more robust. Let the type system handle the validation for you. It exists, so we might as well use it to its fullest extent.

I'd like to thank Alexis King for the article where I first encountered this idea. I'd love to follow up on this topic with a sequel, and maybe recontextualizing it in Rust via the unsafe keyword would be helpful. Of course, newtyping is not the answer to all problems. Due to the lack of ergonomic features for newtyping, like delegation, many people are somewhat averse to the pattern. Nevertheless, if someone made a good enough RFC I'd be happy to see it happen.

Using the type system as a compile-time checker, because I want the compiler to help me write my programs, is very nice. You should take advantage of the type system too; not many languages have it as good as Rust :)

...

Read the original on www.harudagondi.space »

10 209 shares, 8 trendiness

macOS's Little-Known Command-Line Sandboxing Tool

sandbox-exec is a built-in macOS command-line utility that enables users to execute applications within a sandboxed environment. In essence, it creates a secure, isolated space where applications can run with limited access to system resources, only accessing what you explicitly permit.

The concept behind sandboxing is fundamental to modern security: by restricting what an application can access, you minimize the potential damage from malicious code or unintended behavior. Think of it as putting an application in a secure room where it can only interact with specific objects you've placed there.

Before diving into usage, let's understand why sandboxing matters:

Protection from malicious code: If you're testing an unfamiliar application or script, sandboxing can prevent it from accessing sensitive files or sending data across the network.

Damage limitation: Even trusted applications can have vulnerabilities. Sandboxing limits the potential impact if an application is compromised.

Privacy control: You can explicitly deny applications access to personal directories like Documents, Photos, or Contacts.

Testing environment: Developers can test how applications function with limited permissions before implementing formal App Sandbox entitlements.

Resource restriction: Beyond security, sandboxing can limit an application's resource consumption or network access.

Using sandbox-exec requires creating a sandbox profile (a configuration file) that defines the rules for your secure environment. The basic syntax is:

```shell
sandbox-exec -f profile.sb command_to_run
```

Where profile.sb contains the rules defining what the sandboxed application can and cannot do, and command_to_run is the application you want to run within those constraints.

Sandbox profiles use a Scheme-like syntax (a LISP dialect) with parentheses grouping expressions. The basic structure includes:

See the Appendix for a more complete list of available rules.
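As a rough sketch of that structure (the rule names come from Apple's largely undocumented profile language and may vary by macOS version), a profile generally looks like:

```scheme
;; every profile starts with a version declaration
(version 1)

;; a default policy: deny or allow everything not matched below
(deny default)

;; specific rules: an operation name, then optional path filters
(allow file-read* (subpath "/usr/lib"))
(allow process-exec (literal "/bin/ls"))
```

Operations such as file-read*, file-write*, network*, and process-exec can be combined with filters like literal, subpath, and regex.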

There are two primary philosophies when creating sandbox profiles:

This approach starts by denying everything and explicitly allowing only required operations:
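A minimal example of this style (the specific rules are illustrative assumptions, not from the original article):

```scheme
(version 1)
(deny default)
;; allow just enough for one trusted binary to read one file
(allow process-exec (literal "/bin/cat"))
(allow file-read* (literal "/etc/hosts"))
```

In practice a deny-by-default profile usually needs more allowances (dynamic libraries, metadata lookups) before anything runs, which is why Apple's bundled profiles are a useful starting point.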

This is the most secure approach, ideal for running untrusted code, but it requires careful configuration to make applications functional.

Alternatively, you can allow everything except specific operations:
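For example, a profile along these lines (again illustrative) permits everything except networking and reads of home directories:

```scheme
(version 1)
(allow default)
(deny network*)
(deny file-read* (subpath "/Users"))
```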

This approach is easier to implement but less secure, as you must anticipate every potentially risky operation.

Let's explore some real-world examples to demonstrate the power of custom sandboxing.

Create a sandboxed terminal session that can't access the network:
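One way to do this, assuming zsh as the shell and an inline profile passed with -p (the exact rules are my reconstruction, not the article's original command):

```shell
sandbox-exec -p '(version 1)
  (allow default)
  (deny network*)
  (deny file-read* (subpath "/Users"))' /bin/zsh
```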

This creates a terminal session that functions normally but cannot access the network or read from your personal directories.

These system profiles provide configurations for common restriction scenarios and applications. Some of them have quite good comments, so you can use them as a basis for your future profiles.

When applications fail in a sandbox, determining the cause can be challenging. Here are effective debugging techniques:

Search for "sandbox" and your application name

Look for lines containing "deny" to identify blocked operations

These logs show exactly which operations are being denied, helping you refine your sandbox profile.
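On recent macOS versions, one convenient way to watch these denials live is the unified log's streaming mode (the predicate below is an illustrative assumption; adjust the process and message filters for your case):

```shell
# stream sandbox denial messages as they happen
log stream --style compact --predicate 'eventMessage CONTAINS "deny" AND process == "sandboxd"'
```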

For frequent sandboxing, add an alias to your shell configuration:

```shell
# Add to ~/.zshrc or ~/.bash_profile
alias sandbox-no-network='sandbox-exec -p "(version 1)(allow default)(deny network*)"'

# Then use it as:
sandbox-no-network curl -v https://google.com
```

But when I did the same for UI applications it didn't work for some reason (I can still open Google.com):

You can import and extend existing profiles:
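The profile language has an import form for this; for example, building on one of Apple's bundled profiles (the profile name here is an assumption):

```scheme
(version 1)
;; start from Apple's base BSD profile, then tighten it
(import "bsd.sb")
(deny network*)
```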

Despite its power, sandbox-exec has some limitations to consider:

Deprecation status: While functional, Apple discourages its direct use in favor of App Sandbox for developers.

Complex applications: Modern applications often have complex requirements that make comprehensive sandboxing challenging without extensive testing.

Trial and error: Creating effective sandbox profiles often requires iterative testing to identify all necessary permissions.

No GUI: Unlike App Sandbox in Xcode, sandbox-exec has no graphical interface for configuration.

System updates: Major macOS updates might change how sandbox-exec works or which rules are effective.

While Apple has moved toward more user-friendly security models, sandbox-exec remains a powerful tool for those willing to invest time in learning its intricacies. It offers a level of control and customization that GUI-based solutions simply cannot match.

For security-conscious users, developers testing applications, or anyone working with potentially untrusted code, sandbox-exec provides a native macOS solution for creating finely tuned security environments. Though it takes effort to learn its possibilities given the lack of documentation, the security benefits make it well worth it.

The most powerful aspect of sandbox-exec is its flexibility: you can create custom security profiles tailored to specific applications and use cases, going far beyond the one-size-fits-all approach of most security tools.

If you're interested in learning more about macOS security tools and techniques, check out Apple's official documentation on App Sandbox, or explore the pre-built sandbox profiles in /System/Library/Sandbox/Profiles to see how Apple implements sandboxing for system services.

...

Read the original on igorstechnoclub.com »
