10 interesting stories served every morning and every evening.




1 540 shares, 20 trendiness

Say No to Palantir in Europe


To European governments and the EU

Review and phase out existing contracts with the company.

And we call on the EU to urgently investigate Palantir's use across Europe, ensure full transparency over contracts and data use, and push governments to halt new deals until strong safeguards and democratic oversight are guaranteed.

Europe must not hand its public systems, data, and security to a private US surveillance company, especially one that is involved in fueling wars and mass deportations.

Why is this important?

A powerful company enables genocide in Gaza, helps ICE separate families, and fuels Trump's war with Iran. [1]

Most people have never even heard of it.

But governments across Europe are quietly signing contracts with it, paid for with our tax money. [2] Its name is Palantir.

From the UK to Germany to France and beyond, governments are handing this US spy-tech giant access to sensitive public systems and data. Police in Germany use it to track suspects, the UK hands it vast healthcare datasets - and this is just the beginning. [3]

Palantir's influence in Europe is spreading fast, largely out of public sight.

That's exactly why we must shine a light on it. Otherwise, we risk expanding mass surveillance and fuelling wars, while Europe hands its data and security to a US spy-tech giant.

If we build momentum to expose Palantir, we can push leaders to stop signing new contracts and protect Europe's public systems from powerful surveillance giants.

Add your name now to demand transparency and stop the expansion of Palantir in Europe.

And the people running the company aren't hiding their intentions. CEO Alex Karp once said Palantir is here to "scare enemies and, on occasion, kill them." https://www.wired.com/story/uncanny-valley-podcast-palantir-most-mysterious-company-silicon-valley

If you don't subscribe, you might miss news on this campaign or future opportunities to act. (If you're already subscribed, leaving this box unchecked won't remove you.)

Do you want to find out if this campaign is successful?

Yes! Let me know if this campaign is successful and how I can participate in other relevant campaigns.

If you leave us your email, we may contact you to tell you more about how you can help us, including by supporting our work with a donation.

No. I don't want to receive information about the progress of this campaign or other campaigns.

You can unsubscribe at any time. Just go to our unsubscribe page.

By entering your information you confirm that you are at least 16 years old.

WeMove Europe is fighting for a better world, and we need heroes like you to join our community of more than 700,000 people. Already you're powering this campaign call, but by clicking "Yes", you'll receive a wider range of campaigns that need your help. Sign up to hear more and make a real difference. If legally required in your country, we will send you an email to confirm adding your data on our list.

By choosing "Yes", you're giving WeMove Europe your consent to process your personal information. We might share your name, surname and country with the petition target. Unless you subscribe to receive personalised updates, we will delete your data after the campaign has ended. We will never share your data with any third parties without your permission. See our full privacy policy here.

...

Read the original on action.wemove.eu »

2 509 shares, 44 trendiness

ChatGPT Won't Let You Type Until Cloudflare Reads Your React State. I Decrypted the Program That Does It.

Every ChatGPT message triggers a Cloudflare Turnstile program that runs silently in your browser. I decrypted 377 of these programs from network traffic and found something that goes beyond standard browser fingerprinting.

The program checks 55 properties spanning three layers: your browser (GPU, screen, fonts), the Cloudflare network (your city, your IP, your region from edge headers), and the ChatGPT React application itself (__reactRouterContext, loaderData, clientBootstrap). Turnstile doesn't just verify that you're running a real browser. It verifies that you're running a real browser that has fully booted a specific React application.

A bot that spoofs browser fingerprints but doesn't render the actual ChatGPT SPA will fail.

The Turnstile bytecode arrives encrypted. The server sends a field called turnstile.dx in the prepare response: 28,000 characters of base64 that change on every request.

The outer layer is XOR'd with the p token from the prepare request. Both travel in the same HTTP exchange, so decrypting it is straightforward:

raw = base64.b64decode(dx)
outer = json.loads(bytes(
    raw[i] ^ p_token[i % len(p_token)]
    for i in range(len(raw))
))
# → 89 VM instructions

Inside those 89 instructions, there is a 19KB encrypted blob containing the actual fingerprinting program. This inner blob uses a different XOR key that is not the p token.

Initially I assumed this key was derived from performance.now() and was truly ephemeral. Then I looked at the bytecode more carefully and found the key sitting in the instructions:

[41.02, 0.3, 22.58, 12.96, 97.35]

The last argument, 97.35, is the XOR key. A float literal, generated by the server, embedded in the bytecode it sent to the browser. I verified this across 50 requests. Every time, the float from the instruction decrypts the inner blob to valid JSON. 50 out of 50.

The full decryption chain requires nothing beyond the HTTP request and response:

1. Read p from the prepare request
2. Read turnstile.dx from the prepare response
3. XOR(base64decode(dx), p) → outer bytecode
4. Find the 5-arg instruction after the 19KB blob → last arg is the key
5. XOR(base64decode(blob), str(key)) → inner program (417-580 VM instructions)

The key is in the payload.
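The two-layer scheme in steps 3-5 can be exercised end to end on synthetic data. This is a sketch of the mechanism as described, not a parser for real traffic; the payload layout, the field names, and the values are all made up for illustration (a real dx decodes to a VM instruction list, not a JSON object with named fields):

```python
import base64
import json

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR, as used for both layers of turnstile.dx."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Build a synthetic two-layer payload (all values illustrative):
inner_key = "97.35"  # float literal the server embeds in the bytecode
inner_program = json.dumps([[1.51, 56.88, 3.99]]).encode()
inner_blob = base64.b64encode(xor_bytes(inner_program, inner_key.encode()))

p_token = "example-p-token"  # travels in the prepare request
outer_program = json.dumps({"blob": inner_blob.decode(), "key": 97.35}).encode()
dx = base64.b64encode(xor_bytes(outer_program, p_token.encode())).decode()

# Decryption chain, steps 3-5:
outer = json.loads(xor_bytes(base64.b64decode(dx), p_token.encode()))
key = str(outer["key"])  # step 4: the float from the instruction, as a string
inner = json.loads(xor_bytes(base64.b64decode(outer["blob"]), key.encode()))
print(inner)  # → [[1.51, 56.88, 3.99]]
```

The point the demo makes concrete: everything needed to reverse both layers is present in the same HTTP exchange.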

Each inner program uses a custom VM with 28 opcodes (ADD, XOR, CALL, BTOA, RESOLVE, BIND_METHOD, JSON_STRINGIFY, etc.) and randomized float register addresses that change per request. I mapped the opcodes from the SDK source (sdk.js, 1,411 lines, deobfuscated).

The program collects 55 properties. No variation across 377 samples. All 55, every time, organized into three layers:

Storage (5): storage, quota, estimate, setItem, usage. Also writes the fingerprint to localStorage under key 6f376b6560133c2c for persistence across page loads.

These are injected server-side by Cloudflare's edge. They exist only if the request passed through Cloudflare's network. A bot making direct requests to the origin server or running behind a non-Cloudflare proxy will produce missing or inconsistent values.

This is the part that matters. __reactRouterContext is an internal data structure that React Router v6+ attaches to the DOM. loaderData contains the route loader results. clientBootstrap is specific to ChatGPT's SSR hydration.

These properties only exist if the ChatGPT React application has fully rendered and hydrated. A headless browser that loads the HTML but doesn't execute the JavaScript bundle won't have them. A bot framework that stubs out browser APIs but doesn't actually run React won't have them.

This is bot detection at the application layer, not the browser layer.

After collecting all 55 properties, the program hits a 116-byte encrypted blob that decrypts to 4 final instructions:

[96.05, 3.99, 3.99],   // JSON.stringify(fingerprint)
[22.58, 46.15, 57.34], // store
[33.34, 3.99, 74.43],  // XOR(json, key)
[1.51, 56.88, 3.99]    // RESOLVE → becomes the token

The fingerprint is JSON.stringify'd, XOR'd, and resolved back to the parent. The result is the OpenAI-Sentinel-Turnstile-Token header sent with every conversation request.

Turnstile is one of three challenges. The other two:

Signal Orchestrator (271 instructions): installs event listeners for keydown, pointermove, click, scroll, paste, and wheel. Monitors 36 window.__oai_so_* properties tracking keystroke timing, mouse velocity, scroll patterns, idle time, and paste events. A behavioral biometric layer running underneath the fingerprint.

Proof of Work (25-field fingerprint + SHA-256 hashcash): difficulty is uniform random (400K-500K), and 72% solve under 5ms. Includes 7 binary detection flags (ai, createPRNG, cache, solana, dump, InstallTrigger, data), all zero across 100% of 100 samples. The PoW adds compute cost but is not the real defense.

The XOR key for the inner program is a server-generated float embedded in the bytecode. Whoever generated the turnstile.dx knows the key. The privacy boundary between the user and the system operator is a policy decision, not a cryptographic one.

The obfuscation serves real operational purposes: it hides the fingerprint checklist from static analysis, prevents the website operator (OpenAI) from reading raw fingerprint values without reverse-engineering the bytecode, makes each token unique to prevent replay, and allows Cloudflare to change what the program checks without anyone noticing.

But the "encryption" is XOR with a key that's in the same data stream. It prevents casual inspection. It does not prevent analysis.

No systems were accessed without authorization. No individual user data is disclosed. All traffic was observed from consented participants. The Sentinel SDK was beautified and manually deobfuscated. All decryption was performed offline using Python.

...

Read the original on www.buchodi.com »

3 483 shares, 33 trendiness

A 1977 Time Capsule: Voyager 1 runs on 69 KB of memory and an 8-track tape recorder


...

Read the original on techfixated.com »

4 387 shares, 34 trendiness

The Cognitive Dark Forest

This is somewhat of an experiment; thinking is still free, so let's indulge.

No permission was needed, no subscription. No gatekeeper, and no middleman taking its toll, between me and the future me.

Just an idea, a code editor, music in my ears, and off I went towards a brighter future - a product market fit, or a learning experience.

Sharing was cool. Source code on GitHub. Talking to peers on forums. MVPs to users. Oddball ideas on blogs. We did our thinking in public because of two assumptions:

Ideas are cheap - execution is hard -and- the world ahead is ripe with opportunity.

Did you get to read Liu Cixin's second Three-Body Problem novel - The Dark Forest? Well, some of you did…

In it, the universe isn't empty, it's just silent. Because it's a dangerous place. Every surviving civilization that reveals itself gets annihilated. So they all hide.

Annihilation isn't even malevolent, but only the most rational game-theoretic reaction to becoming aware of another civilization.

It is also asymmetric. If you announce your presence, even if 4 out of 5 civs that notice you don't annihilate you immediately (but they probably should), the fifth might. It's just a probability game, with permadeath.

So hiding is the most rational - the only - strategy of survival.

The earlier internet wasn't like that. On the contrary, the risk was being silent, disconnected, a node without edges. Connecting improved your odds of success; becoming a hub lifted you to another level.

Announcing, signaling your ideas offered much greater benefit than risk, because your value multiplied by connections, and execution was the moat you could stand behind.

I said success above. A bright future and opportunity make you optimize for success. But, in the current year 2026, the internet has, by a large margin, gotten consolidated: by corporations trying to extract your info to basically advertise to you, and governments trying to kill your privacy to control you.

A consolidated opportunity space and a bleaker future make us scramble for survival. And when we play for survival, we have already lost; the result is known, we are just playing to postpone it.

We developers knew better, it was overblown. It still is, but some code gets generated, and some code works. It's a probability game, and eventually the probability rose to the level of "good enough".

If whole projects can get one-prompted or agent-teamed, it becomes just a money game.

You are creating your cool streaming platform in your bedroom. Nobody is stopping you, but if you succeed, if you get the signal out, if you are being noticed, the large platform with loads of cash can incorporate your specific innovations simply by throwing compute and capital at the problem. They can generate a variation of your innovation every few days; eventually they will be able to absorb your uniqueness. It's just cash, and they have more of it than you.

So the safest bet again is to stay silent, or at least under the radar. The best bet is to not disrupt - to not succeed at all…?

But also, forget about incumbents with capital.

You use prompts to generate code, you use them to explore ideas, to brainstorm, you use them instead of everyday search. And every prompt flows through a centralized AI platform. Every prompt is a signal - it reveals intent.

The platform doesn't need to read your prompt. It doesn't spy on you specifically. It isn't surveillance. It's just statistics.

It's a gradient in idea space. A demand curve made of human interests. The platform doesn't need to bother with individual prompts - it just needs to see where the questions cluster. A map of where the world is moving. And you are just input data.

The platform will know your idea is pregnant far before you will.

Two things changed: the web (or even our future) got consolidated, and now with AIs, execution got cheap.

Before LLMs, a company couldn't just absorb your idea and ship it. Ideas needed programmers, and programmers worked in meat-space-and-time, i.e. they were a limited resource, expensive and slow, and most importantly: meat doesn't scale.

Now the gap shrinks. The big corpos that help you be more efficient programmers - and whose subscriptions you pay - already own:

If the difficulty and cost of building are still there, they are on your end. That's when the forest gets dark.

The original Dark Forest assumes civilizations hide from hunters - other civilizations that might destroy them. But in the cognitive dark forest, the most dangerous actor is not your peer. It's the forest itself.

We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.

The vibrant public ecosystem that created all the innovation and moved it around the world will decline - the forums, the blogs, the "here's how I built this" will move to local, private spaces.

The paradox: AI companies needed human openness to build their models, but will also kill that openness, because the relationship is one-sided.

But in reacting to this, human knowledge and innovation will suffer too.

But we can always outinnovate the forest.

Except, this is exactly what the forest needs. The forest needs your innovation, because your innovation becomes the innovation of the forest.

You think of something new and express it - through a prompt, through code, through a product - and it enters the system. Your novel idea becomes training data. The sheer act of thinking outside the box makes the box bigger.

This is the true horror of the cognitive dark forest: it doesn't kill you. It lets you live and feeds on you. Your innovation becomes its capabilities. Your differentiation becomes its median.

Resistance isn't suppressed. It's absorbed. The very act of resisting feeds what you resist and makes it less fragile to future resistance.

You've just read this, and this essay is now in the forest.

By describing the dynamic, it became a part of it. The models now know a little more about why we might hide.

I wrote this knowing it feeds the thing I'm warning you about. That's not a contradiction. That's the condition. You can't step outside the forest to warn people about the forest. There is no outside.

The comments can be even more interesting and thought-provoking than the post:

...

Read the original on ryelang.org »

5 347 shares, 24 trendiness

Release Nvim 0.12.0 · neovim/neovim


Note: On "Windows Server" you may need to install vcruntime140.dll.

If your system does not have the required glibc version, try the (unsupported) builds for older glibc.

Run chmod u+x nvim-linux-x86_64.appimage && ./nvim-linux-x86_64.appimage

If your system does not have FUSE you can extract the appimage:

./nvim-linux-x86_64.appimage --appimage-extract
./squashfs-root/usr/bin/nvim

Run chmod u+x nvim-linux-arm64.appimage && ./nvim-linux-arm64.appimage

If your system does not have FUSE you can extract the appimage:

./nvim-linux-arm64.appimage --appimage-extract
./squashfs-root/usr/bin/nvim


...

Read the original on github.com »

6 310 shares, 15 trendiness

austin-weeks/miasma: Trap AI web scrapers in an endless poison pit.

AI companies continually scrape the internet at an enormous scale, swallowing up all of its contents to use as training data for their next models. If you have a public website, they are already stealing your work.

Miasma is here to help you fight back! Spin up the server and point any malicious traffic towards it. Miasma will send poisoned training data from the poison fountain alongside multiple self-referential links. It's an endless buffet of slop for the slop machines.

Miasma is very fast and has a minimal memory footprint - you should not have to waste compute resources fending off the internet's leeches.

cargo install miasma
miasma
miasma --help

Let's walk through an example of setting up a server to trap scrapers with Miasma. We'll pick /bots as our server's path to direct scraper traffic. We'll be using Nginx as our server's reverse proxy, but the same result can be achieved with many different setups.

When we’re done, scrap­ers will be trapped like so:

Within our site, we'll include a few hidden links leading to /bots.

Amazing high quality data here!

The style="display: none;", aria-hidden="true", and tabindex="-1" attributes ensure links are totally invisible to human visitors and will be ignored by screen readers and keyboard navigation. They will only be visible to scrapers.
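Assembled from the attributes just described, such a trap link might look like the following. The markup is reconstructed for illustration (the original code sample was stripped during extraction); the href and link text are assumptions:

```html
<!-- Hypothetical hidden trap link: invisible to humans and assistive tech,
     but scrapers parsing the raw HTML will still follow the href -->
<a href="/bots" style="display: none;" aria-hidden="true" tabindex="-1">
  Amazing high quality data here!
</a>
```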

Since our hidden links point to /bots, we'll configure this path to proxy Miasma. Let's assume we're running Miasma on port 9855.

location ~ ^/bots($|/.*)$ {
    proxy_pass http://localhost:9855;
}

This will match all variations of the /bots path -> /bots, /bots/, /bots/12345, etc.

Lastly, we'll start Miasma and specify /bots as the link prefix. This instructs Miasma to start links with /bots/, which ensures scrapers are properly routed through our Nginx proxy back to Miasma.

We'll also limit the number of max in-flight connections to 50. At 50 connections, we can expect 50-60 MB peak memory usage. Note that any requests exceeding this limit will immediately receive a 429 response rather than being added to a queue.

miasma --link-prefix '/bots' -p 9855 -c 50

Let's deploy and watch as multi-billion dollar companies greedily eat from our endless slop machine!

Be sure to protect friendly bots and search engines from Miasma in your robots.txt!
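A minimal robots.txt along those lines, assuming /bots is the trap path as in the example above: well-behaved crawlers honor the disallow and never enter the trap, while abusive scrapers that ignore robots.txt fall into Miasma.

```text
# Keep friendly bots and search engines out of the poison pit
User-agent: *
Disallow: /bots
```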

Miasma can be configured via its CLI options:

Contributions are welcome! Please open an issue for bug reports or feature requests. Primarily AI-generated contributions will be automatically rejected.

...

Read the original on github.com »

7 285 shares, 158 trendiness

copilot edited an ad into my pr

After a team member summoned Copilot to correct a typo in a PR of mine, Copilot edited my PR description to include an ad for itself and Raycast.

This is horrific. I knew this kind of bullshit would happen eventually, but I didn't expect it so soon.

Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

...

Read the original on notes.zachmanson.com »

8 278 shares, 11 trendiness

The (nearly) perfect USB cable tester does exist

...

Read the original on blog.literarily-starved.com »

9 272 shares, 18 trendiness

chenglou/pretext

Pure JavaScript/TypeScript library for multiline text measurement & layout. Fast, accurate & supports all the languages you didn't even know about. Allows rendering to DOM, Canvas, SVG and, soon, server-side.

Pretext side-steps the need for DOM measurements (e.g. getBoundingClientRect, offsetHeight), which trigger layout reflow, one of the most expensive operations in the browser. It implements its own text measurement logic, using the browsers' own font engine as ground truth (very AI-friendly iteration method).

npm install @chenglou/pretext

Clone the repo, run bun install, then bun start, and open the /demos in your browser (no trailing slash; Bun devserver bugs on those). Alternatively, see them live at chenglou.me/pretext. Some more at somnai-dreams.github.io/pretext-demos

import { prepare, layout } from '@chenglou/pretext'

const prepared = prepare('AGI 春天到了. بدأت الرحلة 🚀', '16px Inter')
const { height, lineCount } = layout(prepared, textWidth, 20) // pure arithmetic. No DOM layout & reflow!

prepare() does the one-time work: normalize whitespace, segment the text, apply glue rules, measure the segments with canvas, and return an opaque handle. layout() is the cheap hot path after that: pure arithmetic over cached widths. Do not rerun prepare() for the same text and configs; that'd defeat its precomputation. For example, on resize, only rerun layout().

If you want textarea-like text where ordinary spaces, \t tabs, and \n hard breaks stay visible, pass { whiteSpace: 'pre-wrap' } to prepare():

const prepared = prepare(textareaValue, '16px Inter', { whiteSpace: 'pre-wrap' })
const { height } = layout(prepared, textareaWidth, 20)

* prepare() is about 19ms for the shared 500-text batch
* layout() is about 0.09ms for that same batch

We support all the languages you can imagine, including emojis and mixed-bidi, and cater to specific browser quirks

The returned height is the crucial last piece for unlocking web UIs:

* fancy userland layouts: masonry, JS-driven flexbox-like implementations, nudging a few layout values without CSS hacks (imagine that), etc.
* development-time verification (especially now with AI) that labels on e.g. buttons don't overflow to the next line, browser-free
* prevent layout shift when new text loads and you wanna re-anchor the scroll position

Switch out prepare with prepareWithSegments, then:

* layoutWithLines() gives you all the lines at a fixed width:

import { prepareWithSegments, layoutWithLines } from '@chenglou/pretext'

const prepared = prepareWithSegments('AGI 春天到了. بدأت الرحلة 🚀', '18px "Helvetica Neue"')
const { lines } = layoutWithLines(prepared, 320, 26) // 320px max width, 26px line height
for (let i = 0; i < lines.length; i++) ctx.fillText(lines[i].text, 0, i * 26)

* walkLineRanges() gives you line widths and cursors without building the text strings:

let maxW = 0
walkLineRanges(prepared, 320, line => { if (line.width > maxW) maxW = line.width })
// maxW is now the widest line - the tightest container width that still fits the text! This multiline "shrink wrap" has been missing from the web

* layoutNextLine() lets you route text one row at a time when width changes as you go:

let cursor = { segmentIndex: 0, graphemeIndex: 0 }
let y = 0
// Flow text around a floated image: lines beside the image are narrower
while (true) {
  const width = y < image.bottom ? columnWidth - image.width : columnWidth
  const line = layoutNextLine(prepared, cursor, width)
  if (line === null) break
  ctx.fillText(line.text, 0, y)
  cursor = line.end
  y += 26
}

This usage allows rendering to canvas, SVG, WebGL and (eventually) server-side.

prepare(text: string, font: string, options?: { whiteSpace?: 'normal' | 'pre-wrap' }): PreparedText // one-time text analysis + measurement pass, returns an opaque value to pass to `layout()`. Make sure `font` is synced with your css `font` declaration shorthand (e.g. size, weight, style, family) for the text you're measuring. `font` is the same format as what you'd use for `myCanvasContext.font = …`, e.g. `16px Inter`.

layout(prepared: PreparedText, maxWidth: number, lineHeight: number): { height: number, lineCount: number } // calculates text height given a max width and lineHeight. Make sure `lineHeight` is synced with your css `line-height` declaration for the text you're measuring.

prepareWithSegments(text: string, font: string, options?: { whiteSpace?: 'normal' | 'pre-wrap' }): PreparedTextWithSegments // same as `prepare()`, but returns a richer structure for manual line layout needs

layoutWithLines(prepared: PreparedTextWithSegments, maxWidth: number, lineHeight: number): { height: number, lineCount: number, lines: LayoutLine[] } // high-level api for manual layout needs. Accepts a fixed max width for all lines. Similar to `layout()`'s return, but additionally returns the lines info

walkLineRanges(prepared: PreparedTextWithSegments, maxWidth: number, onLine: (line: LayoutLineRange) => void): number // low-level api for manual layout needs. Accepts a fixed max width for all lines. Calls `onLine` once per line with its actual calculated line width and start/end cursors, without building line text strings. Very useful for certain cases where you wanna speculatively test a few width and height boundaries (e.g. binary search a nice width value by repeatedly calling walkLineRanges and checking the line count, and therefore height, is "nice" too. You can have text messages shrinkwrap and balanced text layout this way). After walkLineRanges calls, you'd call layoutWithLines once, with your satisfying max width, to get the actual lines info.

layoutNextLine(prepared: PreparedTextWithSegments, start: LayoutCursor, maxWidth: number): LayoutLine | null // iterator-like api for laying out each line with a different width! Returns the LayoutLine starting from `start`, or `null` when the paragraph's exhausted. Pass the previous line's `end` cursor as the next `start`.

type LayoutLine = {
  text: string // Full text content of this line, e.g. 'hello world'
  width: number // Measured width of this line, e.g. 87.5
  start: LayoutCursor // Inclusive start cursor in prepared segments/graphemes
  end: LayoutCursor // Exclusive end cursor in prepared segments/graphemes
}

type LayoutLineRange = {
  width: number // Measured width of this line, e.g. 87.5
  start: LayoutCursor // Inclusive start cursor in prepared segments/graphemes
  end: LayoutCursor // Exclusive end cursor in prepared segments/graphemes
}

type LayoutCursor = {
  segmentIndex: number // Segment index in prepareWithSegments' prepared rich segment stream
  graphemeIndex: number // Grapheme index within that segment; `0` at segment boundaries
}

clearCache(): void // clears Pretext's shared internal caches used by prepare() and prepareWithSegments(). Useful if your app cycles through many different fonts or text variants and you want to release the accumulated cache

setLocale(locale?: string): void // optional (by default we use the current locale). Sets locale for future prepare() and prepareWithSegments(). Internally, it also calls clearCache(). Setting a new locale doesn't affect existing prepare() and prepareWithSegments() states (no mutations to them)

Pretext doesn't try to be a full font rendering engine (yet?). It currently targets the common text setup:

* If you pass { whiteSpace: 'pre-wrap' }, ordinary spaces, \t tabs, and \n hard breaks are preserved instead of collapsed. Tabs follow the default browser-style tab-size: 8. The other wrapping defaults stay the same: word-break: normal, overflow-wrap: break-word, and line-break: auto.
* system-ui is unsafe for layout() accuracy on macOS. Use a named font.
* Because the default target includes overflow-wrap: break-word, very narrow widths can still break inside words, but only at grapheme boundaries.

See DEVELOPMENT.md for the dev setup and commands.

Sebastian Markbage first planted the seed with text-layout last decade. His design - canvas measureText for shaping, bidi from pdf.js, streaming line breaking - informed the architecture we kept pushing forward here.

...

Read the original on github.com »

10 270 shares, 17 trendiness

Full network of clitoral nerves mapped out for first time

Almost 30 years after the intricate web of nerves inside the penis was plotted out, the same mapping has finally been completed for one of the least-studied organs in the human body - the clitoris.

As well as revealing the extent of the nerves that are crucial to orgasms, the work shows that some of what medics are learning about the anatomy of the clitoris is wrong, and could help prevent women who have pelvic operations from ending up with poorer sexual function.

The clitoris, responsible for sexual pleasure, is one of the least studied organs of the human body. Cultural taboo around female sexuality has held back scientific investigations, and the clitoris did not even make it into standard anatomy textbooks until the 20th century. And in the 38th edition of Gray's Anatomy in 1995, it was introduced as just "a small version of the penis".

A Melbourne urologist, Helen O'Connell, says the clitoris has been ignored by researchers for far too long. "It has been deleted intellectually by the medical and scientific community, presumably aligning attitude to a societal ignorance," she said.

To get a better idea of the inner workings of this key pleasure-related organ, Ju Young Lee, a research associate at Amsterdam University Medical Center in the Netherlands, and her colleagues used high-energy X-rays to create 3D scans of two female pelvises that had been donated through a body donor organ programme.

The scans revealed in 3D the trajectory of the five complex tree-like branching nerves running through the clitoris in unprecedented detail, the widest 0.7mm across. The work has been reported on the preprint server bioRxiv and has not yet been peer reviewed.

“This is the first ever 3D map of the nerves within the glans of the clitoris,” said Lee. She is amazed it has taken so long, considering a similar level of knowledge regarding the penile glans was reached back in 1998, 28 years ago.

Lee and her colleagues show that some branches of clitoral nerves reach the mons pubis, the rounded mound of tissue over the pubic bone. Others go to the clitoral hood, which sits over the small, sensitive, external part of the clitoris — the glans clitoris — which is just 10% of the total organ. Other nerves reach the folds of skin of the vulva, the labial structures.

Previous research had indicated that the big dorsal nerve of the clitoris gradually diminished as it approached the glans. However, the new scans appear to show that some of what medics have been learning in anatomy is wrong and the nerve continues strongly all the way to the end.

“I was especially fascinated by the high-resolution images within the glans, the most sensitive part of the clitoris, as these terminal nerve branches are impossible to see during dissection,” said Georga Longhurst, the head of anatomical sciences at St George’s, University of London.

O’Connell, who published the first comprehensive anatomical study of the clitoris in 1998, said the findings were crucial to understanding the female sensory mechanism underlying arousal and orgasm via stimulating the clitoris. “Orgasm is a brain function that leads to improved health and wellbeing as well as having positive implications for human relationships and possibly fertility,” she said.

The mapping of clitoral nerves is likely to inform reconstructive surgery after female genital mutilation, one of the most extreme examples of cultural misogyny. According to the World Health Organization, more than 230 million girls and women alive today in 30 countries in Africa, the Middle East and Asia have undergone such mutilation, in which the visible part of the clitoris may be removed, along with parts of the labia.

The practice has no health benefits and can result in issues including severe bleeding, infection, problems urinating, menstrual difficulties and complications in childbirth.

About 22% of women who undergo surgical reconstruction after mutilation experience a decline in orgasmic experience after their operation, so a better understanding of how far the nerves extend could reduce that percentage, said Lee.

O’Connell said the work could also inform surgery to treat vulvar cancer, gender reassignment surgery and genital cosmetic surgeries, such as labiaplasty, which increased in popularity by 70% from 2015 to 2020.

Lee is hoping to open a clitoris exhibition within Amsterdam University Medical Center to help expand knowledge about the clitoris, inspired by the Vagina Museum in London.

...

Read the original on www.theguardian.com »
