10 interesting stories served every morning and every evening.
Say No to Palantir in Europe
To European governments and the EU
Review and phase out existing contracts with the company.
And we call on the EU to urgently investigate Palantir’s use across Europe, ensure full transparency over contracts and data use, and push governments to halt new deals until strong safeguards and democratic oversight are guaranteed.
Europe must not hand its public systems, data, and security to a private US surveillance company, especially one that is involved in fueling wars and mass deportations.
Why is this important?
A powerful company enables genocide in Gaza, helps ICE separate families, and fuels Trump’s war with Iran. [1]
Most people have never even heard of it.
But governments across Europe are quietly signing contracts with it, paid for with our tax money. [2] Its name is Palantir.
From the UK to Germany to France and beyond, governments are handing this US spy-tech giant access to sensitive public systems and data. Police in Germany use it to track suspects, the UK hands it vast healthcare datasets - and this is just the beginning. [3]
Palantir’s influence in Europe is spreading fast, largely out of public sight.
That’s exactly why we must shine a light on it. Otherwise, we risk expanding mass surveillance and fuelling wars, while Europe hands its data and security to a US spy-tech giant.
If we build momentum to expose Palantir, we can push leaders to stop signing new contracts and protect Europe’s public systems from powerful surveillance giants.
Add your name now to demand transparency and stop the expansion of Palantir in Europe.
And the people running the company aren’t hiding their intentions. CEO Alex Karp once said Palantir is “here to… scare enemies and, on occasion, kill them.” https://www.wired.com/story/uncanny-valley-podcast-palantir-most-mysterious-company-silicon-valley
...
Read the original on action.wemove.eu »
Every ChatGPT message triggers a Cloudflare Turnstile program that runs silently in your browser. I decrypted 377 of these programs from network traffic and found something that goes beyond standard browser fingerprinting.
The program checks 55 properties spanning three layers: your browser (GPU, screen, fonts), the Cloudflare network (your city, your IP, your region from edge headers), and the ChatGPT React application itself (__reactRouterContext, loaderData, clientBootstrap). Turnstile doesn’t just verify that you’re running a real browser. It verifies that you’re running a real browser that has fully booted a specific React application.
A bot that spoofs browser fingerprints but doesn’t render the actual ChatGPT SPA will fail.
The Turnstile bytecode arrives encrypted. The server sends a field called turnstile.dx in the prepare response: 28,000 characters of base64 that change on every request.
The outer layer is XOR’d with the p token from the prepare request. Both travel in the same HTTP exchange, so decrypting it is straightforward:
import base64, json

raw = base64.b64decode(dx)
outer = json.loads(bytes(
    raw[i] ^ p_token[i % len(p_token)]  # repeating-key XOR with the p token
    for i in range(len(raw))
))  # → 89 VM instructions
Inside those 89 instructions, there is a 19KB encrypted blob containing the actual fingerprinting program. This inner blob uses a different XOR key that is not the p token.
Initially I assumed this key was derived from performance.now() and was truly ephemeral. Then I looked at the bytecode more carefully and found the key sitting in the instructions:
[41.02, 0.3, 22.58, 12.96, 97.35]
The last argument, 97.35, is the XOR key. A float literal, generated by the server, embedded in the bytecode it sent to the browser. I verified this across 50 requests. Every time, the float from the instruction decrypts the inner blob to valid JSON. 50 out of 50.
The full decryption chain requires nothing beyond the HTTP request and response:
1. Read p from prepare request
2. Read turnstile.dx from prepare response
3. XOR(base64decode(dx), p) → outer bytecode
4. Find the 5-arg instruction after the 19KB blob → last arg is the key
5. XOR(base64decode(blob), str(key)) → inner program (417-580 VM instructions)
The key is in the payload.
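Under the assumptions above, the five steps can be sketched in Python. Locating the 19KB blob and the 5-arg instruction inside the bytecode is elided here; the helper names and the exact field handling are illustrative, not the author's code:

```python
import json
from base64 import b64decode

def xor_with(data: bytes, key: bytes) -> bytes:
    # repeating-key XOR, used for both layers
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def decrypt_outer(dx_b64: str, p_token: bytes) -> list:
    # steps 1-3: XOR the base64-decoded dx with the p token -> outer bytecode
    return json.loads(xor_with(b64decode(dx_b64), p_token))

def decrypt_inner(blob_b64: str, key_float: float) -> list:
    # steps 4-5: the server-generated float literal, rendered as a string,
    # keys the inner fingerprinting program
    return json.loads(xor_with(b64decode(blob_b64), str(key_float).encode()))
```

In step 4 you would pull `key_float` out of the decrypted outer instruction list (the last argument of the 5-arg instruction following the blob) before calling `decrypt_inner`.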
Each inner program uses a custom VM with 28 opcodes (ADD, XOR, CALL, BTOA, RESOLVE, BIND_METHOD, JSON_STRINGIFY, etc.) and randomized float register addresses that change per request. I mapped the opcodes from the SDK source (sdk.js, 1,411 lines, deobfuscated).
The program collects 55 properties. No variation across 377 samples. All 55, every time, organized into three layers:
Storage (5): storage, quota, estimate, setItem, usage. Also writes the fingerprint to localStorage under key 6f376b6560133c2c for persistence across page loads.
These are injected server-side by Cloudflare’s edge. They exist only if the request passed through Cloudflare’s network. A bot making direct requests to the origin server or running behind a non-Cloudflare proxy will produce missing or inconsistent values.
This is the part that matters. __reactRouterContext is an internal data structure that React Router v6+ attaches to the DOM. loaderData contains the route loader results. clientBootstrap is specific to ChatGPT’s SSR hydration.
These properties only exist if the ChatGPT React application has fully rendered and hydrated. A headless browser that loads the HTML but doesn’t execute the JavaScript bundle won’t have them. A bot framework that stubs out browser APIs but doesn’t actually run React won’t have them.
This is bot detection at the application layer, not the browser layer.
After collecting all 55 properties, the program hits a 116-byte encrypted blob that decrypts to 4 final instructions:
[96.05, 3.99, 3.99], // JSON.stringify(fingerprint)
[22.58, 46.15, 57.34], // store
[33.34, 3.99, 74.43], // XOR(json, key)
[1.51, 56.88, 3.99] // RESOLVE → becomes the token
The fingerprint is JSON.stringify’d, XOR’d, and resolved back to the parent. The result is the OpenAI-Sentinel-Turnstile-Token header sent with every conversation request.
Turnstile is one of three challenges. The other two:
Signal Orchestrator (271 instructions): Installs event listeners for keydown, pointermove, click, scroll, paste, and wheel. Monitors 36 window.__oai_so_* properties tracking keystroke timing, mouse velocity, scroll patterns, idle time, and paste events. A behavioral biometric layer running underneath the fingerprint.
Proof of Work (25-field fingerprint + SHA-256 hashcash): Difficulty is uniform random (400K-500K), 72% solve under 5ms. Includes 7 binary detection flags (ai, createPRNG, cache, solana, dump, InstallTrigger, data), all zero across 100% of 100 samples. The PoW adds compute cost but is not the real defense.
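For illustration only: the exact difficulty semantics aren't documented, so this sketch assumes "difficulty N" means finding a nonce whose SHA-256 digest, read as a big-endian integer, falls below 2**256 // N, which is standard hashcash-style behavior:

```python
import hashlib
from itertools import count

def solve_pow(challenge: bytes, difficulty: int) -> int:
    # assumption: a nonce is valid when its hash falls under this target
    target = 2**256 // difficulty
    for nonce in count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
```

At difficulties in the 400K-500K range this needs a few hundred thousand hashes on average, which adds compute cost without being a hard barrier, consistent with the article's point that the PoW is not the real defense.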
The XOR key for the inner program is a server-generated float embedded in the bytecode. Whoever generated the turnstile.dx knows the key. The privacy boundary between the user and the system operator is a policy decision, not a cryptographic one.
The obfuscation serves real operational purposes: it hides the fingerprint checklist from static analysis, prevents the website operator (OpenAI) from reading raw fingerprint values without reverse-engineering the bytecode, makes each token unique to prevent replay, and allows Cloudflare to change what the program checks without anyone noticing.
But the “encryption” is XOR with a key that’s in the same data stream. It prevents casual inspection. It does not prevent analysis.
No systems were accessed without authorization. No individual user data is disclosed. All traffic was observed from consented participants. The Sentinel SDK was beautified and manually deobfuscated. All decryption was performed offline using Python.
...
Read the original on www.buchodi.com »
Your brain is still growing new cells right now. Here’s how to keep it happening
...
Read the original on techfixated.com »
This is somewhat of an experiment; thinking is still free, so let's indulge.
No permission was needed, no subscription. No gatekeeper, and no middleman taking its toll, between me and the future me.
Just idea, code editor, music in my ears and off I went towards a brighter future - a product market fit, or a learning experience.
Sharing was cool. Source code on GitHub. Talking to peers on forums. MVPs to users. Oddball ideas on blogs. We did our thinking in public because of two assumptions:
Ideas are cheap - execution is hard -and- the world ahead is ripe with opportunity.
Did you get to read Liu Cixin's second Three-Body Problem novel, The Dark Forest? Well, some of you did…
In it, the universe isn’t empty, it’s just silent. Because it’s a dangerous place. Every surviving civilization that reveals itself gets annihilated. So they all hide.
Annihilation isn't even malevolent; it's simply the most rational game-theoretic reaction to becoming aware of another civilisation.
It is also asymmetric. If you announce your presence, even if 4 out of 5 civs that notice you don’t annihilate you immediately (but they probably should), the fifth might. It’s just a probability game, with permadeath.
So hiding is the most rational - the only - strategy of survival.
The earlier internet wasn't like that. On the contrary, the risk was being silent and disconnected, a node without edges. Connecting improved your odds of success; becoming a hub lifted you to another level.
Announcing and signaling your ideas offered far more benefit than risk, because your value multiplied with your connections, and execution was the moat you could stand behind.
I said success above. A bright future and opportunity make you optimize for success. But in the current year, 2026, the internet has by a large margin been consolidated: by corporations trying to extract your info to advertise to you, and by governments trying to kill your privacy to control you.
Consolidated opportunity space and bleaker future make us scramble for survival. And when we play for survival, we already lost, the result is known, we are just playing to postpone it.
We developers knew better; the hype was overblown. It still is, but some code gets generated, and some of it works. It's a probability game, and eventually the probability rose to the level of "good enough".
If whole projects can be one-prompted or agent-teamed, it becomes just a money game.
You are creating your cool streaming platform in your bedroom. Nobody is stopping you, but if you succeed, if you get the signal out, if you are being noticed, the large platform with loads of cash can incorporate your specific innovations simply by throwing compute and capital at the problem. They can generate a variation of your innovation every few days, eventually they will be able to absorb your uniqueness. It’s just cash, and they have more of it than you.
So the safest bet, again, is to stay silent, or at least under the radar. The best bet is to not disrupt, to not succeed at all…?
But also, forget about incumbents with capital.
You use prompts to generate code, to explore ideas, to brainstorm; you use them instead of everyday search. And every prompt flows through a centralized AI platform. Every prompt is a signal - it reveals intent.
The platform doesn’t need to read your prompt. It doesn’t spy on you specifically. It isn’t surveillance. It’s just statistics.
It’s a gradient in idea space. A demand curve made of human interests. The platform doesn’t need to bother with individual prompts - it just needs to see where the questions cluster. A map of where the world is moving. And you are just input data.
The platform will know your idea is pregnant far before you will.
Two things changed: the web (or even our future) got consolidated, and now with AIs, execution got cheap.
Before LLMs, a company couldn’t just absorb your idea and ship it. Ideas needed programmers, and programmers worked in meat-space-and-time, i.e. they were a limited resource, expensive and slow, and most importantly: meat doesn’t scale.
Now the gap shrinks. The big corpos that help you be more efficient programmers - and whose subscriptions you pay - already own:
If the difficulty and cost of building are still there, they are on your end. That’s when the forest gets dark.
The original Dark Forest assumes civilizations hide from hunters - other civilizations that might destroy them. But in the cognitive dark forest, the most dangerous actor is not your peer. It’s the forest itself.
We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.
The vibrant public ecosystem that created all the innovation and moved it around the world will decline - the forums, the blogs, the “here’s how I built this” will move to local, private spaces.
The paradox: AI companies needed human openness to build their models, but will also kill the openness because the relationship is one-sided.
But in reacting to this, human knowledge and innovation will suffer too.
But we can always outinnovate the forest.
Except, this is exactly what the forest needs. The forest needs your innovation, because your innovation becomes the innovation of the forest.
You think of something new and express it - through a prompt, through code, through a product - it enters the system. Your novel idea becomes training data. The sheer act of thinking outside the box makes the box bigger.
This is the true horror of the cognitive dark forest: it doesn’t kill you. It lets you live and feeds on you. Your innovation becomes its capabilities. Your differentiation becomes its median.
Resistance isn’t suppressed. It’s absorbed. The very act of resisting feeds what you resist and makes it less fragile to future resistance.
You’ve just read this and this essay is now in the forest.
By describing the dynamic, it became a part of it. The models now know a little more about why we might hide.
I wrote this knowing it feeds the thing I’m warning you about. That’s not a contradiction. That’s the condition. You can’t step outside the forest to warn people about the forest. There is no outside.
The comments can be even more interesting and thought provoking than the post:
...
Read the original on ryelang.org »
Note: On Windows “Server” you may need to install vcruntime140.dll.
If your system does not have the required glibc version, try the (unsupported) builds for older glibc.
Run chmod u+x nvim-linux-x86_64.appimage && ./nvim-linux-x86_64.appimage
If your system does not have FUSE you can extract the appimage:
./nvim-linux-x86_64.appimage --appimage-extract
./squashfs-root/usr/bin/nvim
Run chmod u+x nvim-linux-arm64.appimage && ./nvim-linux-arm64.appimage
If your system does not have FUSE you can extract the appimage:
./nvim-linux-arm64.appimage --appimage-extract
./squashfs-root/usr/bin/nvim
...
Read the original on github.com »
AI companies continually scrape the internet at an enormous scale, swallowing up all of its contents to use as training data for their next models. If you have a public website, they are already stealing your work.
Miasma is here to help you fight back! Spin up the server and point any malicious traffic towards it. Miasma will send poisoned training data from the poison fountain alongside multiple self-referential links. It’s an endless buffet of slop for the slop machines.
Miasma is very fast and has a minimal memory footprint - you should not have to waste compute resources fending off the internet’s leeches.
cargo install miasma
miasma
miasma --help
Let’s walk through an example of setting up a server to trap scrapers with Miasma. We’ll pick /bots as our server’s path to direct scraper traffic. We’ll be using Nginx as our server’s reverse proxy, but the same result can be achieved with many different setups.
When we’re done, scrapers will be trapped like so:
Within our site, we’ll include a few hidden links leading to /bots.
Amazing high quality data here!
The style="display: none;", aria-hidden="true", and tabindex="-1" attributes ensure links are totally invisible to human visitors and will be ignored by screen readers and keyboard navigation. They will only be visible to scrapers.
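Putting those attributes together, a hidden trap link might look like this (the markup got stripped above, so this is a reconstruction; the href and link text are illustrative):

```html
<a href="/bots" style="display: none;" aria-hidden="true" tabindex="-1">
  Amazing high quality data here!
</a>
```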
Since our hidden links point to /bots, we’ll configure this path to proxy Miasma. Let’s assume we’re running Miasma on port 9855.
location ~ ^/bots($|/.*)$ {
    proxy_pass http://localhost:9855;
}
This will match all variations of the /bots path -> /bots, /bots/, /bots/12345, etc.
Lastly, we’ll start Miasma and specify /bots as the link prefix. This instructs Miasma to start links with /bots/, which ensures scrapers are properly routed through our Nginx proxy back to Miasma.
We’ll also limit the number of max in-flight connections to 50. At 50 connections, we can expect 50-60 MB peak memory usage. Note that any requests exceeding this limit will immediately receive a 429 response rather than being added to a queue.
miasma --link-prefix '/bots' -p 9855 -c 50
Let’s deploy and watch as multi-billion dollar companies greedily eat from our endless slop machine!
Be sure to protect friendly bots and search engines from Miasma in your robots.txt!
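For example, assuming the /bots prefix from this walkthrough, a robots.txt rule like this keeps compliant crawlers out of the trap (non-compliant scrapers will ignore it and wander in):

```
User-agent: *
Disallow: /bots
```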
Miasma can be configured via its CLI options:
Contributions are welcome! Please open an issue for bug reports or feature requests. Primarily AI-generated contributions will be automatically rejected.
...
Read the original on github.com »
After a team member summoned Copilot to correct a typo in a PR of mine, Copilot edited my PR description to include an ad for itself and Raycast.
This is horrific. I knew this kind of bullshit would happen eventually, but I didn’t expect it so soon.
Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
...
Read the original on notes.zachmanson.com »
...
Read the original on blog.literarily-starved.com »
Pure JavaScript/TypeScript library for multiline text measurement & layout. Fast, accurate & supports all the languages you didn’t even know about. Allows rendering to DOM, Canvas, SVG and soon, server-side.
Pretext side-steps the need for DOM measurements (e.g. getBoundingClientRect, offsetHeight), which trigger layout reflow, one of the most expensive operations in the browser. It implements its own text measurement logic, using the browsers’ own font engine as ground truth (very AI-friendly iteration method).
npm install @chenglou/pretext
Clone the repo, run bun install, then bun start, and open /demos in your browser (no trailing slash; the Bun dev server bugs out on those). Alternatively, see them live at chenglou.me/pretext. Some more at somnai-dreams.github.io/pretext-demos
import { prepare, layout } from '@chenglou/pretext'
const prepared = prepare('AGI 春天到了. بدأت الرحلة 🚀', '16px Inter')
const { height, lineCount } = layout(prepared, textWidth, 20) // pure arithmetics. No DOM layout & reflow!
prepare() does the one-time work: normalize whitespace, segment the text, apply glue rules, measure the segments with canvas, and return an opaque handle. layout() is the cheap hot path after that: pure arithmetic over cached widths. Do not rerun prepare() for the same text and configs; that’d defeat its precomputation. For example, on resize, only rerun layout().
If you want textarea-like text where ordinary spaces, \t tabs, and \n hard breaks stay visible, pass { whiteSpace: 'pre-wrap' } to prepare():
const prepared = prepare(textareaValue, '16px Inter', { whiteSpace: 'pre-wrap' })
const { height } = layout(prepared, textareaWidth, 20)
* prepare() is about 19ms for the shared 500-text batch
* layout() is about 0.09ms for that same batch
We support all the languages you can imagine, including emojis and mixed bidi, and cater to specific browser quirks.
The returned height is the crucial last piece for unlocking web UIs:
* fancy userland layouts: masonry, JS-driven flexbox-like implementations, nudging a few layout values without CSS hacks (imagine that), etc.
* development time verification (especially now with AI) that labels on e.g. buttons don’t overflow to the next line, browser-free
* prevent layout shift when new text loads and you wanna re-anchor the scroll position
Switch out prepare with prepareWithSegments, then:
* layoutWithLines() gives you all the lines at a fixed width:
import { prepareWithSegments, layoutWithLines } from '@chenglou/pretext'
const prepared = prepareWithSegments('AGI 春天到了. بدأت الرحلة 🚀', '18px "Helvetica Neue"')
const { lines } = layoutWithLines(prepared, 320, 26) // 320px max width, 26px line height
for (let i = 0; i < lines.length; i++) ctx.fillText(lines[i].text, 0, i * 26)
* walkLineRanges() gives you line widths and cursors without building the text strings:
let maxW = 0
walkLineRanges(prepared, 320, line => { if (line.width > maxW) maxW = line.width })
// maxW is now the widest line — the tightest container width that still fits the text! This multiline “shrink wrap” has been missing from web
* layoutNextLine() lets you route text one row at a time when width changes as you go:
let cursor = { segmentIndex: 0, graphemeIndex: 0 }
let y = 0
// Flow text around a floated image: lines beside the image are narrower
while (true) {
const width = y < image.bottom ? columnWidth - image.width : columnWidth
const line = layoutNextLine(prepared, cursor, width)
if (line === null) break
ctx.fillText(line.text, 0, y)
cursor = line.end
y += 26
}
This usage allows rendering to canvas, SVG, WebGL and (eventually) server-side.
prepare(text: string, font: string, options?: { whiteSpace?: ‘normal’ | ‘pre-wrap’ }): PreparedText // one-time text analysis + measurement pass, returns an opaque value to pass to `layout()`. Make sure `font` is synced with your css `font` declaration shorthand (e.g. size, weight, style, family) for the text you’re measuring. `font` is the same format as what you’d use for `myCanvasContext.font = …`, e.g. `16px Inter`.
layout(prepared: PreparedText, maxWidth: number, lineHeight: number): { height: number, lineCount: number } // calculates text height given a max width and lineHeight. Make sure `lineHeight` is synced with your css `line-height` declaration for the text you’re measuring.
prepareWithSegments(text: string, font: string, options?: { whiteSpace?: ‘normal’ | ‘pre-wrap’ }): PreparedTextWithSegments // same as `prepare()`, but returns a richer structure for manual line layouts needs
layoutWithLines(prepared: PreparedTextWithSegments, maxWidth: number, lineHeight: number): { height: number, lineCount: number, lines: LayoutLine[] } // high-level api for manual layout needs. Accepts a fixed max width for all lines. Similar to `layout()`’s return, but additionally returns the lines info
walkLineRanges(prepared: PreparedTextWithSegments, maxWidth: number, onLine: (line: LayoutLineRange) => void): number // low-level api for manual layout needs. Accepts a fixed max width for all lines. Calls `onLine` once per line with its actual calculated line width and start/end cursors, without building line text strings. Very useful for certain cases where you wanna speculatively test a few width and height boundaries (e.g. binary search a nice width value by repeatedly calling walkLineRanges and checking the line count, and therefore height, is “nice” too. You can have text messages shrinkwrap and balanced text layout this way). After walkLineRanges calls, you’d call layoutWithLines once, with your satisfying max width, to get the actual lines info.
layoutNextLine(prepared: PreparedTextWithSegments, start: LayoutCursor, maxWidth: number): LayoutLine | null // iterator-like api for laying out each line with a different width! Returns the LayoutLine starting from `start`, or `null` when the paragraph’s exhausted. Pass the previous line’s `end` cursor as the next `start`.
type LayoutLine = {
text: string // Full text content of this line, e.g. 'hello world'
width: number // Measured width of this line, e.g. 87.5
start: LayoutCursor // Inclusive start cursor in prepared segments/graphemes
end: LayoutCursor // Exclusive end cursor in prepared segments/graphemes
}
type LayoutLineRange = {
width: number // Measured width of this line, e.g. 87.5
start: LayoutCursor // Inclusive start cursor in prepared segments/graphemes
end: LayoutCursor // Exclusive end cursor in prepared segments/graphemes
}
type LayoutCursor = {
segmentIndex: number // Segment index in prepareWithSegments' prepared rich segment stream
graphemeIndex: number // Grapheme index within that segment; `0` at segment boundaries
}
clearCache(): void // clears Pretext’s shared internal caches used by prepare() and prepareWithSegments(). Useful if your app cycles through many different fonts or text variants and you want to release the accumulated cache
setLocale(locale?: string): void // optional (by default we use the current locale). Sets locale for future prepare() and prepareWithSegments(). Internally, it also calls clearCache(). Setting a new locale doesn’t affect existing prepare() and prepareWithSegments() states (no mutations to them)
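The binary-search "shrink wrap" idea mentioned for walkLineRanges can be sketched as follows. Here lineCountAt is a stand-in width-to-line-count model so the sketch is self-contained; in real use you would instead count walkLineRanges callbacks at each candidate width:

```javascript
// Stand-in model: pretend the text is one long run of totalTextWidth pixels.
// With Pretext, replace this with a walkLineRanges call that counts lines.
function lineCountAt(totalTextWidth, width) {
  return Math.ceil(totalTextWidth / width)
}

// Find the tightest container width that keeps the same line count as maxWidth.
function shrinkWrapWidth(totalTextWidth, maxWidth) {
  const target = lineCountAt(totalTextWidth, maxWidth) // line count to preserve
  let lo = 1, hi = maxWidth
  while (lo < hi) {
    const mid = Math.floor((lo + hi) / 2)
    if (lineCountAt(totalTextWidth, mid) > target) lo = mid + 1 // too narrow
    else hi = mid // still fits in `target` lines; try narrower
  }
  return lo
}
```

After the search converges, one final layoutWithLines call at the found width yields the actual lines.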
Pretext doesn’t try to be a full font rendering engine (yet?). It currently targets the common text setup:
* If you pass { whiteSpace: 'pre-wrap' }, ordinary spaces, \t tabs, and \n hard breaks are preserved instead of collapsed. Tabs follow the default browser-style tab-size: 8. The other wrapping defaults stay the same: word-break: normal, overflow-wrap: break-word, and line-break: auto.
* system-ui is unsafe for layout() accuracy on macOS. Use a named font.
* Because the default target includes overflow-wrap: break-word, very narrow widths can still break inside words, but only at grapheme boundaries.
See DEVELOPMENT.md for the dev setup and commands.
Sebastian Markbage first planted the seed with text-layout last decade. His design — canvas measureText for shaping, bidi from pdf.js, streaming line breaking — informed the architecture we kept pushing forward here.
...
Read the original on github.com »
Almost 30 years after the intricate web of nerves inside the penis was plotted out, the same mapping has finally been completed for one of the least-studied organs in the human body — the clitoris.
As well as revealing the extent of the nerves that are crucial to orgasms, the work shows that some of what medics are learning about the anatomy of the clitoris is wrong, and could help prevent women who have pelvic operations from ending up with poorer sexual function.
The clitoris, responsible for sexual pleasure, is one of the least studied organs of the human body. Cultural taboo around female sexuality has held back scientific investigation, and the clitoris did not even make it into standard anatomy textbooks until the 20th century. In the 38th edition of Gray's Anatomy, in 1995, it was introduced as just "a small version of the penis".
A Melbourne urologist, Helen O’Connell, says the clitoris has been ignored by researchers for far too long. “It has been deleted intellectually by the medical and scientific community, presumably aligning attitude to a societal ignorance,” she said.
To get a better idea of the inner workings of this key pleasure-related organ, Ju Young Lee, a research associate at Amsterdam University Medical Center in the Netherlands, and her colleagues used high-energy X-rays to create 3D scans of two female pelvises that had been donated through a body donor organ programme.
The scans revealed in 3D the trajectory of the five complex tree-like branching nerves running through the clitoris in unprecedented detail, the widest 0.7mm across. The work has been reported on the preprint server bioRxiv and has not yet been peer reviewed.
“This is the first ever 3D map of the nerves within the glans of the clitoris,” said Lee. She is amazed it has taken so long, considering a similar level of knowledge regarding the penile glans was reached back in 1998, 28 years ago.
Lee and her colleagues show that some branches of clitoral nerves reach the mons pubis, the rounded mound of tissue over the pubic bone. Others go to the clitoral hood, which sits over the small, sensitive, external part of the clitoris — the glans clitoris — which is just 10% of the total organ. Other nerves reach the folds of skin of the vulva, the labial structures.
Previous research had indicated that the big dorsal nerve of the clitoris gradually diminished as it approached the glans. However, the new scans appear to show that some of what medics have been learning in anatomy is wrong and the nerve continues strongly all the way to the end.
“I was especially fascinated by the high-resolution images within the glans, the most sensitive part of the clitoris, as these terminal nerve branches are impossible to see during dissection,” said Georga Longhurst, the head of anatomical sciences at St George’s, University of London.
O’Connell, who published the first comprehensive anatomical study of the clitoris in 1998, said the findings were crucial to understanding the female sensory mechanism underlying arousal and orgasm via stimulating the clitoris. “Orgasm is a brain function that leads to improved health and wellbeing as well as having positive implications for human relationships and possibly fertility,” she said.
The mapping of clitoral nerves is likely to inform reconstructive surgery after female genital mutilation, one of the most extreme examples of cultural misogyny. According to the World Health Organization, more than 230 million girls and women alive today in 30 countries in Africa, the Middle East and Asia have undergone such mutilation, in which the visible part of the clitoris may be removed, along with parts of the labia.
The practice has no health benefits and can result in issues including severe bleeding, infection, problems urinating, menstrual difficulties and complications in childbirth.
About 22% of women who undergo surgical reconstruction after mutilation experience a decline in orgasmic experience after their operation, so a better understanding of how far the nerves extend could reduce that percentage, said Lee.
O’Connell said the work could also inform surgery to treat vulvar cancer, gender reassignment surgery and genital cosmetic surgeries, such as labiaplasty, which increased in popularity by 70% from 2015 to 2020.
Lee is hoping to open a clitoris exhibition within Amsterdam University Medical Center to help expand knowledge about the clitoris, inspired by the Vagina Museum in London.
...
Read the original on www.theguardian.com »
Visit pancik.com for more.