10 interesting stories served every morning and every evening.
The FBI raided the home of a Washington Post reporter early on Wednesday, in what the newspaper called a “highly unusual and aggressive” move by law enforcement and press freedom groups condemned as a “tremendous intrusion” by the Trump administration.
Agents descended on the Virginia home of Hannah Natanson as part of an investigation into a government contractor accused of illegally retaining classified government materials.
An email sent on Wednesday afternoon to Post staff from the executive editor, Matt Murray, obtained by the Guardian, said agents turned up “unannounced”, searched her home and seized electronic devices.
“This extraordinary, aggressive action is deeply concerning and raises profound questions and concern around the constitutional protections for our work,” the email said.
“The Washington Post has a long history of zealous support for robust press freedoms. The entire institution stands by those freedoms and our work.”
“It’s a clear and appalling sign that this administration will set no limits on its acts of aggression against an independent press,” Marty Baron, the Post’s former executive editor, told the Guardian.
Murray said neither the newspaper nor Natanson were told they were the target of a justice department investigation.
Pam Bondi, the attorney general, said in a post on X that the raid was conducted by the justice department and FBI at the request of the Pentagon.
The warrant, she said, was executed “at the home of a Washington Post journalist who was obtaining and reporting classified and illegally leaked information from a Pentagon contractor. The leaker is currently behind bars.”
The statement gave no further details of the raid or investigation. Bondi added: “The Trump administration will not tolerate illegal leaks of classified information that, when reported, pose a grave risk to our nation’s national security and the brave men and women who are serving our country.”
The reporter’s home and devices were searched, and her Garmin watch, phone, and two laptop computers, one belonging to her employer, were seized, the newspaper said. It added that agents told Natanson she was not the focus of the investigation, and was not accused of any wrongdoing.
A warrant obtained by the Post cited an investigation into Aurelio Perez-Lugones, a system administrator in Maryland with a top secret security clearance who has been accused of accessing and taking home classified intelligence reports.
Natanson, the Post said, covers the federal workforce and has been a part of the newspaper’s “most high-profile and sensitive coverage” during the first year of the second Trump administration.
As the paper noted in its report, it is “highly unusual and aggressive for law enforcement to conduct a search on a reporter’s home”.
In a first-person account published last month, Natanson described herself as the Post’s “federal government whisperer”, and said she would receive calls day and night from “federal workers who wanted to tell me how President Donald Trump was rewriting their workplace policies, firing their colleagues or transforming their agency’s missions”.
“It’s been brutal,” the article’s headline said.
Natanson said her work had led to 1,169 new sources, “all current or former federal employees who decided to trust me with their stories”. She said she learned information “people inside government agencies weren’t supposed to tell me”, saying that the intensity of the work nearly “broke” her.
The federal investigation into Perez-Lugones, the Post said, involved documents found in his lunchbox and his basement, according to an FBI affidavit. The criminal complaint against him does not accuse him of leaking classified information, the newspaper said.
Press freedom groups were united in their condemnation of the raid on Wednesday.
“Physical searches of reporters’ devices, homes and belongings are some of the most invasive investigative steps law enforcement can take,” Bruce D Brown, president of the Reporters’ Committee for Freedom of the Press, said in a statement.
“There are specific federal laws and policies at the Department of Justice that are meant to limit searches to the most extreme cases because they endanger confidential sources far beyond just one investigation and impair public interest reporting in general.
“While we won’t know the government’s arguments about overcoming these very steep hurdles until the affidavit is made public, this is a tremendous escalation in the administration’s intrusions into the independence of the press.”
Jameel Jaffer, executive director of the Knight First Amendment Institute, demanded a public explanation from the justice department of “why it believes this search was necessary and legally permissible”.
In a statement, Jaffer said: “Any search targeting a journalist warrants intense scrutiny because these kinds of searches can deter and impede reporting that is vital to our democracy.
“Attorney General Bondi has weakened guidelines that were intended to protect the freedom of the press, but there are still important legal limits, including constitutional ones, on the government’s authority to use subpoenas, court orders, and search warrants to obtain information from journalists.
“Searches of newsrooms and journalists are hallmarks of illiberal regimes, and we must ensure that these practices are not normalized here.”
Seth Stern, chief of advocacy for the Freedom of the Press Foundation, said it was “an alarming escalation in the Trump administration’s multipronged war on press freedom” and called the warrant “outrageous”.
“The administration may now be in possession of volumes of journalist communications having nothing to do with any pending investigation and, if investigators are able to access them, we have zero faith that they will respect journalist-source confidentiality,” he said.
Tim Richardson, journalism and disinformation program director at PEN America, said: “A government action this rare and aggressive signals a growing assault on independent reporting and undermines the First Amendment.
“It is intended to intimidate sources and chill journalists’ ability to gather news and hold the government accountable. Such behavior is more commonly associated with authoritarian police states than democratic societies that recognize journalism’s essential role in informing the public.”
The Post has had a rocky relationship with the Trump administration in recent months, despite its billionaire owner, Jeff Bezos, the Amazon founder, attempting to curry favor by blocking it from endorsing Kamala Harris, the Democratic nominee, in the 2024 presidential election.
Bezos defended the action, which prompted more than 200,000 subscribers to cancel in protest.
...
Read the original on www.theguardian.com »
In the past few months the MetaBrainz team has been fighting a battle against unscrupulous AI companies ignoring common courtesies (such as robots.txt) and scraping the Internet in order to build up their AI models. Rather than downloading our dataset in one complete download, they insist on loading all of MusicBrainz one page at a time. This of course would take hundreds of years to complete and is utterly pointless. In doing so, they are overloading our servers and preventing legitimate users from accessing our site.
Now the AI scrapers have found ListenBrainz and are hitting a number of our API endpoints for their nefarious data gathering purposes. In order to protect our services from becoming overloaded, we’ve made the following changes:
* The /metadata/lookup API endpoints (GET and POST versions) now require the caller to send an Authorization token in order for these endpoints to work (see the sketch after this list).
* The ListenBrainz Labs API endpoints for mbid-mapping, mbid-mapping-release and mbid-mapping-explain have been removed. Those were always intended for debugging purposes and will also soon be replaced with new endpoints for our upcoming improved mapper.
* LB Radio will now require users to be logged in to use it (and API endpoint users will need to send the Authorization header). The error message for users who are not logged in is a bit clunky at the moment; we’ll fix this once we’ve finished the work for this year’s Year in Music.
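For the first change above, the practical difference for callers is just the Authorization header. Here is a minimal sketch of what a lookup call might look like; the exact endpoint path, parameters and “Token” header format are assumptions based on the usual ListenBrainz API conventions, so check the API documentation for your specific call:

import requests

LB_TOKEN = "your-listenbrainz-user-token"  # placeholder; use the token from your ListenBrainz profile

response = requests.get(
    "https://api.listenbrainz.org/1/metadata/lookup/",
    params={"artist_name": "Portishead", "recording_name": "Glory Box"},
    # The new requirement: an Authorization token must accompany the request.
    headers={"Authorization": f"Token {LB_TOKEN}"},
)
response.raise_for_status()
print(response.json())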
Sorry for these hassles and no-notice changes, but they were required in order to keep our services functioning at an acceptable level.
...
Read the original on blog.metabrainz.org »
When hardware products reach end-of-life (EOL), companies should be forced to open-source the software.
I think we’ve made strides in this area with the “Right to Repair” movement, but let’s go one step further. Preferably with the power of the European Commission: enforce that when something goes end-of-life, companies need to open-source the software.
I have a “smart” weight scale. It still connects via Bluetooth just fine (meaning: I see it connect on my phone) but because the app is no longer in development, it’s essentially useless. A perfect piece of hardware, “dead” because the company behind it stopped supporting it. (I’m exaggerating a bit; it shows the weight on its display, but the app used to store data for up to 5 users to keep track over time. I miss that!) It’s infuriating that we allow this to happen with all the wasteful electronics already lying around. We deserve better.
I thought of this while reading this article. It’s great that Bose does this, but it’s rare. When Spotify killed off its $200 Car Thing at the end of 2024, we just accepted it and moved on, even though that’s $200 of hardware turned into e-waste overnight. Out of sustainability concerns, but also just out of doing what’s right: this should not be able to happen.
Now, I’m not asking companies to open-source their entire codebase. That’s unrealistic when an app is tied to a larger platform. What I am asking for: publish a basic GitHub repo with the hardware specs and connection protocols. Let the community build their own apps on top of it.
And here’s the thing: with vibe-coding making development more accessible than ever, this isn’t just for hardcore developers anymore. Regular users can actually tinker with this stuff now.
The worst you can do is break the software. But the hardware was bricked already anyway :-)
...
Read the original on www.marcia.no »
I have a habit of skimming the OpenJDK commit log every few weeks. Many commits are too complex for me to grasp in the limited time I have reserved for this … special hobby. But occasionally something catches my eye.
Last week, one particular commit stopped me mid-scroll.
The diffstat was interesting: +96 insertions, -54 deletions. The changeset adds a 55-line JMH benchmark, which means the production code itself is actually reduced.
What got removed from os_linux.cpp was the implementation behind ThreadMXBean.getCurrentThreadUserTime(). To get the current thread’s user CPU time, the old code read the thread’s stat file from /proc into a buffer and then parsed its way through a hostile format in which the command name can contain parentheses (hence a strrchr for the last closing parenthesis).
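The removed listing is not reproduced here, but the shape of the approach looks roughly like this (a simplified sketch based on the description above, not the verbatim HotSpot code):

#include <stdio.h>
#include <string.h>

static long read_thread_user_ticks(int tid) {
  char path[64], buf[2048];
  snprintf(path, sizeof(path), "/proc/self/task/%d/stat", tid);
  FILE *fp = fopen(path, "r");
  if (fp == NULL) return -1;
  size_t n = fread(buf, 1, sizeof(buf) - 1, fp);
  fclose(fp);
  buf[n] = '\0';
  /* The command name may itself contain parentheses, so parsing starts
     after the last ')'. */
  char *s = strrchr(buf, ')');
  if (s == NULL) return -1;
  char state; int d; unsigned long skip, utime, stime;
  /* Thirteen format specifiers just to reach the utime/stime fields. */
  if (sscanf(s + 1, " %c %d %d %d %d %d %lu %lu %lu %lu %lu %lu %lu",
             &state, &d, &d, &d, &d, &d,
             &skip, &skip, &skip, &skip, &skip, &utime, &stime) != 13)
    return -1;
  return (long)utime;  /* in clock ticks; scale with sysconf(_SC_CLK_TCK) */
}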
For comparison, here’s what getCurrentThreadCpuTime() does and has always done:
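That listing is not reproduced here either, but it essentially boils down to the following sketch:

#include <time.h>

/* Total (user + system) CPU time of the calling thread, in nanoseconds. */
static long long current_thread_cpu_time_ns(void) {
  struct timespec ts;
  if (clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts) != 0)
    return -1;
  return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}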
Just a single clock_gettime() call. There is no file I/O, no complex parsing and no buffer to manage.
The original bug report, filed back in 2018, quantified the difference at roughly 30x to 400x depending on the setup.
The gap widens under concurrency. Why is clock_gettime() so much faster? Both approaches require kernel entry, but the difference is in what happens next.
The /proc path involves multiple syscalls, VFS machinery, string formatting kernel-side, and parsing userspace-side. The clock_gettime() path is one syscall with a direct function call chain.
Under concurrent load, the /proc approach also suffers from kernel lock contention. The bug report notes:
“Reading proc is slow (hence why this procedure is put under the method slow_thread_cpu_time(…)) and may lead to noticeable spikes in case of contention for kernel resources.”
So why didn’t getCurrentThreadUserTime() just use clock_gettime() from the start?
The answer is (probably) POSIX. The standard mandates that CLOCK_THREAD_CPUTIME_ID returns total CPU time (user + system). There’s no portable way to request user time only. Hence the /proc-based implementation.
The Linux port of OpenJDK isn’t limited to what POSIX defines; it can use Linux-specific features. Let’s see how.
Linux kernels since 2.6.12 (released in 2005) encode clock type information directly into the clockid_t value. When you call pthread_getcpuclockid(), you get back a clockid with a specific bit pattern: the low bits select the clock type, and the remaining bits encode the target PID/TID. We’ll come back to that in the bonus section.
The POSIX-compliant pthread_getcpuclockid() returns a clockid whose low bits are 10 (SCHED). But if you flip those low bits to 01 (VIRT), clock_gettime() will return user time only.
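In rough terms, the trick looks like this (a simplified sketch; the actual change in os_linux.cpp may differ in its details, and the CPUCLOCK_* constants below mirror kernel-internal values that are not exported in any userspace header):

#include <pthread.h>
#include <time.h>

/* Kernel-internal encoding of the low bits of a CPU-clock clockid:
   0 = PROF, 1 = VIRT (user time only), 2 = SCHED (user + system). */
#define CPUCLOCK_CLOCK_MASK 3
#define CPUCLOCK_VIRT       1

/* User CPU time of the calling thread, in nanoseconds, or -1 on error. */
static long long current_thread_user_time_ns(void) {
  clockid_t cid;
  if (pthread_getcpuclockid(pthread_self(), &cid) != 0)
    return -1;
  /* pthread_getcpuclockid() hands back a SCHED (user + system) clock;
     flip the low bits to VIRT to ask for user time only. */
  cid = (cid & ~CPUCLOCK_CLOCK_MASK) | CPUCLOCK_VIRT;
  struct timespec ts;
  if (clock_gettime(cid, &ts) != 0)
    return -1;
  return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}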
And that’s it. The new version has no file I/O, no buffer and certainly no sscanf() with thirteen format specifiers.
Let’s have a look at how it performs in practice. For this exercise, I am taking the JMH test included in the fix; the only changes are that I increased the number of threads from 1 to 16 and added a main() method for simple execution from an IDE.
Aside: This is a rather unscientific benchmark; I have other processes running on my desktop, etc. Anyway, here is the setup: Ryzen 9950X, JDK main branch at commit 8ab7d3b89f656e5c. For the “before” case, I reverted the fix rather than checking out an older revision.
The result: a single invocation took 11 microseconds on average, and the median was about 10 microseconds per invocation.
The CPU profile confirms that each invocation of getCurrentThreadUserTime() does multiple syscalls. In fact, most of the CPU time is spent in syscalls: we can see files being opened and closed, and closing alone results in multiple syscalls, including futex locks.
With the fix applied, the average went down from 11 microseconds to 279 nanoseconds. This means the latency of the fixed version is 40x lower than the old version. While this is not a 400x improvement, it’s within the 30x - 400x range from the original report. Chances are the delta would be higher with a different setup. Let’s have a look at the new profile.
The profile is much cleaner. There is just a single syscall. If the profile is to be trusted then most of the time is spent in JVM, outside of the kernel.
Is any of this documented? Barely. The bit encoding is stable and hasn’t changed in 20 years, but you won’t find it in the clock_gettime(2) man page. The closest thing to official documentation is the kernel source itself, in kernel/time/posix-cpu-timers.c and the CPUCLOCK_* macros.
My take: If glibc depends on it, it’s not going away.
When looking at profiler data from the ‘after’ run, I spotted a further optimization opportunity: a good portion of the remaining syscall is spent inside a radix tree lookup.
When the JVM calls pthread_getcpuclockid(), it receives a clockid that encodes the thread’s ID. When this clockid is passed to clock_gettime(), the kernel extracts the thread ID and performs a radix tree lookup to find the pid structure associated with that ID.
However, the Linux kernel has a fast-path. If the encoded PID in the clockid is 0, the kernel interprets this as “the current thread” and skips the radix tree lookup entirely, jumping to the current task’s structure directly.
The OpenJDK fix currently obtains the specific TID via pthread_getcpuclockid(), flips the bits, and passes the result to clock_gettime(). This forces the kernel to take the “generalized path” (the radix tree lookup).
If the JVM constructed the entire clockid manually with PID=0 encoded (rather than obtaining the clockid via pthread_getcpuclockid()), the kernel could take the fast-path and avoid the radix tree lookup altogether. The JVM already pokes bits in the clockid, so constructing it entirely from scratch wouldn’t be a bigger leap compatibility-wise.
First, a refresher on the clockid encoding: the low bits select the clock type, and the bits above them encode the target PID/TID. For the current thread, we want PID=0 encoded, which gives ~0 in the upper bits.
We can translate this into C++ as follows:
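(The post’s original listing is not reproduced here; the following is a minimal sketch of the idea, with locally defined constants that mirror the kernel’s internal CPUCLOCK_* encoding.)

#include <time.h>
#include <stdint.h>

/* Mirrors kernel/time/posix-cpu-timers.c; these values are not exported
   in any userspace header, which is exactly the compatibility bet. */
#define CPUCLOCK_VIRT           1  /* user time only */
#define CPUCLOCK_PERTHREAD_MASK 4  /* clock targets a thread, not a process */

/* Clockid meaning "user CPU time of the current thread". The target TID is
   stored complemented above the low three bits, so encoding TID = 0
   ("current thread") sets those upper bits to all ones. */
static clockid_t current_thread_user_clockid(void) {
  const uint32_t tid = 0;
  return (clockid_t)((~tid << 3) | CPUCLOCK_PERTHREAD_MASK | CPUCLOCK_VIRT);
}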
A tiny teensy change to user_thread_cpu_time(), building the clockid this way instead of calling pthread_getcpuclockid(), is then sufficient to make getCurrentThreadUserTime() use the fast-path in the kernel.
Given that we are in nanoseconds territory already, we tweak the test a bit:
* Use just a single thread to minimize noise
The benchmark changes are meant to eliminate noise from the rest of my system and get a more precise measurement of the small delta we expect.
The version currently in the JDK main branch averages 81.7 ns per invocation. With the manual clockid construction, which uses the kernel fast-path, the average goes down to 70.8 ns, so about a 13% improvement. The improvements are visible across all percentiles as well. Is it worth the loss of clarity from constructing the clockid manually rather than using pthread_getcpuclockid()? I am not entirely sure. The absolute gain is small and makes additional assumptions about kernel internals, including the size of clockid_t. On the other hand, it’s still a gain without any downside in practice. (famous last words…)
This is why I like browsing commits of large open source projects. A 40-line deletion eliminated a 400x performance gap. The fix required no new kernel features, just knowledge of a stable-but-obscure Linux ABI detail.
Read the kernel source. POSIX tells you what’s portable. The kernel source code tells you what’s possible. Sometimes there’s a 400x difference between the two. Whether it is worth exploiting is a different question.
Check the old assumptions. The /proc parsing approach made sense when it was written, before anyone realized it could be exploited this way. Assumptions get baked into code. Revisiting them occasionally pays off.
The change landed on December 3, 2025. Just one day before the JDK 26 feature freeze. If you’re using ThreadMXBean.getCurrentThreadUserTime(), JDK 26 (releasing March 2026) brings you a free 30-400x speedup!
Update: Jonas Norlinder (the patch author) shared his own deep-dive in the Hacker News discussion - written independently around the same time. Great minds! His is more rigorous on the memory overhead side; mine digs deeper into the bit encoding and the PID=0 fast-path.
...
Read the original on questdb.com »
Why some clothes shrink in the wash — and how to ‘unshrink’ them
Washing your favourite piece of clothing only to find out it shrank can be upsetting. Why does it happen, and how can you ‘unshrink’ it?
Analysis for The Conversation by textiles scientist Dr Nisa Salim
When your favourite dress or shirt shrinks in the wash, it can be devastating, especially if you followed the instructions closely. Unfortunately, some fabrics just seem to be more prone to shrinking than others — but why?
Understanding more about the science of textile fibres can not only help you prevent the shrinkage of clothing, but also might help you “rescue” the occasional garment after a laundry accident.
It’s all down to the fibres
To know more about clothing shrinkage, we first need to understand a little about how textiles are made.
Common textile fibres, such as cotton and linen, are made from plants. These fibres are irregular and crinkled in their natural form. If you zoom deeper inside them, you’ll see millions of tiny, long-chain cellulose molecules that naturally exist in coiled or convoluted shapes.
During textile manufacturing, these fibres are mechanically pulled, stretched and twisted to straighten and align these cellulose chains together. This creates smooth, long threads.
On a chemical level, there are also links between the chains called hydrogen bonds. These strengthen the fibre and the thread and make it more cohesive.
Threads are woven or knitted into fabrics, which locks in the tension that holds those fibres side by side.
However, these fibres have good “memory”. Whenever they’re exposed to heat, moisture or mechanical action (such as agitation in your washing machine), they tend to relax and return to their original crinkled state.
This fibre memory is why some fabrics wrinkle so easily and why some of them may even shrink after washing.
Magnified image of cotton fabric, showing threads ‘locked’ in against each other.
How does washing shrink the fabric?
To understand shrinkage, we again need to zoom down to the molecular level. During laundering, hot water helps to increase the energy level of fibres — this means they shake more rapidly which disrupts the hydrogen bonds holding them in place.
The way a fabric is knitted or woven also plays a role. Loosely knitted fabrics have more open spaces and loops, making them more susceptible to shrinkage. Tightly woven fabrics are more resistant because the threads are locked into place with less room to move.
Additionally, cellulose is hydrophilic — it attracts water. Water molecules penetrate inside the fibres, causing swelling and making them more flexible and mobile. Adding to all this is the tumble and twist action inside the washing machine.
The whole process makes the fibres relax and recoil back to their natural, less stretched, crinkled state. As a result, the garment shrinks.
It’s not just hot water — here’s why
This doesn’t just happen with hot water, as you may have experienced yourself with clothes made of rayon, for example.
Cold water can still penetrate into fibres, making them swell, along with the mechanical action of the tumbling in the washing machine. The effect is less dramatic with cold water, but it can happen.
To minimise shrinkage, you may use cold water, the lowest spin speed or the gentlest cycle available, especially for cotton and rayon. Machine labels don’t always fully explain the impact of spin speed and agitation. When in doubt, choose a “delicate” setting.
A wool fibre magnified, showing cuticles that appear like scales.
Different fibres shrink in different ways; there is no single mechanism that fits all.
While cellulose-based fabrics shrink as described above, wool is an animal-derived fibre made of keratin proteins. Its surface is covered in tiny, overlapping scales called cuticle cells.
During washing, these cuticles open up and interlock with neighbouring fibres causing fibre entanglement or “felting”. This makes the clothing feel denser and smaller — in other words, it shrinks.
Why don’t synthetics shrink as much?
Synthetic fibres such as polyester or nylon are made from petroleum-based polymers, engineered for stability and durability.
These polymers contain more crystalline regions that are highly ordered and act as an internal “skeleton”, preventing the fibres from crinkling.
Textile scientists and engineers are also working on fabrics that resist shrinkage through advanced material design. Among promising innovations are blended yarns that combine natural and synthetic fibres.
Some researchers are working on shape-memory polymers that can change shape — or return to a previous shape — in response to temperature or water, for example. This is different to stretch fabrics (such as those used in activewear) that are made up of highly elastic fibres which “bounce back” to their original state after stretching.
How can I unshrink a piece of clothing?
If a favourite garment has shrunk in the wash, you can try to rescue it with this simple method.
Gently soak the item in lukewarm water mixed with hair conditioner or baby shampoo (approximately one tablespoon per litre). Then, carefully stretch the fabric back into shape and dry it flat or under gentle tension — for example, by pegging the garment to a drying rack.
The reason this works is because conditioners have chemicals known as cationic surfactants. These will temporarily lubricate the fibres, making them more flexible and allowing you to gently pull everything back into place.
This process can’t completely reverse extreme shrinkage but it can help recover some of the lost size, making the clothes wearable again.
...
Read the original on www.swinburne.edu.au »
I can’t overstate how much I hate GitHub Actions. I don’t even remember hating any other piece of technology I used. Sure, I still make fun of PHP that I remember from times of PHP4, but even then I didn’t hate it. Merely I found it subpar technology to other emerging at the time (like Ruby on Rails or Django). And yet I hate GitHub Actions.
The day before writing these words I was implementing build.rs for my tmplr project. To save you a click: it is a file/project scaffold tool with human-readable (and craftable) template files. I (personally) use it very often, given how easy it is to craft new templates, by hand or with the aid of the tool, so check it out if you need a similar tool.
The build.rs used CUE to generate README.md, CHANGELOG.md and also a version/help file to guarantee consistency. It was a fun thing to do; it took approx. 1.5h and I even wrote an article about it. For myself and future generations.
I was happy with the results and didn’t check the CI output which, quite unsurprisingly, failed. I was using the cue binary inside build.rs, and without it the build simply couldn’t progress. When I woke up the next day and saw the e-mail from CI notifying me about the failed build, I immediately knew my day wasn’t going to start with puppies and rainbows.
It took a couple of attempts to find and push a GitHub Action that would install CUE, and then I got the worst of the worst results: one system in the matrix failing to build.
Makes sense, right? Even though my user base can be counted on the fingers of a one-arm-less and second-arm-hook-equipped pirate, it’s still a thing “One Should Do”.
And with all that, Linux ARM failed with “command can’t be found”. CUE installed and ran nicely for the other 3 targets, but for some reason it failed for Linux ARM.
In case you don’t care about why I hate GitHub but your mind started to wander to “what went wrong”, let me tell you, because I know.
So, supposedly, the cross build that happens in the matrix is heavily isolated. When I install CUE, I install it only on the x86_64 Linux host and the macOS ARM host. macOS has zero issues running the x86_64 binary, and no issues are raised when Linux x86_64 tries to run an x86_64 binary. But GitHub Actions is nice enough to hide the x86_64 binary from the arm64 runner, so that it won’t break.
Thank you GitHub Actions. What would’ve I done without you.
And so my least favorite feedback loop started and went like this:
Offer the Universe choice words it won’t soon forget
I got quite efficient when it comes to points 8 and 9 but otherwise the whole loop still took around 2-3 minutes to execute.
Yes. For a single change. Like having an editor with a 2-minute save lag, pushing a commit using a program running on cassette tapes, or playing chess over snail-mail. It’s 2026 for Pete’s sake, and we won’t tolerate this behavior!
Now of course, in some Perfect World, GitHub could have a local runner with all the bells and whistles. Or maybe something that would allow me to quickly check for progress upon the push or even something like a “scratch commit”, i.e. a way that I could testbed different runs without polluting history of both Git and Action runs.
But no such perfect world exists and one is at the whim of heartless YAML-based system.
I suffered only 30 minutes of such loops. Could’ve done it for longer but I was out of colorful language to use and felt without it the process just isn’t the same.
There is a wise saying on the internet that goes like this:
For the love of all that is holy, don’t let GitHub Actions manage your logic. Keep your scripts under your own damn control and just make the Actions call them!
This is what everyone should do. This is what I did.
I deleted build.rs (with a sliver of sadness because it was really nice - but sacrifices had to be made). I moved all the generation from build.rs to a GNU Makefile, committed the darn files into the repository, reverted the changes to CI and called it a day. Problem solved.
GitHub Actions, Friends & Gentlefolk, is the reason why we can’t have (some) nice things. I can’t count how many hours I’ve lost debugging the runners or trying to optimize the build process. It’s a sorry process every single time, a time that would be better spent elsewhere.
And yet there are some benefits, like macOS builds that would be quite hard to get otherwise. I don’t know any other system that would be easier to set up than GitHub Actions (if you know one, let me know), but it seems there’s no escape.
We are all doomed to GitHub Actions.
…but at least I dodged the bullet early.
...
Read the original on xlii.space »
Note: I am co-hosting an event in SF on Friday, Jan 16th.
In 1654, a Jesuit polymath named Athanasius Kircher published Mundus Subterraneus, a comprehensive geography of the Earth’s interior. It had maps and illustrations and rivers of fire and vast subterranean oceans and air channels connecting every volcano on the planet. He wrote that “the whole Earth is not solid but everywhere gaping, and hollowed with empty rooms and spaces, and hidden burrows.” Alongside comments like this, Athanasius identified the legendary lost island of Atlantis, pondered where one could find the remains of giants, and detailed the kinds of animals that lived in this lower world, including dragons. The book was based entirely on secondhand accounts, like travelers’ tales, miners’ reports, and classical texts, so it was as comprehensive as it could’ve possibly been.
But Athanasius had never been underground and neither had anyone else, not really, not in a way that mattered.
Today, I am in San Francisco, the site of the 2026 J. P. Morgan Healthcare Conference, and it feels a lot like Mundus Subterraneus.
There is ostensibly plenty of evidence to believe that the conference exists, that it actually occurs from January 12 to January 16, 2026 at the Westin St. Francis Hotel, 335 Powell Street, San Francisco, and that it has done so for the last forty-four years, just like everyone has told you. There is a website for it, there are articles about it, there are dozens of AI-generated posts on Linkedin about how excited people were about it. But I have never met anyone who has actually been inside the conference.
I have never been approached by one, or seated next to one, or introduced to one. They do not appear in my life. They do not appear in anyone’s life that I know. I have put my boots on the ground to rectify this, and asked around, first casually and then less casually, “Do you know anyone who has attended the JPM conference?”, and then they nod, and then I refine the question to be, “No, no, like, someone who has actually been in the physical conference space”, then they look at me like I’ve asked if they know anyone who’s been to the moon. They know it happens. They assume someone goes. Not them, because, just like me, ordinary people like them do not go to the moon, but rather exist around the moon, having coffee chats and organizing little parties around it, all while trusting that the moon is being attended to.
The conference has six focuses: AI in Drug Discovery and Development, AI in Diagnostics, AI for Operational Efficiency, AI in Remote and Virtual Healthcare, AI and Regulatory Compliance, and AI Ethics and Data Privacy. There is also a seventh theme over ‘Keynote Discussions’, the three of which are The Future of AI in Precision Medicine, Ethical AI in Healthcare, and Investing in AI for Healthcare. Somehow, every single thematic concept at this conference has converged onto artificial intelligence as the only thing worth seriously discussing.
Isn’t this strange? Surely, you must feel the same thing as me, the inescapable suspicion that the whole show is being put on by an unconscious Chinese Room, its only job to pass semi-legible symbols over to us with no regard as to what they actually mean. In fact, this pattern is consistent across not only how the conference communicates itself, but also how biopharmaceutical news outlets discuss it.
Each year, Endpoints News and STAT and BioCentury and FiercePharma all publish extensive coverage of the J. P. Morgan Healthcare Conference. I have read the articles they have put out, and none of it feels like it was written by someone who actually was at the event. There is no emotional energy, no personal anecdotes, all of it has been removed, shredded into one homogeneous, smoothie-like texture. The coverage contains phrases like “pipeline updates” and “strategic priorities” and “catalysts expected in the second half.” If the writers of these articles ever approach a human-like tenor, it is in reference to the conference’s “tone”. The tone is “cautiously optimistic.” The tone is “more subdued than expected.” The tone is “mixed.” What does this mean? What is a mixed tone? What is a cautiously optimistic tone? These are not descriptions of a place. They are more accurately descriptions of a sentiment, abstracted from any physical reality, hovering somewhere above the conference like a weather system.
I could write this coverage. I could write it from my horrible apartment in New York City, without attending anything at all. I could say: “The tone at this year’s J. P. Morgan Healthcare Conference was cautiously optimistic, with executives expressing measured enthusiasm about near-term catalysts while acknowledging macroeconomic headwinds.” I made that up in fifteen seconds. Does it sound fake? It shouldn’t, because it sounds exactly like the coverage of a supposedly real thing that has happened every year for the last forty-four years.
Speaking of the astral body I mentioned earlier, there is an interesting historical parallel to draw there. In 1835, the New York Sun published a series of articles claiming that the astronomer Sir John Herschel had discovered life on the moon. Bat-winged humanoids, unicorns, temples made of sentient sapphire, that sort of stuff. The articles were detailed, describing not only these creatures’ appearance, but also their social behaviors and mating practices. All of these cited Herschel’s observations through a powerful new telescope. The series was a sensation. It was also, obviously, a hoax, the Great Moon Hoax as it came to be known. Importantly, the hoax worked not because the details were plausible, but because they had the energy of genuine reporting: Herschel was a real astronomer, and telescopes were real, and the moon was real, so how could any combination that involved these three be fake?
To clarify: I am not saying the J. P. Morgan Healthcare Conference is a hoax.
What I am saying is that I, nor anybody, can tell the difference between the conference coverage and a very well-executed hoax. Consider that the Great Moon Hoax was walking a very fine tightrope between giving the appearance of seriousness, while also not giving away too many details that’d let the cat out of the bag. Here, the conference rhymes.
For example: photographs. You would think there would be photographs. The (claimed) conference attendees number in the thousands, many of them with smartphones, all of them presumably capable of pointing a camera at a thing and pressing a button. But the photographs are strange, walking that exact snickering line that the New York Sun walked. They are mostly photographs of the outside of the Westin St. Francis, or they are photographs of people standing in front of step-and-repeat banners, or they are photographs of the schedule, displayed on a screen, as if to prove that the schedule exists. But photographs of the inside with the panels, audience, the keynotes in progress; these are rare. And when I do find them, they are shot from angles that reveal nothing, that could be anywhere, that could be a Marriott ballroom in Cleveland.
Is this a conspiracy theory? You can call it that, but I have a very professional online presence, so I personally wouldn’t. In fact, I wouldn’t even say that the J. P. Morgan Healthcare Conference is not real, but rather that it is real, but not actually materially real.
To explain what I mean, we can rely on economist Thomas Schelling to help us out. Sixty-six years ago, Schelling proposed a thought experiment: if you had to meet a stranger in New York City on a specific day, with no way to communicate beforehand, where would you go? The answer, for most people, is Grand Central Station, at noon. Not because Grand Central Station is special. Not because noon is special. But because everyone knows that everyone else knows that Grand Central Station at noon is the obvious choice, and this mutual knowledge of mutual knowledge is enough to spontaneously produce coordination out of nothing. This, Grand Central Station and places just like it, are what’s known as a Schelling point.
Schelling points appear when they are needed, burnt into our genetic code, Pleistocene subroutines running on repeat, left over from when we were small and furry and needed to know, without speaking, where the rest of the troop would be when the leopards came. The J. P. Morgan Healthcare Conference, on the second week of January, every January, Westin St. Francis, San Francisco, is what happened when that ancient coordination instinct was handed an industry too vast and too abstract to organize by any other means. Something deep drives us to gather here, at this time, at this date.
To preempt the obvious questions: I don’t know why this particular location or time or demographic were chosen. I especially don’t know why J. P. Morgan of all groups was chosen to organize the whole thing. All of this simply is.
If you find any of this hard to believe, observe that the whole event is, structurally, a religious pilgrimage, and has all the quirks you may expect of a religious pilgrimage. And I don’t mean that as a metaphor, I mean it literally, in every dimension except the one where someone official admits it, and J. P. Morgan certainly won’t.
Consider the elements. A specific place, a specific time, an annual cycle, a journey undertaken by the faithful, the presence of hierarchy and exclusion, the production of meaning through ritual rather than content. The hajj requires Muslims to circle the Kaaba seven times. The J. P. Morgan Healthcare Conference requires devotees of the biopharmaceutical industry to slither into San Francisco for five days, nearly all of them—in my opinion, all of them—never actually entering the conference itself, but instead orbiting it, circumambulating it, taking coffee chats in its gravitational field. The Kaaba is a cube containing, according to tradition, nothing, an empty room, the holiest empty room in the world. The Westin St. Francis is also, roughly, a cube. I am not saying these are the same thing. I am saying that we have, as a species, a deep and unexamined relationship to cubes.
This is my strongest theory so far. That the J. P. Morgan Healthcare conference isn’t exactly real or unreal, but a mass-coordination social contract that has been unconsciously signed by everyone in this industry, transcending the need for an underlying referent.
My skeptical readers will protest at this, and they would be correct to do so. The story I have written out is clean, but it cannot be fully correct. Thomas Schelling was not so naive as to believe that Schelling points spontaneously generate out of thin air, there is always a reason, a specific, grounded reason, that their concepts become the low-energy metaphysical basins that they are. Grand Central Station is special because of the cultural gravitas it has accumulated through popular media. Noon is special because that is when the sun reaches its zenith. The Kaaba was worshipped because it was not some arbitrary cube; the cube itself was special, that it contained The Black Stone, set into the eastern corner, a relic that predates Islam itself, that some traditions claim fell from heaven.
And there are signs, if you know where to look, that the underlying referent for the Westin St. Francis status being a gathering area is physical. Consider the heat. It is January in San Francisco, usually brisk, yet the interior of the Westin St. Francis maintains a distinct, humid microclimate. Consider the low-frequency vibration in the lobby that ripples the surface of water glasses, but doesn’t seem to register on local, public seismographs. There is something about the building itself that feels distinctly alien. But, upon standing outside the building for long enough, you’ll have the nagging sensation that it is not something about the hotel that feels off, but rather, what lies within, underneath, and around the hotel.
There’s no easy way to sugarcoat this, so I’ll just come out and say it: it is possible that the entirety of California is built on top of one immensely large organism, and the particular spot in which the Westin St. Francis Hotel stands—335 Powell Street, San Francisco, 94102—is located directly above its beating heart. And that this is the primary organizing focal point for both the location and entire reason for the J. P. Morgan Healthcare Conference.
I believe that the hotel maintains dozens of meter-thick polyvinyl chloride plastic tubes that have been threaded down through the basement, through the bedrock, through geological strata, and into the cardiovascular system of something that has been lying beneath the Pacific coast since before the Pacific coast existed. That the hotel is a singular, thirty-two story central line. That, during the week of the conference, hundreds of gallons of drugs flow through these tubes, into the pulsating mass of the being, pouring down arteries the size of canyons across California. The dosing takes five days; hence the length of the conference.
And I do not believe that the drugs being administered here are simply sedatives. They are, in fact, the opposite of sedatives. The drugs are keeping the thing beneath California alive. There is something wrong with the creature, and a select group of attendees at the J. P. Morgan Healthcare Conference have become its primary caretakers.
Why? The answer is obvious: there is nothing good that can come from having an organic creature that spans hundreds of thousands of square miles suddenly die, especially if that same creature’s mass makes up a substantial portion of the fifth-largest economy on the planet, larger than India, larger than the United Kingdom, larger than most countries that we think of as significant. Maybe letting the nation slide off into the sea was an option at one point, but not anymore. California produces more than half of the fruits, vegetables, and nuts grown in the United States. California produces the majority of the world’s entertainment. California produces the technology that has restructured human communication. Nobody can afford to let the whole thing collapse.
So, perhaps it was decided that California must survive, at least for as long as possible. Hence Amgen. Hence Genentech. Hence the entire biotech revolution, which we are taught to understand as a triumph of science and entrepreneurship, a story about venture capital and recombinant DNA and the genius of the California business climate. The story is not false, but incomplete. The reason for the revolution was, above all else, because the creature needed medicine, and the old methods of making medicine were no longer adequate, and someone decided that the only way to save the patient was to create an entire industry dedicated to its care.
Why is drug development so expensive? Because the real R&D costs are for the primary patient, the being underneath California, and human applications are an afterthought, a way of recouping investment. Why do so many clinical trials fail? For the same reason; the drugs are not meant for our species. Why is the industry concentrated in San Francisco, San Diego, Boston? Because these are monitoring stations, places where other intravenous lines have been drilled into other organs, other places where the creature surfaces close enough to reach.
Finally, consider the hotel itself. The Westin St. Francis was built in 1904, and, throughout its entire existence, it has never, ever, even once, closed or stopped operating. The 1906 earthquake leveled most of San Francisco, and the Westin St. Francis did not fall. It was damaged, yes, but it did not fall. The 1989 Loma Prieta earthquake killed sixty-three people and collapsed a section of the Bay Bridge. Still, the Westin St. Francis did not fall. It cannot fall, because if it falls, the central line is severed, and if the central line is severed, the creature dies, and if the creature dies, we lose California, and if we lose California, our civilization loses everything that California has been quietly holding together. And so the Westin St. Francis has hosted every single J. P. Morgan Healthcare Conference since 1983, has never missed one, has never even come close to missing one, and will not miss the next one, or the one after that, or any of the ones that follow.
If you think about it, this all makes a lot of sense. It may also seem very unlikely, but unlikely things have been known to happen throughout history. Mundus Subterraneus had a section on the “seeds of metals,” a theory that gold and silver grew underground like plants, sprouting from mineral seeds in the moist, oxygen-poor darkness. This was wrong, but the intuition beneath it was not entirely misguided. We now understand that the Earth’s mantle is a kind of eternal engine of astronomical size, cycling matter through subduction zones and volcanic systems, creating and destroying crust. Athanasius was wrong about the mechanism, but right about the structure. The earth is not solid. It is everywhere gaping, hollowed with empty rooms, and it is alive.
...
Read the original on www.owlposting.com »
I was recently building a feature for Greptile (an AI-powered code review tool), when I hit a weird snag with GitHub’s API.
The feature should have been simple: I wanted to add clickable links to GitHub PR comments, so users could jump directly from our reviews to relevant GitHub discussions. We already stored the comment IDs, so I just needed to construct the URLs.
The problem was, when I tested it, the links didn’t work.
Searching through GitHub’s documentation for answers revealed that their team maintains two separate ID systems. We’d been using GitHub’s GraphQL API, which returns node IDs like PRRC_kwDOL4aMSs6Tkzl8. GitHub designed these node IDs to uniquely identify any object across its entire system. But web URLs required database IDs, integer values visible in URLs and often associated with REST responses, like 2475899260.
I was looking at either backfilling millions of records or migrating our entire database, and neither sounded fun. So I did what any annoyed engineer would do: I stared at these IDs for way too long, looking for a way out of the migration.
I looked for a relationship between these two ID formats. I pulled up a few of our stored node IDs and opened the corresponding PR comments from the same pull request in my editor.
The database IDs incremented sequentially, and the node IDs were almost identical too, differing only in their last few characters. GitHub’s documentation mentioned that node IDs are base64 encoded. I tried decoding just the part after PRRC_:
import base64

def base64_2_int(s):
    base64_part = s.split("_")[1]
    return int.from_bytes(base64.b64decode(base64_part), "big")
The decoded values were very long (96-bit) integers.
The decoded integers were incremented by 798, exactly matching the database ID increment. The database ID had to be embedded in there somewhere.
Since both values were changing by the same amount, and the decoded value was 96 bits, I figured the database ID was likely embedded in the lower 32 bits of the node ID. I wrote a quick test:
def node_id_to_database_id(s):
    decoded = int.from_bytes(base64.b64decode(s.split("_")[1]), "big")
    # Mask to keep only the lower 32 bits
    return decoded & ((1 << 32) - 1)

node_id_to_database_id("PRRC_kwDOL4aMSs6Tkzl8")
# Returns: 2475899260
It worked! The database ID was just the last 32 bits of the decoded node ID. I could skip the entire migration, and extract what I needed with a simple bitmask operation.
After the relief sunk in, I couldn’t help but ask, “If the database ID only used the last 32 bits out of the 96 total bits, what were the first 64 bits being used for?”
Since the node ID is a global identifier across all of GitHub, I assumed that the extra 64 bits had to encode either the object type or an id to another resource that “owned” the current node. I wanted to see if I could decode them the same way I’d decoded the database ID.
To understand what was in those 64 bits, I started querying different GitHub objects. My test repository returned the familiar PRRC_ format for everything. I tried the first famous repository that came to mind, torvalds/linux, to see if the pattern held.
The responses were completely different base64-encoded strings:
MDEwOlJlcG9zaXRvcnkyMzI1Mjk4
MDQ6VHJlZTIzMjUyOTg6NzIwMWJmYjkyOGIyOWU4MGIwMDVkYTE1OTc4MzQ1ZjIzYmEwZmY5Yg==
MDQ6QmxvYjIzMjUyOTg6ZjM3MWExM2I0ZDE5MmQyZTM3ZDcwMTdiNjNlMzNkZmE3YzY3Mzc4Zg==
When I decoded these they showed the following:
base64.b64decode("MDEwOlJlcG9zaXRvcnkyMzI1Mjk4")
# Returns: b'010:Repository2325298'
The Linux repository was using a completely different format. I realized the repository was created in 2011. By picking an old repository, I’d accidentally stumbled onto GitHub’s legacy ID format which was quite simple:
[Object Type Number]:[Object Type Name][Database ID]
That repository ID (010:Repository2325298) had a clear structure: 010 is some type enum, followed by a colon, the word Repository, and then the database ID 2325298. Since repositories are just containers, I wanted to see if git objects like trees would reveal more complexity:
base64.b64decode("MDQ6VHJlZTIzMjUyOTg6NzIwMWJmYjkyOGI…")
# Returns: b'04:Tree2325298:7201bfb928b29e80b005da15978345f23ba0ff9b'
That’s the enum again, the word Tree, the repository ID, and the tree SHA.
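A tiny helper makes the legacy structure easy to see (the function name here is mine, purely for illustration):

import base64

def decode_legacy_node_id(node_id):
    # Legacy IDs are just base64 of "<type enum>:<TypeName><database id>[:<git sha>]".
    decoded = base64.b64decode(node_id).decode()
    type_enum, rest = decoded.split(":", 1)
    return type_enum, rest

decode_legacy_node_id("MDEwOlJlcG9zaXRvcnkyMzI1Mjk4")
# Returns: ('010', 'Repository2325298')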
It was apparent that GitHub had two systems for ID’ing their internal objects. Somewhere in GitHub’s codebase, there’s an if-statement checking when a repository was created to decide which ID format to return.
I started mapping out which objects used which format. The pattern wasn’t as simple as “old repos use old IDs, new repos use new IDs”:
Old repositories kept their legacy IDs, while newer ones were issued IDs following the new format. But the split isn’t clean; GitHub still uses the legacy format for some object types, like Users, even when newly created. New objects in old repositories sometimes get new IDs, sometimes don’t. It depends on their creation date.
Surely the new format had some benefit that warranted this messy migration. It shouldn’t be too hard to create a more efficient IDing system than base64 encoding the string representation of an enum and the object name. This information could easily be packed into those 64 extra bits that I still had to understand.
GitHub’s migration guide tells developers to treat the new IDs as opaque strings and use them only as references. However, it was clear that there was some underlying structure to these IDs, as we just saw with the bitmasking. My best guess was that they used some binary serialization format, so I could just test a bunch to see what worked.
This is when I came across MessagePack, a compact binary serialization format. It seemed promising as it was frequently used in Ruby projects, and GitHub’s backend is built on Ruby. I tried decoding it:
import msgpack
import base64

def decode_new_node_id(node_id):
    prefix, encoded = node_id.split("_")
    packed = base64.b64decode(encoded)
    return msgpack.unpackb(packed)

decode_new_node_id("PRRC_kwDOL4aMSs6Tkzl8")
# Returns: [0, 47954445, 2475899260]
It worked. The new format uses MessagePack to encode the relevant IDs into an array.
The structure made sense once I saw it:
* First element (0): Still unclear. Probably a version identifier, but if you know what this is for, please email me at soohoon@greptile.com.
* Second element (47954445): The repository’s database ID. This provides the context needed to make the ID global. Pull requests, issues, and comments are all usually scoped to a repository.
* Third element (2475899260): The comment’s own database ID, the value we were after.
Different object types sometimes have different array lengths. Repositories only need [0, repository_database_id]. Commits include the git SHA: [0, repository_database_id, commit_sha]. The first element is always 0, and repository-scoped objects include both the repository ID and the specific object identifier. Since the database ID of the comment is the last element in the array, when bitmasking for the lower 32 bits we are able to extract just that.
What started as a URL generation problem turned into “reverse-engineering” and exploring GitHub’s ID system.
Putting it all together, for modern GitHub node IDs you can use:
import base64
import msgpack

def node_id_to_database_id(node_id):
    prefix, encoded = node_id.split("_")
    packed = base64.b64decode(encoded)
    array = msgpack.unpackb(packed)
    return array[-1]
to extract the database ID for pull request comments. Should I have made sure that we were storing the right ID in the first place? Probably, but then I wouldn’t have had much fun uncovering all of this. And my deepest condolences to the GitHub engineer who has to deal with supporting these two different node ID formats.
...
Read the original on www.greptile.com »