10 interesting stories served every morning and every evening.




1 799 shares, 129 trendiness

FBI raids home of Washington Post reporter in ‘highly unusual and aggressive’ move

The FBI raided the home of a Washington Post reporter early on Wednesday in what the newspaper called a “highly unusual and aggressive” move by law enforcement, and press freedom groups condemned as a “tremendous intrusion” by the Trump administration.

Agents descended on the Virginia home of Hannah Natanson as part of an investigation into a government contractor accused of illegally retaining classified government materials.

An email sent on Wednesday afternoon to Post staff from the executive editor, Matt Murray, obtained by the Guardian, said agents turned up “unannounced”, searched her home and seized electronic devices.

“This extraordinary, aggressive action is deeply concerning and raises profound questions and concern around the constitutional protections for our work,” the email said.

“The Washington Post has a long history of zealous support for robust press freedoms. The entire institution stands by those freedoms and our work.”

“It’s a clear and appalling sign that this administration will set no limits on its acts of aggression against an independent press,” Marty Baron, the Post’s former executive editor, told the Guardian.

Murray said neither the newspaper nor Natanson was told they were the target of a justice department investigation.

Pam Bondi, the attorney general, said in a post on X that the raid was conducted by the justice department and FBI at the request of the Pentagon.

The warrant, she said, was “executed at the home of a Washington Post journalist who was obtaining and reporting classified and illegally leaked information from a Pentagon contractor. The leaker is currently behind bars.”

The statement gave no further details of the raid or investigation. Bondi added: “The Trump administration will not tolerate illegal leaks of classified information that, when reported, pose a grave risk to our nation’s national security and the brave men and women who are serving our country.”

The reporter’s home and devices were searched, and her Garmin watch, phone, and two laptop computers, one belonging to her employer, were seized, the newspaper said. It added that agents told Natanson she was not the focus of the investigation, and was not accused of any wrongdoing.

A warrant obtained by the Post cited an investigation into Aurelio Perez-Lugones, a system administrator in Maryland with a top secret security clearance who has been accused of accessing and taking home classified intelligence reports.

Natanson, the Post said, covers the federal workforce and has been a part of the newspaper’s “most high-profile and sensitive coverage” during the first year of the second Trump administration.

As the paper noted in its report, it is “highly unusual and aggressive for law enforcement to conduct a search on a reporter’s home”.

In a first-person account published last month, Natanson described herself as the Post’s “federal government whisperer”, and said she would receive calls day and night from federal workers who wanted to “tell me how President Donald Trump was rewriting their workplace policies, firing their colleagues or transforming their agency’s missions”.

“It’s been brutal,” the article’s headline said.

Natanson said her work had led to 1,169 new sources, all current or former federal employees who “decided to trust me with their stories”. She said she learned information “people inside government agencies weren’t supposed to tell me”, saying that the intensity of the work “nearly broke” her.

The federal investigation into Perez-Lugones, the Post said, involved documents found in his lunchbox and his basement, according to an FBI affidavit. The criminal complaint against him does not accuse him of leaking classified information, the newspaper said.

Press freedom groups were united in their condemnation of the raid on Wednesday.

“Physical searches of reporters’ devices, homes and belongings are some of the most invasive investigative steps law enforcement can take,” Bruce D Brown, president of the Reporters Committee for Freedom of the Press, said in a statement.

“There are specific federal laws and policies at the Department of Justice that are meant to limit searches to the most extreme cases because they endanger confidential sources far beyond just one investigation and impair public interest reporting in general.

“While we won’t know the government’s arguments about overcoming these very steep hurdles until the affidavit is made public, this is a tremendous escalation in the administration’s intrusions into the independence of the press.”

Jameel Jaffer, executive director of the Knight First Amendment Institute, demanded a public explanation from the justice department of “why it believes this search was necessary and legally permissible”.

In a statement, Jaffer said: “Any search targeting a journalist warrants intense scrutiny because these kinds of searches can deter and impede reporting that is vital to our democracy.

“Attorney General Bondi has weakened guidelines that were intended to protect the freedom of the press, but there are still important legal limits, including constitutional ones, on the government’s authority to use subpoenas, court orders, and search warrants to obtain information from journalists.

“Searches of newsrooms and journalists are hallmarks of illiberal regimes, and we must ensure that these practices are not normalized here.”

Seth Stern, chief of advocacy for the Freedom of the Press Foundation, said it was “an alarming escalation in the Trump administration’s multipronged war on press freedom” and called the warrant “outrageous”.

“The administration may now be in possession of volumes of journalist communications having nothing to do with any pending investigation and, if investigators are able to access them, we have zero faith that they will respect journalist-source confidentiality,” he said.

Tim Richardson, journalism and disinformation program director at PEN America, said: “A government action this rare and aggressive signals a growing assault on independent reporting and undermines the First Amendment.

“It is intended to intimidate sources and chill journalists’ ability to gather news and hold the government accountable. Such behavior is more commonly associated with authoritarian police states than democratic societies that recognize journalism’s essential role in informing the public.”

The Post has had a rocky relationship with the Trump administration in recent months, despite its billionaire owner, Jeff Bezos, the Amazon founder, attempting to curry favor by blocking the paper from endorsing Kamala Harris, the Democratic nominee, in the 2024 presidential election.

Bezos defended that decision, which prompted more than 200,000 subscribers to cancel in protest.

...

Read the original on www.theguardian.com »

2 701 shares, 39 trendiness

There's a ridiculous amount of tech in a disposable vape

...

Read the original on blog.jgc.org »

3 442 shares, 15 trendiness

We can’t have nice things… because of AI scrapers

In the past few months the MetaBrainz team has been fighting a battle against unscrupulous AI companies ignoring common courtesies (such as robots.txt) and scraping the Internet in order to build up their AI models. Rather than downloading our dataset in one complete download, they insist on loading all of MusicBrainz one page at a time. This would of course take hundreds of years to complete and is utterly pointless. In doing so, they are overloading our servers and preventing legitimate users from accessing our site.

Now the AI scrapers have found ListenBrainz and are hitting a number of our API endpoints for their nefarious data gathering purposes. In order to protect our services from becoming overloaded, we’ve made the following changes:

* The /metadata/lookup API endpoints (GET and POST versions) now require the caller to send an Authorization token in order for the endpoint to work.

* The ListenBrainz Labs API endpoints for mbid-mapping, mbid-mapping-release and mbid-mapping-explain have been removed. Those were always intended for debugging purposes and will soon be replaced with new endpoints for our upcoming improved mapper.

* LB Radio will now require users to be logged in to use it (and API endpoint users will need to send the Authorization header). The error message for logged-in users is a bit clunky at the moment; we’ll fix this once we’ve finished the work for this year’s Year in Music.
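For API consumers, the practical change is the Authorization header. Here is a minimal sketch using only the Python standard library; note that the exact endpoint path, the parameter names, and the “Token &lt;user-token&gt;” scheme are assumptions based on ListenBrainz API conventions, not something this post spells out, so check the official docs before relying on them.

```python
import json
import urllib.parse
import urllib.request

API_ROOT = "https://api.listenbrainz.org/1"  # assumed API root


def build_lookup_request(artist: str, recording: str, token: str):
    """Build a GET request for /metadata/lookup with the now-required
    Authorization header (the "Token <user-token>" scheme is assumed)."""
    params = urllib.parse.urlencode(
        {"artist_name": artist, "recording_name": recording}
    )
    return urllib.request.Request(
        f"{API_ROOT}/metadata/lookup/?{params}",
        headers={"Authorization": f"Token {token}"},
    )


def lookup_metadata(artist: str, recording: str, token: str) -> dict:
    """Send the request and decode the JSON response."""
    req = build_lookup_request(artist, recording, token)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Without the header, these endpoints can now be expected to reject the request rather than serve anonymous scrapers.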

Sorry for these hassles and no-notice changes, but they were required in order to keep our services functioning at an acceptable level.

...

Read the original on blog.metabrainz.org »

4 372 shares, 13 trendiness

EOL

When hardware products reach end-of-life (EOL), companies should be forced to open-source the software.

I think we’ve made strides in this area with the “Right to Repair” movement, but let’s go one step further. Preferably with the power of the European Commission: enforce that when something goes end-of-life, companies need to open-source the software.

I have a “smart” weight scale. It still connects via Bluetooth just fine (meaning: I see it connect on my phone) but because the app is no longer in development, it’s essentially useless. A perfect piece of hardware, “dead” because the company behind it stopped supporting it. (I’m exaggerating a bit; it shows the weight on its display, but the app used to store data for up to 5 users to keep track over time. I miss that!) It’s infuriating that we allow this to happen with all the wasteful electronics already lying around. We deserve better.

I thought of this while reading this article. It’s great that Bose does this, but it’s rare. When Spotify killed off its $200 Car Thing at the end of 2024, we just accepted it and moved on, even though that’s $200 of hardware turned into e-waste overnight. Out of sustainability concerns, but also just out of doing what’s right: this should not be able to happen.

Now, I’m not asking companies to open-source their entire codebase. That’s unrealistic when an app is tied to a larger platform. What I am asking for: publish a basic GitHub repo with the hardware specs and connection protocols. Let the community build their own apps on top of it.

And here’s the thing: with vibe-coding making development more accessible than ever, this isn’t just for hardcore developers anymore. Regular users can actually tinker with this stuff now.

The worst you can do is break the software. But the hardware was bricked already anyway :-)

...

Read the original on www.marcia.no »

5 350 shares, 15 trendiness

How a 40-Line Fix Eliminated a 400x Performance Gap

I have a habit of skimming the OpenJDK commit log every few weeks. Many commits are too complex for me to grasp in the limited time I have reserved for this … special hobby. But occasionally something catches my eye.

Last week, this commit stopped me mid-scroll:

The diffstat was interesting: +96 insertions, -54 deletions. The changeset adds a 55-line JMH benchmark, which means the production code itself is actually reduced.

Here’s what got re­moved from os­_linux.cpp:

This was the implementation behind ThreadMXBean.getCurrentThreadUserTime(). To get the current thread’s user CPU time, the old code was:

Opening and reading the thread’s /proc stat file into a buffer, then parsing through a hostile format where the command name can contain parentheses (hence the strrchr for the last “)”)
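In rough Python terms, the parsing problem looks like the sketch below. The field numbers come from proc(5); this is an illustration of why the format is hostile, not the JVM’s actual code.

```python
def utime_ticks(stat_line: str) -> int:
    """Extract utime (field 14, 1-based, in clock ticks) from a
    /proc/<pid>/stat line. The comm field (field 2) is parenthesized
    and may itself contain ')', so, like the C code's strrchr, we
    scan for the *last* ')' before splitting the rest."""
    rest = stat_line[stat_line.rindex(")") + 1:].split()
    # rest[0] is field 3 (state), so field 14 (utime) is rest[11].
    return int(rest[11])
```

And even after this, the value is still in clock ticks and would need dividing by sysconf(_SC_CLK_TCK) to become a time, one more userspace step the clock_gettime() path never has to take.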

For comparison, here’s what getCurrentThreadCpuTime() does and has always done:

Just a single clock_gettime() call. There is no file I/O, no complex parsing, and no buffer to manage.
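The same POSIX clock is reachable from Python, which makes the contrast easy to try for yourself (a sketch, not the JVM code):

```python
import time


def thread_cpu_time() -> float:
    # One clock_gettime() syscall: no file I/O, no parsing, no buffers.
    # Per POSIX, CLOCK_THREAD_CPUTIME_ID is user + system time for the
    # calling thread.
    return time.clock_gettime(time.CLOCK_THREAD_CPUTIME_ID)


start = thread_cpu_time()
sum(i * i for i in range(200_000))  # burn some user-mode CPU
elapsed = thread_cpu_time() - start
```

The measured value moves with CPU actually consumed by this thread, not with wall-clock time spent sleeping or blocked.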

The original bug report, filed back in 2018, quantified the difference:

The gap widens under concurrency. Why is clock_gettime() so much faster? Both approaches require kernel entry, but the difference is in what happens next.

The /proc path involves multiple syscalls, VFS machinery, string formatting kernel-side, and parsing userspace-side. The clock_gettime() path is one syscall with a direct function call chain.

Under concurrent load, the /proc approach also suffers from kernel lock contention. The bug report notes:

“Reading proc is slow (hence why this procedure is put under the method slow_thread_cpu_time(…)) and may lead to noticeable spikes in case of contention for kernel resources.”

So why didn’t getCurrentThreadUserTime() just use clock_gettime() from the start?

The answer is (probably) POSIX. The standard mandates that CLOCK_THREAD_CPUTIME_ID returns total CPU time (user + system). There’s no portable way to request user time only. Hence the /proc-based implementation.

The Linux port of OpenJDK isn’t limited to what POSIX defines; it can use Linux-specific features. Let’s see how.

Linux kernels since 2.6.12 (released in 2005) encode clock type information directly into the clockid_t value. When you call pthread_getcpuclockid(), you get back a clockid with a specific bit pattern:

The remaining bits encode the target PID/TID. We’ll come back to that in the bonus section.

The POSIX-compliant pthread_getcpuclockid() returns a clockid with its low bits set to 10 (SCHED). But if you flip those low bits to 01 (VIRT), clock_gettime() will return user time only.
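The bit manipulation can be sketched in a few lines of Python. The constants mirror the CPUCLOCK_* macros in kernel/time/posix-cpu-timers.c; since this is an unofficial Linux ABI, treat the layout as an assumption the kernel has simply honored since 2.6.12.

```python
# Low two bits select the clock type; bit 2 marks a per-thread clock;
# the remaining bits hold the bitwise-inverted target TID.
CPUCLOCK_PROF, CPUCLOCK_VIRT, CPUCLOCK_SCHED = 0, 1, 2  # VIRT = user only
CPUCLOCK_PERTHREAD_MASK = 4
CPUCLOCK_CLOCK_MASK = 3


def thread_cpuclock(tid: int, which: int) -> int:
    # Rough equivalent of the kernel's MAKE_THREAD_CPUCLOCK macro.
    return (~tid << 3) | CPUCLOCK_PERTHREAD_MASK | which


def to_user_time_only(clockid: int) -> int:
    # The fix in a nutshell: flip the low bits from SCHED (0b10),
    # which pthread_getcpuclockid() returns, to VIRT (0b01).
    return (clockid & ~CPUCLOCK_CLOCK_MASK) | CPUCLOCK_VIRT


def encoded_tid(clockid: int) -> int:
    # Recover the TID: undo the bitwise inversion above bit 3.
    return ~(clockid >> 3)
```

Because the TID lives above bit 3, flipping the low clock-type bits leaves the target thread untouched.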

And that’s it. The new version has no file I/O, no buffer, and certainly no sscanf() with thirteen format specifiers.

Let’s have a look at how it performs in practice. For this exercise, I am taking the JMH test included in the fix; the only change is that I increased the number of threads from 1 to 16 and added a main() method for simple execution from an IDE:

Aside: This is a rather unscientific benchmark; I have other processes running on my desktop, etc. Anyway, here is the setup: Ryzen 9950X, JDK main branch at commit 8ab7d3b89f656e5c. For the “before” case, I reverted the fix rather than checking out an older revision.

Here is the result:

We can see that a single invocation took 11 microseconds on average and the median was about 10 microseconds per invocation.

The CPU profile looks like this:

The CPU profile confirms that each invocation of getCurrentThreadUserTime() does multiple syscalls. In fact, most of the CPU time is spent in syscalls. We can see files being opened and closed. Closing alone results in multiple syscalls, including futex locks.

Let’s see the benchmark result with the fix applied:

The average went down from 11 microseconds to 279 nanoseconds. This means the latency of the fixed version is 40x lower than the old version. While this is not a 400x improvement, it’s within the 30x–400x range from the original report. Chances are the delta would be higher with a different setup. Let’s have a look at the new profile:

The profile is much cleaner. There is just a single syscall. If the profile is to be trusted, then most of the time is spent in the JVM, outside of the kernel.

Barely. The bit encoding is stable. It hasn’t changed in 20 years, but you won’t find it in the clock_gettime(2) man page. The closest thing to official documentation is the kernel source itself, in kernel/time/posix-cpu-timers.c and the CPUCLOCK_* macros.

My take: if glibc depends on it, it’s not going away.

When looking at profiler data from the “after” run, I spotted a further optimization opportunity: a good portion of the remaining syscall is spent inside a radix tree lookup. Have a look:

When the JVM calls pthread_getcpuclockid(), it receives a clockid that encodes the thread’s ID. When this clockid is passed to clock_gettime(), the kernel extracts the thread ID and performs a radix tree lookup to find the pid structure associated with that ID.

However, the Linux kernel has a fast path. If the encoded PID in the clockid is 0, the kernel interprets this as “the current thread” and skips the radix tree lookup entirely, jumping to the current task’s structure directly.

The OpenJDK fix currently obtains the specific TID, flips the bits, and passes it to clock_gettime(). This forces the kernel to take the “generalized path” (the radix tree lookup).

The source code looks like this:

If the JVM constructed the entire clockid manually with PID=0 encoded (rather than obtaining the clockid via pthread_getcpuclockid()), the kernel could take the fast path and avoid the radix tree lookup altogether. The JVM already pokes bits in the clockid, so constructing it entirely from scratch wouldn’t be a bigger leap compatibility-wise.

First, a refresher on the clockid encoding. The clockid is constructed like this:

For the current thread, we want PID=0 encoded, which gives ~0 in the upper bits:
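Under the same assumed encoding, the fast-path clockid can be built directly: with TID 0, the inverted upper bits come out as all ones, so the whole value collapses to a small negative constant. This is a Python sketch resting on that undocumented Linux ABI, not the actual patch.

```python
import sys
import time

CPUCLOCK_VIRT = 1            # user time only
CPUCLOCK_PERTHREAD_MASK = 4  # per-thread rather than per-process

# MAKE_THREAD_CPUCLOCK(0, CPUCLOCK_VIRT): ~0 in the TID bits tells
# the kernel "the calling thread", which lets it skip the radix-tree
# lookup and read the current task directly.
FAST_THREAD_USER_CLOCK = (~0 << 3) | CPUCLOCK_PERTHREAD_MASK | CPUCLOCK_VIRT

if sys.platform == "linux":
    # Pass the hand-built clockid straight to clock_gettime().
    user_time = time.clock_gettime(FAST_THREAD_USER_CLOCK)
```

Worked through, the constant is ...11111101 in two’s complement, i.e. -3 on any platform where clockid_t is a signed 32-bit integer.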

We can translate this into C++ as follows:

And then make a tiny teensy change to user_thread_cpu_time():

The change above is sufficient to make getCurrentThreadUserTime() use the fast path in the kernel.

Given that we are in nanoseconds territory already, we tweak the test a bit:

* Use just a single thread to minimize noise

The benchmark changes are meant to eliminate noise from the rest of my system and get a more precise measurement of the small delta we expect:

The version currently in the JDK main branch gives:

With the manual clockid construction, which uses the kernel fast path, we get:

The average went down from 81.7 ns to 70.8 ns, about a 13% improvement. The improvements are visible across all percentiles as well. Is it worth the loss of clarity from constructing the clockid manually rather than using pthread_getcpuclockid()? I am not entirely sure. The absolute gain is small and makes additional assumptions about kernel internals, including the size of clockid_t. On the other hand, it’s still a gain without any downside in practice. (Famous last words…)

This is why I like browsing commits of large open source projects. A 40-line deletion eliminated a 400x performance gap. The fix required no new kernel features, just knowledge of a stable-but-obscure Linux ABI detail.

Read the kernel source. POSIX tells you what’s portable. The kernel source code tells you what’s possible. Sometimes there’s a 400x difference between the two. Whether it is worth exploiting is a different question.

Check the old assumptions. The /proc parsing approach made sense when it was written, before anyone realized the clockid encoding could be exploited instead. Assumptions get baked into code. Revisiting them occasionally pays off.

The change landed on December 3, 2025, just one day before the JDK 26 feature freeze. If you’re using ThreadMXBean.getCurrentThreadUserTime(), JDK 26 (releasing March 2026) brings you a free 30-400x speedup!

Update: Jonas Norlinder (the patch author) shared his own deep dive in the Hacker News discussion, written independently around the same time. Great minds! His is more rigorous on the memory overhead side; mine digs deeper into the bit encoding and the PID=0 fast path.

...

Read the original on questdb.com »

6 329 shares, 57 trendiness

Why some clothes shrink in the wash — and how to 'unshrink' them

Why some clothes shrink in the wash — and how to unshrink’ them

Washing your favourite piece of cloth­ing only to find out it shrank can be up­set­ting. Why does it hap­pen, and how can you unshrink’ it?

Why some clothes shrink in the wash - and how to unshrink’ them

Analysis for The Conversation by tex­tiles sci­en­tist Dr Nisa Salim

When your favourite dress or shirt shrinks in the wash, it can be devastating, especially if you followed the instructions closely. Unfortunately, some fabrics just seem to be more prone to shrinking than others — but why?

Understanding more about the science of textile fibres can not only help you prevent the shrinkage of clothing, but also might help you “rescue” the occasional garment after a laundry accident.

It’s all down to the fibres

To know more about clothing shrinkage, we first need to understand a little about how textiles are made.

Common textile fibres, such as cotton and linen, are made from plants. These fibres are irregular and crinkled in their natural form. If you zoom deeper inside them, you’ll see millions of tiny, long-chain cellulose molecules that naturally exist in coiled or convoluted shapes.

During textile manufacturing, these fibres are mechanically pulled, stretched and twisted to straighten and align these cellulose chains together. This creates smooth, long threads.

On a chemical level, there are also links between the chains called hydrogen bonds. These strengthen the fibre and the thread and make it more cohesive.

Threads are woven or knitted into fabrics, which locks in the tension that holds those fibres side by side.

However, these fibres have good “memory”. Whenever they’re exposed to heat, moisture or mechanical action (such as agitation in your washing machine), they tend to relax and return to their original crinkled state.

This fibre memory is why some fabrics wrinkle so easily and why some of them may even shrink after washing.

Magnified image of cotton fabric, showing threads “locked” in against each other.

How does washing shrink the fabric?

To understand shrinkage, we again need to zoom down to the molecular level. During laundering, hot water helps to increase the energy level of fibres — this means they shake more rapidly, which disrupts the hydrogen bonds holding them in place.

The way a fabric is knitted or woven also plays a role. Loosely knitted fabrics have more open spaces and loops, making them more susceptible to shrinkage. Tightly woven fabrics are more resistant because the threads are locked into place with less room to move.

Additionally, cellulose is hydrophilic — it attracts water. Water molecules penetrate inside the fibres, causing swelling and making them more flexible and mobile. Adding to all this is the tumble and twist action inside the washing machine.

The whole process makes the fibres relax and recoil back to their natural, less stretched, crinkled state. As a result, the garment shrinks.

It’s not just hot water — here’s why

This doesn’t just happen with hot water, as you may have experienced yourself with clothes made of rayon, for example.

Cold water can still penetrate into fibres, making them swell, along with the mechanical action of the tumbling in the washing machine. The effect is less dramatic with cold water, but it can happen.

To minimise shrinkage, you may use cold water, the lowest spin speed or the gentlest cycle available, especially for cotton and rayon. Machine labels don’t always fully explain the impact of spin speed and agitation. When in doubt, choose a “delicate” setting.

A wool fibre magnified, showing cuticles that appear like scales.

Different fibres shrink in different ways; there is no single mechanism that fits all.

While cellulose-based fabrics shrink as described above, wool is an animal-derived fibre made of keratin proteins. Its surface is covered in tiny, overlapping scales called cuticle cells.

During washing, these cuticles open up and interlock with neighbouring fibres, causing fibre entanglement or “felting”. This makes the clothing feel denser and smaller — in other words, it shrinks.

Why don’t synthetics shrink as much?

Synthetic fibres such as polyester or nylon are made from petroleum-based polymers, engineered for stability and durability.

These polymers contain more crystalline regions that are highly ordered and act as an “internal skeleton”, preventing the fibres from crinkling.

Textile scientists and engineers are also working on fabrics that resist shrinkage through advanced material design. Among promising innovations are blended yarns that combine natural and synthetic fibres.

Some researchers are working on shape-memory polymers that can change shape — or return to a previous shape — in response to temperature or water, for example. This is different to stretch fabrics (such as those used in activewear) that are made up of highly elastic fibres which “bounce back” to their original state after stretching.

How can I unshrink a piece of clothing?

If a favourite garment has shrunk in the wash, you can try to rescue it with this simple method.

Gently soak the item in lukewarm water mixed with hair conditioner or baby shampoo (approximately one tablespoon per litre). Then, carefully stretch the fabric back into shape and dry it flat or under gentle tension — for example, by pegging the garment to a drying rack.

The reason this works is because conditioners have chemicals known as cationic surfactants. These will temporarily lubricate the fibres, making them more flexible and allowing you to gently pull everything back into place.

This process can’t completely reverse extreme shrinkage, but it can help recover some of the lost size, making the clothes wearable again.


...

Read the original on www.swinburne.edu.au »

7 325 shares, 38 trendiness

I Hate Github Actions with Passion

I can’t overstate how much I hate GitHub Actions. I don’t remember hating any other piece of technology I’ve used. Sure, I still make fun of the PHP I remember from the PHP4 days, but even then I didn’t hate it; I merely found it subpar compared to technologies emerging at the time (like Ruby on Rails or Django). And yet I hate GitHub Actions.

The day before writing these words I was implementing build.rs for my tmplr project. To save you a click: it is a file/project scaffold tool with human-readable (and craftable) template files. I (personally) use it very often, given how easy it is to craft new templates, by hand or with the aid of the tool, so check it out if you need a similar tool.

The build.rs used CUE to generate README.md, CHANGELOG.md and also a version/help file to guarantee consistency. It was a fun thing to do, it took approx. 1.5h, and I even wrote an article about it. For myself and future generations.

I was happy with the results and didn’t check the CI output, which, quite unsurprisingly, failed. I was using the cue binary inside build.rs, and without it the build simply couldn’t progress. When I woke up the next day and saw the e-mail from CI notifying me about the failed build, I immediately knew my day wasn’t going to start with puppies and rainbows.

It took a couple of attempts to find and push a GitHub Action that would install CUE, and then I got the worst of the worst results: one system in the matrix failing to build.

Makes sense, right? Even though my user base can be counted on the fingers of a one-arm-less and second-arm-hook-equipped pirate, it’s still a thing “One Should Do”.

And with all that, Linux ARM failed with “command can’t be found”. CUE installed and ran nicely for all 3 other targets, but for some reason it failed for Linux ARM.

In case you don’t care about why I hate GitHub but your mind started to wander to “what went wrong”, let me tell you, because I know.

So supposedly the cross build that happens in the matrix is heavily isolated. When I install CUE, I install it only on the x86_64 Linux host and the macOS ARM host. macOS has zero issues running the x86_64 binary, and no issues are raised when Linux x86_64 tries to run an x86_64 binary. But GitHub Actions is nice enough to hide the x86_64 binary from the arm64 runner, so that it won’t break.

Thank you, GitHub Actions. What would I have done without you.

And so my least favorite feedback loop started and went like this:

Offer the Universe choice words it won’t soon forget

I got quite efficient when it comes to points 8 and 9, but otherwise the whole loop still took around 2-3 minutes to execute.

Yes. For a single change. Like having an editor with a 2-minute save lag, pushing commits using a program running on cassette tapes, or playing chess over snail mail. It’s 2026 for Pete’s sake, and we won’t tolerate this behavior!

Now of course, in some Perfect World, GitHub could have a local runner with all the bells and whistles. Or maybe something that would allow me to quickly check for progress upon the push, or even something like a “scratch commit”, i.e. a way that I could testbed different runs without polluting the history of both Git and Action runs.

But no such perfect world exists, and one is at the whim of a heartless YAML-based system.

I suffered only 30 minutes of such loops. Could’ve done it for longer, but I was out of colorful language to use and felt that without it the process just isn’t the same.

There is a wise saying on the internet that goes like:

For the love of all that is holy, don’t let GitHub Actions manage your logic. Keep your scripts under your own damn control and just make the Actions call them!

This is what everyone should do. This is what I did.

I deleted build.rs (with a sliver of sadness, because it was really nice - but sacrifices had to be made). I moved all the generation from build.rs to a GNU Makefile, committed the darn files into the repository, reverted the changes to CI and called it a day. Problem solved.

GitHub Actions, Friends & Gentlefolk, is the reason why we can’t have (some) nice things. I can’t count how many hours I’ve lost debugging the runners or trying to optimize the build process. It’s a sorry process every single time, and that time would be better spent elsewhere.

And yet there are some benefits, like macOS builds that would be quite hard to get otherwise. I don’t know any other system that would be easier to set up than GitHub Actions (if you know one, let me know), but it seems there’s no escape.

We are all doomed to GitHub Actions.

…but at least I dodged the bullet early.

...

Read the original on xlii.space »

8 321 shares, 17 trendiness

ASCII Clouds

...

Read the original on caidan.dev »

9 317 shares, 16 trendiness

The truth behind the 2026 J.P. Morgan Healthcare Conference

Note: I am co-host­ing an event in SF on Friday, Jan 16th.

In 1654, a Jesuit polymath named Athanasius Kircher published Mundus Subterraneus, a comprehensive geography of the Earth’s interior. It had maps and illustrations and rivers of fire and vast subterranean oceans and air channels connecting every volcano on the planet. He wrote that “the whole Earth is not solid but everywhere gaping, and hollowed with empty rooms and spaces, and hidden burrows.” Alongside comments like this, Athanasius identified the legendary lost island of Atlantis, pondered where one could find the remains of giants, and detailed the kinds of animals that lived in this lower world, including dragons. The book was based entirely on secondhand accounts - travelers’ tales, miners’ reports, classical texts - so it was as comprehensive as it could’ve possibly been.

But Athanasius had never been un­der­ground and nei­ther had any­one else, not re­ally, not in a way that mat­tered.

Today, I am in San Francisco, the site of the 2026 J. P. Morgan Healthcare Conference, and it feels a lot like Mundus Subterraneus.

There is ostensibly plenty of evidence to believe that the conference exists, that it actually occurs from January 12 to January 16, 2026 at the Westin St. Francis Hotel, 335 Powell Street, San Francisco, and that it has done so for the last forty-four years, just like everyone has told you. There is a website for it, there are articles about it, there are dozens of AI-generated posts on LinkedIn about how excited people were about it. But I have never met anyone who has actually been inside the conference.

I have never been approached by one, or seated next to one, or introduced to one. They do not appear in my life. They do not appear in anyone’s life that I know. I have put my boots on the ground to rectify this, and asked around, first casually and then less casually, “Do you know anyone who has attended the JPM conference?”, and then they nod, and then I refine the question to be, “No, no, like, someone who has actually been in the physical conference space”, then they look at me like I’ve asked if they know anyone who’s been to the moon. They know it happens. They assume someone goes. Not them, because, just like me, ordinary people like them do not go to the moon, but rather exist around the moon, having coffee chats and organizing little parties around it, all while trusting that the moon is being attended to.

The conference has six focuses: AI in Drug Discovery and Development, AI in Diagnostics, AI for Operational Efficiency, AI in Remote and Virtual Healthcare, AI and Regulatory Compliance, and AI Ethics and Data Privacy. There is also a seventh theme of “Keynote Discussions”, the three of which are The Future of AI in Precision Medicine, Ethical AI in Healthcare, and Investing in AI for Healthcare. Somehow, every single thematic concept at this conference has converged onto artificial intelligence as the only thing worth seriously discussing.

Isn’t this strange? Surely, you must feel the same thing as me, the inescapable suspicion that the whole show is being put on by an unconscious Chinese Room, its only job to pass semi-legible symbols over to us with no regard as to what they actually mean. In fact, this pattern is consistent across not only how the conference communicates itself, but also how biopharmaceutical news outlets discuss it.

Each year, Endpoints News and STAT and BioCentury and FiercePharma all publish extensive coverage of the J. P. Morgan Healthcare Conference. I have read the articles they have put out, and none of it feels like it was written by someone who actually was at the event. There is no emotional energy, no personal anecdotes; all of it has been removed, shredded into one homogeneous, smoothie-like texture. The coverage contains phrases like “pipeline updates” and “strategic priorities” and “catalysts expected in the second half.” If the writers of these articles ever approach a human-like tenor, it is in reference to the conference’s “tone”. The tone is “cautiously optimistic.” The tone is “more subdued than expected.” The tone is “mixed.” What does this mean? What is a mixed tone? What is a cautiously optimistic tone? These are not descriptions of a place. They are more accurately descriptions of a sentiment, abstracted from any physical reality, hovering somewhere above the conference like a weather system.

I could write this coverage. I could write it from my horrible apartment in New York City, without attending anything at all. I could say: “The tone at this year’s J. P. Morgan Healthcare Conference was cautiously optimistic, with executives expressing measured enthusiasm about near-term catalysts while acknowledging macroeconomic headwinds.” I made that up in fifteen seconds. Does it sound fake? It shouldn’t, because it sounds exactly like the coverage of a supposedly real thing that has happened every year for the last forty-four years.

Speaking of the astral body I mentioned earlier, there is an interesting historical parallel to draw there. In 1835, the New York Sun published a series of articles claiming that the astronomer Sir John Herschel had discovered life on the moon. Bat-winged humanoids, unicorns, temples made of sentient sapphire, that sort of stuff. The articles were detailed, describing not only these creatures’ appearance, but also their social behaviors and mating practices. All of these cited Herschel’s observations through a powerful new telescope. The series was a sensation. It was also, obviously, a hoax, the Great Moon Hoax as it came to be known. Importantly, the hoax worked not because the details were plausible, but because they had the energy of genuine reporting: Herschel was a real astronomer, and telescopes were real, and the moon was real, so how could any combination that involved these three be fake?

To clar­ify: I am not say­ing the J. P. Morgan Healthcare Conference is a hoax.

What I am saying is that neither I nor anybody else can tell the difference between the conference coverage and a very well-executed hoax. Consider that the Great Moon Hoax was walking a very fine tightrope between giving the appearance of seriousness and not giving away so many details that the cat would be let out of the bag. Here, the conference rhymes.

For ex­am­ple: pho­tographs. You would think there would be pho­tographs. The (claimed) con­fer­ence at­ten­dees num­ber in the thou­sands, many of them with smart­phones, all of them pre­sum­ably ca­pa­ble of point­ing a cam­era at a thing and press­ing a but­ton. But the pho­tographs are strange, walk­ing that ex­act snick­er­ing line that the New York Sun walked. They are mostly pho­tographs of the out­side of the Westin St. Francis, or they are pho­tographs of peo­ple stand­ing in front of step-and-re­peat ban­ners, or they are pho­tographs of the sched­ule, dis­played on a screen, as if to prove that the sched­ule ex­ists. But pho­tographs of the in­side with the pan­els, au­di­ence, the keynotes in progress; these are rare. And when I do find them, they are shot from an­gles that re­veal noth­ing, that could be any­where, that could be a Marriott ball­room in Cleveland.

Is this a con­spir­acy the­ory? You can call it that, but I have a very pro­fes­sional on­line pres­ence, so I per­son­ally would­n’t. In fact, I would­n’t even say that the J. P. Morgan Healthcare Conference is not real, but rather that it is real, but not ac­tu­ally ma­te­ri­ally real.

To ex­plain what I mean, we can rely on econ­o­mist Thomas Schelling to help us out. Sixty-six years ago, Schelling pro­posed a thought ex­per­i­ment: if you had to meet a stranger in New York City on a spe­cific day, with no way to com­mu­ni­cate be­fore­hand, where would you go? The an­swer, for most peo­ple, is Grand Central Station, at noon. Not be­cause Grand Central Station is spe­cial. Not be­cause noon is spe­cial. But be­cause every­one knows that every­one else knows that Grand Central Station at noon is the ob­vi­ous choice, and this mu­tual knowl­edge of mu­tual knowl­edge is enough to spon­ta­neously pro­duce co­or­di­na­tion out of noth­ing. This, Grand Central Station and places just like it, are what’s known as a Schelling point.

Schelling points ap­pear when they are needed, burnt into our ge­netic code, Pleistocene sub­rou­tines run­ning on re­peat, left over from when we were small and furry and needed to know, with­out speak­ing, where the rest of the troop would be when the leop­ards came. The J. P. Morgan Healthcare Conference, on the sec­ond week of January, every January, Westin St. Francis, San Francisco, is what hap­pened when that an­cient co­or­di­na­tion in­stinct was handed an in­dus­try too vast and too ab­stract to or­ga­nize by any other means. Something deep dri­ves us to gather here, at this time, at this date.

To pre­empt the ob­vi­ous ques­tions: I don’t know why this par­tic­u­lar lo­ca­tion or time or de­mo­graphic were cho­sen. I es­pe­cially don’t know why J. P. Morgan of all groups was cho­sen to or­ga­nize the whole thing. All of this sim­ply is.

If you find any of this hard to be­lieve, ob­serve that the whole event is, struc­turally, a re­li­gious pil­grim­age, and has all the quirks you may ex­pect of a re­li­gious pil­grim­age. And I don’t mean that as a metaphor, I mean it lit­er­ally, in every di­men­sion ex­cept the one where some­one of­fi­cial ad­mits it, and J. P. Morgan cer­tainly won’t.

Consider the el­e­ments. A spe­cific place, a spe­cific time, an an­nual cy­cle, a jour­ney un­der­taken by the faith­ful, the pres­ence of hi­er­ar­chy and ex­clu­sion, the pro­duc­tion of mean­ing through rit­ual rather than con­tent. The hajj re­quires Muslims to cir­cle the Kaaba seven times. The J. P. Morgan Healthcare Conference re­quires devo­tees of the bio­phar­ma­ceu­ti­cal in­dus­try to slither into San Francisco for five days, nearly all of them—in my opin­ion, all of them—never ac­tu­ally en­ter­ing the con­fer­ence it­self, but in­stead or­bit­ing it, cir­cum­am­bu­lat­ing it, tak­ing cof­fee chats in its grav­i­ta­tional field. The Kaaba is a cube con­tain­ing, ac­cord­ing to tra­di­tion, noth­ing, an empty room, the holi­est empty room in the world. The Westin St. Francis is also, roughly, a cube. I am not say­ing these are the same thing. I am say­ing that we have, as a species, a deep and un­ex­am­ined re­la­tion­ship to cubes.

This is my strongest the­ory so far. That the J. P. Morgan Healthcare con­fer­ence is­n’t ex­actly real or un­real, but a mass-co­or­di­na­tion so­cial con­tract that has been un­con­sciously signed by every­one in this in­dus­try, tran­scend­ing the need for an un­der­ly­ing ref­er­ent.

My skeptical readers will protest at this, and they would be correct to do so. The story I have written out is clean, but it cannot be fully correct. Thomas Schelling was not so naive as to believe that Schelling points spontaneously generate out of thin air: there is always a reason, a specific, grounded reason, that these concepts become the low-energy metaphysical basins that they are. Grand Central Station is special because of the cultural gravitas it has accumulated through popular media. Noon is special because that is when the sun reaches its zenith. The Kaaba was worshipped because it was not some arbitrary cube; the cube itself was special, in that it contained The Black Stone, set into the eastern corner, a relic that predates Islam itself, that some traditions claim fell from heaven.

And there are signs, if you know where to look, that the underlying referent for the Westin St. Francis’s status as a gathering area is physical. Consider the heat. It is January in San Francisco, usually brisk, yet the interior of the Westin St. Francis maintains a distinct, humid microclimate. Consider the low-frequency vibration in the lobby that ripples the surface of water glasses, but doesn’t seem to register on local, public seismographs. There is something about the building itself that feels distinctly alien. But, upon standing outside the building for long enough, you’ll have the nagging sensation that it is not something about the hotel that feels off, but rather, what lies within, underneath, and around the hotel.

There’s no easy way to sug­ar­coat this, so I’ll just come out and say it: it is pos­si­ble that the en­tirety of California is built on top of one im­mensely large or­gan­ism, and the par­tic­u­lar spot in which the Westin St. Francis Hotel stands—335 Powell Street, San Francisco, 94102—is lo­cated di­rectly above its beat­ing heart. And that this is the pri­mary or­ga­niz­ing fo­cal point for both the lo­ca­tion and en­tire rea­son for the J. P. Morgan Healthcare Conference.

I be­lieve that the ho­tel main­tains dozens of me­ter-thick polyvinyl chlo­ride plas­tic tubes that have been threaded down through the base­ment, through the bedrock, through ge­o­log­i­cal strata, and into the car­dio­vas­cu­lar sys­tem of some­thing that has been ly­ing be­neath the Pacific coast since be­fore the Pacific coast ex­isted. That the ho­tel is a sin­gu­lar, thirty-two story cen­tral line. That, dur­ing the week of the con­fer­ence, hun­dreds of gal­lons of drugs flow through these tubes, into the pul­sat­ing mass of the be­ing, pour­ing down ar­ter­ies the size of canyons across California. The dos­ing takes five days; hence the length of the con­fer­ence.

And I do not be­lieve that the drugs be­ing ad­min­is­tered here are sim­ply seda­tives. They are, in fact, the op­po­site of seda­tives. The drugs are keep­ing the thing be­neath California alive. There is some­thing wrong with the crea­ture, and a se­lect group of at­ten­dees at the J. P. Morgan Healthcare Conference have be­come its pri­mary care­tak­ers.

Why? The answer is obvious: there is nothing good that can come from having an organic creature that spans hundreds of thousands of square miles suddenly die, especially if that same creature’s mass makes up a substantial portion of the fifth-largest economy on the planet, larger than India, larger than the United Kingdom, larger than most countries that we think of as significant. Maybe letting the nation slide off into the sea was an option at one point, but not anymore. California produces more than half of the fruits, vegetables, and nuts grown in the United States. California produces the majority of the world’s entertainment. California produces the technology that has restructured human communication. Nobody can afford to let the whole thing collapse.

So, per­haps it was de­cided that California must sur­vive, at least for as long as pos­si­ble. Hence Amgen. Hence Genentech. Hence the en­tire biotech rev­o­lu­tion, which we are taught to un­der­stand as a tri­umph of sci­ence and en­tre­pre­neur­ship, a story about ven­ture cap­i­tal and re­com­bi­nant DNA and the ge­nius of the California busi­ness cli­mate. The story is not false, but in­com­plete. The rea­son for the rev­o­lu­tion was, above all else, be­cause the crea­ture needed med­i­cine, and the old meth­ods of mak­ing med­i­cine were no longer ad­e­quate, and some­one de­cided that the only way to save the pa­tient was to cre­ate an en­tire in­dus­try ded­i­cated to its care.

Why is drug de­vel­op­ment so ex­pen­sive? Because the real R&D costs are for the pri­mary pa­tient, the be­ing un­der­neath California, and hu­man ap­pli­ca­tions are an af­ter­thought, a way of re­coup­ing in­vest­ment. Why do so many clin­i­cal tri­als fail? For the same rea­son; the drugs are not meant for our species. Why is the in­dus­try con­cen­trated in San Francisco, San Diego, Boston? Because these are mon­i­tor­ing sta­tions, places where other in­tra­venous lines have been drilled into other or­gans, other places where the crea­ture sur­faces close enough to reach.

Finally, con­sider the ho­tel it­self. The Westin St. Francis was built in 1904, and, through­out its en­tire ex­is­tence, it has never, ever, even once, closed or stopped op­er­at­ing. The 1906 earth­quake lev­eled most of San Francisco, and the Westin St. Francis did not fall. It was dam­aged, yes, but it did not fall. The 1989 Loma Prieta earth­quake killed sixty-three peo­ple and col­lapsed a sec­tion of the Bay Bridge. Still, the Westin St. Francis did not fall. It can­not fall, be­cause if it falls, the cen­tral line is sev­ered, and if the cen­tral line is sev­ered, the crea­ture dies, and if the crea­ture dies, we lose California, and if we lose California, our civ­i­liza­tion loses every­thing that California has been qui­etly hold­ing to­gether. And so the Westin St. Francis has hosted every sin­gle J. P. Morgan Healthcare Conference since 1983, has never missed one, has never even come close to miss­ing one, and will not miss the next one, or the one af­ter that, or any of the ones that fol­low.

If you think about it, this all makes a lot of sense. It may also seem very unlikely, but unlikely things have been known to happen throughout history. Mundus Subterraneus had a section on “the seeds of metals,” a theory that gold and silver grew underground like plants, sprouting from mineral seeds in the moist, oxygen-poor darkness. This was wrong, but the intuition beneath it was not entirely misguided. We now understand that the Earth’s mantle is a kind of eternal engine of astronomical size, cycling matter through subduction zones and volcanic systems, creating and destroying crust. Athanasius was wrong about the mechanism, but right about the structure. The earth is not solid. It is everywhere gaping, hollowed with empty rooms, and it is alive.

...

Read the original on www.owlposting.com »

10 316 shares, 12 trendiness

Every GitHub Object Has Two IDs

I was re­cently build­ing a fea­ture for Greptile (an AI-powered code re­view tool), when I hit a weird snag with GitHub’s API.

The fea­ture should have been sim­ple: I wanted to add click­able links to GitHub PR com­ments, so users could jump di­rectly from our re­views to rel­e­vant GitHub dis­cus­sions. We al­ready stored the com­ment IDs, so I just needed to con­struct the URLs.

The prob­lem was, when I tested it, the links did­n’t work.

Searching through GitHub’s doc­u­men­ta­tion for an­swers re­vealed that their team main­tains two sep­a­rate ID sys­tems. We’d been us­ing GitHub’s GraphQL API, which re­turns node IDs like PRRC_kwDOL4aMSs6Tkzl8. GitHub de­signed these node IDs to uniquely iden­tify any ob­ject across its en­tire sys­tem. But web URLs re­quired data­base IDs, in­te­ger val­ues vis­i­ble in URLs and of­ten as­so­ci­ated with REST re­sponses, like 2475899260.

I was look­ing at ei­ther back­fill­ing mil­lions of records or mi­grat­ing our en­tire data­base, and nei­ther sounded fun. So I did what any an­noyed en­gi­neer would do: I stared at these IDs for way too long, look­ing for a way out of the mi­gra­tion.

I looked for a re­la­tion­ship be­tween these two ID for­mats. I pulled up a few of our stored node IDs and opened the cor­re­spond­ing PR com­ments from the same pull re­quest in my ed­i­tor:

The data­base IDs in­cre­mented se­quen­tially, and the node IDs were al­most iden­ti­cal too, dif­fer­ing only in their last few char­ac­ters. GitHub’s doc­u­men­ta­tion men­tioned that node IDs are base64 en­coded. I tried de­cod­ing just the part af­ter PRRC_:

import base64

def base64_2_int(s):
    base64_part = s.split("_")[1]
    return int.from_bytes(base64.b64decode(base64_part))

The de­coded val­ues were very long (96 bit) in­te­gers:

The de­coded in­te­gers were in­cre­mented by 798, ex­actly match­ing the data­base ID in­cre­ment. The data­base ID had to be em­bed­ded in there some­where.

Since both val­ues were chang­ing by the same amount, and the de­coded value was 96 bits, I fig­ured the data­base ID was likely em­bed­ded in the lower 32 bits of the node ID. I wrote a quick test:

def node_id_to_database_id(s):
    decoded = int.from_bytes(base64.b64decode(s.split("_")[1]))
    # Mask to keep only the lower 32 bits
    return decoded & ((1 << 32) - 1)

node_id_to_database_id("PRRC_kwDOL4aMSs6Tkzl8")
# Returns: 2475899260

It worked! The data­base ID was just the last 32 bits of the de­coded node ID. I could skip the en­tire mi­gra­tion, and ex­tract what I needed with a sim­ple bit­mask op­er­a­tion.

After the relief sank in, I couldn’t help but ask, “If the database ID only used the last 32 bits out of the 96 total bits, what were the first 64 bits being used for?”

Since the node ID is a global identifier across all of GitHub, I assumed that the extra 64 bits had to encode either the object type or an ID of another resource that “owned” the current node. I wanted to see if I could decode them the same way I’d decoded the database ID.

To understand what was in those 64 bits, I started querying different GitHub objects. My test repository returned the familiar PRRC_ format for everything. I tried the first famous repository that came to mind, torvalds/linux, to see if the pattern held.

The re­sponse was a com­pletely dif­fer­ent base64 en­coded string:

MDEwOlJlcG9zaXRvcnkyMzI1Mjk4

MDQ6VHJlZTIzMjUyOTg6NzIwMWJmYjkyOGIyOWU4MGIwMDVkYTE1OTc4MzQ1ZjIzYmEwZmY5Yg==

MDQ6QmxvYjIzMjUyOTg6ZjM3MWExM2I0ZDE5MmQyZTM3ZDcwMTdiNjNlMzNkZmE3YzY3Mzc4Zg==

When I de­coded these they showed the fol­low­ing:

base64.b64decode("MDEwOlJlcG9zaXRvcnkyMzI1Mjk4")
# Returns: b'010:Repository2325298'

The Linux repos­i­tory was us­ing a com­pletely dif­fer­ent for­mat. I re­al­ized the repos­i­tory was cre­ated in 2011. By pick­ing an old repos­i­tory, I’d ac­ci­den­tally stum­bled onto GitHub’s legacy ID for­mat which was quite sim­ple:

[Object Type Number]:[Object Type Name][Database ID]

That repos­i­tory ID (010:Repository2325298) had a clear struc­ture: 010 is some type enum, fol­lowed by a colon, the word Repository, and then the data­base ID 2325298. Since repos­i­to­ries are just con­tain­ers, I wanted to see if git ob­jects like trees would re­veal more com­plex­ity:

base64.b64decode("MDQ6VHJlZTIzMjUyOTg6NzIwMWJmYjkyOGI…")
# Returns: b'04:Tree2325298:7201bfb928b29e80b005da15978345f23ba0ff9b'

That’s the enum again, the word Tree, the repos­i­tory ID, and the tree SHA.
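Given that layout, a legacy ID can be pulled apart mechanically. A minimal sketch (parse_legacy is my own helper name, not anything GitHub ships):

```python
import base64
import re

def parse_legacy(node_id):
    """Decode a legacy node ID into (type_name, database_id).

    Legacy IDs are plain base64 of an ASCII string shaped like
    '0NN:TypeName<digits>' with optional extra ':'-separated parts.
    """
    raw = base64.b64decode(node_id).decode("ascii")
    _enum, rest = raw.split(":", 1)
    # The type name is the leading run of letters; the digits after it
    # are the database ID (anything past a further ':' is ignored here).
    m = re.match(r"([A-Za-z]+)(\d+)", rest)
    return m.group(1), int(m.group(2))

parse_legacy("MDEwOlJlcG9zaXRvcnkyMzI1Mjk4")
# Returns: ('Repository', 2325298)
```

The same helper handles the Tree example above, since the regex simply stops at the colon before the SHA.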

It was ap­par­ent that GitHub had two sys­tems for ID’ing their in­ter­nal ob­jects. Somewhere in GitHub’s code­base, there’s an if-state­ment check­ing when a repos­i­tory was cre­ated to de­cide which ID for­mat to re­turn.

I started mapping out which objects used which format. The pattern wasn’t as simple as “old repos use old IDs, new repos use new IDs”:

Old repos­i­to­ries kept their legacy IDs, while newer ones were is­sued IDs fol­low­ing the new for­mat. But the split is­n’t clean; GitHub still uses the legacy for­mat for some ob­ject types, like Users, even when newly cre­ated. New ob­jects in old repos­i­to­ries some­times get new IDs, some­times don’t. It de­pends on their cre­ation date.
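One practical consequence of this mess is that code consuming the API may need to detect which format it was handed. Going only by the examples in this post, new-style IDs carry a type prefix and an underscore, while legacy IDs are bare base64; a naive classifier (my own heuristic, not anything GitHub guarantees):

```python
def node_id_format(node_id: str) -> str:
    """Guess whether a node ID uses the legacy or the new format.

    Heuristic only: new-style IDs like 'PRRC_kwDO...' contain an
    underscore separating the type prefix from the packed payload;
    legacy IDs like 'MDEwOlJlcG9zaXRvcnkyMzI1Mjk4' do not.
    """
    return "new" if "_" in node_id else "legacy"

node_id_format("PRRC_kwDOL4aMSs6Tkzl8")        # Returns: 'new'
node_id_format("MDEwOlJlcG9zaXRvcnkyMzI1Mjk4")  # Returns: 'legacy'
```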

Surely the new format had some benefit that warranted this messy migration. It shouldn’t be too hard to create a more efficient ID system than base64-encoding the string representation of an enum and the object name. That information could easily be packed into those 64 extra bits that I still had to understand.

GitHub’s migration guide tells developers to treat the new IDs as opaque strings and use them only as references. However, it was clear that there was some underlying structure to these IDs, as we just saw with the bitmasking. My best guess was that they used some binary serialization format, so I could just test a bunch to see what worked.

This is when I came across MessagePack, a com­pact bi­nary se­ri­al­iza­tion for­mat. It seemed promis­ing as it was fre­quently used in Ruby pro­jects, and GitHub’s back­end is built on Ruby. I tried de­cod­ing it:

import msgpack
import base64

def decode_new_node_id(node_id):
    prefix, encoded = node_id.split('_')
    packed = base64.b64decode(encoded)
    return msgpack.unpackb(packed)

decode_new_node_id("PRRC_kwDOL4aMSs6Tkzl8")
# Returns: [0, 47954445, 2475899260]

It worked. The new for­mat uses MessagePack to en­code the rel­e­vant IDs into an ar­ray.

The struc­ture made sense once I saw it:

* First el­e­ment (0): Still un­clear. Probably a ver­sion iden­ti­fier, but if you know what this is for, please email me at soohoon@grep­tile.com.

* Second el­e­ment (47954445): The repos­i­to­ry’s data­base ID. This pro­vides the con­text needed to make the ID global. Pull re­quests, is­sues, and com­ments are all usu­ally scoped to a repos­i­tory.

Different object types sometimes have different array lengths. Repositories only need [0, repository_database_id]. Commits include the git SHA: [0, repository_database_id, commit_sha]. The first element is always 0, and repository-scoped objects include both the repository ID and the specific object identifier. Since the database ID of the comment is the last element in the array, when bitmasking for the lower 32 bits we are able to extract just that.
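Why the earlier 32-bit mask works at all falls out of the wire format: when the final array element is serialized as a MessagePack uint32 (marker byte 0xce followed by four big-endian bytes), those four bytes are the very last bytes of the buffer, so the low 32 bits of the whole 96-bit integer are exactly the comment’s database ID. A stdlib-only sanity check on the sample ID (assuming, as here, that the last element is a uint32):

```python
import base64

# Packed payload of the node ID "PRRC_kwDOL4aMSs6Tkzl8".
packed = base64.b64decode("kwDOL4aMSs6Tkzl8")

# Low 32 bits of the whole buffer, as in the bitmask trick...
masked = int.from_bytes(packed, "big") & 0xFFFFFFFF
# ...versus the big-endian payload of the trailing uint32 element.
last_element = int.from_bytes(packed[-4:], "big")

assert masked == last_element == 2475899260
```

If an object type ever ended with a differently-sized element (a string SHA, say), the mask would stop agreeing with the array’s last element, which is another reason the MessagePack decode is the safer general path.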

What started as a URL generation problem turned into “reverse-engineering” and exploring GitHub’s ID system.

Putting it all to­gether, for mod­ern GitHub node IDs you can use:

import base64
import msgpack

def node_id_to_database_id(node_id):
    prefix, encoded = node_id.split('_')
    packed = base64.b64decode(encoded)
    array = msgpack.unpackb(packed)
    return array[-1]

to ex­tract the data­base ID for pull re­quest com­ments. Should I have made sure that we were stor­ing the right ID in the first place? Probably, but then I would­n’t have had much fun un­cov­er­ing all of this. And my deep­est con­do­lences to the GitHub en­gi­neer who has to deal with sup­port­ing these two dif­fer­ent node ID for­mats.

...

Read the original on www.greptile.com »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.