10 interesting stories served every morning and every evening.




1 404 shares, 21 trendiness

Fully Homomorphic Encryption and the Dawn of A Truly Private Internet


“Using encryption on the Internet is the equivalent of arranging an armored car to deliver credit card information from someone living in a cardboard box to someone living on a park bench.” — Gene Spafford

Imagine send­ing Google an en­crypted ques­tion and get­ting back the ex­act re­sults you wanted — with­out them hav­ing any way of know­ing what your ques­tion was or what re­sult they re­turned. The tech­nique to do that is called Fully Homomorphic Encryption (FHE).

The first time I heard about what FHE does, I didn't believe it was possible. But it is — and it works in real-world systems today.

It al­lows ar­bi­trary com­pu­ta­tions on ci­pher­text (encrypted data) — with­out need­ing to de­crypt it first. The re­sult of the com­pu­ta­tion, once de­crypted, matches the re­sult as if it had been per­formed on plain­text (unencrypted data).
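In pseudocode, the property looks like this (keygen, encrypt, decrypt, and homomorphic_add are placeholder names for illustration, not a real library's API):

pk, sk = keygen()
ct_a, ct_b = encrypt(2, pk), encrypt(3, pk)
ct_sum = homomorphic_add(ct_a, ct_b)  # computed entirely on ciphertexts
assert decrypt(ct_sum, sk) == 2 + 3   # same result as plaintext addition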

As FHE allows encrypted computation, users can keep their data encrypted the entire time it is on the internet (sending, server-side computing, receiving back). The server never sees plaintext, which eliminates a whole class of data breaches. Full privacy.

But then, why isn't it the default like HTTPS? Why isn't everyone using it? And why haven't most people even heard of it?

Because it is still not prac­ti­cal for most ap­pli­ca­tions, as it is very slow. But this is chang­ing fast as I’ll ex­plain be­low.

Current FHE has 1,000x to 10,000x com­pu­ta­tional over­head com­pared to plain­text op­er­a­tions. On the stor­age side, ci­pher­texts can be 40 to 1,000 times larger than the orig­i­nal. It’s like the in­ter­net in 1990—technically awe­some, but lim­ited in prac­tice.

Here’s where it gets in­ter­est­ing: FHE al­go­rithms are get­ting 8x faster each year. Operations that took 30 min­utes per bit in 2011 now take mil­lisec­onds. See the graph be­low show­ing its dra­matic speed im­prove­ment.

The graph shows a 10^12-fold improvement through 2014. The pace of improvement has continued over the last decade too. I will go deeper on that later in this article.

If this dramatic improvement continues, we're approaching a computational inflection point. In the not-too-distant future, FHE could be fast enough for:

The im­pli­ca­tions are big. The en­tire busi­ness model built on har­vest­ing user data could be­come ob­so­lete. Why send your plain­text when an­other ser­vice can com­pute on your ci­pher­text?

The internet's “spy by default” can become “privacy by default”.

The sections below go deeper into each aspect of this article's claim. You can jump to any section you are particularly curious about, or read them in order, as they follow a logical sequence:

* More on per­for­mance im­prove­ments: The Moore’s Law of FHE: 8x Faster Every Year

* Connecting the dots: Future of com­pu­ta­tion is en­crypted

All data exists in one of three states:

At Rest (stored on disk)

In Transit (moving across the network)

In Use (being processed in memory)

We have robust solutions for the first two: encryption at rest (think full-disk or database encryption) and encryption in transit (TLS, which powers HTTPS).

But in use—when data is loaded into RAM and processed by CPUs—it is de­crypted. This is the Achilles’ heel of mod­ern se­cu­rity. Cloud providers, in­sid­ers, at­tack­ers, or com­pro­mised CPUs can read your plain­text data.

Think about every ma­jor data breach you’ve heard of:

They were fail­ures of en­cryp­tion-in-use or at rest. The mo­ment data gets loaded into mem­ory for pro­cess­ing, it be­comes vul­ner­a­ble.

FHE fixes this. Data can stay encrypted through its entire lifecycle in the cloud, which we can call “Full-Privacy Computing”.

Picture an in­ter­net where your data is al­ways en­crypted:

* Your device never sends plaintext to any server

* Servers compute directly on your ciphertext

* Only you can decrypt the results

Here’s what a pri­vate ChatGPT ses­sion might look like:

# Your device
pk, sk = keygen()  # pk: public key, sk: secret (private) key
enc_prompt = encrypt("Why did the dev go to therapy?", pk)
server.send(enc_prompt, pk)

# OpenAI's servers (they can never decrypt and see your prompt)
enc_prompt, pk = client.receive()
enc_llm = encrypt(LLM_MODEL, pk)
enc_answer = enc_llm.run(enc_prompt)
client.send(enc_answer)

# Your device again
enc_answer = server.receive()
answer = decrypt(enc_answer, sk)
print(answer)
"""Because of too many unresolved dependencies!"""

OpenAI com­putes the cor­rect re­sponse, but can never see your ques­tion or their an­swer in plain­text.

The term “homomorphic” comes from Greek: “homo” (same) + “morphe” (form). It means having a structure-preserving map between two algebraic structures. FHE is homomorphic because operations on encrypted data can be mapped (i.e. mirrored) onto those on the original data.

In category theory, a homomorphism is often shown by a commutative diagram, where you can go from one point to another by interchanging the order of operations. In the diagram below for FHE, you can go from (a, b) to E(a*b) in two separate ways.
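Spelled out, the two paths of the diagram are:

* Multiply first, then encrypt: (a, b) -> a*b -> E(a*b)

* Encrypt first, then multiply homomorphically: (a, b) -> (E(a), E(b)) -> E(a) ⊗ E(b) = E(a*b)

Both paths arrive at an encryption of a*b, which is exactly the structure-preserving property.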

Let's look at an equivalent diagram from a client/server perspective with a function f(x).

One helpful analogy to FHE is the Fourier transform, which converts a signal from the time domain into the frequency domain. Computations performed in the frequency domain are equivalent to those in the time domain and vice versa, meaning you can compute in either domain and still get the same result. In a similar way, FHE operates between the plaintext and ciphertext domains: transformations done in the plaintext domain are equivalent to those in the ciphertext domain, and vice versa.
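Here is that analogy made concrete with NumPy (my illustration, not from the article): circular convolution computed directly in the time domain matches pointwise multiplication in the frequency domain, mapped back.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, -1.0, 2.0, 1.5])

# Circular convolution computed directly in the time domain...
time_domain = np.array([sum(x[k] * y[(n - k) % 4] for k in range(4)) for n in range(4)])

# ...equals pointwise multiplication in the frequency domain, mapped back.
freq_domain = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

assert np.allclose(time_domain, freq_domain)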

To do the above-mentioned transformation, FHE uses lattice-based cryptography — imagine a multidimensional grid of points extending infinitely in all directions.

At the heart of lattice-based cryptography are problems that are believed to be extremely hard to solve, even for quantum computers. Two of the most well-known examples are:

* Shortest Vector Problem (SVP): Find the shortest nonzero vector in the lattice

* Closest Vector Problem (CVP): Find the lattice point nearest to any given point

In 2D, these look trivial. But add 1,000,000 dimensions, and they become so hard that even quantum computers are believed unable to crack them efficiently. This makes FHE inherently quantum-resistant, a very important property in preparing for a possible quantum-computing future.

Bonus: Lattice op­er­a­tions are highly par­al­leliz­able, which means they ben­e­fit enor­mously from mod­ern GPUs and spe­cial­ized hard­ware ac­cel­er­a­tion.

Lattice-based FHE schemes rely on the Learning With Errors (LWE) or Ring-LWE problem. At a high level, LWE looks like this: pick a random matrix A and a secret vector s, sample a small noise vector e, and publish b = A*s + e. The pair (A, b) is the public key; s is the secret key.

The hard problem: Given the public key (A, b), find the secret key s

Notice that A*s is linear, so it lands exactly on a lattice point. Adding the noise e makes the result b = A*s + e drift off that lattice point. (The noise is sampled from a narrow random distribution so that b doesn't drift so far that it ends up closer to another lattice point.)

So the problem becomes: if an attacker wants to recover the secret key s from the public key (A, b), he needs to find the lattice point closest to b — the closest vector problem (CVP). And the closest vector problem is believed to be NP-hard and even quantum-resistant.

To sum up: encryption works because of noise, and encryption remains secure because decoding a noisy lattice point is hard.
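To make the noise idea concrete, here is a toy LWE-style bit encryption in Python (a sketch for intuition only: these parameters are far too small to be secure, and real schemes use Ring-LWE with carefully chosen noise distributions):

import numpy as np

q = 3329  # ciphertext modulus (toy choice)
n = 16    # lattice dimension (toy; real schemes use hundreds or thousands)
rng = np.random.default_rng(0)
s = rng.integers(0, q, n)  # secret key vector

def encrypt_bit(bit):
    a = rng.integers(0, q, n)             # fresh random vector (public)
    e = int(rng.integers(-2, 3))          # small noise, the heart of LWE
    b = (a @ s + e + bit * (q // 2)) % q  # encode the bit in the "high half"
    return a, b

def decrypt_bit(ct):
    a, b = ct
    v = (b - a @ s) % q  # remove a*s; leaves bit*(q//2) plus small noise
    return int(abs(v - q // 2) < q // 4)  # close to q/2 means bit = 1

assert decrypt_bit(encrypt_bit(1)) == 1
assert decrypt_bit(encrypt_bit(0)) == 0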

While the noise is the trick here, it also causes a problem during addition and multiplication. During homomorphic addition, the noise terms of the two ciphertexts add together; during multiplication, they multiply. This results in slow noise growth under addition but explosive noise growth under multiplication.

If the noise gets too big, de­cryp­tion fails — you get garbage in­stead of the re­sult.

Since noise growth is unmanageable under multiplication, the noise-based HE schemes before Craig Gentry's 2009 breakthrough allowed only a limited number of multiplications (hence not Turing-complete). For that reason, they were called Somewhat Homomorphic Encryption.

In 2009, Craig Gentry invented the first HE scheme that allows an unlimited number of additions and multiplications combined, making it Turing-complete; hence the name Fully Homomorphic Encryption. Note that any kind of computation can be represented as additions and multiplications; indeed, at the CPU/GPU level all computations reduce to them.
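To see why additions and multiplications suffice, note that over bits {0, 1} all boolean logic reduces to arithmetic (a quick sketch):

AND = lambda a, b: a * b              # 1 only when both inputs are 1
XOR = lambda a, b: a + b - 2 * a * b  # addition without the carry
NOT = lambda a: 1 - a

# Any boolean circuit composes these gates, so any computation can be
# expressed with additions and multiplications alone.
assert AND(1, 1) == 1 and XOR(1, 1) == 0 and NOT(0) == 1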

The main piece that makes FHE work is a method called “bootstrapping”. Bootstrapping reduces the noise whenever it gets too big, guaranteeing that noise doesn't disrupt decryption no matter how many multiplications are performed.

The way bootstrapping works is very clever. It's probably called “bootstrapping” because it “refreshes” the ciphertext under another key. It cleverly switches the ciphertext from one key to another, as follows:

Take the ciphertext ctx, which was originally encrypted under pk_orig.

Encrypt the original secret key sk_orig under pk_new, obtaining the bootstrapping key (Yes! Encrypt the key with another key. Creative!)

Run the decrypt operation homomorphically on ctx using the encrypted sk_orig, obtaining ctx_bootstrap, an encryption of the same message under pk_new.

Notice that the decryption procedure of FHE is itself a computation, so it too can be run homomorphically!

The resulting ctx_bootstrap is a fresh ciphertext whose noise has been reset. Note that it gained some fixed noise from the additions and multiplications performed during the homomorphic decryption, but this noise is bounded.
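In pseudocode, the whole dance looks roughly like this (all function names here are illustrative placeholders, not a real FHE library's API):

# Client side: publish an encryption of the old secret key under the new key.
pk_orig, sk_orig = keygen()
pk_new, sk_new = keygen()
bootstrap_key = encrypt(sk_orig, pk_new)  # "encrypt the key with another key"

# Server side: ctx is a noisy ciphertext under pk_orig. Homomorphically
# evaluate the decryption circuit on ctx using bootstrap_key. Inside the
# encryption this computes decrypt(ctx, sk_orig) = m, so the output is a
# fresh encryption of m under pk_new with small, bounded noise.
ctx_bootstrap = homomorphic_eval(decrypt_circuit, bootstrap_key, ctx)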

Another thing to keep in mind: bootstrapping is the performance bottleneck of modern FHE schemes, though new algorithms reduce its computational overhead every year.

There are many more details to this. I listed some resources under FHE Bootstrapping, though they don't cover everything either; one needs to read through the FHE Reference Library. Other topics around bootstrapping are quite complicated, and as far as I understand, it is the focal point of FHE algorithmic innovation. It's important to understand bootstrapping at least conceptually, because it is both the root of FHE's slowness and the reason FHE works at all.

Other topics around FHE you need to be aware of are relinearization and modulus switching. I'll explain them only by intuition here. For deeper math, I suggest Vitalik's post as a starter.

A ciphertext is linear in the secret key -> a + b⋅s

After multiplication, the result becomes quadratic in the secret key. To show that, multiply two ciphertexts:

(a₁ + b₁⋅s) * (a₂ + b₂⋅s) = a₁a₂ + (a₁b₂ + a₂b₁)⋅s + b₁b₂⋅s²

* Notice the s² term, which makes ct_mul quadratic in the secret key.

Relinearization uses additional public key material called “relinearization keys” to eliminate the higher-degree terms. The process:

Takes the quadratic ciphertext (with its s² term)

Uses relinearization keys to “convert” it back into linear terms

Produces a new linear ciphertext (c₀′, c₁′) that decrypts to the same value

Modulus switching is also used to manage noise growth. It's a trick to reduce the noise by decreasing the modulus of the ciphertext: roughly, switching from modulus q to a smaller q′ maps each coefficient c to round(c⋅q′/q), scaling the noise down by about q′/q.

I had a couple of conversations with ChatGPT to better understand some mathematical details of FHE, and wrote down what I learned from them here.

What do we mean by a Homomorphic Encryption (HE) Scheme?

An HE scheme is a cryptographic construction that defines how encryption, decryption, and homomorphic operations are performed (e.g. BGV, CKKS, TFHE).

Homomorphic Encryption schemes are clas­si­fied by the types and num­ber of op­er­a­tions they sup­port.

Partially Homomorphic Encryption (PHE): Supports only one operation (e.g., addition in Paillier, multiplication in RSA).

Somewhat Homomorphic Encryption (SHE): Supports both addition and multiplication, but the number of multiplications allowed is limited.

Fully Homomorphic Encryption (FHE): Supports unlimited additions and multiplications. Turing complete. Manages noise by periodically reducing it via bootstrapping.

One of the best ways to re­ally un­der­stand a com­pu­ta­tional con­cept is build­ing it from scratch, min­i­mally.

For the purposes of this blog post, creating a Fully Homomorphic Encryption scheme would be very lengthy. Instead, we will write Paillier HE, which is shorter and easier to understand for building intuition. Paillier is a Partial HE, meaning it doesn't support all operations (hence not Fully HE); it only supports additions (hence additive homomorphism). We'll follow a typical HE flow:

fhe-toy-implementations

import sympy, random

def generate_keypair(bit_length=512):
    # Two large primes p, q; n = p*q is the public modulus.
    p = sympy.nextprime(random.getrandbits(bit_length))
    q = sympy.nextprime(random.getrandbits(bit_length))
    n = p * q
    g = n + 1
    lambda_ = (p - 1) * (q - 1)
    mu = sympy.mod_inverse(lambda_, n)
    return (n, g), (lambda_, mu)

def encrypt(m, public_key):
    n, g = public_key
    r = random.randint(1, n - 1)
    ...
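The excerpt cuts off here. For reference, a minimal completion of the scheme under the same g = n + 1 convention might look like this (my sketch, not the author's code; parameters are toy-sized):

def encrypt(m, public_key):
    n, g = public_key
    r = random.randint(1, n - 1)  # fresh randomness per ciphertext
    n2 = n * n
    return pow(g, m, n2) * pow(r, n, n2) % n2  # c = g^m * r^n mod n^2

def decrypt(c, public_key, private_key):
    n, _ = public_key
    lambda_, mu = private_key
    n2 = n * n
    L = (pow(c, lambda_, n2) - 1) // n  # Paillier's "L function"
    return L * mu % n

def add_encrypted(c1, c2, public_key):
    n, _ = public_key
    return c1 * c2 % (n * n)  # multiplying ciphertexts adds plaintexts

pk, sk = generate_keypair(bit_length=256)
c1, c2 = encrypt(3, pk), encrypt(4, pk)
assert decrypt(add_encrypted(c1, c2, pk), pk, sk) == 7  # 3 + 4, computed encrypted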

Read the original on bozmen.io »

2 285 shares, 14 trendiness

NIH Is Far Cheaper Than The Wrong Dependency

One of the biggest fal­lac­ies in cod­ing is that de­pen­den­cies have zero down­sides. It’s treated as free func­tion­al­ity you don’t have to write your­self. Wow, what a deal!

* They may re­quire a sig­nif­i­cant time in­vest­ment to learn to use. Often they’re so large and com­plex that sim­ply writ­ing the func­tion­al­ity your­self is faster than learn­ing. Sometimes it’s eas­ier to write it your­self than in­stall the de­pen­dency…

* Their break­ing changes can trig­ger ex­pen­sive re-writes of your own code to han­dle a new in­ter­face.

* You need to ensure they end up on your clients' machines.

Let us consider this last point: how many absurdly complicated containerisation or bundling setups do we encounter in our work because someone does not want to “reinvent the wheel” and instead opts to use large, complex dependencies? How many times did people take the time to appreciate they've re-invented an entirely different wheel, one that has nothing to do with their app's core functionality, just to deploy and run the bloody thing?

Instructive here is Tigerbeetle, a fi­nan­cial data­base writ­ten en­tirely with Vanilla Zig:

TigerBeetle has a “zero dependencies” policy, apart from the Zig toolchain. Dependencies, in general, inevitably lead to supply chain attacks, safety and performance risk, and slow install times. For foundational infrastructure in particular, the cost of any dependency is further amplified throughout the rest of the stack.

Similarly, tools have costs. A small stan­dard­ized tool­box is sim­pler to op­er­ate than an ar­ray of spe­cial­ized in­stru­ments each with a ded­i­cated man­ual. Our pri­mary tool is Zig. It may not be the best for every­thing, but it’s good enough for most things. We in­vest into our Zig tool­ing to en­sure that we can tackle new prob­lems quickly, with a min­i­mum of ac­ci­den­tal com­plex­ity in our lo­cal de­vel­op­ment en­vi­ron­ment.

I am aware this post is a reductio ad absurdum magnet. “What, so you make your own silicon wafers?”, etc, etc. (Programmers of a certain generation might bring up a webcomic and butterflies). Ultimately, we all depend on something. But not all dependencies are created equal. Allow me to introduce my Framework for Evaluating Dependencies.

We have five cat­e­gories:

Dependency ped­dlers typ­i­cally only talk about er­gonom­ics, and ig­nore other cri­te­ria.

With that in mind, let’s eval­u­ate some de­pen­den­cies:

I will leave this as an ex­er­cise for the reader!

But remember: think critically, evaluate the costs as well as the benefits, and choose wisely.

...

Read the original on lewiscampbell.tech »

3 282 shares, 12 trendiness

lsr: ls but with io_uring

As an exercise in syscall golf, I wrote an implementation of ls(1) which uses my IO library, ourio, to perform as much of the IO as possible. What I ended up with is something that is faster than any version of or alternative to ls I tested, and it also performs an order of magnitude fewer syscalls. I'm calling it lsr. Let's start with the benchmarks, then we'll see how we got there.

Data gath­ered with hy­per­fine on a di­rec­tory of n plain files.

Data gath­ered with strace -c on a di­rec­tory of n plain files. (Lower is bet­ter)

How we got there

Let’s start with how lsr works. To list di­rec­tory con­tents, we ba­si­cally have 3 stages to the pro­gram:

All of the IO involved happens in the second step. Wherever possible, lsr utilizes io_uring to pull in the data it needs: we open the target directory with io_uring, and if we need local time, user data, or group data, we open (and read) those files with io_uring as well. We do all stat calls via io_uring, and as needed we do the equivalent of an lstat via io_uring. In practice, this means the number of syscalls should be drastically smaller than in equivalent programs, because we are able to batch the stat syscalls. The results clearly show this… lsr has at least an order of magnitude fewer syscalls than its closest equivalent, uutils ls.

We also use the Zig stdlib StackFallbackAllocator. This lets lsr allocate the memory it needs up front, but fall back to a different allocator when the fixed allocation is exhausted. We allocate 1MB up front, which is more than enough for typical usage. This further reduces syscalls by reducing mmap usage.

As a result of working directly with io_uring, we also bypass several libc-related pitfalls. Namely, we have no dynamic linking: ls has some considerable overhead in loading libc and related libraries… but it also has the benefit of locale support, which lsr does not boast. Despite being statically linked, lsr is still smaller than GNU ls: 79.3KB vs 138.7KB when built with ReleaseSmall.

I have no idea what lsd is doing. I haven't read the source code, but from viewing its strace, it is calling clock_gettime around 5 times per file. Why? I don't know. Maybe it's doing internal timing of steps along the way?

Sorting ends up being a massive part of the workload. I suspect this is where uutils ls is getting slowed down, since it is doing pretty well on a syscall basis. lsr spends about 30% of its runtime sorting; the rest is the IO loop.

This ended up being a pretty fun project to write, and it didn't take too much time either. I am shocked at how much io_uring can be used to reduce syscalls… ls is a pretty basic example, but you can imagine how much of an effect this would have on something like a server.

Also - I’m us­ing tan­gled.sh for this pro­ject. They have a re­ally cool idea, and I want to see how the PR work­flow is so…if you have any bugs or changes, please visit the repo. All you need is an at­proto ac­count + app pass­word. I sus­pect more icons will be needed, feel free to make an is­sue for icon re­quests!

...

Read the original on rockorager.dev »

4 282 shares, 25 trendiness

@rockorager.dev/lsr

lsr uses the zig build sys­tem. To in­stall, you will need zig 0.14.0. To in­stall for the lo­cal user (assuming $HOME/.local/bin is in $PATH), run:

zig build -Doptimize=ReleaseSmall --prefix $HOME/.local

which will install lsr and the associated manpage appropriately. Replace $HOME/.local with your preferred installation directory.

lsr [options] [path]

--help Print this message and exit

--version Print the version string

DISPLAY OPTIONS

-1, --oneline Print entries one per line

-a, --all Show files that start with a dot (ASCII 0x2E)

-A, --almost-all Like --all, but skips the implicit "." and ".." directories

-C, --columns Print the output in columns

--color=WHEN When to use colors (always, auto, never)

--group-directories-first Print all directories before printing regular files

--hyperlinks=WHEN When to use OSC 8 hyperlinks (always, auto, never)

--icons=WHEN When to display icons (always, auto, never)

-l, --long Display extended file metadata

-r, --reverse Reverse the sort order

-t, --time Sort the entries by modification time, most recent first

Benchmarks were all gath­ered on the same set of di­rec­to­ries, us­ing the lat­est re­leases of each pro­gram (versions are shown be­low). All bench­marks run on Linux (because io_ur­ing). lsr does work on ma­cOS/​BSD as well, but will not see the syscall batch­ing ben­e­fits that are avail­able with io_ur­ing.

Data gath­ered with hy­per­fine on a di­rec­tory of n plain files.

Data gath­ered with strace -c on a di­rec­tory of n plain files. (Lower is bet­ter)

...

Read the original on tangled.sh »

5 273 shares, 23 trendiness

NYPD Bypassed Facial Recognition Ban to ID Pro-Palestinian Student Protester

A city fire marshal used the FDNY's access to facial recognition software to help NYPD detectives identify a pro-Palestinian protester at Columbia University, circumventing policies that tightly restrict the Police Department's use of the technology.

Details of the arrange­ment emerged in a re­cent de­ci­sion by a Manhattan crim­i­nal court judge and in a law­suit seek­ing in­for­ma­tion from the FDNY filed this month by the Legal Aid Society, which rep­re­sented the pro­tester, Zuhdi Ahmed, now a 21-year-old pre-med CUNY stu­dent go­ing into his se­nior year of col­lege.

Police identified Ahmed after searching for a young man accused of hurling what they said was a rock at a pro-Israeli protester during an April 2024 skirmish at Columbia. Thanks to the FDNY's assistance and its use of Clearview AI software, the police were able to identify Ahmed.

The FDNY be­gan us­ing Clearview AI in December 2022 and has an an­nual con­tract with the com­pany, ac­cord­ing to a spokesper­son.

The fire mar­shal also ac­cessed doc­u­ments from the Department of Motor Vehicles that are typ­i­cally un­avail­able to the po­lice, court records show.

Manhattan District Attorney Alvin Bragg charged Ahmed with a felony, as­sault in the third de­gree as a hate crime, which was later re­duced to a mis­de­meanor of sec­ond de­gree ag­gra­vated ha­rass­ment. A crim­i­nal court judge in June dis­missed the case against Ahmed and in a lengthy rul­ing raised red flags about gov­ern­ment sur­veil­lance and prac­tices that ran afoul of law en­force­men­t’s own poli­cies.

“Where the state routinely gathers, searches, seizes, and preserves colossal amounts of information, transparency must remain a touchstone, lest fairness be lost,” the judge, Valentina Morales, wrote.

Clearview AI — in wide use by law en­force­ment agen­cies na­tion­ally, in­clud­ing the Department of Justice — matches pho­tos up­loaded to its sys­tem with bil­lions of im­ages in a data­base sourced from so­cial me­dia and other web­sites. The NYPD has used the tech­nol­ogy in the past but now for­bids its use un­der a 2020 fa­cial recog­ni­tion pol­icy that lim­its im­age searches to ar­rest and pa­role pho­tos.

A sub­se­quent city law, called the POST Act, re­quires the NYPD to re­port pub­licly on its use of and poli­cies re­gard­ing sur­veil­lance tech­nolo­gies. The City Department of Investigation has found the NYPD has not con­sis­tently com­plied. Reached by THE CITY, Council mem­bers in­di­cated they were work­ing on new leg­is­la­tion to close loop­holes in the POST Act.

Social me­dia pho­tos the FDNY used to iden­tify Ahmed in­cluded pic­tures at a high school for­mal, a school play and his high school grad­u­a­tion.

Ahmed, a Westchester res­i­dent who is Palestinian and grew up go­ing to protests with his fam­ily, said he has re­ceived hate­ful mail and on­line mes­sages since his ar­rest. He said he never thought pho­tos from his teenage years could be used in this way.

“It's something straight out of a dystopian, futuristic movie,” he said. “It's honestly kind of scary to think about what people are capable of in terms of surveillance.”

“The NYPD keeps using these incredibly disturbing companies to spy on New Yorkers, while hiding that surveillance from the public and violating New York City law in the process,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project. “The FDNY is clearly being complicit in enabling these NYPD abuses.”

The NYPD re­ferred THE CITY to FDNY for com­ment. An FDNY spokesper­son said in a state­ment that ap­proved fire mar­shals have ac­cess to Clearview AI and work closely with the NYPD to in­ves­ti­gate crimes.

“This small group of elite law enforcement agents use facial recognition software as one of the many tools available to conduct critical fire investigations,” the spokesperson said. “We always follow all local, state and federal laws.”

Shane Ferro, Digital Forensics Unit staff at­tor­ney at Legal Aid, who had rep­re­sented Ahmed, sought to learn more about fa­cial recog­ni­tion tech­nol­ogy op­er­ated by the FDNY, but re­quests made un­der the New York Freedom of Information Law, or FOIL, went nowhere. Legal Aid filed a law­suit last week seek­ing to ob­tain the in­for­ma­tion.

The judge dis­missed the case pre­cisely be­cause of the se­ri­ous ques­tions sur­round­ing how Ahmed was iden­ti­fied, Ferro noted.

Still unknown is whether the NYPD's reliance on the FDNY to circumvent the police department's Clearview ban goes beyond this one instance.

“The way that the NYPD used FDNY to access broader and even more unreliable facial recognition technologies — in this case, to identify a protester — brings up questions about the NYPD following its own policies, the NYPD complying with the POST Act,” she said, adding that Ahmed's saga brings up questions about “the First Amendment and the NYPD's prohibition on using facial recognition technology to identify people at political rallies.”

The FDNY's use of Clearview on the NYPD's behalf emerged in emails disclosed as part of the case against Ahmed.

The incident at the center of the case occurred near an encampment at Columbia University by pro-Palestine demonstrators. Students protested Israel's war in Gaza, which killed tens of thousands of Palestinians and was launched in response to Hamas' attack on Israel on Oct. 7, 2023, in which 1,200 Israelis were killed and 240 hostages were taken.

The Israeli military offensive has since killed more than 55,000 Palestinians, according to the Gaza Health Ministry, and devastated the strip.

Both for­mer Columbia University President Minouche Shafik and Mayor Eric Adams faced pres­sure to quell the protests. On April 17, 2024, NYPD of­fi­cers showed up at the en­camp­ment at Shafik’s re­quest and made over 100 ar­rests. Students cre­ated a sec­ond en­camp­ment, and the highly mil­i­ta­rized NYPD pres­ence con­tin­ued on cam­pus un­til grad­u­a­tion. Cops sub­se­quently used stun grenades, fired a gun in­side stu­dent-oc­cu­pied Hamilton Hall and flew drones over cam­pus.

At Columbia, pro-Is­rael stu­dents of­ten showed up to en­camp­ment events and demon­stra­tions to counter-protest.

That was true on Saturday, April 20, 2024, when the en­camp­ment held a film screen­ing and hosted teach-ins.

Columbia stu­dent Jonathan Lederer ar­rived on cam­pus that night with his twin brother. They stood with a group be­hind those gath­ered to watch the films and waved Israeli flags, videos posted to so­cial me­dia show. Music played loudly out of a speaker.

Later, someone stole one of the flags and ran off, and another person tried to light it on fire. Lederer detailed his experience in The Free Press, saying he was hit in the face with objects someone threw. He later told NY1 other protesters “threw rocks at my face.”

Videos posted to so­cial me­dia, blurry at times, show a white ob­ject lobbed at Lederer, who ap­pears to toss it away from him. The per­son who threw it flipped him the bird.

Lederer, who did not respond to emails and a call from THE CITY seeking comment, in May told the Manhattan DA's office he wasn't sure whether a laceration on the side of his face was from being hit with an object or from acne.

Ahmed de­clined to an­swer ques­tions from THE CITY about throw­ing an ob­ject, but said he had been at Columbia to at­tend a jazz event when he’d heard chant­ing and walked over to the protest.

The NYPD be­gan a search for the per­son who threw the ob­ject.

On June 3, 2024, the agency posted a photo of Ahmed on its Crime Stoppers Instagram account, saying he was “WANTED for Hate Crime Assault.” The posted photo was a still from a video taken at a protest in Central Park in May 2024.

Ahmed said he has no recollection of the protest or that day, but was “completely bewildered” to see his photo online with accusations he said were false.

The same day the Instagram post went up, an FDNY fire mar­shal emailed an NYPD de­tec­tive.

He went on to say he ran the Instagram photo through “our facial.” He said he couldn't find the suspect's name, but perhaps some photos he was sending along could help “with an ID.”

He at­tached to the email screen­shots taken from Clearview AI with pho­tos of Ahmed: one shows him at a for­mal event, his arm around a friend; in an­other, he re­ceives his diploma at his high school grad­u­a­tion; and in a third, he stands with fel­low grad­u­ates in their bur­gundy gowns. In the grad­u­a­tion pho­tos, Ahmed wears a stole around his neck printed with the Palestinian flag — fol­low­ing a tra­di­tion that all his fam­ily mem­bers have done at grad­u­a­tions, he said.

The fire marshal wrote, “Not too sure what the scarf says but maybe related to Palestine?”

A different NYPD detective responded with thanks. Shortly after, the fire marshal sent links to Clearview AI face search results, an archive of school play photos and another archive of high school formal photos. He said he couldn't find associated social media but offered to get a driver's license photo for the detective. “We have access to that,” he wrote.

A minute later, the detective sent the fire marshal Ahmed's name, date of birth and driver's license number. Within five minutes, the fire marshal replied, “Bingo.”

NYPD de­tec­tives can­not ac­cess DMV records with­out per­mis­sion from su­per­vi­sors.

The NYPD took Ahmed’s dri­ver’s li­cense photo and in­cluded a dig­i­tally al­tered ver­sion of it in an iden­ti­fi­ca­tion ar­ray pre­sented to Lederer, who picked Ahmed’s photo from the lineup. The photo had been edited to change the shape of Ahmed’s neck.

On June 13, the NYPD arrested and arraigned Ahmed. The following day, the fire marshal again emailed the detective: “Saw the news. Good work. Glad you grabbed him.”

The detective responded the next day: “Yea that's to you, I appreciate the help.”

A few hours later, the fire marshal emailed back, “All good bro happy to help. Don't hesitate to reach out again if you need anything.”

The NYPD would not have identified Ahmed but for the FDNY's Clearview AI search and accessing the DMV photo, the judge indicated in her ruling. She wrote it was “evident that the investigatory steps described in the emails clearly contravene official NYPD policy concerning the use of facial recognition.”

NYPD may only con­duct fa­cial recog­ni­tion searches within a lim­ited repos­i­tory of ar­rest and pa­role pho­tos.

To con­duct searches out­side that repos­i­tory, of­fi­cers must get per­mis­sion from the chief of de­part­ment, chief of de­tec­tives or the deputy com­mis­sioner of in­tel­li­gence. Employees who mis­use fa­cial recog­ni­tion tech­nol­ogy may face ad­min­is­tra­tive or crim­i­nal penal­ties, NYPD pol­icy states.

But in this case, the FDNY's use of Clearview's facial recognition software trawled the internet and yielded hundreds of matches.

Privacy ad­vo­cates said they would like to see the POST Act ex­panded to ap­ply to law en­force­ment of­fi­cials who work for agen­cies other than the NYPD. They say that would pro­vide in­sight into how other agen­cies are us­ing sur­veil­lance tech­nol­ogy, like how FDNY used it to as­sist the NYPD.

“It should not be a guessing game, who's using this sort of technology and who's doing business with a vendor this controversial,” Cahn said.

In April, the Council ap­proved three ad­di­tional bills to strengthen POST Act re­port­ing and ac­count­abil­ity re­quire­ments.

They in­clude a law that re­quires track­ing in­ter­gov­ern­men­tal data shar­ing. But that only cov­ers in­for­ma­tion the NYPD shares with other agen­cies, not in­for­ma­tion agen­cies pro­vide to the NYPD.

Councilmember Julie Won (D-Queens), who spon­sored one of the re­cently passed bills ex­pand­ing the POST Act, said she and her col­leagues are draft­ing leg­is­la­tion to close the loop­hole. The new bill would pro­hibit city agen­cies from us­ing sur­veil­lance tech­nolo­gies on be­half of law en­force­ment, and man­date agen­cies dis­close their use of sur­veil­lance tech­nol­ogy for any rea­son.

“No matter what they're using it for, the public deserves to know,” Won said.

Other Council members expressed alarm over the revelation about the FDNY's use of Clearview AI.

“This is a clear loophole we didn't necessarily anticipate,” said Councilmember Crystal Hudson (D-Brooklyn).

Council Majority Leader Amanda Farías (D-The Bronx) called the FDNY's use of Clearview AI on behalf of the NYPD “deeply concerning” and said it “exposed a troubling gap in our current oversight laws.”

Councilmember Jennifer Gutiérrez (D-Brooklyn), chair of the technology committee, said, “What happened here is a warning shot: without clear checks and oversight, city agencies are using powerful surveillance tools like facial recognition and AI with no accountability, no transparency, and no regard for due process.”

Councilmember Joann Ariola (R-Queens), who chairs the Council's fire committee, disagreed, saying the FDNY was within its purview as a law enforcement agency to share information with the NYPD, but that the case may require “a deeper examination at all levels.”

As for Ahmed, he said the judge dropping the case against him brought him the “greatest relief” of his life. He said he felt like the initial hate crime charge was “an exploitation of laws that are meant to protect us, protect minorities, protect any ethnic group.”

Douglas Cohen, a spokesperson for DA Bragg, said: “The office conducted a thorough investigation into this matter — interviewing multiple witnesses, analyzing available video surveillance and reviewing medical records. When that investigation determined we could not prove the legal elements of the top count beyond a reasonable doubt, we moved to dismiss the charge.”

Ahmed is now fo­cused on re­cov­er­ing from the emo­tional and men­tal toll the or­deal placed on him and his fam­ily.

In December, he earned his certification as an emergency medical technician and plans to apply to medical school after college. He recently read a novel, “No Longer Human” by Osamu Dazai, and related to the story.

“Essentially, the book is about someone that gets detached from society, and he's basically isolated,” Ahmed said. “For the past year, I was scared of all the accusations, I was scared of what society thought of me.”

...

Read the original on www.thecity.nyc »

6 244 shares, 19 trendiness

Psilocybin produces substantial and sustained decreases in depression and anxiety in patients with life-threatening cancer: A randomized double-blind trial

Participants with a po­ten­tially life-threat­en­ing can­cer di­ag­no­sis and a DSM-IV di­ag­no­sis that in­cluded anx­i­ety and/​or mood symp­toms were re­cruited through fly­ers, in­ter­net, and physi­cian re­fer­ral. Of 566 in­di­vid­u­als who were screened by tele­phone, 56 were ran­dom­ized. Figure 1 shows a CONSORT flow di­a­gram. Table 1 shows de­mo­graph­ics for the 51 par­tic­i­pants who com­pleted at least one ses­sion. The two ran­dom­ized groups did not sig­nif­i­cantly dif­fer de­mo­graph­i­cally. All 51 par­tic­i­pants had a po­ten­tially life-threat­en­ing can­cer di­ag­no­sis, with 65% hav­ing re­cur­rent or metasta­tic dis­ease. Types of can­cer in­cluded breast (13 par­tic­i­pants), up­per aerodi­ges­tive (7), gas­troin­testi­nal (4), gen­i­touri­nary (18), hema­to­logic ma­lig­nan­cies (8), other (1). All had a DSM-IV di­ag­no­sis: chronic ad­just­ment dis­or­der with anx­i­ety (11 par­tic­i­pants), chronic ad­just­ment dis­or­der with mixed anx­i­ety and de­pressed mood (11), dys­thymic dis­or­der (5), gen­er­al­ized anx­i­ety dis­or­der (GAD) (5), ma­jor de­pres­sive dis­or­der (MDD) (14), or a dual di­ag­no­sis of GAD and MDD (4), or GAD and dys­thymic dis­or­der (1). Detailed in­clu­sion/​ex­clu­sion cri­te­ria are in the on­line Supplementary ma­te­r­ial. The Johns Hopkins IRB ap­proved the study. Written in­formed con­sent was ob­tained from par­tic­i­pants.

A two-ses­sion, dou­ble-blind cross-over de­sign com­pared the ef­fects of a low ver­sus high psilo­cy­bin dose on mea­sures of de­pressed mood, anx­i­ety, and qual­ity of life, as well as mea­sures of short-term and en­dur­ing changes in at­ti­tudes and be­hav­ior. Participants were ran­domly as­signed to one of two groups. The Low-Dose-1st Group re­ceived the low dose of psilo­cy­bin on the first ses­sion and the high dose on the sec­ond ses­sion, whereas the High-Dose-1st Group re­ceived the high dose on the first ses­sion and the low dose on the sec­ond ses­sion. The du­ra­tion of each par­tic­i­pan­t’s par­tic­i­pa­tion was ap­prox­i­mately 9 months (mean 275 days). Psilocybin ses­sion 1 oc­curred, on av­er­age, ap­prox­i­mately 1 month af­ter study en­roll­ment (mean 28 days), with ses­sion 2 oc­cur­ring ap­prox­i­mately 5 weeks later (mean 38 days). Data as­sess­ments oc­curred: (1) im­me­di­ately af­ter study en­roll­ment (Baseline as­sess­ment); (2) on both ses­sion days (during and at the end of the ses­sion); (3) ap­prox­i­mately 5 weeks (mean 37 days) af­ter each ses­sion (Post-session 1 and Post-session 2 as­sess­ments); (4) ap­prox­i­mately 6 months (mean 211 days) af­ter Session 2 (6-month fol­low-up).

The study com­pared a high psilo­cy­bin dose (22 or 30 mg/​70 kg) with a low dose (1 or 3 mg/​70 kg) ad­min­is­tered in iden­ti­cally ap­pear­ing cap­sules. When this study was de­signed, we had lit­tle past ex­pe­ri­ence with a range of psilo­cy­bin doses. We de­creased the high dose from 30 to 22 mg/​70 kg af­ter two of the first three par­tic­i­pants who re­ceived a high dose of 30 mg/​70 kg were dis­con­tin­ued from the study (one from vom­it­ing shortly af­ter cap­sule ad­min­is­tra­tion and one for per­sonal rea­sons). Related to this de­ci­sion, pre­lim­i­nary data from a dose-ef­fect study in healthy par­tic­i­pants sug­gested that rates of psy­cho­log­i­cally chal­leng­ing ex­pe­ri­ences were sub­stan­tially greater at 30 than at 20 mg/​70 kg (Griffiths et al., 2011). The low dose of psilo­cy­bin was de­creased from 3 to 1 mg/​70 kg af­ter 12 par­tic­i­pants be­cause data from the same dose-ef­fect study showed sig­nif­i­cant psilo­cy­bin ef­fects at 5 mg/​70 kg, which raised con­cern that 3 mg/​70 kg might not serve as an in­ac­tive placebo.

Expectancies, on the part of both participants and monitors, are believed to play a large role in the qualitative effects of psilocybin-like drugs (Griffiths et al., 2006; Metzner et al., 1965). Although double-blind methods are usually used to protect against such effects, expectancy is likely to be significantly operative in a standard drug versus placebo design when the drug being evaluated produces highly discriminable effects and participants and staff know the specific drug conditions to be tested. For these reasons, in the present study a low dose of psilocybin was compared with a high dose of psilocybin, and participants and monitors were given instructions that obscured the actual dose conditions to be tested. Specifically, they were told that psilocybin would be administered in both sessions, the psilocybin doses administered in the two sessions might range anywhere from very low to high, the doses in the two sessions might or might not be the same, sensitivity to psilocybin dose varies widely across individuals, and that at least one dose would be moderate to high. Participants and monitors were further strongly encouraged to try to attain maximal therapeutic and personal benefit from each session.

Drug ses­sions were con­ducted in an aes­thetic liv­ing-room-like en­vi­ron­ment with two mon­i­tors pre­sent. Participants were in­structed to con­sume a low-fat break­fast be­fore com­ing to the re­search unit. A urine sam­ple was taken to ver­ify ab­sti­nence from com­mon drugs of abuse (cocaine, ben­zo­di­azepines, and opi­oids in­clud­ing methadone). Participants who re­ported use of cannabis or dron­abi­nol were in­structed not to use for at least 24 h be­fore ses­sions. Psilocybin doses were ad­min­is­tered in iden­ti­cally ap­pear­ing opaque, size 0 gelatin cap­sules, with lac­tose as the in­ac­tive cap­sule filler. For most of the time dur­ing the ses­sion, par­tic­i­pants were en­cour­aged to lie down on the couch, use an eye mask to block ex­ter­nal vi­sual dis­trac­tion, and use head­phones through which a mu­sic pro­gram was played. The same mu­sic pro­gram was played for all par­tic­i­pants in both ses­sions. Participants were en­cour­aged to fo­cus their at­ten­tion on their in­ner ex­pe­ri­ences through­out the ses­sion. Thus, there was no ex­plicit in­struc­tion for par­tic­i­pants to fo­cus on their at­ti­tudes, ideas, or emo­tions re­lated to their can­cer. A more de­tailed de­scrip­tion of the study room and pro­ce­dures fol­lowed on ses­sion days is pro­vided else­where (Griffiths et al., 2006; Johnson et al., 2008).

A description of session monitor roles and the content and rationale for meetings between participants and monitors is provided elsewhere (Johnson et al., 2008). Briefly, preparation meetings before the first session, which included discussion of meaningful aspects of the participant's life, served to establish rapport and prepare the participant for the psilocybin sessions. During sessions, monitors were nondirective and supportive, and they encouraged participants to “trust, let go and be open” to the experience. Meetings after sessions generally focused on novel thoughts and feelings that arose during sessions. Session monitors were study staff originally trained by William Richards PhD, a clinical psychologist with extensive experience conducting studies with classic hallucinogens. Monitor education varied from college graduate to PhD. Formal clinical training varied from none to clinical psychologist. Monitors were selected as having significant human relations skills and self-described experience with altered states of consciousness induced by means such as meditation, yogic breathing, or relaxation techniques.

After study en­roll­ment and as­sess­ment of base­line mea­sures, and be­fore the first psilo­cy­bin ses­sion, each par­tic­i­pant met with the two ses­sion mon­i­tors (staff who would be pre­sent dur­ing ses­sion days) on two or more oc­ca­sions (mean of 3.0 oc­ca­sions for a mean to­tal of 7.9 hours). The day af­ter each psilo­cy­bin ses­sion par­tic­i­pants met with the ses­sion mon­i­tors (mean 1.2 hours). Participants met with mon­i­tors on two or more oc­ca­sions be­tween the first and sec­ond psilo­cy­bin ses­sion (mean of 2.7 oc­ca­sions for a mean to­tal of 3.4 hours) and on two or more oc­ca­sions be­tween the sec­ond ses­sion and 6-month fol­low-up (mean of 2.5 oc­ca­sions for a mean to­tal of 2.4 hours). Preparation meet­ings, the first meet­ing fol­low­ing each ses­sion, and the last meet­ing be­fore the sec­ond ses­sion were al­ways in per­son. For the 37 par­tic­i­pants (73%) who did not re­side within com­mut­ing dis­tance of the re­search fa­cil­ity, 49% of the Post-session 1 meet­ings with mon­i­tors oc­curred via tele­phone or video calls.

The ques­tion­naire in­cluded three fi­nal ques­tions (see Griffiths et al. 2006 for more spe­cific word­ing): (1) How per­son­ally mean­ing­ful was the ex­pe­ri­ence? (rated from 1 to 8, with 1 = no more than rou­tine, every­day ex­pe­ri­ences; 7 = among the five most mean­ing­ful ex­pe­ri­ences of my life; and 8 = the sin­gle most mean­ing­ful ex­pe­ri­ence of my life). (2) Indicate the de­gree to which the ex­pe­ri­ence was spir­i­tu­ally sig­nif­i­cant to you? (rated from 1 to 6, with 1 = not at all; 5 = among the five most spir­i­tu­ally sig­nif­i­cant ex­pe­ri­ences of my life; 6 = the sin­gle most spir­i­tu­ally sig­nif­i­cant ex­pe­ri­ence of my life). (3) Do you be­lieve that the ex­pe­ri­ence and your con­tem­pla­tion of that ex­pe­ri­ence have led to change in your cur­rent sense of per­sonal well-be­ing or life sat­is­fac­tion? (rated from +3 = in­creased very much; +2 = in­creased mod­er­ately; 0 = no change; –3 = de­creased very much).

Table: Participant ratings of persisting effects attributed to the session, completed 5 weeks after the low-dose and high-dose psilocybin sessions and, again, retrospectively for the high-dose session 6 months after the second session.

The Persisting Effects Questionnaire as­sessed self-rated pos­i­tive and neg­a­tive changes in at­ti­tudes, moods, be­hav­ior, and spir­i­tual ex­pe­ri­ence at­trib­uted to the most re­cent psilo­cy­bin ses­sion (Griffiths et al., 2006, 2011). At the 6-month fol­low-up, the ques­tion­naire was com­pleted on the ba­sis of the high-dose ses­sion, which was iden­ti­fied as the ses­sion in which the par­tic­i­pant ex­pe­ri­enced the most pro­nounced changes in their or­di­nary men­tal processes. Twelve sub­scales (described in Table 8) were scored.

Three measures of spirituality were assessed at three time-points (Baseline, 5 weeks after session 2, and at the 6-month follow-up): FACIT-Sp, a self-rated measure of the spiritual dimension of quality of life in chronic illness (Peterman et al., 2002), assessed on how the participant felt “on average”; the Spiritual-Religious Outcome Scale, a three-item measure used to assess spiritual and religious changes during illness (Pargament et al., 2004); and the Faith Maturity Scale, a 12-item scale assessing the degree to which a person's priorities and perspectives align with “mainline” Protestant traditions (Benson et al., 1993).

Structured tele­phone in­ter­views with com­mu­nity ob­servers (e.g. fam­ily mem­bers, friends, or work col­leagues) pro­vided rat­ings of par­tic­i­pant at­ti­tudes and be­hav­ior re­flect­ing healthy psy­choso­cial func­tion­ing (Griffiths et al., 2011). The in­ter­viewer pro­vided no in­for­ma­tion to the rater about the par­tic­i­pant or the na­ture of the re­search study. The struc­tured in­ter­view (Community Observer Questionnaire) con­sisted of ask­ing the rater to rate the par­tic­i­pan­t’s be­hav­ior and at­ti­tudes us­ing a 10-point scale (from 1 = not at all, to 10 = ex­tremely) on 13 items re­flect­ing healthy psy­choso­cial func­tion­ing: in­ner peace; pa­tience; good-na­tured hu­mor/​play­ful­ness; men­tal flex­i­bil­ity; op­ti­mism; anx­i­ety (scored neg­a­tively); in­ter­per­sonal per­cep­tive­ness and car­ing; neg­a­tive ex­pres­sion of anger (scored neg­a­tively); com­pas­sion/​so­cial con­cern; ex­pres­sion of pos­i­tive emo­tions (e.g. joy, love, ap­pre­ci­a­tion); self-con­fi­dence; for­give­ness of oth­ers; and for­give­ness of self. On the first rat­ing oc­ca­sion, which oc­curred soon af­ter ac­cep­tance into the study, raters were in­structed to base their rat­ings on ob­ser­va­tions of and con­ver­sa­tions with the par­tic­i­pant over the past 3 months. On two sub­se­quent as­sess­ments, raters were told their pre­vi­ous rat­ings and were in­structed to rate the par­tic­i­pant based on in­ter­ac­tions over the last month (post-session 2 as­sess­ment) or since be­gin­ning in the study (6-month fol­low-up). Data from each in­ter­view with each rater were cal­cu­lated as a to­tal score. Changes in each par­tic­i­pan­t’s be­hav­ior and at­ti­tudes af­ter drug ses­sions were ex­pressed as a mean change score (i.e. dif­fer­ence score) from the base­line rat­ing across the raters. Of 438 sched­uled rat­ings by com­mu­nity ob­servers, 25 (

The two pri­mary ther­a­peu­tic out­come mea­sures were the widely used clin­i­cian-rated mea­sures of de­pres­sion, GRID-HAM-D-17 (ISCDD, 2003) and anx­i­ety, HAM-A as­sessed with the SIGH-A (Shear et al., 2001). For these clin­i­cian-rated mea­sures, a clin­i­cally sig­nif­i­cant re­sponse was de­fined as ⩾50% de­crease in mea­sure rel­a­tive to Baseline; symp­tom re­mis­sion was de­fined as ⩾50% de­crease in mea­sure rel­a­tive to Baseline and a score of ⩽7 on the GRID-HAMD or HAM-A (Gao et al., 2014; Matza et al., 2010).

Seventeen mea­sures fo­cused on mood states, at­ti­tudes, dis­po­si­tion, and be­hav­iors thought to be ther­a­peu­ti­cally rel­e­vant in psy­cho­log­i­cally dis­tressed can­cer pa­tients were as­sessed at four time-points over the study: im­me­di­ately af­ter study en­roll­ment (Baseline as­sess­ment), about 5 weeks (mean 37 days) af­ter each ses­sion (Post-session 1 and 2 as­sess­ments), and about 6 months (mean 211 days) af­ter ses­sion 2 (6-month fol­low-up).

Ten min­utes be­fore and 30, 60, 90, 120, 180, 240, 300, and 360 min af­ter cap­sule ad­min­is­tra­tion, blood pres­sure, heart rate, and mon­i­tor rat­ings were ob­tained as de­scribed pre­vi­ously (Griffiths et al., 2006). The two ses­sion mon­i­tors com­pleted the Monitor Rating Questionnaire, which in­volved rat­ing or scor­ing sev­eral di­men­sions of the par­tic­i­pan­t’s be­hav­ior or mood. The di­men­sions, which are ex­pressed as peak scores in Table 2, were rated on a 5-point scale from 0 to 4. Data were the mean of the two mon­i­tor rat­ings at each time-point.

Participants with a po­ten­tially life-threat­en­ing can­cer di­ag­no­sis and a DSM-IV di­ag­no­sis that in­cluded anx­i­ety and/​or mood symp­toms were re­cruited through fly­ers, in­ter­net, and physi­cian re­fer­ral. Of 566 in­di­vid­u­als who were screened by tele­phone, 56 were ran­dom­ized. Figure 1 shows a CONSORT flow di­a­gram. Table 1 shows de­mo­graph­ics for the 51 par­tic­i­pants who com­pleted at least one ses­sion. The two ran­dom­ized groups did not sig­nif­i­cantly dif­fer de­mo­graph­i­cally. All 51 par­tic­i­pants had a po­ten­tially life-threat­en­ing can­cer di­ag­no­sis, with 65% hav­ing re­cur­rent or metasta­tic dis­ease. Types of can­cer in­cluded breast (13 par­tic­i­pants), up­per aerodi­ges­tive (7), gas­troin­testi­nal (4), gen­i­touri­nary (18), hema­to­logic ma­lig­nan­cies (8), other (1). All had a DSM-IV di­ag­no­sis: chronic ad­just­ment dis­or­der with anx­i­ety (11 par­tic­i­pants), chronic ad­just­ment dis­or­der with mixed anx­i­ety and de­pressed mood (11), dys­thymic dis­or­der (5), gen­er­al­ized anx­i­ety dis­or­der (GAD) (5), ma­jor de­pres­sive dis­or­der (MDD) (14), or a dual di­ag­no­sis of GAD and MDD (4), or GAD and dys­thymic dis­or­der (1). Detailed in­clu­sion/​ex­clu­sion cri­te­ria are in the on­line Supplementary ma­te­r­ial. The Johns Hopkins IRB ap­proved the study. Written in­formed con­sent was ob­tained from par­tic­i­pants.

A two-ses­sion, dou­ble-blind cross-over de­sign com­pared the ef­fects of a low ver­sus high psilo­cy­bin dose on mea­sures of de­pressed mood, anx­i­ety, and qual­ity of life, as well as mea­sures of short-term and en­dur­ing changes in at­ti­tudes and be­hav­ior. Participants were ran­domly as­signed to one of two groups. The Low-Dose-1st Group re­ceived the low dose of psilo­cy­bin on the first ses­sion and the high dose on the sec­ond ses­sion, whereas the High-Dose-1st Group re­ceived the high dose on the first ses­sion and the low dose on the sec­ond ses­sion. The du­ra­tion of each par­tic­i­pan­t’s par­tic­i­pa­tion was ap­prox­i­mately 9 months (mean 275 days). Psilocybin ses­sion 1 oc­curred, on av­er­age, ap­prox­i­mately 1 month af­ter study en­roll­ment (mean 28 days), with ses­sion 2 oc­cur­ring ap­prox­i­mately 5 weeks later (mean 38 days). Data as­sess­ments oc­curred: (1) im­me­di­ately af­ter study en­roll­ment (Baseline as­sess­ment); (2) on both ses­sion days (during and at the end of the ses­sion); (3) ap­prox­i­mately 5 weeks (mean 37 days) af­ter each ses­sion (Post-session 1 and Post-session 2 as­sess­ments); (4) ap­prox­i­mately 6 months (mean 211 days) af­ter Session 2 (6-month fol­low-up).

The study com­pared a high psilo­cy­bin dose (22 or 30 mg/​70 kg) with a low dose (1 or 3 mg/​70 kg) ad­min­is­tered in iden­ti­cally ap­pear­ing cap­sules. When this study was de­signed, we had lit­tle past ex­pe­ri­ence with a range of psilo­cy­bin doses. We de­creased the high dose from 30 to 22 mg/​70 kg af­ter two of the first three par­tic­i­pants who re­ceived a high dose of 30 mg/​70 kg were dis­con­tin­ued from the study (one from vom­it­ing shortly af­ter cap­sule ad­min­is­tra­tion and one for per­sonal rea­sons). Related to this de­ci­sion, pre­lim­i­nary data from a dose-ef­fect study in healthy par­tic­i­pants sug­gested that rates of psy­cho­log­i­cally chal­leng­ing ex­pe­ri­ences were sub­stan­tially greater at 30 than at 20 mg/​70 kg (Griffiths et al., 2011). The low dose of psilo­cy­bin was de­creased from 3 to 1 mg/​70 kg af­ter 12 par­tic­i­pants be­cause data from the same dose-ef­fect study showed sig­nif­i­cant psilo­cy­bin ef­fects at 5 mg/​70 kg, which raised con­cern that 3 mg/​70 kg might not serve as an in­ac­tive placebo.

Expectancies, on part of both par­tic­i­pants and mon­i­tors, are be­lieved to play a large role in the qual­i­ta­tive ef­fects of psilo­cy­bin-like drugs (Griffiths et al., 2006; Metzner et al., 1965). Although dou­ble-blind meth­ods are usu­ally used to pro­tect against such ef­fects, ex­pectancy is likely to be sig­nif­i­cantly op­er­a­tive in a stan­dard drug ver­sus placebo de­sign when the drug be­ing eval­u­ated pro­duces highly dis­crim­inable ef­fects and par­tic­i­pants and staff know the spe­cific drug con­di­tions to be tested. For these rea­sons, in the pre­sent study a low dose of psilo­cy­bin was com­pared with a high dose of psilo­cy­bin, and par­tic­i­pants and mon­i­tors were given in­struc­tions that ob­scured the ac­tual dose con­di­tions to be tested. Specifically, they were told that psilo­cy­bin would be ad­min­is­tered in both ses­sions, the psilo­cy­bin doses ad­min­is­tered in the two ses­sions might range any­where from very low to high, the doses in the two ses­sions might or might not be the same, sen­si­tiv­ity to psilo­cy­bin dose varies widely across in­di­vid­u­als, and that at least one dose would be mod­er­ate to high. Participants and mon­i­tors were fur­ther strongly en­cour­aged to try to at­tain max­i­mal ther­a­peu­tic and per­sonal ben­e­fit from each ses­sion.

Drug ses­sions were con­ducted in an aes­thetic liv­ing-room-like en­vi­ron­ment with two mon­i­tors pre­sent. Participants were in­structed to con­sume a low-fat break­fast be­fore com­ing to the re­search unit. A urine sam­ple was taken to ver­ify ab­sti­nence from com­mon drugs of abuse (cocaine, ben­zo­di­azepines, and opi­oids in­clud­ing methadone). Participants who re­ported use of cannabis or dron­abi­nol were in­structed not to use for at least 24 h be­fore ses­sions. Psilocybin doses were ad­min­is­tered in iden­ti­cally ap­pear­ing opaque, size 0 gelatin cap­sules, with lac­tose as the in­ac­tive cap­sule filler. For most of the time dur­ing the ses­sion, par­tic­i­pants were en­cour­aged to lie down on the couch, use an eye mask to block ex­ter­nal vi­sual dis­trac­tion, and use head­phones through which a mu­sic pro­gram was played. The same mu­sic pro­gram was played for all par­tic­i­pants in both ses­sions. Participants were en­cour­aged to fo­cus their at­ten­tion on their in­ner ex­pe­ri­ences through­out the ses­sion. Thus, there was no ex­plicit in­struc­tion for par­tic­i­pants to fo­cus on their at­ti­tudes, ideas, or emo­tions re­lated to their can­cer. A more de­tailed de­scrip­tion of the study room and pro­ce­dures fol­lowed on ses­sion days is pro­vided else­where (Griffiths et al., 2006; Johnson et al., 2008).

A description of session monitor roles and the content and rationale for meetings between participants and monitors is provided elsewhere (Johnson et al., 2008). Briefly, preparation meetings before the first session, which included discussion of meaningful aspects of the participant’s life, served to establish rapport and prepare the participant for the psilocybin sessions. During sessions, monitors were nondirective and supportive, and they encouraged participants to “trust, let go and be open” to the experience. Meetings after sessions generally focused on novel thoughts and feelings that arose during sessions. Session monitors were study staff originally trained by William Richards, PhD, a clinical psychologist with extensive experience conducting studies with classic hallucinogens. Monitor education varied from college graduate to PhD. Formal clinical training varied from none to clinical psychologist. Monitors were selected as having significant human relations skills and self-described experience with altered states of consciousness induced by means such as meditation, yogic breathing, or relaxation techniques.

After study enrollment and assessment of baseline measures, and before the first psilocybin session, each participant met with the two session monitors (staff who would be present during session days) on two or more occasions (mean of 3.0 occasions for a mean total of 7.9 hours). The day after each psilocybin session, participants met with the session monitors (mean 1.2 hours). Participants met with monitors on two or more occasions between the first and second psilocybin session (mean of 2.7 occasions for a mean total of 3.4 hours) and on two or more occasions between the second session and 6-month follow-up (mean of 2.5 occasions for a mean total of 2.4 hours). Preparation meetings, the first meeting following each session, and the last meeting before the second session were always in person. For the 37 participants (73%) who did not reside within commuting distance of the research facility, 49% of the Post-session 1 meetings with monitors occurred via telephone or video calls.

The questionnaire included three final questions (see Griffiths et al. 2006 for more specific wording): (1) How personally meaningful was the experience? (rated from 1 to 8, with 1 = no more than routine, everyday experiences; 7 = among the five most meaningful experiences of my life; and 8 = the single most meaningful experience of my life). (2) Indicate the degree to which the experience was spiritually significant to you (rated from 1 to 6, with 1 = not at all; 5 = among the five most spiritually significant experiences of my life; 6 = the single most spiritually significant experience of my life). (3) Do you believe that the experience and your contemplation of that experience have led to change in your current sense of personal well-being or life satisfaction? (rated from +3 = increased very much; +2 = increased moderately; 0 = no change; –3 = decreased very much).

Participant ratings of persisting effects attributed to the session, completed 5 weeks after the low-dose and high-dose psilocybin sessions and, again, retrospectively for the high-dose session 6 months after the second session.

The Persisting Effects Questionnaire assessed self-rated positive and negative changes in attitudes, moods, behavior, and spiritual experience attributed to the most recent psilocybin session (Griffiths et al., 2006, 2011). At the 6-month follow-up, the questionnaire was completed on the basis of the high-dose session, which was identified as the session in which the participant experienced the most pronounced changes in their ordinary mental processes. Twelve subscales (described in Table 8) were scored.

Three measures of spirituality were assessed at three time-points (Baseline, 5 weeks after session 2, and the 6-month follow-up): the FACIT-Sp, a self-rated measure of the spiritual dimension of quality of life in chronic illness (Peterman et al., 2002), assessed on how the participant felt “on average”; the Spiritual-Religious Outcome Scale, a three-item measure used to assess spiritual and religious changes during illness (Pargament et al., 2004); and the Faith Maturity Scale, a 12-item scale assessing the degree to which a person’s priorities and perspectives align with “mainline” Protestant traditions (Benson et al., 1993).

Structured telephone interviews with community observers (e.g. family members, friends, or work colleagues) provided ratings of participant attitudes and behavior reflecting healthy psychosocial functioning (Griffiths et al., 2011). The interviewer provided no information to the rater about the participant or the nature of the research study. The structured interview (Community Observer Questionnaire) consisted of asking the rater to rate the participant’s behavior and attitudes using a 10-point scale (from 1 = not at all, to 10 = extremely) on 13 items reflecting healthy psychosocial functioning: inner peace; patience; good-natured humor/playfulness; mental flexibility; optimism; anxiety (scored negatively); interpersonal perceptiveness and caring; negative expression of anger (scored negatively); compassion/social concern; expression of positive emotions (e.g. joy, love, appreciation); self-confidence; forgiveness of others; and forgiveness of self. On the first rating occasion, which occurred soon after acceptance into the study, raters were instructed to base their ratings on observations of and conversations with the participant over the past 3 months. On two subsequent assessments, raters were told their previous ratings and were instructed to rate the participant based on interactions over the last month (post-session 2 assessment) or since beginning in the study (6-month follow-up). Data from each interview with each rater were calculated as a total score. Changes in each participant’s behavior and attitudes after drug sessions were expressed as a mean change score (i.e. difference score) from the baseline rating across the raters. Of 438 scheduled ratings by community observers, 25 (

The two primary therapeutic outcome measures were the widely used clinician-rated measures of depression, GRID-HAMD-17 (ISCDD, 2003), and anxiety, HAM-A, assessed with the SIGH-A (Shear et al., 2001). For these clinician-rated measures, a clinically significant response was defined as ⩾50% decrease in the measure relative to Baseline; symptom remission was defined as ⩾50% decrease in the measure relative to Baseline and a score of ⩽7 on the GRID-HAMD or HAM-A (Gao et al., 2014; Matza et al., 2010).
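A minimal sketch of these two criteria (hypothetical scores, not study data; the helper name is made up):

def classify(baseline, followup, remission_cutoff=7):
    """Apply the response/remission criteria to one participant's scores.

    Response: >= 50% decrease relative to Baseline.
    Remission: response plus an absolute score of <= 7.
    """
    response = followup <= 0.5 * baseline
    remission = response and followup <= remission_cutoff
    return response, remission

# Hypothetical GRID-HAMD scores: Baseline 24, follow-up 6.
print(classify(24, 6))   # (True, True)
print(classify(24, 11))  # (True, False): a >=50% drop, but the score stays above 7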

Seventeen measures focused on mood states, attitudes, disposition, and behaviors thought to be therapeutically relevant in psychologically distressed cancer patients were assessed at four time-points over the study: immediately after study enrollment (Baseline assessment), about 5 weeks (mean 37 days) after each session (Post-session 1 and 2 assessments), and about 6 months (mean 211 days) after session 2 (6-month follow-up).

Ten minutes before and 30, 60, 90, 120, 180, 240, 300, and 360 min after capsule administration, blood pressure, heart rate, and monitor ratings were obtained as described previously (Griffiths et al., 2006). The two session monitors completed the Monitor Rating Questionnaire, which involved rating or scoring several dimensions of the participant’s behavior or mood. The dimensions, which are expressed as peak scores in Table 2, were rated on a 5-point scale from 0 to 4. Data were the mean of the two monitor ratings at each time-point.

Differences in demographic data between the two dose sequence groups were examined with t-tests and chi-square tests for continuous and categorical variables, respectively.

Data analyses were conducted to demonstrate the appropriateness of combining data for the 1 and 3 mg/70 kg doses in the low-dose condition and of including data for the one participant who received 30 mg/70 kg. To determine if the two different psilocybin doses differed in the low-dose condition, t-tests were used to compare participants who received 3 mg/70 kg (n = 12) with those who received 1 mg/70 kg (n = 38) on participant ratings of peak intensity of effect (HRS intensity item completed 7 h after administration) and peak monitor ratings of overall drug effect across the session. Because neither of these was significantly different, data from the 1 and 3 mg/70 kg doses were combined in the low-dose condition for all analyses.
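That pooling check is an ordinary independent-samples t-test; a minimal sketch with made-up ratings (not the study’s data):

from scipy import stats

# Hypothetical peak-intensity ratings for the two low-dose subgroups.
ratings_3mg = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3]
ratings_1mg = [1.0, 1.4, 0.7, 1.2, 1.1, 0.8]

t, p = stats.ttest_ind(ratings_3mg, ratings_1mg)
# A non-significant p supports pooling both doses into a single low-dose condition.
print(f"t = {t:.2f}, p = {p:.3f}")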

Of the 50 participants who completed the high-dose condition, one received 30 mg/70 kg and 49 received 22 mg/70 kg. To determine if inclusion of the data from the one participant who received 30 mg/70 kg affected conclusions about the most therapeutically relevant outcome measures, the analyses for the 17 measures shown in Tables 4 and 5 were conducted with and without that participant. Because there were few differences in significance (72 of 75 tests remained the same), that participant’s data were included in all the analyses.

To examine acute drug effects from sessions, the drug dose conditions were collapsed across the two dose sequence groups. The appropriateness of this approach was supported by an absence of any significant group effects and any group-by-dose interactions on the cardiovascular measures (peak systolic and diastolic pressures and heart rate) and on several key monitor- and participant-rated measures: peak monitor ratings of drug strength and joy/intense happiness, and end-of-session participant ratings on the Mysticism Scale.

Six participants reported initiating medication treatment with an anxiolytic (2 participants), an antidepressant (3), or both (1) between the Post-session 2 and the 6-month follow-up assessments. To determine if inclusion of these participants affected statistical outcomes in the analyses of the 6-month assessment, the analyses summarized in Tables 4, 5, 6, 7 and 8 were conducted with and without these six participants. All statistical outcomes remained identical. Thus, data from these six participants were retained in the data analyses.

For cardiovascular measures and monitor ratings assessed repeatedly during sessions, repeated measures regressions were conducted in SAS PROC MIXED using an AR(1) (first-order autoregressive) covariance structure and fixed effects of dose and time. Planned comparison t-tests were used to assess differences between the high- and low-dose conditions at each time-point.
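The paper fits these models in SAS; as a rough Python analogue (assumed column names and a long-format table, not the study’s code), a comparable repeated-measures model with an autoregressive working covariance can be fit with statsmodels:

import pandas as pd
import statsmodels.api as sm

# Assumed long format: one row per participant per time-point.
df = pd.read_csv("sessions_long.csv")  # hypothetical columns: subject, dose, time, heart_rate

model = sm.GEE.from_formula(
    "heart_rate ~ C(dose) * time",
    groups="subject",
    time="time",                                # ordering used by the AR structure
    cov_struct=sm.cov_struct.Autoregressive(),  # AR(1)-style working covariance
    data=df,
)
print(model.fit().summary())

This is a GEE rather than a mixed model, so it only approximates PROC MIXED’s likelihood-based fit.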

Peak scores for cardiovascular measures and monitor ratings during sessions were defined as the maximum value from pre-capsule to 6 h post-capsule. These peak scores and the end-of-session ratings (Tables 2 and 3) were analyzed using repeated measures regressions in SAS PROC MIXED with a CS (compound symmetry) covariance structure and fixed effects of group and dose.

For the analyses of continuous measures described below, repeated measures regressions were conducted in SAS PROC MIXED using an AR(1) covariance structure and fixed effects of group and time. Planned comparison t-tests (specified below) from these analyses are reported. For dichotomous measures, Friedman’s Test was conducted in SPSS for both the overall analysis and planned comparisons as specified below. All results are expressed as unadjusted scores.

For the measures that were assessed in the two dose sequence groups at Baseline, Post-session 1, Post-session 2, and 6 months (Tables 4 and 5), the following planned comparisons most relevant to examining the effects of psilocybin dose were conducted: between-group comparisons at Baseline, Post 1, and Post 2; within-group comparisons of Baseline versus Post 1 in both dose sequence groups; and Post 1 versus Post 2 in the Low-Dose-1st (High-Dose-2nd) Group. A planned comparison between Baseline and 6 months collapsed across groups was also conducted. Effect sizes were calculated using Cohen’s d.
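For reference, Cohen’s d scales a mean difference by the pooled standard deviation; a minimal sketch with hypothetical inputs:

import numpy as np

def cohens_d(x, y):
    """Cohen's d for two samples, using the pooled standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Hypothetical Baseline vs. Post-session 1 anxiety scores:
print(round(cohens_d([22, 25, 19, 24], [14, 12, 16, 11]), 2))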

For measures assessed only at Baseline, Post 2, and 6 months (Table 7), between-group planned comparisons were conducted at Baseline, Post 2, and 6 months. Because measures assessed only at these time-points cannot provide information about psilocybin dose, data were collapsed across the two dose sequence groups and planned comparisons were conducted comparing Baseline with Post 2 and Baseline with 6 months.

For participant ratings of persisting effects attributed to the session (e.g. Table 8), planned comparisons for continuous and dichotomous measures were conducted between: (1) ratings at 5 weeks after the low- versus high-dose sessions; (2) ratings of the low dose at 5 weeks versus ratings of the high dose at the 6-month follow-up; (3) ratings of the high dose at 5 weeks versus ratings of the high dose at the 6-month follow-up.

As described above, clinician-rated measures of depression (GRID-HAMD) and anxiety (HAM-A) were analyzed as continuous measures. In addition, for both measures, a clinically significant response was defined as ⩾50% decrease in the measure relative to Baseline; symptom remission was defined as ⩾50% decrease in the measure relative to Baseline and a score of ⩽7. Planned comparisons were conducted via independent z-tests of proportions between the two dose sequence groups at Post-session 1, Post-session 2, and 6 months. To determine if effects were sustained at 6 months, planned comparisons were also conducted via dependent z-tests of proportions between Post-session 2 versus 6 months in the Low-Dose-1st (High-Dose-2nd) Group, and between Post-session 1 versus 6 months in the High-Dose-1st (Low-Dose-2nd) Group.
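A minimal sketch of the independent case (hypothetical counts, not study results) using statsmodels:

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical responder counts at Post-session 1 in the two dose sequence groups.
responders = [14, 5]    # High-Dose-1st vs. Low-Dose-1st
group_sizes = [25, 25]

z, p = proportions_ztest(responders, group_sizes)
print(f"z = {z:.2f}, p = {p:.3f}")

The dependent (within-group) comparisons the paper describes would need a paired method; this sketch covers only the independent comparison.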

Exploratory analyses used Pearson’s correlations to examine the relationship between total scores on the Mystical Experience Questionnaire (MEQ30) assessed at the end of session 1 and enduring effects assessed 5 weeks after session 1. The Post-session 1 measures were ratings on three items from the Persisting Effects Questionnaire (meaningfulness, spiritual significance, and life satisfaction) and 17 therapeutically relevant measures assessed at Baseline and Post 1 (Tables 4 and 5) expressed as difference-from-baseline scores. Significant relationships were further examined using partial correlations to control for end-of-session participant-rated “Intensity” (item 98 from the HRS). To examine MEQ30 scores as a mediator of the effect of psilocybin dose on therapeutic effects, a bootstrap analysis was done using the PROCESS macro (Hayes, 2013) in SPSS. Bootstrapping is a non-parametric method appropriate for small samples, which was used to estimate 95% confidence intervals for the mediation effect. The PROCESS macro also calculated direct effects on outcome for both group effects and MEQ30.
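Conceptually, the bootstrap resamples participants to get a confidence interval for the indirect effect a·b (dose → MEQ30, times MEQ30 → outcome controlling for dose). A bare-bones sketch with assumed NumPy arrays, standing in for the PROCESS macro:

import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(dose, meq30, outcome):
    """a*b from two least-squares fits (simple mediation)."""
    a = np.polyfit(dose, meq30, 1)[0]                  # path a: dose -> mediator
    X = np.column_stack([np.ones_like(dose), dose, meq30])
    b = np.linalg.lstsq(X, outcome, rcond=None)[0][2]  # path b: mediator -> outcome, controlling for dose
    return a * b

def bootstrap_ci(dose, meq30, outcome, n_boot=5000, alpha=0.05):
    n = len(dose)
    estimates = [
        indirect_effect(dose[idx], meq30[idx], outcome[idx])
        for idx in (rng.integers(0, n, n) for _ in range(n_boot))
    ]
    return np.quantile(estimates, [alpha / 2, 1 - alpha / 2])  # a CI excluding zero suggests mediation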

...

Read the original on pmc.ncbi.nlm.nih.gov »

7 242 shares, 10 trendiness

My favorite use-case for AI is writing logs


One of my favorite AI dev products today is Full Line Code Completion in PyCharm (bundled with the IDE since late 2023). It’s extremely well thought out, unintrusive, and makes me a more effective developer. Most importantly, it still keeps me mostly in control of my code. I’ve now used it in GoLand as well. I’ve been a happy JetBrains customer for a long time now, and it’s because they ship features like this.

I frequently work with code that involves sequential data processing, computations, and async API calls across multiple services. I also deal with a lot of precise vector operations in PyTorch that shape suffixes don’t always illuminate. So, print-statement debugging and writing good logs have been a critical part of my workflows for years.

As Kernighan and Pike say in The Practice of Programming, about preferring print statements to a debugger,

…[W]e find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is.

One thing that is annoying about logging is that f-strings are great but become repetitive to write if you have to write them over and over, particularly if you’re formatting values or accessing elements of data frames, lists, and nested structures, and particularly if you have to scan your codebase to find those variables. Writing good logs is important but also breaks up a debugging flow.

from loguru import logger

logger.info(f'Adding a log for {your_variable} and {len(my_list)} and {df.head(0)}')

The amount of cognitive overhead in this deceptively simple log is several levels deep: you have to first stop to type logger.info (or is it logging.info? I use both loguru and logging depending on the codebase and end up always getting the two confused). Then, the parentheses, the f-string itself, and then the variables in brackets. Now, was it your_variable or your_variable_with_edits from five lines up? And what’s the syntax for accessing a subset of df.head again?

With full-line code completion, JetBrains’ model auto-infers the log completion from the surrounding text, with a limit of 384 characters. Inference starts by taking the file extension as input, combined with the filepath, and then the part of the code above the input cursor, so that all of the tokens in the file extension, plus path, plus code above the caret, fit. Everything is combined and sent to the model in the prompt.
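As a rough illustration of that assembly (the function and exact layout are my guesses, not JetBrains’ code), under a fixed character budget:

def build_prompt(filepath: str, code_above_caret: str, budget: int = 384) -> str:
    """Assemble an FLCC-style prompt: extension, then path, then as much
    of the code above the caret as still fits in the budget."""
    extension = filepath.rsplit(".", 1)[-1]
    header = f"{extension}\n{filepath}\n"
    remaining = budget - len(header)  # assume the header always fits
    # Keep the tail of the preceding code: the lines nearest the caret.
    return header + code_above_caret[-remaining:]

prompt = build_prompt("app/cache.py", "import redis\n\nredis_url = get_redis_url()\nlogger.info(")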

The constrained output is good enough most of the time that it speeds up my workflow a lot. An added bonus is that it often writes a much clearer log than I, a lazy human, would write. Because they’re so concise, I often don’t even remove the logs when I’m done debugging, because they’re now valuable in prod.

Here’s an example from a side project I’m working on. In the first case, the autocomplete is inferring that I actually want to check the Redis URL, a logical conclusion here.

In this second case, it assumes I’d like the shape of the dataframe, also a logical conclusion because profiling dataframes is a very popular use-case for logs.

The coolest part of this feature is that the inference model is entirely local to your machine.

This enforces a few very important requirements on the development team, namely compression and speed.

* The model has to be small enough to bundle with the IDE for desktop memory footprints (already coming in at around ~1GB for the MacOS binary), which eliminates 99% of current LLMs
* And yet, the model has to be smart enough to interpolate lines of code from its small context window
* The local requirement eliminates any model inference engines like vLLM, SGLM, or Ray which implement KV cache optimization like PagedAttention
* It has to be a model that’s fast enough to produce its first token (and all subsequent tokens) extremely quickly
* Finally, it has to be optimized for Python specifically since this model is only available in PyCharm

This is drastically different from the current assumptions around how we build and ship LLMs: that they need to be extremely large, general-purpose models served over proprietary APIs. We find ourselves in a very constrained solution space because we no longer have to do all this other stuff that generalized LLMs have to do: write poetry, reason through math problems, act as OCR, offer code canvas templating, write marketing emails, and generate Studio Ghibli memes.

All we have to do is train a model to complete a single line of code with a context of 384 characters! And then compress the crap out of that model so that it can fit on-device and perform inference.

So how did they do it? Luckily, JetBrains published a paper on this, and there are a bunch of interesting notes. The work is split into two parts: model training, and then the integration of the plugin itself.

The model is trained in PyTorch and then quantized.

First, they train a GPT-2 style Transformer decoder-only model of 100 million parameters, including a tokenizer (aka autoregressive text completion like you’d get from Claude, OpenAI, Gemini, and friends these days). They later changed this architecture to Llama2 after the success of the growing llama.cpp and GGUF community, as well as the better performance of the newer architecture.

* The original dataset they used to train the model was a subset of The Stack, a code dataset across permissive licenses with 6TB of code in 30 programming languages
* The initial training set was “just” 45 GB, and in preparing the data for training, for space constraints, they removed all code comments in the training data specifically to focus on code generation
* They do a neat trick for tokenizing Python (using a BPE-style tokenizer optimized for character pairs rather than bytes, since code is made up of smaller snippets and idioms than natural language text), which is indentation-sensitive, by converting spaces and tabs to start-end tokens, to remove tokens that might be different only because they have different whitespacing (a toy version is sketched after this list). They ended up going with a tokenizer vocab size of 16,384.
* They do another very cool step in training, which is to remove imports, because they find that developers usually add imports in after writing the actual code, a fact that the model needs to anticipate
* They then split into train/test for evaluation and trained for several days on 8 NVIDIA A100 GPUs with a cross-entropy loss objective function
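Here is that whitespace trick as a toy sketch (the token names are invented; the paper’s actual scheme may differ). Differently indented but otherwise identical code produces the same token stream:

def encode_indentation(source: str) -> list[str]:
    """Replace leading whitespace with explicit scope tokens."""
    tokens, levels = [], [0]
    for line in source.splitlines():
        stripped = line.lstrip(" \t")
        if not stripped:
            continue
        indent = len(line) - len(stripped)
        if indent > levels[-1]:
            tokens.append("<SCOPE_IN>")
            levels.append(indent)
        while indent < levels[-1]:
            tokens.append("<SCOPE_OUT>")
            levels.pop()
        tokens.extend([stripped, "<EOL>"])
    return tokens

print(encode_indentation("def f(x):\n    return x\n"))
# ['def f(x):', '<EOL>', '<SCOPE_IN>', 'return x', '<EOL>']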

Because they were able to so clearly focus on the domain, on an understanding of how code inference works, and on a single programming language with its own nuances, they were able to make the training dataset smaller, make the output more exact, and spend much less time and money training the model.

The actual plugin that’s included in PyCharm is implemented in Kotlin; however, it utilizes “an additional native server that is run locally and is implemented in C++” for serving the inference tokens.

In order to prepare the model for serving, they:

* Quantized it from FP32 to INT8, which compressed the model from 400 MB to 100 MB
* Prepared it as a served ONNX RT artifact, which allowed them to use CPU inference, which removed the CUDA overhead tax (later, they switched to using llama.cpp to serve the Llama model architecture for the server)
* Finally, in order to perform inference on a sequence of tokens, they use beam search. Generally, Transformer decoders are trained on predicting the next token in any given sequence, so any individual step will give you a list of tokens along with their ranked probabilities (cementing my long-running theory that everything is a search problem). Since exhaustive decoding is computationally impossible at large numbers of tokens, a number of solutions exist to approximate optimal decoding. Beam search creates a graph of all possible returned token sequences and expands at each node with the highest potential probability, limiting to k possible beams. In FLCC, the max number of beams, k, is 20, and they chose to limit generation to collect only those hypotheses that end with a newline character (a stripped-down version is sketched after this list).
* Additionally, they made use of a number of caching strategies, including initializing the model at 50% of total context - i.e. it starts by preloading ~192 characters of previous code, to give you space to either go back and edit old code, which now no longer has to be put into context, or to add new code, which is then added to the context. That way, if your cursor clicks on code you’ve already written, the model doesn’t need to re-infer.
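A stripped-down version of that decoding loop, assuming a step(tokens, top_k) function that returns the top next-token candidates with their log-probabilities (illustrative only, not JetBrains’ implementation):

import heapq

def beam_search(step, prompt_tokens, k=20, max_len=64):
    """Expand the k most probable hypotheses; collect each one at a newline."""
    beams = [(0.0, prompt_tokens)]       # (cumulative log-prob, token list)
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, tokens in beams:
            for token, logp in step(tokens, top_k=k):
                candidates.append((score + logp, tokens + [token]))
        beams = []
        for score, tokens in heapq.nlargest(k, candidates):
            if tokens[-1] == "\n":       # a complete line: stop expanding it
                finished.append((score, tokens))
            else:
                beams.append((score, tokens))
        if not beams:
            break
    return max(finished or beams, default=None)  # highest-probability completed line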

There are a number of other very cool architecture and model decisions from the paper that are very worth reading and that show the level of care put into the input data, the modeling, and the inference architecture.

The bottom line is that, for me as a user, this experience is extremely thoughtful. It has saved me countless times, both in print-log debugging and in the logs I ship to prod.

In LLM land, there’s both a place for large, generalist models and a place for small models, and while much of the rest of the world writes about the former, I’m excited to also find more applications built with the latter.

...

Read the original on newsletter.vickiboykis.com »

8 229 shares, 27 trendiness

ICE Is Getting Unprecedented Access to Medicaid Data

Immigration and Customs Enforcement officials are getting access to the personal data of nearly 80 million people on Medicaid in order to acquire “information concerning the identification and location of aliens in the United States,” according to an information exchange agreement viewed by WIRED.

The agreement, which is titled “Information Exchange Agreement Between the Centers for Medicare and Medicaid Services and the Department of Homeland Security (DHS) for Disclosure of Identity and Location Information of Aliens,” was signed by CMS officials on Tuesday and first reported by AP News.

Per the agreement, ICE officials will get login credentials for a Centers for Medicare and Medicaid Services (CMS) database containing sensitive medical information, including detailed records about diagnoses and procedures. Language in the agreement says it will allow ICE to access personal information such as home addresses, phone numbers, IP addresses, banking data, and social security numbers. (Later on in the agreement, what ICE is allowed to access is defined differently, specifying just “Medicaid recipients” and their sex, ethnicity, and race but forgoing any mention of IP or banking data.) The agreement is set to last two months. While the document is dated July 9, it is only effective starting when both parties sign it, which would indicate a 60-day span from July 15 to September 15.

The move comes as President Donald Trump’s administration has continued to expand its crackdown on immigration. The administration aims to deport 3,000 people per day, four times as many as were deported in the fiscal year of 2024, according to ICE. Its plans to do so seemingly involve vacuuming up data from across the government. WIRED previously reported that the so-called Department of Government Efficiency (DOGE) and DHS were working on a master database, pulling in data from across DHS and other agencies, in order to surveil and deport immigrants.

Medicaid, state- and federally funded government health care coverage for the country’s poorest, is available only to some non-citizens, including refugees and asylum seekers, survivors of human trafficking, and permanent residents. Some states, like New York, provide Medicaid coverage for children and pregnant people regardless of their immigration status. States report their Medicaid expenditures and data to the federal government, which reimburses them for some of the costs.

“This was never even considered during my five years at DHS working on immigration enforcement,” says John Sandweg, the acting director of ICE during President Barack Obama’s administration. “You want to be careful of a possible chilling effect where people who might apply for benefits and be eligible for benefits—or who seek emergency medical care—won’t do so because they’re worried the information they provide at the hospital could make them a target for immigration action.”

This isn’t the concern of the administration now, spokespeople tell WIRED. “Under the leadership of Dr. [Mehmet] Oz, CMS is aggressively cracking down on states that may be misusing federal Medicaid funds to subsidize care for illegal immigrants,” Andrew Nixon, the director of communications at the Department of Health and Human Services (HHS), tells WIRED. “This oversight effort—supported by lawful interagency data sharing with DHS—is focused on identifying waste, fraud, and systemic abuse. We are not only protecting taxpayer dollars—we are restoring credibility to one of America’s most vital programs. The American people deserve accountability. HHS is delivering it.”

...

Read the original on www.wired.com »

9 206 shares, 19 trendiness

100R — home

Receive monthly updates via our RSS feed, or by signing up to our monthly newsletter.

For a few days, Pino became a land creature, living on stilts, while we scrubbed and re-painted the lower part of the hull. Our propeller had a bit of a wobble, which we hope is now corrected. We also battled with the old wheel quadrant and were finally able to remove it, at least a part of it. Boaters have frequently helped us while we were in boatyards, and we are finally able to pay it forward. We offered advice to those who asked and lent tools to folks who needed them. It felt nice. Teapot’s new bottom has seen water for the first time; the new gelcoat will allow us to take it around into bays for many more years to come.

We spent many June days working on both Turnip Complete (the Uxn book) and the enhanced version of the Victoria to Sitka Logbook, with frequent breaks to enjoy the beautiful places we found ourselves in.

The beginning of our sailing season has been very blustery, allowing for some good sailing, but also often forcing us to wait at anchor for clement weather. Later, we sailed through the San Juan Islands to meet up with some Merveillans on Blakely Island. We are very grateful to be part of a community of such kind, curious, and generous people. The image that was drawn for this month’s update represents cooperation between members of Merveilles.

Book Club: This month we are reading Ill Met By Moonlight by Sarah A. Hoyt, The Silmarillion by J.R.R. Tolkien and Girl’s Last Tour by Tsukumizu.

Oquonie was released on the Playdate Catalog this month! We’d like to thank everyone who sent us photos of their progress in the game; it has been nice to follow along. The game is kind of our first official release on a modern handheld platform, and we’re happy to see that Uxn roms run well on it! It might be one of the first original Playdate games implemented that way?

In other news, Devine started working on a book; the working title is “Turnip Complete”. The goal is to write a complete and stand-alone implementation guide for the Uxn virtual machine and devices, along with some example programs and thoughts about playful computery things. We might have something to show for it come autumn, maybe.

We’ve left Victoria for the summer, and are falling back into the groove of waking up at dusk to catch the tide. We have a quick haul-out lined up, and afterward we’ll be sailing around the Gulf Islands until the fall. We have lots of projects to finish up these next couple of months and can’t wait to share them with you.

We share photos of life aboard throughout the month on our little photo site, if you’re curious to see what daily life aboard Pino is like.

Book Club: This month we are reading Artemis by Andy Weir, Gardening Without Work: For the Aging, the Busy and the Indolent by Ruth Stout and A History of Thinking on Paper by Roland Allen.

The weather is getting warmer, which is perfect for airing out Pino’s lockers, and drying off moldy clothes and tools. Anything stored in the v-berth lockers, below the waterline, suffers from extreme wetness. It is a very, very annoying fact of boat life, but there is really no way to bring good air flow into those spaces. We scrubbed the lockers clean, parted with items we no longer needed, and sent two laptops to the recycler.

In last month’s update, we mentioned Flickjam, a game jam based on Increpare’s Flickgame. We received a total of 27 entries! They’re really fun, and all playable in the browser. Devine’s jam entry is about a very adorable rabbit learning to play the word “rabbit” on a xylophone in Solresol.

Devine spent some time off the computer, skating and folding paper. The paper computer pages have been updated to cover some new ways in which computer emulators can be operated on paper. While on that subject, we highly recommend Tadashi Tokieda’s excellent talk named A world from a sheet of paper.

Another item on Devine’s list was to gradually phase out Uxnasm.c in favor of the self-hosted assembler. We’re not 100% pleased yet, but it is getting closer to retirement.

Starting on May 20th 2025 (1000 PST/PDT), the Playdate Catalogue will include Oquonie. The game is also available on our itch.io store.

The video for Devine’s November 2024 talk A Shining Place Built Upon The Sand is now on YouTube.

Book Club: This month we are reading Banvard’s Folly by Paul Collins, Einstein’s Dreams by Alan Lightman, and we are still making progress on The Goldfinch by Donna Tartt.

In the above illustration, little Ninj is going through a first-aid kit, looking through our supplies to see what needs to be topped off and what is out-of-date. Rek drew a list of suggestions on what to include in both a first-aid and a medical kit for the Rabbit Waves project; we plan to add more items soon (thanks to everyone on Mastodon who suggested additions! It’ll be in the April update).

We will spend the first few days of April participating in Flickjam, making small games in the style of Flickgame, a tool originally made by Increpare, in which the world is navigated by clicking on pixels of different colors to head in different directions. Devine ported Flickgame to Varvara, and wrote a compiler for flick games to uxn roms.

This past month, Rek finished transcribing the entire 15 weeks of the Victoria to Sitka logbook! We have plans to turn it into a book, in the style of Busy Doing Nothing, with tons of extra content and illustrations.

March was a very good month for silly calendar doodles. Our paper calendar is always in view; it documents important events like releases and appointments, as well as food, memes, and other noteworthy things that happened on each day.

Book Club: This month we are still reading The Goldfinch by Donna Tartt, it’s a long book!

On February 14th, we celebrated our 9th year living aboard our beloved Pino. Read a short text by Devine, which expands on what it means to truly be a generalist.

Despite the weather being less-than-ideal, we were able to install our replacement solar panels, and revisit our notes on solar installations.

Devine completed Nebu, a spritesheet editor, as well as a desktop calendar, alongside many other little desktop utilities. Nebu is just over 8.3 kB, a bit less than a blank Excel file.

In times of increasing climate and political instability, it is a good time to get together with your community and make plans for emergencies. Consider reading Tokyo Bosai about disaster preparedness; this elaborate document deals with disasters that occur specifically in Japan, but many of the recommendations are useful regardless. We released a new page on Rabbit Waves with suggestions on what to pack in an Emergency Bag. Remember, every emergency bag is different, and what is essential varies per person.

We also put together a print-it-yourself zine, which combines useful information about Morse code and signal flags. If you have printed the zine and don’t know how to fold it, see Rek’s folding instructions. Speaking of signal flags, we printed some of Rek’s ICS flag drawings.

The nice weather finally arrived this week and we were able to redo Teapot’s gelcoat. This was our first time working with gelcoat; our friends Rik & Kay, who lent us their workspace, were very patient and generous teachers. We will continue the project later when the gelcoat has cured.

Book Club: This month we are reading The Goldfinch by Donna Tartt.

Devine spent time improving the html5 uxn emulator, and thanks to their hard work it is now possible to play Niju, Donsol, and Oquonie directly in the browser on itch.io; the same goes for projects like Noodle and Tote.

It’s been a long time coming, but Oquonie is now playable on Playdate. Rek spent the last week converting the 2-bit assets for Oquonie to 1-bit, because some of the characters and tiles were too difficult to read; now all of the assets work perfectly on monochromatic screens. As an amazing plus, Devine got the music and sounds working perfectly, just like in the original iOS version.

From January 19-25th, we both participated in Goblin Week, an event in which you make goblins every day for a week (whatever that means to you). See the goblin series made by Rek (viewable here in higher rez also) and the one made by Devine (Mastodon).

Pino has earned two new replacement solar panels this month! We have not installed them yet; it is still too cold outside in Victoria (we are expecting snow this week).

We share photos often in our monthly updates, and so Devine spent time building our very own custom photo feed named Days. It is possible to follow the feed with RSS.

Book Club: This month we are reading How Do You Live? by Genzaburo Yoshino and Middlemarch by George Eliot.

See log archives for 2024, 2023, 2022, 2021, 2020, 2019, and 2018.

...

Read the original on 100r.co »

10 192 shares, 4 trendiness

People kept working, became healthier while on basic income: report

Participants in Ontario’s prematurely cancelled basic income pilot project were happier, healthier and continued working even though they were receiving money with no strings attached.

That’s according to a new report titled Southern Ontario’s Basic Income Experience, which was compiled by researchers at McMaster and Ryerson University, in partnership with the Hamilton Roundtable for Poverty Reduction.

The report shows nearly three-quarters of respondents who were working when the pilot project began kept at it despite receiving basic income.

That finding appears to contradict the criticism some levelled at the project, saying it would sap people’s motivation to stay in the workforce or seek employment.

“They continued working,” Wayne Lewchuk, an economics professor at McMaster University who was part of the research team, told As It Happens.

“Many of those who continued working were actually able to move to better jobs, jobs that had a higher hourly wage, that had in general better working conditions, that they felt were more secure.”

The three-year, $150-million program was scrapped by Ontario’s PC government in July. At the time, then-social services minister Lisa MacLeod said the decision was made because the program was failing to help people become “independent contributors to the economy.”

On Wednesday, a spokesperson for Todd Smith, the current minister of children, community and social services, sent CBC a statement saying the government is focused on programs aimed at empowering “unemployed or underemployed” people across the province.

“A research project that included only 4,000 individuals was not an adequate solution for a province where almost two million people are living in poverty,” wrote Christine Wood. “We are focused on solutions for Ontario that are practical and sustainable.”

But the report points to a wide range of positives after just one year.

Its findings are the result of a 70-question, anonymous online survey made available to basic income recipients in Hamilton, Brantford and Brant County. A total of 217 former recipients participated, according to the report.

Forty in-depth interviews with participants were also completed in July 2019.

“I remember one individual who said ‘Look, I was on the edge of suicide. I just felt nobody cared about me. I didn’t know how to make ends meet and now with basic income I feel like I can be part of society,’” Lewchuk recalled.

Nearly 80 per cent of respondents reported better overall health while taking part in the program. More than half said they were using less tobacco and 48 per cent said they were drinking less.

When it came to mental health, 83 per cent of those surveyed described feeling stressed or anxious less often and 81 per cent said they felt more self-confident.

An improved diet, better housing security and less-frequent hospital visits were other outcomes respondents pointed to, along with 66 per cent who said they formed better relationships with family members.

“What became clear is that as people moved to some stability their health improved, their mental health improved, their outlook on life improved,” said Lewchuk. “You have to believe that actually made them more employable.”

That’s in contrast to the situation for participants once the plug was pulled.

“Almost all survey respondents indicated that the pilot’s cancellation forced them to place on hold or abandon certain life plans,” reads the report.

The project worked by recruiting low-income people and couples, offering them a fixed payment with no strings attached that worked out to approximately $17,000 for individuals and $24,000 for couples.

Whatever income participants earned was deducted from their basic income at 50 per cent, meaning once someone hit $34,000 they wouldn’t receive a payment anymore, Lewchuk explained while speaking with As It Happens.
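A quick sketch of that clawback arithmetic, using the article’s rounded figures for a single person:

def basic_income_payment(earned, base=17_000, clawback=0.5):
    """Annual top-up: the base amount minus 50 cents per dollar earned."""
    return max(0.0, base - clawback * earned)

print(basic_income_payment(0))       # 17000.0: no earnings, full payment
print(basic_income_payment(20_000))  # 7000.0: partial payment
print(basic_income_payment(34_000))  # 0.0: the article's break-even point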

The basic income payments were about 15-20 per cent higher than ODSP, said the professor, but the benefits of people visiting the hospital less often and paying more taxes would offset that cost.

“In terms of the net cost to a province, it’s not monumental.”

Lewchuk added that while some people did stop working, about half of them headed back to school in hopes of coming back to a better job.

He acknowledged the report’s findings are only based on short-term effects but, given the project has been shut down, it’s all they have.

“We just don’t have the data to understand what happened in the long run. This is the tragedy of the pilot not running for three years.”

...

Read the original on www.cbc.ca »
