10 interesting stories served every morning and every evening.
“Using encryption on the Internet is the equivalent of arranging an armored car to deliver credit card information from someone living in a cardboard box to someone living on a park bench.” — Gene Spafford
Imagine sending Google an encrypted question and getting back the exact results you wanted — without them having any way of knowing what your question was or what result they returned. The technique to do that is called Fully Homomorphic Encryption (FHE).
The first time I heard about what FHE does, I didn’t believe it was possible. But it is, and it works in real-world systems today.
It allows arbitrary computations on ciphertext (encrypted data) — without needing to decrypt it first. The result of the computation, once decrypted, matches the result as if it had been performed on plaintext (unencrypted data).
Because FHE allows encrypted computation, users can keep their data encrypted the whole time it is on the internet (while sending, during server-side computation, and when receiving results back). This removes the opportunity for a breach to expose plaintext data. Full privacy.
But then, why isn’t it the default like HTTPS? Why isn’t everyone using it? And why haven’t most people even heard of it?
Because it is still not practical for most applications, as it is very slow. But this is changing fast as I’ll explain below.
Current FHE has 1,000x to 10,000x computational overhead compared to plaintext operations. On the storage side, ciphertexts can be 40 to 1,000 times larger than the original. It’s like the internet in 1990—technically awesome, but limited in practice.
Here’s where it gets interesting: FHE algorithms are getting 8x faster each year. Operations that took 30 minutes per bit in 2011 now take milliseconds. See the graph below showing its dramatic speed improvement.
The graph shows a 10^12-fold improvement up to 2014, and the pace of improvement has continued over the last decade. I will go deeper on that later in this article.
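To put that rate in perspective, an 8x yearly speedup compounds dramatically; a quick sanity check in Python:

# At 8x per year, a decade of algorithmic progress alone is ~10^9x
speedup_per_year = 8
years = 10
print(speedup_per_year ** years)  # 1073741824, roughly a billion-fold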
If this dramatic improvement continues, we’re approaching a computational inflection point. In the not-too-distant future, FHE could be fast enough for:
The implications are big. The entire business model built on harvesting user data could become obsolete. Why send your plaintext when another service can compute on your ciphertext?
The internet’s “spy by default” could become “privacy by default”.
The sections below go deeper into each aspect of this article’s claim. You can jump to any section you are particularly curious about, or read them in order, as they follow a logical sequence:
* More on performance improvements: The Moore’s Law of FHE: 8x Faster Every Year
* Connecting the dots: Future of computation is encrypted
All data exists in one of three states:
At Rest (stored on disk)
In Transit (moving across the network)
In Use (being processed in memory)
We have robust solutions for the first two: disk encryption protects data at rest, and TLS/HTTPS protects data in transit.
But in use—when data is loaded into RAM and processed by CPUs—it is decrypted. This is the Achilles’ heel of modern security. Cloud providers, insiders, attackers, or compromised CPUs can read your plaintext data.
Think about every major data breach you’ve heard of: they were failures of encryption at rest or in use. The moment data gets loaded into memory for processing, it becomes vulnerable.
FHE fixes this. Data can stay encrypted through its entire lifecycle on the cloud, which we can call “Full-Privacy Computing”.
Picture an internet where your data is always encrypted:
* Your device never sends plaintext to any server
* Servers compute directly on your ciphertext
* Only you can decrypt the results
Here’s what a private ChatGPT session might look like:
# Your device
pk, sk = keygen() # pk: public key, sk: secret (private) key
enc_prompt = encrypt("Why did the dev go to therapy?", pk)
server.send(enc_prompt, pk)
# OpenAI’s servers (they can never decrypt and see your prompt)
enc_prompt, pk = client.receive()
enc_llm = encrypt(LLM_MODEL, pk)
enc_answer = enc_llm.run(enc_prompt)
client.send(enc_answer)
# Your device again
enc_answer = server.receive()
answer = decrypt(enc_answer, sk)
print(answer)
# Output: "Because of too many unresolved dependencies!"
OpenAI computes the correct response, but can never see your question or their answer in plaintext.
The term “homomorphic” comes from Greek: “homo” (same) + “morphe” (form). It means having a structure-preserving map between two algebraic structures. FHE is homomorphic because operations on encrypted data can be mapped (i.e. mirrored) onto those on the original data.
A homomorphism in category theory is often shown by a commutative diagram, where you can go from one point to another by interchanging the order of operations. In the diagram below for FHE, you can go from (a, b) to E(a*b) in two separate ways.
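For a concrete instance of such a diagram, textbook (unpadded) RSA is multiplicatively homomorphic: encrypting then multiplying lands on the same point as multiplying then encrypting. A toy sketch with deliberately tiny parameters:

# Textbook RSA: E(x) = x^e mod n, so E(a) * E(b) = (a*b)^e = E(a*b) mod n
n, e = 3233, 17  # n = 61 * 53; real RSA uses much larger primes

def E(x):
    return pow(x, e, n)

a, b = 7, 11
assert (E(a) * E(b)) % n == E(a * b)  # both paths of the diagram meet at E(a*b)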
Let’s look at an equivalent diagram from a client/server perspective, with a function f(x).
One helpful analogy for FHE is the Fourier transform, which converts a signal from the time domain into the frequency domain. Computations performed in the frequency domain are equivalent to those in the time domain and vice versa, meaning you can compute in either domain and still get the same result. In a similar way, FHE operates between the plaintext and ciphertext domains: transformations done in the plaintext domain are equivalent to those in the ciphertext domain, and vice versa.
To realize this transformation, FHE uses lattice-based cryptography—imagine a multidimensional grid of points extending infinitely in all directions.
At the heart of lattice-based cryptography are problems that are believed to be extremely hard to solve—even for quantum computers. Two of the most well-known examples are:
* Shortest Vector Problem (SVP): Find the shortest nonzero vector in the lattice
* Closest Vector Problem (CVP): Find the lattice point nearest to any given point
In 2D, these look trivial. But add 1,000,000 dimensions? They become so hard that even quantum computers are believed unable to crack them efficiently. This makes FHE inherently quantum-resistant, a very important property in preparing for a possible quantum-computing future.
Bonus: Lattice operations are highly parallelizable, which means they benefit enormously from modern GPUs and specialized hardware acceleration.
Lattice-based FHE schemes rely on the Learning With Errors (LWE) or Ring-LWE problem. At a high level, LWE looks like this: pick a random matrix A and a secret vector s, sample a small noise vector e from a narrow distribution, and publish the pair (A, b), where b = A*s + e.
The hard problem: Given the public key (A, b), find the secret key s.
Notice that A*s is linear, so it lands exactly on a lattice point. Adding the noise e makes the result b = A*s + e drift off that lattice point. (The noise is sampled from a narrow distribution, so b doesn’t drift so far that it ends up closer to another lattice point.)
So the problem becomes: if an attacker wants to find the secret key s from the public key (A, b), they need to find the lattice point closest to b, which is exactly the Closest Vector Problem (CVP). And CVP is believed to be NP-hard and quantum-resistant.
To sum up, encryption works because of noise. And encryption remains secure because decoding a lattice point with noise is hard.
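To make this concrete, here is a toy LWE instance in Python (variable names follow the text; the parameters are illustrative and far too small to be secure):

import numpy as np

q = 97           # modulus
n_dim, m = 4, 8  # secret dimension, number of samples

rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(m, n_dim))  # public random matrix
s = rng.integers(0, q, size=n_dim)       # secret key
e = rng.integers(-2, 3, size=m)          # small noise from a narrow distribution

b = (A @ s + e) % q  # public key is (A, b)

# Without e, recovering s from (A, b) is plain linear algebra.
# With e, it becomes the (believed-hard) LWE problem.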
While the noise is the trick here, it also causes a problem during homomorphic operations. During homomorphic addition, the noises of the two ciphertexts add together; during multiplication, they multiply. If the noise gets too big, decryption fails — you get garbage instead of the result.
Since noise growth is unmanageable under multiplication, the noise-based HE schemes before Craig Gentry’s 2009 breakthrough allowed only a limited number of multiplications (hence they were not Turing-complete). For that reason, they were called Somewhat Homomorphic Encryption.
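A toy model of why multiplication is the problem (the numbers are arbitrary; real schemes track a “noise budget” in a similar spirit):

# Multiplying two ciphertexts roughly multiplies their noises,
# so the noise explodes after only a handful of multiplications.
noise, budget, muls = 3, 2**32, 0
while noise * noise < budget:
    noise *= noise
    muls += 1
print(muls)  # 4 -- additions, whose noises merely add, would allow billions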
In 2009, Craig Gentry invented the first HE scheme that allows an unlimited number of additions and multiplications combined. Hence it is Turing-complete, and hence it is called Fully Homomorphic Encryption. Note that any kind of computation can be represented as additions and multiplications; indeed, at the CPU/GPU level all computations reduce to additions and multiplications.
The main piece that makes FHE work is a method called “bootstrapping”. Bootstrapping reduces the noise whenever it gets too big. In that way, it guarantees that noise doesn’t disrupt decryption, no matter how many multiplications are performed.
The way bootstrapping works is very clever. It’s probably called “bootstrapping” because it “refreshes” the ciphertext under another key. It cleverly switches the ciphertext from one key to another, as follows:
Take the ciphertext ctx, which was originally encrypted under pk_orig and is decryptable with sk_orig
Encrypt sk_orig under pk_new, obtaining the bootstrapping key ctx_sk (Yes! Encrypt the key with another key. Creative!)
Run (homomorphically) the decrypt operation on ctx using ctx_sk, obtaining ctx_bootstrap
Notice that the decryption procedure of FHE is itself a computation. So it can be used for homomorphic computations too!
The new ctx_bootstrap is a fresh ciphertext of the same plaintext, now under the new key, with its noise reset. Note that it gained some fixed noise from the additions and multiplications performed during the homomorphic decryption, but this noise is bounded.
Another thing to keep in mind: bootstrapping is the performance bottleneck of modern FHE schemes, though new algorithms reduce its computational overhead every year.
There are many more details to this. I listed some resources on FHE Bootstrapping, though they don’t cover everything; one needs to read through the FHE Reference Library. The topics around bootstrapping are quite complicated, and as far as I understand, it is the focal point of FHE algorithmic innovation. It’s really important to understand it at least conceptually, because it’s the root of FHE’s slowness, while also being the reason FHE works at all.
Other topics around FHE you need to be aware of are relinearization and modulus switching. I’ll explain them only by intuition here. For deeper math, I suggest Vitalik’s post as a starting point.
A ciphertext is linear in the secret key: it decrypts as a + b⋅s.
After multiplication, the result becomes quadratic in the secret key. To show that, multiply two ciphertexts (a₁ + b₁⋅s) and (a₂ + b₂⋅s):
* (a₁ + b₁⋅s)(a₂ + b₂⋅s) = a₁a₂ + (a₁b₂ + a₂b₁)⋅s + b₁b₂⋅s²
* Notice the s² term, which makes ct_mul quadratic in the secret key.
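You can check that expansion symbolically (a quick sanity check, ignoring the modulus for clarity):

import sympy as sp

s, a1, b1, a2, b2 = sp.symbols("s a1 b1 a2 b2")
dec = lambda a, b: a + b * s  # a ciphertext decrypts as a + b*s
product = sp.expand(dec(a1, b1) * dec(a2, b2))
print(sp.collect(product, s))  # a1*a2 + (a1*b2 + a2*b1)*s + b1*b2*s**2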
Relinearization uses additional public key material called “relinearization keys” to eliminate the higher-degree terms. The process:
Takes the quadratic ciphertext, including its s² component
Uses relinearization keys to “convert” it back into linear terms
Produces a new linear ciphertext (c₀′, c₁′) that decrypts to the same value
Modulus switching is used to manage noise growth: it’s a trick that reduces the noise by decreasing the modulus of the ciphertext.
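The rough intuition, in numbers (illustrative values only): rescaling a ciphertext from modulus q down to a smaller q′ shrinks the absolute noise by the same factor, and keeping the noise small in absolute terms slows its multiplicative growth in later operations.

q, q_small = 2**40, 2**30
noise = 2**25
rescaled = noise * q_small // q  # noise scales by q'/q (plus a small rounding error)
print(rescaled)                  # 2**15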
I had a couple of conversations with ChatGPT to understand some mathematical details of FHE better, and wrote down what I learned from them here.
What do we mean by a Homomorphic Encryption (HE) Scheme?
An HE scheme is a cryptographic construction that defines how encryption, decryption, and homomorphic operations are performed (e.g. BGV, CKKS, TFHE).
Homomorphic Encryption schemes are classified by the types and number of operations they support.
* Partially Homomorphic Encryption (PHE): Supports only one operation (e.g., addition in Paillier, multiplication in RSA).
* Somewhat Homomorphic Encryption (SHE): Supports both addition and multiplication, but the number of multiplications allowed is limited.
* Fully Homomorphic Encryption (FHE): Supports unlimited additions and multiplications. Turing-complete. Manages noise by periodically reducing it via bootstrapping.
One of the best ways to really understand a computational concept is building it from scratch, minimally.
For the purposes of this blog post, building a Fully Homomorphic Encryption scheme would be very lengthy. Instead, we will write Paillier HE, which is shorter and easier to understand for building intuition. Paillier is a Partially Homomorphic Encryption scheme, meaning that it doesn’t support all operations (hence not Fully HE). It only supports addition (hence additive homomorphism). We’ll follow a typical HE flow:
fhe-toy-implementations
import sympy, random

def generate_keypair(bit_length=512):
    # Two large random primes; n = p * q is the public modulus
    p = sympy.nextprime(random.getrandbits(bit_length))
    q = sympy.nextprime(random.getrandbits(bit_length))
    n = p * q
    # g = n + 1 is the standard simplified choice of generator
    g = n + 1
    # lambda = phi(n); with g = n + 1, mu is simply lambda^-1 mod n
    lambda_ = (p - 1) * (q - 1)
    mu = sympy.mod_inverse(lambda_, n)
    return (n, g), (lambda_, mu)

def encrypt(m, public_key):
    n, g = public_key
    # Fresh randomness r makes Paillier encryption probabilistic
    r = random.randint(1, n - 1)
...
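The excerpt cuts off mid-function. For intuition’s sake, here is a minimal sketch of how the remaining pieces typically look in the standard Paillier construction (my completion, not the author’s exact code):

def encrypt(m, public_key):
    n, g = public_key
    r = random.randint(1, n - 1)
    n2 = n * n
    # c = g^m * r^n mod n^2
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c, public_key, private_key):
    n, _ = public_key
    lambda_, mu = private_key
    n2 = n * n
    # L(x) = (x - 1) / n applied to c^lambda mod n^2, then times mu mod n
    l = (pow(c, lambda_, n2) - 1) // n
    return (l * mu) % n

def add_encrypted(c1, c2, public_key):
    n, _ = public_key
    # Multiplying ciphertexts adds the plaintexts: E(m1) * E(m2) = E(m1 + m2)
    return (c1 * c2) % (n * n)

# Demo: addition performed entirely on ciphertexts
pk, sk = generate_keypair()
c = add_encrypted(encrypt(20, pk), encrypt(22, pk), pk)
assert decrypt(c, pk, sk) == 42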
Read the original on bozmen.io »
One of the biggest fallacies in coding is that dependencies have zero downsides. They’re treated as free functionality you don’t have to write yourself. Wow, what a deal!
* They may require a significant time investment to learn to use. Often they’re so large and complex that simply writing the functionality yourself is faster than learning them. Sometimes it’s easier to write it yourself than to even install the dependency…
* Their breaking changes can trigger expensive re-writes of your own code to handle a new interface.
* You need to ensure they end up on your clients’ machine.
Let us consider this last point; how many absurdly complicated containerisation or bundling setups do we encounter in our work because someone does not want to “reinvent the wheel” and instead opts to use large, complex dependencies? How many times have people taken the time to appreciate that they’ve re-invented an entirely different wheel that has nothing to do with their app’s core functionality, just to deploy and run the bloody thing?
Instructive here is TigerBeetle, a financial database written entirely in vanilla Zig:
TigerBeetle has a “zero dependencies” policy, apart from the Zig toolchain. Dependencies, in general, inevitably lead to supply chain attacks, safety and performance risk, and slow install times. For foundational infrastructure in particular, the cost of any dependency is further amplified throughout the rest of the stack.
Similarly, tools have costs. A small standardized toolbox is simpler to operate than an array of specialized instruments each with a dedicated manual. Our primary tool is Zig. It may not be the best for everything, but it’s good enough for most things. We invest into our Zig tooling to ensure that we can tackle new problems quickly, with a minimum of accidental complexity in our local development environment.
I am aware this post is a reductio ad absurdum magnet. “What, so you make your own silicon wafers?”, etc, etc. (Programmers of a certain generation might bring up a webcomic and butterflies). Ultimately, we all depend on something. But not all dependencies are created equal. Allow me to introduce my Framework for Evaluating Dependencies.
We have five categories:
Dependency peddlers typically only talk about ergonomics, and ignore other criteria.
With that in mind, let’s evaluate some dependencies:
I will leave this as an exercise for the reader!
But remember: think critically, evaluate the costs as well as the benefits, and choose wisely.
...
Read the original on lewiscampbell.tech »
As an exercise in syscall golf, I wrote an implementation of ls(1) which uses my IO library, ourio, to perform as much of the IO as possible. What I ended up with is something that is faster than any version of or alternative to ls I tested, and it also performs an order of magnitude fewer syscalls. I’m calling it lsr. Let’s start with the benchmarks, then we’ll see how we got there.
Data gathered with hyperfine on a directory of n plain files.
Data gathered with strace -c on a directory of n plain files. (Lower is better)
How we got there
Let’s start with how lsr works. To list directory contents, we basically have 3 stages to the program:
All of the IO involved happens in the second step. Wherever possible, lsr utilizes io_uring to pull in the data it needs. To get to that point, it means that we open the target directory with io_uring; if we need local time, user data, or group data, we open (and read) those files with io_uring. We do all stat calls via io_uring, and as needed we do the equivalent of an lstat via io_uring. In practice, this means that the number of syscalls we make should be drastically smaller than in equivalent programs, because we are able to batch the stat syscalls. The results clearly show this: lsr has at least an order of magnitude fewer syscalls than its closest equivalent, uutils ls.
We also use the zig stdlib StackFallbackAllocator. This lets lsr allocate the memory it needs up front, but fall back to a different allocator when the fixed allocation is exhausted. We allocate 1MB up front, which is more than enough for typical usage. This further reduces syscalls by reducing mmap usage.
As a result of working directly with io_uring, we also bypass several libc-related pitfalls. Namely, we have no dynamic linking: ls has considerable overhead in loading libc and related libraries, but it also gains locale support, a feature lsr does not boast. Despite being statically linked, lsr is still smaller than GNU ls: 138.7KB for ls vs 79.3KB for lsr when built with ReleaseSmall.
I have no idea what lsd is doing. I haven’t read the source code, but from viewing its strace, it is calling clock_gettime around 5 times per file. Why? I don’t know. Maybe it’s doing internal timing of steps along the way?
Sorting ends up being a massive part of the workload. I suspect this is where uutils ls is getting slowed down, since it is doing pretty well on a syscall basis. lsr spends about 30% of its runtime sorting; the rest is the IO loop.
This ended up being a pretty fun project to write, and didn’t take too much time either. I am shocked at how much io_uring can be used to reduce syscalls…ls is a pretty basic example but you can only imagine how much of an effect this would have on something like a server.
Also - I’m using tangled.sh for this project. They have a really cool idea, and I want to see how the PR workflow is so…if you have any bugs or changes, please visit the repo. All you need is an atproto account + app password. I suspect more icons will be needed, feel free to make an issue for icon requests!
...
Read the original on rockorager.dev »
lsr uses the zig build system. To install, you will need zig 0.14.0. To install for the local user (assuming $HOME/.local/bin is in $PATH), run:
zig build -Doptimize=ReleaseSmall --prefix $HOME/.local
which will install lsr and the associated manpage appropriately. Replace
$HOME/.local with your preferred installation directory.
lsr [options] [path]
--help Print this message and exit
--version Print the version string
DISPLAY OPTIONS
-1, --oneline Print entries one per line
-a, --all Show files that start with a dot (ASCII 0x2E)
-A, --almost-all Like --all, but skips implicit "." and ".." directories
-C, --columns Print the output in columns
--color=WHEN When to use colors (always, auto, never)
--group-directories-first Print all directories before printing regular files
--hyperlinks=WHEN When to use OSC 8 hyperlinks (always, auto, never)
--icons=WHEN When to display icons (always, auto, never)
-l, --long Display extended file metadata
-r, --reverse Reverse the sort order
-t, --time Sort the entries by modification time, most recent first
Benchmarks were all gathered on the same set of directories, using the latest releases of each program (versions are shown below). All benchmarks run on Linux (because io_uring). lsr does work on macOS/BSD as well, but will not see the syscall batching benefits that are available with io_uring.
Data gathered with hyperfine on a directory of n plain files.
Data gathered with strace -c on a directory of n plain files. (Lower is better)
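For reproduction, measurements of this shape can be gathered with invocations along these lines (the directory path is a placeholder):

hyperfine --warmup 3 'lsr /tmp/files' 'ls /tmp/files'
strace -c lsr /tmp/files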
...
Read the original on tangled.sh »
A city fire marshal used FDNY’s access to a facial recognition software to help NYPD detectives identify a pro-Palestinian protester at Columbia University, circumventing policies that tightly restrict the Police Department’s use of the technology.
Details of the arrangement emerged in a recent decision by a Manhattan criminal court judge and in a lawsuit seeking information from the FDNY filed this month by the Legal Aid Society, which represented the protester, Zuhdi Ahmed, now a 21-year-old pre-med CUNY student going into his senior year of college.
Police identified Ahmed after searching for a young man accused of hurling what they said was a rock at a pro-Israeli protester during an April 2024 skirmish at Columbia. Thanks to the FDNY’s assistance and its use of Clearview AI software, the police were able to identify Ahmed.
The FDNY began using Clearview AI in December 2022 and has an annual contract with the company, according to a spokesperson.
The fire marshal also accessed documents from the Department of Motor Vehicles that are typically unavailable to the police, court records show.
Manhattan District Attorney Alvin Bragg charged Ahmed with a felony, assault in the third degree as a hate crime, which was later reduced to a misdemeanor of second degree aggravated harassment. A criminal court judge in June dismissed the case against Ahmed and in a lengthy ruling raised red flags about government surveillance and practices that ran afoul of law enforcement’s own policies.
“Where the state routinely gathers, searches, seizes, and preserves colossal amounts of information, transparency must remain a touchstone, lest fairness be lost,” the judge, Valentina Morales, wrote.
Clearview AI — in wide use by law enforcement agencies nationally, including the Department of Justice — matches photos uploaded to its system with billions of images in a database sourced from social media and other websites. The NYPD has used the technology in the past but now forbids its use under a 2020 facial recognition policy that limits image searches to arrest and parole photos.
A subsequent city law, called the POST Act, requires the NYPD to report publicly on its use of and policies regarding surveillance technologies. The City Department of Investigation has found the NYPD has not consistently complied. Reached by THE CITY, Council members indicated they were working on new legislation to close loopholes in the POST Act.
Social media photos the FDNY used to identify Ahmed included pictures at a high school formal, a school play and his high school graduation.
Ahmed, a Westchester resident who is Palestinian and grew up going to protests with his family, said he has received hateful mail and online messages since his arrest. He said he never thought photos from his teenage years could be used in this way.
“It’s something straight out of a dystopian, futuristic movie,” he said. “It’s honestly kind of scary to think about what people are capable of in terms of surveillance.”
“The NYPD keeps using these incredibly disturbing companies to spy on New Yorkers, while hiding that surveillance from the public and violating New York City law in the process,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project. “The FDNY is clearly being complicit in enabling these NYPD abuses.”
The NYPD referred THE CITY to FDNY for comment. An FDNY spokesperson said in a statement that approved fire marshals have access to Clearview AI and work closely with the NYPD to investigate crimes.
“This small group of elite law enforcement agents use facial recognition software as one of the many tools available to conduct critical fire investigations,” the spokesperson said. “We always follow all local, state and federal laws.”
Shane Ferro, Digital Forensics Unit staff attorney at Legal Aid, who had represented Ahmed, sought to learn more about facial recognition technology operated by the FDNY, but requests made under the New York Freedom of Information Law, or FOIL, went nowhere. Legal Aid filed a lawsuit last week seeking to obtain the information.
The judge dismissed the case precisely because of the serious questions surrounding how Ahmed was identified, Ferro noted.
Still unknown is whether the NYPD’s reliance on FDNY to circumvent the police department’s Clearview ban goes beyond this one instance.
“The way that the NYPD used FDNY to access broader and even more unreliable facial recognition technologies — in this case, to identify a protester — brings up questions about the NYPD following its own policies, the NYPD complying with the POST Act,” she said, adding that Ahmed’s saga “brings up questions about the First Amendment and the NYPD’s prohibition on using facial recognition technology to identify people at political rallies.”
The FDNY’s use of Clearview on the NYPD’s behalf emerged in emails disclosed as part of the case against Ahmed.
The incident at the center of the case occurred near an encampment at Columbia University by pro-Palestine demonstrators. Students were protesting Israel’s war in Gaza, launched in response to Hamas’ Oct. 7, 2023, attack on Israel, in which 1,200 Israelis were killed and 240 hostages were taken.
The Israeli military offensive has since killed more than 55,000 Palestinians, according to the Gaza Health Ministry, and devastated the strip.
Both former Columbia University President Minouche Shafik and Mayor Eric Adams faced pressure to quell the protests. On April 17, 2024, NYPD officers showed up at the encampment at Shafik’s request and made over 100 arrests. Students created a second encampment, and the highly militarized NYPD presence continued on campus until graduation. Cops subsequently used stun grenades, fired a gun inside student-occupied Hamilton Hall and flew drones over campus.
At Columbia, pro-Israel students often showed up to encampment events and demonstrations to counter-protest.
That was true on Saturday, April 20, 2024, when the encampment held a film screening and hosted teach-ins.
Columbia student Jonathan Lederer arrived on campus that night with his twin brother. They stood with a group behind those gathered to watch the films and waved Israeli flags, videos posted to social media show. Music played loudly out of a speaker.
Later, someone stole one of the flags and ran off, and another person tried to light it on fire. Lederer detailed his experience in The Free Press, saying he was hit in the face with objects someone threw. He later told NY1 other protesters “threw rocks at my face.”
Videos posted to social media, blurry at times, show a white object lobbed at Lederer, who appears to toss it away from him. The person who threw it flipped him the bird.
Lederer, who did not respond to emails and a call from THE CITY seeking comment, in May told the Manhattan DA’s office he wasn’t sure whether a laceration on the side of his face was from being hit with an object or from acne.
Ahmed declined to answer questions from THE CITY about throwing an object, but said he had been at Columbia to attend a jazz event when he’d heard chanting and walked over to the protest.
The NYPD began a search for the person who threw the object.
On June 3, 2024, the agency posted a photo of Ahmed on its Crime Stoppers Instagram account, saying he was “WANTED for Hate Crime Assault.” The posted photo was a still from a video taken at a protest in Central Park in May 2024.
Ahmed said he has no recollection of the protest or that day, but was “completely bewildered” to see his photo online with accusations he said were false.
The same day the Instagram post went up, an FDNY fire marshal emailed an NYPD detective.
He went on to say he ran the Instagram photo “through our facial.” He said he couldn’t find the suspect’s name, but perhaps some photos he was sending along could “help with an ID.”
He attached to the email screenshots taken from Clearview AI with photos of Ahmed: one shows him at a formal event, his arm around a friend; in another, he receives his diploma at his high school graduation; and in a third, he stands with fellow graduates in their burgundy gowns. In the graduation photos, Ahmed wears a stole around his neck printed with the Palestinian flag — following a tradition that all his family members have done at graduations, he said.
The fire marshal wrote, “Not too sure what the scarf says but maybe related to Palestine?”
A different NYPD detective responded with thanks. Shortly after, the fire marshal sent links to Clearview AI face search results, an archive of school play photos and another to an archive of high school formal photos. He said he couldn’t find associated social media but offered to get a driver’s license photo for the detective. “We have access to that,” he wrote.
A minute later, the detective sent the fire marshal Ahmed’s name, date of birth and driver’s license number. Within five minutes, the fire marshal replied, “Bingo.”
NYPD detectives cannot access DMV records without permission from supervisors.
The NYPD took Ahmed’s driver’s license photo and included a digitally altered version of it in an identification array presented to Lederer, who picked Ahmed’s photo from the lineup. The photo had been edited to change the shape of Ahmed’s neck.
On June 13, the NYPD arrested and arraigned Ahmed. The following day, the fire marshal again emailed the detective: “Saw the news. Good work. Glad you grabbed him.”
The detective responded the next day: “Yea that’s to you, I appreciate the help.”
A few hours later, the fire marshal emailed back, “All good bro happy to help. Don’t hesitate to reach out again if you need anything.”
The NYPD would not have identified Ahmed but for the FDNY’s Clearview AI search and accessing the DMV photo, the judge indicated in her ruling. She wrote it was “evident that the investigatory steps described in the emails clearly contravene official NYPD policy concerning the use of facial recognition.”
NYPD may only conduct facial recognition searches within a limited repository of arrest and parole photos.
To conduct searches outside that repository, officers must get permission from the chief of department, chief of detectives or the deputy commissioner of intelligence. Employees who misuse facial recognition technology may face administrative or criminal penalties, NYPD policy states.
But in this case, FDNY’s use of Clearview’s facial recognition software trawled the Internet and yielded hundreds of matches.
Privacy advocates said they would like to see the POST Act expanded to apply to law enforcement officials who work for agencies other than the NYPD. They say that would provide insight into how other agencies are using surveillance technology, like how FDNY used it to assist the NYPD.
“It should not be a guessing game, who’s using this sort of technology and who’s doing business with a vendor this controversial,” Cahn said.
In April, the Council approved three additional bills to strengthen POST Act reporting and accountability requirements.
They include a law that requires tracking intergovernmental data sharing. But that only covers information the NYPD shares with other agencies, not information agencies provide to the NYPD.
Councilmember Julie Won (D-Queens), who sponsored one of the recently passed bills expanding the POST Act, said she and her colleagues are drafting legislation to close the loophole. The new bill would prohibit city agencies from using surveillance technologies on behalf of law enforcement, and mandate agencies disclose their use of surveillance technology for any reason.
“No matter what they’re using it for, the public deserves to know,” Won said.
Other Council members expressed alarm over the revelation about FDNY’s use of Clearview AI.
“This is a clear loophole we didn’t necessarily anticipate,” said Councilmember Crystal Hudson (D-Brooklyn).
Council Majority Leader Amanda Farías (D-The Bronx) called the FDNY’s use of Clearview AI on behalf of NYPD “deeply concerning” and exposed “a troubling gap in our current oversight laws.”
Councilmember Jennifer Gutiérrez (D-Brooklyn), chair of the technology committee said, “What happened here is a warning shot: without clear checks and oversight, city agencies are using powerful surveillance tools like facial recognition and AI with no accountability, no transparency, and no regard for due process.”
Councilmember Joann Ariola (R-Queens), who chairs the Council’s fire committee, disagreed, saying the FDNY was within its purview as a law enforcement agency to share information with the NYPD, but that the case “may require a deeper examination at all levels.”
As for Ahmed, he said the judge dropping the case against him brought him “the greatest relief” of his life. He said he felt like the initial hate crime charge was “an exploitation of laws that are meant to protect us, protect minorities, protect any ethnic group.”
Douglas Cohen, a spokesperson for DA Bragg said: “The office conducted a thorough investigation into this matter — interviewing multiple witnesses, analyzing available video surveillance and reviewing medical records. When that investigation determined we could not prove the legal elements of the top count beyond a reasonable doubt, we moved to dismiss the charge.”
Ahmed is now focused on recovering from the emotional and mental toll the ordeal placed on him and his family.
In December, he earned his certification as an emergency medical technician and plans to apply to medical school after college. He recently read a novel, ‘No Longer Human’ by Osamu Dazai, and related to the story.
“Essentially, the book is about someone that gets detached from society, and he’s basically isolated,” Ahmed said. “For the past year, I was scared of all the accusations, I was scared of what society thought of me.”
...
Read the original on www.thecity.nyc »
Participants with a potentially life-threatening cancer diagnosis and a DSM-IV diagnosis that included anxiety and/or mood symptoms were recruited through flyers, internet, and physician referral. Of 566 individuals who were screened by telephone, 56 were randomized. Figure 1 shows a CONSORT flow diagram. Table 1 shows demographics for the 51 participants who completed at least one session. The two randomized groups did not significantly differ demographically. All 51 participants had a potentially life-threatening cancer diagnosis, with 65% having recurrent or metastatic disease. Types of cancer included breast (13 participants), upper aerodigestive (7), gastrointestinal (4), genitourinary (18), hematologic malignancies (8), other (1). All had a DSM-IV diagnosis: chronic adjustment disorder with anxiety (11 participants), chronic adjustment disorder with mixed anxiety and depressed mood (11), dysthymic disorder (5), generalized anxiety disorder (GAD) (5), major depressive disorder (MDD) (14), or a dual diagnosis of GAD and MDD (4), or GAD and dysthymic disorder (1). Detailed inclusion/exclusion criteria are in the online Supplementary material. The Johns Hopkins IRB approved the study. Written informed consent was obtained from participants.
A two-session, double-blind cross-over design compared the effects of a low versus high psilocybin dose on measures of depressed mood, anxiety, and quality of life, as well as measures of short-term and enduring changes in attitudes and behavior. Participants were randomly assigned to one of two groups. The Low-Dose-1st Group received the low dose of psilocybin on the first session and the high dose on the second session, whereas the High-Dose-1st Group received the high dose on the first session and the low dose on the second session. The duration of each participant’s participation was approximately 9 months (mean 275 days). Psilocybin session 1 occurred, on average, approximately 1 month after study enrollment (mean 28 days), with session 2 occurring approximately 5 weeks later (mean 38 days). Data assessments occurred: (1) immediately after study enrollment (Baseline assessment); (2) on both session days (during and at the end of the session); (3) approximately 5 weeks (mean 37 days) after each session (Post-session 1 and Post-session 2 assessments); (4) approximately 6 months (mean 211 days) after Session 2 (6-month follow-up).
The study compared a high psilocybin dose (22 or 30 mg/70 kg) with a low dose (1 or 3 mg/70 kg) administered in identically appearing capsules. When this study was designed, we had little past experience with a range of psilocybin doses. We decreased the high dose from 30 to 22 mg/70 kg after two of the first three participants who received a high dose of 30 mg/70 kg were discontinued from the study (one from vomiting shortly after capsule administration and one for personal reasons). Related to this decision, preliminary data from a dose-effect study in healthy participants suggested that rates of psychologically challenging experiences were substantially greater at 30 than at 20 mg/70 kg (Griffiths et al., 2011). The low dose of psilocybin was decreased from 3 to 1 mg/70 kg after 12 participants because data from the same dose-effect study showed significant psilocybin effects at 5 mg/70 kg, which raised concern that 3 mg/70 kg might not serve as an inactive placebo.
Expectancies, on the part of both participants and monitors, are believed to play a large role in the qualitative effects of psilocybin-like drugs (Griffiths et al., 2006; Metzner et al., 1965). Although double-blind methods are usually used to protect against such effects, expectancy is likely to be significantly operative in a standard drug versus placebo design when the drug being evaluated produces highly discriminable effects and participants and staff know the specific drug conditions to be tested. For these reasons, in the present study a low dose of psilocybin was compared with a high dose of psilocybin, and participants and monitors were given instructions that obscured the actual dose conditions to be tested. Specifically, they were told that psilocybin would be administered in both sessions, the psilocybin doses administered in the two sessions might range anywhere from very low to high, the doses in the two sessions might or might not be the same, sensitivity to psilocybin dose varies widely across individuals, and that at least one dose would be moderate to high. Participants and monitors were further strongly encouraged to try to attain maximal therapeutic and personal benefit from each session.
Drug sessions were conducted in an aesthetic living-room-like environment with two monitors present. Participants were instructed to consume a low-fat breakfast before coming to the research unit. A urine sample was taken to verify abstinence from common drugs of abuse (cocaine, benzodiazepines, and opioids including methadone). Participants who reported use of cannabis or dronabinol were instructed not to use for at least 24 h before sessions. Psilocybin doses were administered in identically appearing opaque, size 0 gelatin capsules, with lactose as the inactive capsule filler. For most of the time during the session, participants were encouraged to lie down on the couch, use an eye mask to block external visual distraction, and use headphones through which a music program was played. The same music program was played for all participants in both sessions. Participants were encouraged to focus their attention on their inner experiences throughout the session. Thus, there was no explicit instruction for participants to focus on their attitudes, ideas, or emotions related to their cancer. A more detailed description of the study room and procedures followed on session days is provided elsewhere (Griffiths et al., 2006; Johnson et al., 2008).
A description of session monitor roles and the content and rationale for meetings between participants and monitors is provided elsewhere (Johnson et al., 2008). Briefly, preparation meetings before the first session, which included discussion of meaningful aspects of the participant’s life, served to establish rapport and prepare the participant for the psilocybin sessions. During sessions, monitors were nondirective and supportive, and they encouraged participants to “trust, let go and be open” to the experience. Meetings after sessions generally focused on novel thoughts and feelings that arose during sessions. Session monitors were study staff originally trained by William Richards PhD, a clinical psychologist with extensive experience conducting studies with classic hallucinogens. Monitor education varied from college graduate to PhD. Formal clinical training varied from none to clinical psychologist. Monitors were selected as having significant human relations skills and self-described experience with altered states of consciousness induced by means such as meditation, yogic breathing, or relaxation techniques.
After study enrollment and assessment of baseline measures, and before the first psilocybin session, each participant met with the two session monitors (staff who would be present during session days) on two or more occasions (mean of 3.0 occasions for a mean total of 7.9 hours). The day after each psilocybin session participants met with the session monitors (mean 1.2 hours). Participants met with monitors on two or more occasions between the first and second psilocybin session (mean of 2.7 occasions for a mean total of 3.4 hours) and on two or more occasions between the second session and 6-month follow-up (mean of 2.5 occasions for a mean total of 2.4 hours). Preparation meetings, the first meeting following each session, and the last meeting before the second session were always in person. For the 37 participants (73%) who did not reside within commuting distance of the research facility, 49% of the Post-session 1 meetings with monitors occurred via telephone or video calls.
The questionnaire included three final questions (see Griffiths et al. 2006 for more specific wording): (1) How personally meaningful was the experience? (rated from 1 to 8, with 1 = no more than routine, everyday experiences; 7 = among the five most meaningful experiences of my life; and 8 = the single most meaningful experience of my life). (2) Indicate the degree to which the experience was spiritually significant to you? (rated from 1 to 6, with 1 = not at all; 5 = among the five most spiritually significant experiences of my life; 6 = the single most spiritually significant experience of my life). (3) Do you believe that the experience and your contemplation of that experience have led to change in your current sense of personal well-being or life satisfaction? (rated from +3 = increased very much; +2 = increased moderately; 0 = no change; –3 = decreased very much).
Participant ratings of persisting effects attributed to the session were completed 5 weeks after the low-dose and high-dose psilocybin sessions, and, again, retrospectively for the high-dose session 6 months after the second session.
The Persisting Effects Questionnaire assessed self-rated positive and negative changes in attitudes, moods, behavior, and spiritual experience attributed to the most recent psilocybin session (Griffiths et al., 2006, 2011). At the 6-month follow-up, the questionnaire was completed on the basis of the high-dose session, which was identified as the session in which the participant experienced the most pronounced changes in their ordinary mental processes. Twelve subscales (described in Table 8) were scored.
Three measures of spirituality were assessed at three time-points: Baseline, 5 weeks after session 2, and at the 6-month follow-up: FACIT-Sp, a self-rated measure of the spiritual dimension of quality of life in chronic illness (Peterman et al., 2002) assessed on how the participant felt “on average”; Spiritual-Religious Outcome Scale, a three-item measure used to assess spiritual and religious changes during illness (Pargament et al., 2004); and Faith Maturity Scale, a 12-item scale assessing the degree to which a person’s priorities and perspectives align with “mainline” Protestant traditions (Benson et al., 1993).
Structured telephone interviews with community observers (e.g. family members, friends, or work colleagues) provided ratings of participant attitudes and behavior reflecting healthy psychosocial functioning (Griffiths et al., 2011). The interviewer provided no information to the rater about the participant or the nature of the research study. The structured interview (Community Observer Questionnaire) consisted of asking the rater to rate the participant’s behavior and attitudes using a 10-point scale (from 1 = not at all, to 10 = extremely) on 13 items reflecting healthy psychosocial functioning: inner peace; patience; good-natured humor/playfulness; mental flexibility; optimism; anxiety (scored negatively); interpersonal perceptiveness and caring; negative expression of anger (scored negatively); compassion/social concern; expression of positive emotions (e.g. joy, love, appreciation); self-confidence; forgiveness of others; and forgiveness of self. On the first rating occasion, which occurred soon after acceptance into the study, raters were instructed to base their ratings on observations of and conversations with the participant over the past 3 months. On two subsequent assessments, raters were told their previous ratings and were instructed to rate the participant based on interactions over the last month (post-session 2 assessment) or since beginning in the study (6-month follow-up). Data from each interview with each rater were calculated as a total score. Changes in each participant’s behavior and attitudes after drug sessions were expressed as a mean change score (i.e. difference score) from the baseline rating across the raters. Of 438 scheduled ratings by community observers, 25 (
The two primary therapeutic outcome measures were the widely used clinician-rated measures of depression, GRID-HAM-D-17 (ISCDD, 2003) and anxiety, HAM-A assessed with the SIGH-A (Shear et al., 2001). For these clinician-rated measures, a clinically significant response was defined as ⩾50% decrease in measure relative to Baseline; symptom remission was defined as ⩾50% decrease in measure relative to Baseline and a score of ⩽7 on the GRID-HAMD or HAM-A (Gao et al., 2014; Matza et al., 2010).
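Expressed as code, the response and remission criteria just defined amount to the following (a sketch of the definitions, not study code):

def clinically_significant_response(baseline: float, score: float) -> bool:
    return score <= 0.5 * baseline  # >= 50% decrease relative to Baseline

def symptom_remission(baseline: float, score: float) -> bool:
    return clinically_significant_response(baseline, score) and score <= 7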
Seventeen measures focused on mood states, attitudes, disposition, and behaviors thought to be therapeutically relevant in psychologically distressed cancer patients were assessed at four time-points over the study: immediately after study enrollment (Baseline assessment), about 5 weeks (mean 37 days) after each session (Post-session 1 and 2 assessments), and about 6 months (mean 211 days) after session 2 (6-month follow-up).
Ten minutes before and 30, 60, 90, 120, 180, 240, 300, and 360 min after capsule administration, blood pressure, heart rate, and monitor ratings were obtained as described previously (Griffiths et al., 2006). The two session monitors completed the Monitor Rating Questionnaire, which involved rating or scoring several dimensions of the participant’s behavior or mood. The dimensions, which are expressed as peak scores in Table 2, were rated on a 5-point scale from 0 to 4. Data were the mean of the two monitor ratings at each time-point.
Participants with a potentially life-threatening cancer diagnosis and a DSM-IV diagnosis that included anxiety and/or mood symptoms were recruited through flyers, internet, and physician referral. Of 566 individuals who were screened by telephone, 56 were randomized. Figure 1 shows a CONSORT flow diagram. Table 1 shows demographics for the 51 participants who completed at least one session. The two randomized groups did not significantly differ demographically. All 51 participants had a potentially life-threatening cancer diagnosis, with 65% having recurrent or metastatic disease. Types of cancer included breast (13 participants), upper aerodigestive (7), gastrointestinal (4), genitourinary (18), hematologic malignancies (8), other (1). All had a DSM-IV diagnosis: chronic adjustment disorder with anxiety (11 participants), chronic adjustment disorder with mixed anxiety and depressed mood (11), dysthymic disorder (5), generalized anxiety disorder (GAD) (5), major depressive disorder (MDD) (14), or a dual diagnosis of GAD and MDD (4), or GAD and dysthymic disorder (1). Detailed inclusion/exclusion criteria are in the online Supplementary material. The Johns Hopkins IRB approved the study. Written informed consent was obtained from participants.
A two-session, double-blind cross-over design compared the effects of a low versus high psilocybin dose on measures of depressed mood, anxiety, and quality of life, as well as measures of short-term and enduring changes in attitudes and behavior. Participants were randomly assigned to one of two groups. The Low-Dose-1st Group received the low dose of psilocybin on the first session and the high dose on the second session, whereas the High-Dose-1st Group received the high dose on the first session and the low dose on the second session. The duration of each participant’s participation was approximately 9 months (mean 275 days). Psilocybin session 1 occurred, on average, approximately 1 month after study enrollment (mean 28 days), with session 2 occurring approximately 5 weeks later (mean 38 days). Data assessments occurred: (1) immediately after study enrollment (Baseline assessment); (2) on both session days (during and at the end of the session); (3) approximately 5 weeks (mean 37 days) after each session (Post-session 1 and Post-session 2 assessments); (4) approximately 6 months (mean 211 days) after Session 2 (6-month follow-up).
The study compared a high psilocybin dose (22 or 30 mg/70 kg) with a low dose (1 or 3 mg/70 kg) administered in identically appearing capsules. When this study was designed, we had little past experience with a range of psilocybin doses. We decreased the high dose from 30 to 22 mg/70 kg after two of the first three participants who received a high dose of 30 mg/70 kg were discontinued from the study (one from vomiting shortly after capsule administration and one for personal reasons). Related to this decision, preliminary data from a dose-effect study in healthy participants suggested that rates of psychologically challenging experiences were substantially greater at 30 than at 20 mg/70 kg (Griffiths et al., 2011). The low dose of psilocybin was decreased from 3 to 1 mg/70 kg after 12 participants because data from the same dose-effect study showed significant psilocybin effects at 5 mg/70 kg, which raised concern that 3 mg/70 kg might not serve as an inactive placebo.
Expectancies, on part of both participants and monitors, are believed to play a large role in the qualitative effects of psilocybin-like drugs (Griffiths et al., 2006; Metzner et al., 1965). Although double-blind methods are usually used to protect against such effects, expectancy is likely to be significantly operative in a standard drug versus placebo design when the drug being evaluated produces highly discriminable effects and participants and staff know the specific drug conditions to be tested. For these reasons, in the present study a low dose of psilocybin was compared with a high dose of psilocybin, and participants and monitors were given instructions that obscured the actual dose conditions to be tested. Specifically, they were told that psilocybin would be administered in both sessions, the psilocybin doses administered in the two sessions might range anywhere from very low to high, the doses in the two sessions might or might not be the same, sensitivity to psilocybin dose varies widely across individuals, and that at least one dose would be moderate to high. Participants and monitors were further strongly encouraged to try to attain maximal therapeutic and personal benefit from each session.
Drug sessions were conducted in an aesthetic living-room-like environment with two monitors present. Participants were instructed to consume a low-fat breakfast before coming to the research unit. A urine sample was taken to verify abstinence from common drugs of abuse (cocaine, benzodiazepines, and opioids including methadone). Participants who reported use of cannabis or dronabinol were instructed not to use for at least 24 h before sessions. Psilocybin doses were administered in identically appearing opaque, size 0 gelatin capsules, with lactose as the inactive capsule filler. For most of the time during the session, participants were encouraged to lie down on the couch, use an eye mask to block external visual distraction, and use headphones through which a music program was played. The same music program was played for all participants in both sessions. Participants were encouraged to focus their attention on their inner experiences throughout the session. Thus, there was no explicit instruction for participants to focus on their attitudes, ideas, or emotions related to their cancer. A more detailed description of the study room and procedures followed on session days is provided elsewhere (Griffiths et al., 2006; Johnson et al., 2008).
A description of session monitor roles and the content and rationale for meetings between participants and monitors is provided elsewhere (Johnson et al., 2008). Briefly, preparation meetings before the first session, which included discussion of meaningful aspects of the participant’s life, served to establish rapport and prepare the participant for the psilocybin sessions. During sessions, monitors were nondirective and supportive, and they encouraged participants to “trust, let go and be open” to the experience. Meetings after sessions generally focused on novel thoughts and feelings that arose during sessions. Session monitors were study staff originally trained by William Richards PhD, a clinical psychologist with extensive experience conducting studies with classic hallucinogens. Monitor education varied from college graduate to PhD. Formal clinical training varied from none to clinical psychologist. Monitors were selected as having significant human relations skills and self-described experience with altered states of consciousness induced by means such as meditation, yogic breathing, or relaxation techniques.
After study enrollment and assessment of baseline measures, and before the first psilocybin session, each participant met with the two session monitors (staff who would be present during session days) on two or more occasions (mean of 3.0 occasions for a mean total of 7.9 hours). The day after each psilocybin session participants met with the session monitors (mean 1.2 hours). Participants met with monitors on two or more occasions between the first and second psilocybin session (mean of 2.7 occasions for a mean total of 3.4 hours) and on two or more occasions between the second session and 6-month follow-up (mean of 2.5 occasions for a mean total of 2.4 hours). Preparation meetings, the first meeting following each session, and the last meeting before the second session were always in person. For the 37 participants (73%) who did not reside within commuting distance of the research facility, 49% of the Post-session 1 meetings with monitors occurred via telephone or video calls.
The questionnaire included three final questions (see Griffiths et al. 2006 for more specific wording): (1) How personally meaningful was the experience? (rated from 1 to 8, with 1 = no more than routine, everyday experiences; 7 = among the five most meaningful experiences of my life; and 8 = the single most meaningful experience of my life). (2) Indicate the degree to which the experience was spiritually significant to you? (rated from 1 to 6, with 1 = not at all; 5 = among the five most spiritually significant experiences of my life; 6 = the single most spiritually significant experience of my life). (3) Do you believe that the experience and your contemplation of that experience have led to change in your current sense of personal well-being or life satisfaction? (rated from +3 = increased very much; +2 = increased moderately; 0 = no change; –3 = decreased very much).
Participant ratings of persisting effects attributed to the session on ratings completed 5 weeks after the low-dose and high-dose psilocybin sessions, and, again, retrospectively for the high-dose session 6 months after the second session+.
The Persisting Effects Questionnaire assessed self-rated positive and negative changes in attitudes, moods, behavior, and spiritual experience attributed to the most recent psilocybin session (Griffiths et al., 2006, 2011). At the 6-month follow-up, the questionnaire was completed on the basis of the high-dose session, which was identified as the session in which the participant experienced the most pronounced changes in their ordinary mental processes. Twelve subscales (described in Table 8) were scored.
Three measures of spirituality were assessed at three time-points: Baseline, 5 weeks after session 2, and at the 6-month follow-up: FACIT-Sp, a self-rated measure of the spiritual dimension of quality of life in chronic illness (Peterman et al., 2002) assessed on how the participant felt “on average”; Spiritual-Religious Outcome Scale, a three-item measure used to assess spiritual and religious changes during illness (Pargament et al., 2004); and Faith Maturity Scale, a 12-item scale assessing the degree to which a person’s priorities and perspectives align with “mainline” Protestant traditions (Benson et al., 1993).
Structured telephone interviews with community observers (e.g. family members, friends, or work colleagues) provided ratings of participant attitudes and behavior reflecting healthy psychosocial functioning (Griffiths et al., 2011). The interviewer provided no information to the rater about the participant or the nature of the research study. The structured interview (Community Observer Questionnaire) consisted of asking the rater to rate the participant’s behavior and attitudes using a 10-point scale (from 1 = not at all, to 10 = extremely) on 13 items reflecting healthy psychosocial functioning: inner peace; patience; good-natured humor/playfulness; mental flexibility; optimism; anxiety (scored negatively); interpersonal perceptiveness and caring; negative expression of anger (scored negatively); compassion/social concern; expression of positive emotions (e.g. joy, love, appreciation); self-confidence; forgiveness of others; and forgiveness of self. On the first rating occasion, which occurred soon after acceptance into the study, raters were instructed to base their ratings on observations of and conversations with the participant over the past 3 months. On two subsequent assessments, raters were told their previous ratings and were instructed to rate the participant based on interactions over the last month (post-session 2 assessment) or since beginning in the study (6-month follow-up). Data from each interview with each rater were calculated as a total score. Changes in each participant’s behavior and attitudes after drug sessions were expressed as a mean change score (i.e. difference score) from the baseline rating across the raters. Of 438 scheduled ratings by community observers, 25 (
The two primary therapeutic outcome measures were the widely used clinician-rated measures of depression, GRID-HAM-D-17 (ISCDD, 2003) and anxiety, HAM-A assessed with the SIGH-A (Shear et al., 2001). For these clinician-rated measures, a clinically significant response was defined as ⩾50% decrease in measure relative to Baseline; symptom remission was defined as ⩾50% decrease in measure relative to Baseline and a score of ⩽7 on the GRID-HAMD or HAM-A (Gao et al., 2014; Matza et al., 2010).
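(Not part of the original paper; for illustration only.) These response and remission definitions reduce to a simple rule; a minimal Python sketch, with hypothetical scores:

def classify_outcome(baseline, score):
    """Response/remission rule as defined above: response = >=50% decrease
    from Baseline; remission = response plus an absolute score <= 7.
    Assumes baseline > 0."""
    decrease = (baseline - score) / baseline  # fractional drop from Baseline
    response = decrease >= 0.5
    remission = response and score <= 7
    return response, remission

# Hypothetical GRID-HAMD scores: Baseline of 22 falling to 6 or to 10.
print(classify_outcome(22, 6))   # (True, True): responder, in remission
print(classify_outcome(22, 10))  # (True, False): responder, not remitted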
Seventeen measures of mood states, attitudes, dispositions, and behaviors thought to be therapeutically relevant in psychologically distressed cancer patients were assessed at four time-points over the study: immediately after study enrollment (Baseline assessment), about 5 weeks (mean 37 days) after each session (Post-session 1 and 2 assessments), and about 6 months (mean 211 days) after session 2 (6-month follow-up).
Ten minutes before and 30, 60, 90, 120, 180, 240, 300, and 360 min after capsule administration, blood pressure, heart rate, and monitor ratings were obtained as described previously (Griffiths et al., 2006). The two session monitors completed the Monitor Rating Questionnaire, which involved rating or scoring several dimensions of the participant’s behavior or mood. The dimensions, which are expressed as peak scores in Table 2, were rated on a 5-point scale from 0 to 4. Data were the mean of the two monitor ratings at each time-point.
Differences in demographic data between the two dose sequence groups were examined with t-tests and chi-square tests with continuous and categorical variables, respectively.
Data analyses were conducted to demonstrate the appropriateness of combining data for the 1 and 3 mg/70 kg doses in the low-dose condition and for including data for the one participant who received 30 mg/70 kg. To determine if the two different psilocybin doses differed in the low-dose condition, t-tests were used to compare participants who received 3 mg/70 kg (n = 12) with those who received 1 mg/70 kg (n = 38) on participant ratings of peak intensity of effect (HRS intensity item completed 7 h after administration) and peak monitor ratings of overall drug effect across the session. Because neither of these measures differed significantly, data from the 1 and 3 mg/70 kg doses were combined in the low-dose condition for all analyses.
Of the 50 participants who completed the high-dose condition, one received 30 mg/70 kg and 49 received 22 mg/70 kg. To determine if inclusion of the data from the one participant who received 30 mg/70 kg affected conclusions about the most therapeutically relevant outcome measures, the analyses for the 17 measures shown in Tables 4 and 5 were conducted with and without that participant. Because there were few differences in significance (72 of 75 tests remained the same), that participant’s data were included in all the analyses.
To examine acute drug effects from sessions, the drug dose conditions were collapsed across the two dose sequence groups. The appropriateness of this approach was supported by an absence of any significant group effects and any group-by-dose interactions on the cardiovascular measures (peak systolic and diastolic pressures and heart rate) and on several key monitor- and participant-rated measures: peak monitor ratings of drug strength and joy/intense happiness, and end-of-session participant ratings on the Mysticism Scale.
Six participants reported initiating medication treatment with an anxiolytic (2 participants), antidepressant (3), or both (1) between the Post-session 2 and the 6-month follow-up assessments. To determine if inclusion of these participants affected statistical outcomes in the analyses of the 6-month assessment, the analyses summarized in Tables 4, 5, 6, 7 and 8 were conducted with and without these six participants. All statistical outcomes remained identical. Thus, data from these six participants were retained in the data analyses.
For cardiovascular measures and monitor ratings assessed repeatedly during sessions, repeated measures regressions were conducted in SAS PROC MIXED using an AR(1) covariance structure and fixed effects of dose and time. Planned comparison t-tests were used to assess differences between the high- and low-dose condition at each time-point.
Peak scores for cardiovascular measures and monitor ratings during sessions were defined as the maximum value from pre-capsule to 6 h post-capsule. These peak scores and the end-of-session ratings (Tables 2 and 3) were analyzed using repeated measures regressions in SAS PROC MIXED with a CS covariance structure and fixed effects of group and dose.
For the analyses of continuous measures described below, repeated measures regressions were conducted in SAS PROC MIXED using an AR(1) covariance structure and fixed effects of group and time. Planned comparison t-tests (specified below) from these analyses are reported. For dichotomous measures, Friedman’s Test was conducted in SPSS for both the overall analysis and planned comparisons as specified below. All results are expressed as unadjusted scores.
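(The SAS code itself is not given in the paper.) As a rough open-source analogue, not the authors' model, a repeated-measures regression with an autoregressive working correlation can be sketched with statsmodels' GEE in Python; column names here are assumptions:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per time-point,
# with numeric columns subject, group (dose sequence), time (0-3), score.
df = pd.read_csv("outcomes_long.csv")  # placeholder file name

# Autoregressive working correlation over the ordered time-points, loosely
# mirroring PROC MIXED with an AR(1) covariance structure.
model = smf.gee(
    "score ~ group + C(time)",
    groups="subject",
    data=df,
    time=df["time"],
    cov_struct=sm.cov_struct.Autoregressive(),
)
print(model.fit().summary())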
For the measures that were assessed in the two dose sequence groups at Baseline, Post-session 1, Post-session 2, and 6 months (Tables 4 and 5), the following planned comparisons most relevant to examining the effects of psilocybin dose were conducted: Between-group comparisons at Baseline, Post 1, and Post 2; and within-group comparisons of Baseline versus Post 1 in both dose sequence groups, and Post 1 versus Post 2 in the Low-Dose-1st (High-Dose-2nd) Group. A planned comparison between Baseline and 6 months collapsed across groups was also conducted. Effect sizes were calculated using Cohen’s d.
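(The paper does not specify which variant of Cohen's d was used.) As a reference point, the common pooled-standard-deviation form, sketched in Python:

import numpy as np

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation (one common convention;
    not necessarily the exact variant the authors used)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)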
For measures assessed only at Baseline, Post 2, and 6 months (Table 7), between-group planned comparisons were conducted at Baseline, Post 2, and 6 months. Because measures assessed only at these time-points cannot provide information about the psilocybin dose, data were collapsed across the two dose sequence groups and planned comparisons were conducted comparing Baseline with Post 2 and Baseline with 6 months.
For participant ratings of persisting effects attributed to the session (e.g. Table 8), planned comparisons for continuous and dichotomous measures were conducted between: (1) ratings at 5 weeks after the low versus high-dose sessions; (2) ratings of low dose at 5 weeks versus ratings of high dose at the 6-month follow-up; (3) ratings of high dose at 5 weeks versus ratings of high dose at the 6-month follow-up.
As described above, clinician-rated measures of depression (GRID-HAMD) and anxiety (HAM-A) were analyzed as continuous measures. In addition, for both measures, a clinically significant response was defined as ⩾50% decrease in measure relative to Baseline; symptom remission was defined as ⩾50% decrease in measure relative to Baseline and a score of ⩽7. Planned comparisons were conducted via independent z-tests of proportions between the two dose sequence groups at Post-session 1, Post-session 2, and 6 months. To determine if effects were sustained at 6 months, planned comparisons were also conducted via dependent z-tests of proportions between Post-session 2 versus 6 months in the Low-Dose-1st (High-Dose-2nd) Group, and between Post-session 1 versus 6 months in the High-Dose-1st (Low-Dose-2nd) Group.
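For the independent between-group comparisons, a two-sample z-test of proportions is the standard tool; a sketch with made-up counts (the dependent, within-group version requires a paired test not shown here):

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: responders out of n per dose sequence group at Post 1.
responders = [14, 9]
n = [25, 25]
z, p = proportions_ztest(responders, n)
print(f"z = {z:.2f}, p = {p:.3f}")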
Exploratory analyses used Pearson’s correlations to examine the relationship between total scores on the Mystical Experience Questionnaire (MEQ30) assessed at the end of session 1 and enduring effects assessed 5 weeks after session 1. The Post-session 1 measures were ratings on three items from the Persisting Effects Questionnaire (meaningfulness, spiritual significance, and life satisfaction) and 17 therapeutically relevant measures assessed at Baseline and Post 1 (Tables 4 and 5) expressed as difference from baseline scores. Significant relationships were further examined using partial correlations to control for end-of-session participant-rated “Intensity” (item 98 from the HRS). To examine MEQ30 scores as a mediator of the effect of psilocybin dose on therapeutic effects, a bootstrap analysis was done using the PROCESS macro (Hayes, 2013) in SPSS. Bootstrapping is a non-parametric method appropriate for small samples, which was used to estimate 95% confidence intervals for the mediation effect. The PROCESS macro also calculated direct effects on outcome for both group effects and MEQ30.
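(The PROCESS macro is SPSS-specific.) The same percentile-bootstrap estimate of the indirect (mediated) effect can be sketched from scratch in Python; variable names and data shapes are assumptions:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def indirect_effect(dose, meq30, outcome):
    """a*b indirect effect: dose -> MEQ30 (path a), then MEQ30 -> outcome
    controlling for dose (path b). Inputs are 1-D numpy arrays."""
    a = sm.OLS(meq30, sm.add_constant(dose)).fit().params[1]
    X = sm.add_constant(np.column_stack([dose, meq30]))
    b = sm.OLS(outcome, X).fit().params[2]
    return a * b

def bootstrap_ci(dose, meq30, outcome, n_boot=5000, alpha=0.05):
    n = len(dose)
    draws = []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)  # resample participants with replacement
        draws.append(indirect_effect(dose[i], meq30[i], outcome[i]))
    lo, hi = np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi  # a 95% CI excluding 0 supports mediation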
...
Read the original on pmc.ncbi.nlm.nih.gov »
My favorite use-case for AI is writing logs
One of my favorite AI dev products today is Full Line Code Completion in PyCharm (bundled with the IDE since late 2023). It’s extremely well-thought out, unintrusive, and makes me a more effective developer. Most importantly, it still keeps me mostly in control of my code. I’ve now used it in GoLand as well. I’ve been a happy JetBrains customer for a long time now, and it’s because they ship features like this.
I frequently work with code that involves sequential data processing, computations, and async API calls across multiple services. I also deal with a lot of precise vector operations in PyTorch that shape suffixes don’t always illuminate. So, print statement debugging and writing good logs has been a critical part of my workflows for years.
As Kernighan and Pike say in The Practice of Programming, on preferring print statements to stepping through a debugger,
…[W]e find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is.
One annoying thing about logging is that f-strings, great as they are, become repetitive to write over and over, particularly if you’re formatting values or accessing elements of data frames, lists, and nested structures, and especially if you have to scan your codebase to find those variables. Writing good logs is important, but it also breaks up a debugging flow.
from loguru import logger
logger.info(f"Adding a log for {your_variable} and {len(my_list)} and {df.head(0)}")
The amount of cognitive overhead in this deceptively simple log is several levels deep: you have to first stop to type logger.info (or is it logging.info? I use both loguru and the standard logging module depending on the codebase and always end up getting the two confused). Then, the parentheses, the f-string itself, and then the variables in brackets. Now, was it your_variable or your_variable_with_edits from five lines up? And what’s the syntax for accessing a subset of df.head again?
With full-line code completion, JetBrains’ model auto-infers the log completion from the surrounding text, within a limit of 384 characters. Inference starts with the file extension, combined with the file path, and then as much of the code above the input cursor as will fit alongside them. Everything is combined and sent to the model as the prompt.
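The paper describes the ingredients but not the exact prompt format; a hypothetical sketch of the truncation logic (the marker strings are invented, not JetBrains’):

MAX_CONTEXT_CHARS = 384

def build_prompt(filepath: str, code_above_caret: str) -> str:
    """Combine file extension, file path, and the code above the caret,
    trimming the oldest code first so everything fits the budget."""
    ext = filepath.rsplit(".", 1)[-1]
    header = f"<EXT>{ext}<PATH>{filepath}<CODE>"  # invented markers
    room = max(0, MAX_CONTEXT_CHARS - len(header))
    return header + code_above_caret[-room:]  # keep the most recent code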
The constrained output is good enough most of the time that it speeds up my workflow a lot. An added bonus is that it often writes a much clearer log than I, a lazy human, would. Because they’re so concise, I often don’t even remove the logs when I’m done debugging, because they’re now valuable in prod.
Here’s an example from a side project I’m working on. In the first case, the autocomplete is inferring that I actually want to check the Redis URL, a logical conclusion here.
In this second case, it assumes I’d like the shape of the dataframe, also a logical conclusion, because profiling dataframes is a very popular use-case for logs.
The coolest part of this feature is that the inference model is entirely local to your machine.
This enforces a few very important requirements on the development team, namely compression and speed.
* The model has to be small enough to bundle with the IDE for desktop memory footprints (already coming in at around ~1 GB for the MacOS binary), which eliminates 99% of current LLMs.
* And yet, the model has to be smart enough to interpolate lines of code from its small context window.
* The local requirement eliminates model inference engines like vLLM, SGLM, or Ray, which implement KV cache optimizations like PagedAttention.
* It has to be a model that’s fast enough to produce its first token (and all subsequent tokens) extremely quickly.
* Finally, it has to be optimized for Python specifically, since this model is only available in PyCharm.
This is drastically different from the current assumptions around how we build and ship LLMs: that they need to be extremely large, general-purpose models served over proprietary APIs. We find ourselves in a very constrained solution space because we no longer have to do all this other stuff that generalized LLMs have to do: write poetry, reason through math problems, act as OCR, offer code canvas templating, write marketing emails, and generate Studio Ghibli memes.
All we have to do is train a model to complete a single line of code with a context of 384 characters! And then compress the crap out of that model so that it can fit on-device and perform inference.
So how did they do it? Luckily, JetBrains published a paper on this, and there are a bunch of interesting notes. The work is split into two parts: model training, and then the integration of the plugin itself.
The model is trained in PyTorch and then quantized.
* First, they train a GPT-2-style, decoder-only Transformer model of 100 million parameters, including a tokenizer (aka autoregressive text completion like you’d get from Claude, OpenAI, Gemini, and friends these days). They later changed this architecture to Llama2 after the success of the growing llama.cpp and GGUF community, as well as the better performance of the newer architecture.
* The original dataset they used to train the model was a subset of The Stack, a code dataset across permissive licenses with 6 TB of code in 30 programming languages.
* The initial training set was “just” 45 GB, and in data cleaning, for space constraints, they removed all code comments from the training data, specifically to focus on code generation.
* They do a neat trick for tokenizing Python, which is indentation-sensitive (using a BPE-style tokenizer optimized for character pairs rather than bytes, since code is made up of smaller snippets and idioms than natural-language text): they convert spaces and tabs to start-end tokens, to remove tokens that would differ only in whitespace (see the sketch after this list). They ended up going with a tokenizer vocab size of 16,384.
* They do another very cool step in training, which is to remove imports, because they find that developers usually add imports in after writing the actual code, a fact the model needs to anticipate.
* They then split the data into train/test for evaluation and trained for several days on 8 NVIDIA A100 GPUs with a cross-entropy loss objective.
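The paper’s actual special tokens aren’t named here; a minimal sketch of the indentation trick, with invented marker tokens:

SCOPE_IN, SCOPE_OUT = "<SCOPE_IN>", "<SCOPE_OUT>"  # invented names

def encode_indentation(src: str) -> str:
    """Replace leading whitespace with explicit scope-change tokens so
    lines differing only in tabs-vs-spaces tokenize identically."""
    out, depths = [], [0]
    for line in src.splitlines():
        body = line.lstrip(" \t")
        if not body:
            continue  # skip blank lines
        depth = len(line) - len(body)
        if depth > depths[-1]:
            out.append(SCOPE_IN)
            depths.append(depth)
        while depth < depths[-1]:
            out.append(SCOPE_OUT)
            depths.pop()
        out.append(body)
    return "\n".join(out)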
Because they were able to focus so clearly on the domain, on understanding how code inference works, and on a single programming language with its own nuances, they were able to make the training dataset smaller, make the output more exact, and spend much less time and money training the model.
The actual plugin that’s included in PyCharm “is implemented in Kotlin, however, it utilizes an additional native server that is run locally and is implemented in C++” for serving the inference tokens.
In order to prepare the model for serving, they:
* They quantized it from FP32 to INT8, which compressed the model from 400 MB to 100 MB.
* They prepared it as a served ONNX RT artifact, which allowed them to use CPU inference and removed the CUDA overhead tax. (Later, they switched to using llama.cpp to serve the Llama model architecture for the server.)
* Finally, in order to perform inference on a sequence of tokens, they use beam search. Generally, Transformer decoders are trained on predicting the next token in any given sequence, so any individual step will give you a list of tokens along with their ranked probabilities (cementing my long-running theory that everything is a search problem). Since exhaustively exploring these is computationally impossible at large numbers of tokens, a number of solutions exist to approximate optimal decoding. Beam search creates a graph of possible returned token sequences and expands at each node with the highest potential probability, limiting to k possible beams. In FLCC, the max number of beams, k, is 20, and they chose to limit generation to collect only those hypotheses that end with a newline character (see the sketch after this list).
* Additionally, they made use of a number of caching strategies, including initializing the model at 50% of total context, i.e. it starts by preloading ~192 characters of previous code, to give you space to either go back and edit old code, which now no longer has to be put into context, or to add new code, which is then added to the context. That way, if your cursor clicks on code you’ve already written, the model doesn’t need to re-infer.
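A minimal sketch of beam search with the stop-at-newline rule described above (the step function stands in for the model; all names and the token budget are assumptions, not FLCC’s implementation):

import heapq

def beam_search(step, start_ids, k=20, max_len=96, newline_id=10):
    """step(ids) -> list of (token_id, logprob) proposals for the next
    token. Beams are pruned to the k best per round; any beam that emits
    a newline is collected as a finished hypothesis."""
    beams = [(0.0, list(start_ids))]  # (cumulative logprob, token ids)
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, ids in beams:
            for tok, logp in step(ids):
                candidates.append((score + logp, ids + [tok]))
        best = heapq.nlargest(k, candidates, key=lambda c: c[0])
        finished += [b for b in best if b[1][-1] == newline_id]
        beams = [b for b in best if b[1][-1] != newline_id]
        if not beams:
            break
    return max(finished, key=lambda c: c[0], default=None)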
There are a number of other very cool architecture and model decisions in the paper that are well worth reading and that show the level of care put into the input data, the modeling, and the inference architecture.
The bottom line is that, for me as a user, this experience is extremely thoughtful. It has saved me countless times both in print log debugging and in the logs I ship to prod.
In LLM land, there’s both a place for large, generalist models, and there’s a place for small models, and while much of the rest of the world writes about the former, I’m excited to also find more applications built with the latter.
...
Read the original on newsletter.vickiboykis.com »
Immigration and Customs Enforcement officials are getting access to the personal data of nearly 80 million people on Medicaid in order to acquire “information concerning the identification and location of aliens in the United States,” according to an information exchange agreement viewed by WIRED.
The agreement, which is titled “Information Exchange Agreement Between the Centers for Medicare and Medicaid Services and the Department of Homeland Security (DHS) for Disclosure of Identity and Location Information of Aliens,” was signed by CMS officials on Tuesday and first reported by AP News.
Per the agreement, ICE officials will get login credentials for a Centers for Medicare and Medicaid Services (CMS) database containing sensitive medical information, including detailed records about diagnoses and procedures. Language in the agreement says it will allow ICE to access personal information such as home addresses, phone numbers, IP addresses, banking data, and social security numbers. (Later on in the agreement, what ICE is allowed to access is defined differently, specifying just “Medicaid recipients” and their sex, ethnicity, and race but forgoing any mention of IP or banking data.) The agreement is set to last two months. While the document is dated July 9, it is only effective starting when both parties sign it, which would indicate a 60-day span from July 15 to September 15.
The move comes as President Donald Trump’s administration has continued to expand its crackdown on immigration. The administration aims to deport 3,000 people per day—four times as many as were deported in the fiscal year of 2024, according to ICE. Its plans to do so seemingly involve vacuuming up data from across the government. WIRED previously reported that the so-called Department of Government Efficiency (DOGE) and DHS were working on a master database, pulling in data from across DHS and other agencies, in order to surveil and deport immigrants.
Medicaid, the state- and federally funded health care coverage for the country’s poorest, is largely unavailable to non-citizens; only certain groups qualify, including refugees and asylum seekers, survivors of human trafficking, and permanent residents. Some states, like New York, provide Medicaid coverage for children and pregnant people, regardless of their immigration status. States report their Medicaid expenditures and data to the federal government, which reimburses them for some of the costs.
“This was never even considered during my five years at DHS working on immigration enforcement,” says John Sandweg, the acting director of ICE during President Barack Obama’s administration. “You want to be careful of a possible chilling effect where people who might apply for benefits and be eligible for benefits—or who seek emergency medical care—won’t do so because they’re worried the information they provide at the hospital could make them a target for immigration action.”
This isn’t the concern of the administration now, spokespeople tell WIRED. “Under the leadership of Dr. [Mehmet] Oz, CMS is aggressively cracking down on states that may be misusing federal Medicaid funds to subsidize care for illegal immigrants,” Andrew Nixon, the director of communications at the Department of Health and Human Services (HHS), tells WIRED. “This oversight effort—supported by lawful interagency data sharing with DHS—is focused on identifying waste, fraud, and systemic abuse. We are not only protecting taxpayer dollars—we are restoring credibility to one of America’s most vital programs. The American people deserve accountability. HHS is delivering it.”
...
Read the original on www.wired.com »
For a few days, Pino became a land creature, living on stilts, while we scrubbed and re-painted the lower part of the hull. Our propeller had a bit of a wobble, which we hope is now corrected. We also battled with the old wheel quadrant and were finally able to remove it, or at least a part of it. Boaters have frequently helped us while we were in boatyards, and we are finally able to pay it forward. We offered advice to those who asked and lent tools to folks who needed them. It felt nice. Teapot’s new bottom has seen water for the first time; the new gelcoat will allow us to take it into bays for many more years to come.
We spent many June days working on both Turnip Complete(Uxn book) and the enhanced version of the Victoria to Sitka Logbook, with frequent breaks to enjoy the beautiful places we found ourselves in.
The beginning of our sailing season has been very blustery, allowing for some good sailing, but also often forcing us to wait at anchor for clement weather. Later, we sailed through the San Juan Islands to meet up with some Merveillans on Blakely Island. We are very grateful to be part of a community of such kind, curious, and generous people. The image that was drawn for this month’s update represents cooperation between members of Merveilles.
Book Club: This month we are reading Ill Met By Moonlight by Sarah A. Hoyt, The Silmarillion by J. R. R. Tolkien, and Girl’s Last Tour by Tsukumizu.
Oquonie was released on the Playdate Catalog this month! We’d like to thank everyone who sent us photos of their progress in the game, it has been nice to follow along. The game is kind of our first official release on a modern handheld platform, and we’re happy to see that Uxn roms run well on it! It might be one of the first original Playdate games implemented that way?
In other news, Devine started working on a book, the working title is “Turnip Complete”. The goal is to write a complete and stand-alone implementation guide for the Uxn virtual machine and devices, along with some example programs and thoughts about playful computery things. We might have something to show for it come autumn, maybe.
We’ve left Victoria for the summer, and are falling back into the groove of waking up at dusk to catch the tide. We have a quick haul out lined up, and afterward we’ll be sailing around the Gulf Islands until the fall. We have lots of projects to finish up these next couple of months and can’t wait to share them with you.
We share photos of life aboard throughout the month on our little photo site, if you’re curious to see what the daily life aboard Pino is like.
Book Club: This month we are reading Artemis by Andy Weir, Gardening Without Work: For the Aging, the Busy and the Indolent by Ruth Stout and A History of Thinking on Paper by Roland Allen.
The weather is getting warmer, which is perfect for airing out Pino’s lockers, and drying off moldy clothes and tools. Anything stored in the v-berth lockers, below the waterline, suffers from extreme wetness. It is a very, very annoying fact of boat life, but there is really no way to bring good air flow into those spaces. We scrubbed the lockers clean, parted with items we no longer needed, and sent two laptops to the recycler.
In last month’s update, we mentioned Flickjam, a game jam based on Increpare’s Flickgame. We received a total of 27 entries! They’re really fun, and all playable in the browser. Devine’s jam entry is about a very adorable rabbit learning to play the word “rabbit” on a xylophone in Solresol.
Devine spent some time off the computer, skating and folding paper. The paper computer pages have been updated to cover some new ways in which computer emulators can be operated on paper. While on that subject, we highly recommend Tadashi Tokieda’s excellent talk named A world from a sheet of paper.
Another item on Devine’s list was to gradually phase out Uxnasm.c in favor of the self-hosted assembler. We’re not 100% pleased yet, but it is getting closer to retirement.
Starting on May 20th 2025(1000 PST/PDT) the Playdate Catalogue will include Oquonie. The game is also available on our itch.io store.
The video for Devine’s November 2024 talk A Shining Place Built Upon The Sand is now on YouTube.
Book Club: This month we are reading Banvard’s Folly by Paul Collins, Einstein’s Dreams by Alan Lightman, and we are still making progress on the The Goldfinch by Donna Tartt.
In the above illustration, little Ninj is going through a first-aid kit, looking through our supplies to see what needs to be topped off and what is out-of-date. Rek drew a list of suggestions on what to include in both a first-aid and a medical kit for the Rabbit Waves project, we plan to add more items soon(thanks to everyone on Mastodon who suggested additions! It’ll be in the April update).
We will spend the first few days of April participating in Flickjam, making small games in the style of Flickgame, a tool originally made by Increpare, in which the world is navigated by clicking on pixels of different colors to head in different directions. Devine ported Flickgame to Varvara, and wrote a compiler for flick games to uxn roms.
This past month, Rek finished transcribing the entire 15 weeks of the Victoria to Sitka logbook! We have plans to turn it into a book, in the style of Busy Doing Nothing, with tons of extra content and illustrations.
March was a very good month for silly calendar doodles. Our paper calendar is always in view, it documents important events like releases, appointments, as well as food, memes, and other noteworthy things that happened on each day.
Book Club: This month we are still reading The Goldfinch by Donna Tartt, it’s a long book!
On February 14th, we celebrated our 9th year living aboard our beloved Pino. Read a short text by Devine, which expands on what it means to truly be a generalist.
Despite the weather being less-than-ideal, we were able to install our replacement solar panels, and revisit our notes on solar installations.
Devine completed Nebu, a spritesheet editor as well as a desktop calendar, alongside many other little desktop utilities. Nebu is just over 8.3 kB, a bit less than a blank excel file.
In times of increasing climate and political instability, it is a good time to get together with your community and make plans for emergencies. Consider reading Tokyo Bosai about disaster preparedness, this elaborate document deals with disasters that occur specifically in Japan, but many of the recommendations are useful regardless. We released a new page on rabbit waves with suggestions on what to pack in an Emergency Bag. Remember, every emergency bag is different, and what is essential varies per person.
We also put together a print-it-yourself zine, which combines useful information about Morse code and signal flags. If you have printed the zine and don’t know how to fold it, see Rek’s folding instructions. Speaking of signal flags, we printed some of Rek’s ICS flag drawings.
The nice weather finally arrived this week and we were able to redo Teapot’s gelcoat. This was our first time working with gelcoat, our friends Rik & Kay, who lent us their workspace, were very patient and generous teachers. We will continue the project later when the gelcoat has cured.
Book Club: This month we are reading The Goldfinch by Donna Tartt.
Devine spent time improving the html5 uxn emulator, and thanks to their hard work it is now possible to play Niju, Donsol, and Oquonie directly in the browser on itch.io, the same goes for projects like Noodle and Tote.
It’s been a long time coming, but Oquonie is now playable on Playdate. Rek spent the last week converting the 2-bit assets for Oquonie to 1-bit, because some of the characters and tiles were too difficult to read, now all of the assets work perfectly on monochromatic screens. As an amazing plus, Devine got the music and sounds working perfectly, just like in the original iOS version.
From January 19-25th, we both participated in Goblin Week, an event in which you make goblins every day for a week(whatever that means to you). See the goblin series made by Rek(viewable here in higher rez also) and the one made by Devine(Mastodon).
Pino has earned two new replacement solar panels this month! We have not installed them yet, it is still too cold outside in Victoria (we are expecting snow this week).
We share photos often in our monthly updates, and so Devine spent time building our very own custom photo feed named Days. It is possible to follow the feed with RSS.
Book Club: This month we are reading How do You Live? by Genzaburo Yoshino and Middlemarch by George Eliot.
See log archives for 2024, 2023, 2022, 2021, 2020, 2019, and 2018.
...
Read the original on 100r.co »
Participants in Ontario’s prematurely cancelled basic income pilot project were happier, healthier and continued working even though they were receiving money with no strings attached.
That’s according to a new report titled Southern Ontario’s Basic Income Experience, which was compiled by researchers at McMaster and Ryerson University, in partnership with the Hamilton Roundtable for Poverty Reduction.
The report shows nearly three-quarters of respondents who were working when the pilot project began kept at it despite receiving basic income.
That finding appears to contradict the criticism some levelled at the project, saying it would sap people’s motivation to stay in the workforce or seek employment.
“They continued working,” Wayne Lewchuk, an economics prof at McMaster University who was part of the research team, told As It Happens.
“Many of those who continued working were actually able to move to better jobs, jobs that had a higher hourly wage, that had in general better working conditions, that they felt were more secure.”
The three-year, $150-million program was scrapped by Ontario’s PC government in July. At the time, then-social services minister Lisa MacLeod said the decision was made because the program was failing to help people become “independent contributors to the economy.”
On Wednesday, a spokesperson for Todd Smith, the current minister of children, community and social services, sent CBC a statement saying the government is focused on programs aimed at empowering “unemployed or underemployed” people across the province.
“A research project that included only 4,000 individuals was not an adequate solution for a province where almost two million people are living in poverty,” wrote Christine Wood. “We are focused on solutions for Ontario that are practical and sustainable.”
But the report points to a wide range of positives after just one year.
Its findings are the result of a 70-question, anonymous online survey made available to basic income recipients in Hamilton, Brantford and Brant County. A total of 217 former recipients participated, according to the report.
Forty in-depth interviews with participants were also completed in July 2019.
“I remember one individual who said ‘Look, I was on the edge of suicide. I just felt nobody cared about me. I didn’t know how to make ends meet and now with basic income I feel like I can be part of society,’” Lewchuk recalled.
Nearly 80 per cent of respondents reported better overall health while taking part in the program. More than half said they were using less tobacco and 48 per cent said they were drinking less.
When it came to mental health, 83 per cent of those surveyed described feeling stressed or anxious less often and 81 per cent said they felt more self-confident.
An improved diet, better housing security and less-frequent hospital visits were other outcomes respondents pointed to, along with 66 per cent who said they formed better relationships with family members.
“What became clear is that as people moved to some stability their health improved, their mental health improved, their outlook on life improved,” said Lewchuk. “You have to believe that actually made them more employable.”
That’s in contrast to the situation for participants once the plug was pulled.
“Almost all survey respondents indicated that the pilot’s cancellation forced them to place on hold or abandon certain life plans,” reads the report.
The project worked by recruiting low-income people and couples, offering them a fixed payment with no strings attached that worked out to approximately $17,000 for individuals and $24,000 for couples.
Whatever income participants earned was deducted from their basic income at 50 per cent, meaning once someone hit $34,000 they wouldn’t receive a payment anymore, Lewchuk explained while speaking with As It Happens.
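As a worked example of that formula (a sketch of the description above; the pilot’s exact clawback details may have differed at the margins):

def basic_income_payment(earned, base=17_000.0):
    """Annual payment: the base amount ($24,000 for couples) minus
    50 cents for every dollar earned, floored at zero."""
    return max(0.0, base - 0.5 * earned)

print(basic_income_payment(0))       # 17000.0: no earnings, full payment
print(basic_income_payment(20_000))  # 7000.0
print(basic_income_payment(34_000))  # 0.0: payment fully phased out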
The basic income payments were about 15-20 per cent higher than ODSP, said the professor, but the benefits of people visiting the hospital less often and paying more taxes would offset that cost.
“In terms of the net cost to a province, it’s not monumental.”
Lewchuk added that while some people did stop working, about half of them headed back to school in hopes of coming back to a better job.
He acknowledged the report’s findings are only based on short-term effects but, given the project has been shut down, it’s all they have.
“We just don’t have the data to understand what happened in the long run. This is the tragedy of the pilot not running for three years.”
...
Read the original on www.cbc.ca »