We’re excited to see the first press reviews go live for the Framework Laptop and the first orders land on your doorsteps today! With the FTC unanimously voting to enforce the Right to Repair just yesterday, our timing couldn’t be better for delivering a great, high-performance, easy-to-repair product. There is a ton of amazing material to read and watch, with more coming in the next few weeks. Some of our favorite quotes so far are:
“A poster child for the right-to-repair movement, Framework’s modular laptop is one of the smartest designs I’ve seen in a long time.”
“It’s the ultimate Right to Repair laptop.”
“The Framework Laptop is more than just [a] worthwhile experiment in modularity, it’s also a great laptop.”
Reviewers loved the freedom to repair and upgrade, the Expansion Card system, CPU performance, keyboard feel, webcam quality, and more. Of course, inside Framework, we gravitate towards the critical feedback that points us to where to do better. We take every bit of feedback seriously, and we want your thoughts as you start using your Framework Laptop. This lets us know where to focus for future improvements, whether that is for firmware updates, modules, or next products. A wonderful thing about our product philosophy is that improvements can go into replacement parts and upgrades that every existing user can pick up and swap to, rather than needing to wait around and pay for an entirely new product.
We’re grateful to each of you who have ordered already, and we’re looking forward to getting your Framework Laptop to you. Batch 1 pre-orders for July delivery continue to ship out from our warehouse each day. We’ll start Batch 2 shipments for August delivery soon after. We have a small number of Batch 2 Framework Laptop and Framework Laptop DIY Edition units currently available for sale, with just a fully refundable $100 deposit due today. If you pre-order now, some of you will be able to receive your order within 3-4 weeks.
As proud as we are of the Framework Laptop (and we’re extremely proud!), the greatest thing we have created over the last 18 months is the team that built it. It takes an incredible team to build an excellent product this complex and deliver it on time. We’re hiring on all fronts to continue developing the Framework Laptop ecosystem and initiate our next categories. Let us know if you know anyone who may be interested in helping us build products that are better for people and the planet.
New Delhi: The United States’ Department of Justice’s case against Wikileaks founder Julian Assange took a serious hit last week after a key witness admitted that he fabricated accusations in order to get immunity. Though these revelations were made public by an Icelandic newspaper on June 26, the mainstream media in the US has largely chosen to ignore them.
According to the bi-weekly Stundin, the witness, Sigurdur Ingi Thordarson, “has a documented history with sociopathy and has received several convictions for sexual abuse of minors and wide-ranging financial fraud”. He was recruited by US authorities in order to build a case against Assange, and misled them into believing he was a close associate of the Wikileaks founder. In reality, however, he had only “volunteered on a limited basis to raise money for Wikileaks in 2010 but was found to have used that opportunity to embezzle more than $50,000 from the organisation”, the Icelandic newspaper reports.
The US is currently seeking Assange’s extradition from the UK. If it succeeds, Assange could face up to 175 years in jail because of the charges filed against him. But now, with Thordarson accepting that he fabricated his testimony, the veracity of the indictment submitted by American authorities in the UK has come under serious question.
The court documents, according to Stundin, claim that Thordarson (referred to only as ‘Teenager’, because he looks young even though he is 28), was asked by Assange to hack MPs’ computers in Iceland to access certain recordings of them. However, the witness has now said that Assange made no such demand, and instead Thordarson received these recordings from a third party and offered them to Assange without checking them himself. He has also made clear that his earlier allegations, that Assange asked him to hack computers, were false.
There are also other misleading elements in the court documents based on Thordarson’s false testimony, Stundin reports:
“One is a reference to Icelandic bank documents. The magistrate court judgement reads: “It is alleged that Mr. Assange and Teenager failed a joint attempt to decrypt a file stolen from a “NATO country 1” bank”.
Thordarson admits to Stundin that this actually refers to a well publicised event in which an encrypted file was leaked from an Icelandic bank and assumed to contain information about defaulted loans provided by the Icelandic Landsbanki. The bank went under in the fall of 2008, along with almost all other financial institutions in Iceland, and plunged the country into a severe economic crisis. The file was at this time, in summer of 2010, shared by many online who attempted to decrypt it for the public interest purpose of revealing what precipitated the financial crisis. Nothing supports the claim that this file was even “stolen” per se, as it was assumed to have been distributed by whistleblowers from inside the failed bank.”
Thordarson, Stundin has claimed, continued his own criminal activities even while he was in contact with US authorities. “It is as if the offer of immunity, later secured and sealed in a meeting in DC, had encouraged Thordarson to take bolder steps in crime. He started to fleece individuals and companies on a grander scale than ever; usually by either acquiring or forming legal entities he then used to borrow merchandise, rent luxury cars, even order large quantities of goods from wholesalers without any intention to pay for these goods and services,” the report notes.
“This is just the latest revelation of how problematic the United States’ case is against Julian Assange — and, in fact, baseless,” human rights attorney Jennifer Robinson told Democracy Now on the Stundin investigation. “The evidence from Thordarson that was given to the United States and formed the basis of the second, superseding indictment, including allegations of hacking, has now been, on his own admission, demonstrated to have been fabricated. Not only did he misrepresent his access to Julian Assange and to WikiLeaks and his association with Julian Assange, he has now admitted that he made up and falsely misrepresented to the United States that there was any association with WikiLeaks and any association with hacking.”
“…the factual basis for this case has completely fallen apart. And we have been calling for this case to be dropped for a very long time. And this is just the last form of abuse demonstrated in this case that shows why it ought to be dropped,” Robinson continued.
While these revelations should have created an uproar, most big, corporate-owned media houses in the US have ignored them, as FAIR, an American media watchdog, has pointed out in an article on its website.
“Such a blatant and juicy piece of important news should have made worldwide headlines. But, instead, as of Friday, July 2, there has been literally zero coverage of it in corporate media; not one word in the New York Times, Washington Post, CNN, NBC News, Fox News or NPR. A search online for either “Assange” or “Thordarson” will elicit zero relevant articles from establishment sources, either US or elsewhere in the Anglosphere, even in tech-focused platforms like the Verge, Wired or Gizmodo,” FAIR says.
“It is not that the corporate press are completely uninterested in Assange. A number of outlets have covered the news this week that he and his partner Stella Morris are planning to get married (e.g., SBS, 6/27/21; Daily Mail, 6/28/21; Evening Standard, 6/28/21; London Times, 6/29/21). Yet none of these articles mentioned the far more consequential news about Thordarson and how it undermines the entire prosecution of Assange,” FAIR continues.
Other independent journalists too have pointed out the one-sided and biased coverage of the Assange case.
This @declassifiedUK story details how, as a British judge made rulings against Assange, her husband was closely involved with a right-wing lobby group running a campaign against the WikiLeaks founder.
It has never been mentioned in the mainstream media. https://t.co/yUakZ1ZeRk
With its state-of-the-art Fan Simulation Engine (patent pending), FanFan can bring back that soothing sound of computer fans to your Apple Silicon Mac.
This is an April Fools joke.
Health research is based on trust. Health professionals and journal editors reading the results of a clinical trial assume that the trial happened and that the results were honestly reported. But about 20% of the time, said Ben Mol, professor of obstetrics and gynaecology at Monash Health, they would be wrong. As I’ve been concerned about research fraud for 40 years, I wasn’t as surprised as many would be by this figure, but it led me to think that the time may have come to stop assuming that research actually happened and is honestly reported, and assume that the research is fraudulent until there is some evidence to support it having happened and been honestly reported. The Cochrane Collaboration, which purveys “trusted information,” has now taken a step in that direction.
As he described in a webinar last week, Ian Roberts, professor of epidemiology at the London School of Hygiene & Tropical Medicine, began to have doubts about the honest reporting of trials after a colleague asked if he knew that his systematic review showing that mannitol halved death from head injury was based on trials that had never happened. He didn’t, but he set about investigating the trials and confirmed that they hadn’t ever happened. They all had a lead author who purported to come from an institution that didn’t exist and who killed himself a few years later. The trials were all published in prestigious neurosurgery journals and had multiple co-authors. None of the co-authors had contributed patients to the trials, and some didn’t know that they were co-authors until after the trials were published. When Roberts contacted one of the journals, the editor responded that “I wouldn’t trust the data.” Why, Roberts wondered, did he publish the trial? None of the trials have been retracted.
Later Roberts, who headed one of the Cochrane groups, did a systematic review of colloids versus crystalloids only to discover again that many of the trials that were included in the review could not be trusted. He is now sceptical about all systematic reviews, particularly those that are mostly reviews of multiple small trials. He likened the original idea of systematic reviews to searching for diamonds: knowledge that was available if brought together. Now he thinks of systematic reviewing as searching through rubbish. He proposed that small, single-centre trials should be discarded, not combined in systematic reviews.
Mol, like Roberts, has conducted systematic reviews only to realise that most of the trials included either were zombie trials that were fatally flawed or were untrustworthy. What, he asked, is the scale of the problem? Although retractions are increasing, only about 0.04% of biomedical studies have been retracted, suggesting the problem is small. But the anaesthetist John Carlisle analysed 526 trials submitted to Anaesthesia and found that 73 (14%) had false data, and 43 (8%) he categorised as zombie. When he was able to examine individual patient data in 153 studies, 67 (44%) had untrustworthy data and 40 (26%) were zombie trials. Many of the trials came from the same countries (Egypt, China, India, Iran, Japan, South Korea, and Turkey), and when John Ioannidis, a professor at Stanford University, examined individual patient data from trials submitted from those countries to Anaesthesia during a year he found that many were false: 100% (7/7) in Egypt; 75% (3/4) in Iran; 54% (7/13) in India; 46% (22/48) in China; 40% (2/5) in Turkey; 25% (5/20) in South Korea; and 18% (2/11) in Japan. Most of the trials were zombies. Ioannidis concluded that there are hundreds of thousands of zombie trials published from those countries alone.
Others have found similar results, and Mol’s best guess is that about 20% of trials are false. Very few of these papers are retracted.
We have long known that peer review is ineffective at detecting fraud, especially if the reviewers start, as most have until now, by assuming that the research is honestly reported. I remember being part of a panel in the 1990s investigating one of Britain’s most outrageous cases of fraud, when the statistical reviewer of the study told us that he had found multiple problems with the study and only hoped that it was better done than it was reported. We asked if he had ever considered that the study might be fraudulent, and he told us that he hadn’t.
We have now reached a point where those doing systematic reviews must start by assuming that a study is fraudulent until they have some evidence to the contrary. Some supporting evidence comes from the trial having been registered and having ethics committee approval. Andrew Grey, an associate professor of medicine at the University of Auckland, and others have developed a checklist with around 40 items that can be used as a screening tool for fraud (you can view the checklist here). The REAPPRAISED checklist (Research governance, Ethics, Authorship, Plagiarism, Research conduct, Analyses and methods, Image manipulation, Statistics, Errors, Data manipulation and reporting) covers issues like “ethical oversight and funding, research productivity and investigator workload, validity of randomisation, plausibility of results and duplicate data reporting.” The checklist has been used to detect studies that have subsequently been retracted but hasn’t been through the full evaluation that you would expect for a clinical screening tool. (But I must congratulate the authors on a clever acronym: some say that dreaming up the acronym for a study is the most difficult part of the whole process.)
Roberts and others wrote about the problem of the many untrustworthy and zombie trials in The BMJ six years ago with the provocative title: “The knowledge system underpinning healthcare is not fit for purpose and must change.” They wanted the Cochrane Collaboration and anybody conducting systematic reviews to take very seriously the problem of fraud. It was perhaps coincidence, but a few weeks before the webinar the Cochrane Collaboration produced guidelines on reviewing studies where there has been a retraction, an expression of concern, or the reviewers are worried about the trustworthiness of the data.
Retractions are the easiest to deal with, but they are, as Mol said, only a tiny fraction of untrustworthy or zombie studies. An editorial in the Cochrane Library accompanying the new guidelines recognises that there is no agreement on what constitutes an untrustworthy study, screening tools are not reliable, and “Misclassification could also lead to reputational damage to authors, legal consequences, and ethical issues associated with participants having taken part in research, only for it to be discounted.” The Collaboration is being cautious but does stand to lose credibility, and income, if the world ceases to trust Cochrane Reviews because they are thought to be based on untrustworthy trials.
Research fraud is often viewed as a problem of “bad apples,” but Barbara K Redman, who spoke at the webinar, insists that it is a problem not of bad apples but of bad barrels, if not, she said, of rotten forests or orchards. In her book Research Misconduct Policy in Biomedicine: Beyond the Bad-Apple Approach she argues that research misconduct is a systems problem: the system provides incentives to publish fraudulent research and does not have adequate regulatory processes. Researchers progress by publishing research, and because the publication system is built on trust and peer review is not designed to detect fraud, it is easy to publish fraudulent research. The business model of journals and publishers depends on publishing, preferably lots of studies as cheaply as possible. They have little incentive to check for fraud and a positive disincentive in the reputational damage, and possibly legal risk, of retracting studies. Funders, universities, and other research institutions similarly have incentives to fund and publish studies and disincentives to make a fuss about fraudulent research they may have funded or had undertaken in their institution, perhaps by one of their star researchers. Regulators often lack the legal standing and the resources to respond to what is clearly extensive fraud, recognising that proving a study to be fraudulent (as opposed to suspecting it of being fraudulent) is a skilled, complex, and time-consuming process. Another problem is that research is increasingly international, with participants from many institutions in many countries: who then takes on the unenviable task of investigating fraud? Science really needs global governance.
Everybody gains from the publication game, concluded Roberts, apart from the patients who suffer from being given treatments based on fraudulent data.
Stephen Lock, my predecessor as editor of The BMJ, became worried about research fraud in the 1980s, but people thought his concerns eccentric. Research authorities insisted that fraud was rare, didn’t matter because science was self-correcting, and that no patients had suffered because of scientific fraud. All those reasons for not taking research fraud seriously have proved to be false, and, 40 years on from Lock’s concerns, we are realising that the problem is huge, the system encourages fraud, and we have no adequate way to respond. It may be time to move from assuming that research has been honestly conducted and reported to assuming it to be untrustworthy until there is some evidence to the contrary.
Competing interest: RS was a cofounder of the Committee on Publication Ethics (COPE), for many years the chair of the Cochrane Library Oversight Committee, and a member of the board of the UK Research Integrity Office.
Looking Glass is targeted at extremely low-latency use on the local computer; it is not designed to stream over a network or a pipe, but rather works through a block of shared memory. In current testing at a refresh rate of 60Hz it is possible to obtain latency equal to or better than 16 milliseconds with the guest. If the user doesn’t care about VSYNC, this can be further reduced to under a few milliseconds on average.
Unlike network-based streaming applications, Looking Glass does not use any form of compression or color space conversion: all frames are transferred to the viewer (the client application) in 32-bit RGBA without any transformations or modifications. This is possible through the use of a shared memory segment, which enables extremely high-throughput, low-latency guest-to-host communication.
Huge data leak shatters the lie that the innocent need not fear surveillance

Our investigation shows how repressive regimes can buy and use the kind of spying tools Edward Snowden warned us about. Companies such as NSO operate in a market that is almost entirely unregulated. Illustration: Guardian Design

Billions of people are inseparable from their phones. Their devices are within reach — and earshot — for almost every daily experience, from the most mundane to the most intimate. Few pause to think that their phones can be transformed into surveillance devices, with someone thousands of miles away silently extracting their messages, photos and location, activating their microphone to record them in real time.

Such are the capabilities of Pegasus, the spyware manufactured by NSO Group, the Israeli purveyor of weapons of mass surveillance.

NSO rejects this label. It insists only carefully vetted government intelligence and law enforcement agencies can use Pegasus, and only to penetrate the phones of “legitimate criminal or terror group targets”.

Yet in the coming days the Guardian will be revealing the identities of many innocent people who have been identified as candidates for possible surveillance by NSO clients in a massive leak of data.

Without forensics on their devices, we cannot know whether governments successfully targeted these people. But the presence of their names on this list indicates the lengths to which governments may go to spy on critics, rivals and opponents.

What is in the data leak?

The data leak is a list of more than 50,000 phone numbers that, since 2016, are believed to have been selected as those of people of interest by government clients of NSO Group, which sells surveillance software. The data also contains the time and date that numbers were selected, or entered on to a system.
Forbidden Stories, a Paris-based nonprofit journalism organisation, and Amnesty International initially had access to the list and shared access with 16 media organisations including the Guardian. More than 80 journalists have worked together over several months as part of the Pegasus project. Amnesty’s Security Lab, a technical partner on the project, did the forensic analyses.

What does the leak indicate?

The consortium believes the data indicates the potential targets NSO’s government clients identified in advance of possible surveillance. While the data is an indication of intent, the presence of a number in the data does not reveal whether there was an attempt to infect the phone with spyware such as Pegasus, the company’s signature surveillance tool, or whether any attempt succeeded. The presence in the data of a very small number of landlines and US numbers, which NSO says are “technically impossible” to access with its tools, reveals some targets were selected by NSO clients even though they could not be infected with Pegasus. However, forensic examinations of a small sample of mobile phones with numbers on the list found tight correlations between the time and date of a number in the data and the start of Pegasus activity — in some cases as little as a few seconds.

Amnesty examined 67 smartphones where attacks were suspected. Of those, 23 were successfully infected and 14 showed signs of attempted penetration. For the remaining 30, the tests were inconclusive, in several cases because the handsets had been replaced. Fifteen of the phones were Android devices, none of which showed evidence of successful infection. However, unlike iPhones, phones that use Android do not log the kinds of information required for Amnesty’s detective work.
Three Android phones showed signs of targeting, such as Pegasus-linked SMS messages.

Amnesty shared “backup copies” of four iPhones with Citizen Lab, a research group at the University of Toronto that specialises in studying Pegasus, which confirmed that they showed signs of Pegasus infection. Citizen Lab also conducted a peer review of Amnesty’s forensic methods, and found them to be sound.

While the data is organised into clusters, indicative of individual NSO clients, it does not say which NSO client was responsible for selecting any given number. NSO claims to sell its tools to 60 clients in 40 countries, but refuses to identify them. By closely examining the pattern of targeting by individual clients in the leaked data, media partners were able to identify 10 governments believed to be responsible for selecting the targets: Azerbaijan, Bahrain, Kazakhstan, Mexico, Morocco, Rwanda, Saudi Arabia, Hungary, India, and the United Arab Emirates. Citizen Lab has also found evidence of all 10 being clients of NSO.

What does NSO Group say?

You can read NSO Group’s full statement here. The company has always said it does not have access to the data of its customers’ targets. Through its lawyers, NSO said the consortium had made “incorrect assumptions” about which clients use the company’s technology. It said the 50,000 number was “exaggerated” and that the list could not be a list of numbers “targeted by governments using Pegasus”. The lawyers said NSO had reason to believe the list accessed by the consortium “is not a list of numbers targeted by governments using Pegasus, but instead, may be part of a larger list of numbers that might have been used by NSO Group customers for other purposes”. They said it was a list of numbers that anyone could search on an open source system.
After further questions, the lawyers said the consortium was basing its findings “on misleading interpretation of leaked data from accessible and overt basic information, such as HLR Lookup services, which have no bearing on the list of the customers’ targets of Pegasus or any other NSO products … we still do not see any correlation of these lists to anything related to use of NSO Group technologies”. Following publication, they explained that they considered a “target” to be a phone that was the subject of a successful or attempted (but failed) infection by Pegasus, and reiterated that the list of 50,000 phones was too large for it to represent “targets” of Pegasus. They said that the fact that a number appeared on the list was in no way indicative of whether it had been selected for surveillance using Pegasus.

The term HLR, or home location register, refers to a database that is essential to operating mobile phone networks. Such registers keep records on the networks of phone users and their general locations, along with other identifying information that is used routinely in routing calls and texts. Telecoms and surveillance experts say HLR data can sometimes be used in the early phase of a surveillance attempt, when identifying whether it is possible to connect to a phone. The consortium understands NSO clients have the capability through an interface on the Pegasus system to conduct HLR lookup inquiries. It is unclear whether Pegasus operators are required to conduct HLR lookup inquiries via its interface to use its software; an NSO source stressed its clients may have different reasons — unrelated to Pegasus — for conducting HLR lookups via an NSO system.

First we reveal how journalists across the world were selected as potential targets by these clients prior to a possible hack using NSO surveillance tools. Over the coming week we will be revealing the identities of more people whose phone numbers appear in the leak.
They include lawyers, human rights defenders, religious figures, academics, businesspeople, diplomats, senior government officials and heads of state.

Our reporting is rooted in the public interest. We believe the public should know that NSO’s technology is being abused by the governments who license and operate its spyware. But we also believe it is in the public interest to reveal how governments look to spy on their citizens and how seemingly benign processes such as HLR lookups can be exploited in this environment.

The Pegasus project is a collaborative reporting project led by the French nonprofit organisation Forbidden Stories, including the Guardian and 16 other media outlets. For months, our journalists have been working with reporters across the world to establish the identities of people in the leaked data and see if and how this links to NSO’s software.

It is not possible to know without forensic analysis whether the phone of someone whose number appears in the data was actually targeted by a government or whether it was successfully hacked with NSO’s spyware. But when our technical partner, Amnesty International’s Security Lab, conducted forensic analysis on dozens of iPhones that belonged to potential targets at the time they were selected, they found evidence of Pegasus activity in more than half.

One phone that contained signs of Pegasus activity belonged to our esteemed Mexican colleague Carmen Aristegui, whose number was in the data leak and who was targeted following her exposé of a corruption scandal involving her country’s former president Enrique Peña Nieto. The data leak suggests that Mexican authorities did not stop at Aristegui. The phone numbers of at least four of her journalist colleagues appear in the leak, as do those of her assistant, her sister and her son, who was 16 at the time.

Investigating software produced and sold by a company as secretive as NSO is not easy. Its business is surveillance, after all.
It meant a radical overhaul of our working methods, including a ban on discussing our work with sources, editors or lawyers in the presence of our phones.

The last time the Guardian adopted such extreme counter-espionage measures was in 2013, when reporting on documents leaked by the whistleblower Edward Snowden. Those disclosures pulled back the curtains on the vast apparatus of mass surveillance created after 9/11 by western intelligence agencies such as the National Security Agency (NSA) and its British partner, GCHQ. In doing so, they instigated a global debate about western state surveillance capabilities and led to countries, including the UK, admitting their regulatory regime was out of date and open to potential abuse.

The Pegasus project may do the same for the privatised government surveillance industry that has turned NSO into a billion-dollar company. Companies such as NSO operate in a market that is almost entirely unregulated, enabling tools that can be used as instruments of repression for authoritarian regimes such as those in Saudi Arabia, Kazakhstan and Azerbaijan.

The market for NSO-style surveillance-on-demand services has boomed since the Snowden disclosures, which prompted the mass adoption of encryption across the internet. As a result the internet became far more secure, and mass harvesting of communications much more difficult. But that in turn spurred the proliferation of companies such as NSO offering solutions to governments struggling to intercept messages, emails and calls in transit. The NSO answer was to bypass encryption by hacking devices.

Two years ago the then UN special rapporteur on freedom of expression, David Kaye, called for a moratorium on the sale of NSO-style spyware to governments until viable export controls could be put in place.
He warned of an industry that seemed “out of control, unaccountable and unconstrained in providing governments with relatively low-cost access to the sorts of spying tools that only the most advanced state intelligence services were previously able to use”. His warnings were ignored, and the sale of surveillance tools continued unabated.

That GCHQ-like surveillance tools are now available for purchase by repressive governments may give some of Snowden’s critics pause for thought. In the UK, the whistleblower’s detractors argued breezily that spying was what intelligence agencies were supposed to do. We were assured that innocent citizens in the Five Eyes alliance of intelligence powers, comprising Australia, Canada, New Zealand, the UK and US, were safe from abuse. Some invoked the dictum: “If you have done nothing wrong, you have nothing to fear.”

The Pegasus project is likely to put an end to any such wishful thinking. Law-abiding people, including citizens and residents of democracies such as the UK, among them editors-in-chief of leading newspapers, are not immune from unwarranted surveillance. And western countries do not have a monopoly on the most invasive surveillance technologies. We’re entering a new surveillance era, and unless protections are put in place, none of us are safe.

On Tuesday 27 July, at 8pm BST, join the Guardian’s head of investigations, Paul Lewis, for a livestreamed Guardian Live event on the implications of the Pegasus project.
Apple and its security contractor Security Industry Specialists (SIS) were sued on Friday in Massachusetts as part of a multijurisdictional defamation and malicious prosecution complaint brought on behalf of Ousmane Bah, a New York resident misidentified as a shoplifter multiple times in 2018 and 2019.
The lawsuit contends that Apple and SIS exhibited reckless disregard for the truth by misidentifying Bah as the perpetrator of multiple shoplifting crimes at iStores, leading to his unjustified arrest and to his defamation.
The filing [PDF] in US District Court in Massachusetts aims to revive charges relevant to events in Boston that were excluded from related ongoing litigation in New York. A third related case is being heard in New Jersey.
Apple and SIS have a qualified law enforcement privilege that allows them to err in store security-related accusations without being sued for it. However, if they exhibit “reckless disregard for the truth” [PDF], for example by ignoring obvious facts, they lose that privilege.
Among the more startling allegations in the case is that an SIS VP falsely claimed that no SIS employee ever identified Bah to the NYPD or to Apple. The complaint points to an exhibit submitted as evidence: an email from an SIS employee to an NYPD detective that does in fact identify Bah as a shoplifter.
The lawsuit also claims that Apple and SIS selectively deleted video evidence that would have exposed them to potential criminal and civil liability for filing false complaints with the police.
In addition, it asserts Bah’s apprehension was in part due to the application of unreliable facial-recognition technology in the shoplifting incidents in New York.
Bah, who is Black, obtained a New York State temporary learner driver’s permit in March 2018 at the age of 17, when he was an honors student at Bronx Latin Academy, a New York City high school. The document included his height, weight, date of birth, and eye color, but no photograph.
According to the Massachusetts court ﬁling, he had lost the temporary permit by May that year, but had obtained a permanent laminated copy that included his picture.
In Greenwich, Connecticut in April 2018, Apple allegedly detained an individual for stealing store merchandise and identified the individual as Ousmane Bah based on the examination of the temporary learner’s permit he is said to have had on him — this despite the fact that the ID says, “This temporary document is not to be used for identification purposes.”
The complaint states that the person detained was not Bah, who is 5′7″, but a 6′1″ impostor using the lost temporary learner’s permit. Nonetheless, Apple personnel are said to have retained some video surveillance evidence and published the record with the name “Ousmane Bah” through an online system to make it available to SIS and Apple Stores in the Northeastern US.
On May 24, 2018, SIS, acting in a security capacity for Apple, apprehended and handcuffed the impostor for allegedly stealing merchandise from a Paramus, New Jersey Apple Store. Again, it’s claimed the impostor was carrying Bah’s lost learner’s permit and identified himself as such to authorities, or tried to do so — the detained individual is said to have misspelled his stolen name as “Ousama Bah” before correcting the spelling.
Yet the Paramus Police Department apparently did not make any further effort to verify the suspect’s identity, content to accept the identification provided by the SIS employee who apprehended the shoplifter. It’s also claimed SIS told authorities it had video evidence.
“Without probable cause, SIS began linking prior thefts in the region involving the impostor to the Plaintiff,” the complaint says, with SIS representing to police that it had video of these other thefts, such as one at the Short Hills Apple Store near Millburn, NJ, on May 5, 2018.
At this point, it’s alleged that SIS, on behalf of Apple, distributed a “Be on the Lookout” (BOLO) notice with the impostor’s image but the name “Ousmane Bah” as a “known shoplifter.” This is said to have been sent not only to Apple Stores but to police departments in the region.
Then there was the May 31, 2018 theft of a dozen Apple Pencils from an Apple Store in Boston. It’s claimed that an SIS employee in his police report accused Ousmane Bah — who was not in Massachusetts at the time — of the thefts and said there was video to back that up.
According to the complaint, the video depicted the impostor, not Bah, and Apple and SIS had information at the time that their identification of Bah was unreliable and therefore were reckless in their accusation.
In June 2018, Bah appeared in Boston Municipal Court to answer the charges and his attorney asked Apple and SIS to present the video evidence of the thefts to prove his client’s innocence. Apple then told the Suffolk County prosecutor “that the video evidence of the impostor, which would have completely exculpated Ousmane Bah, had been routinely deleted.”
The video from an October 2018 theft misattributed to Bah in Rockaway, New Jersey, was also deleted. Apple and SIS are said to have told the New York court that neither firm has any written policy on video retention.
And as it turned out, the video of the Boston incident turned up eventually — Bah’s attorneys found it during the discovery process. It showed the impostor, not Bah.
On September 18, 2018, the impostor is said to have struck at an Apple Store in Freehold, New Jersey, and escaped. An SIS employee acting on Apple’s behalf again filed a police complaint. The complaint charges that both Apple and SIS knew that identification was unreliable but accused Bah anyway.
The identity of the impostor would be revealed in the following months, the complaint says, when the impostor twice tried to pass himself off as Bah in New York and twice was arrested and booked.
“The arresting officer was able to identify the impostor as Mamadou Barrie, a friend of the Plaintiff, who apparently stole the learner’s permit from the Plaintiff,” the complaint says. “These arrests specifically [noted] that Barrie had pretended to be Ousmane Bah.”
There were more Apple Store thefts in October 2018, the previously mentioned one in Rockaway, New Jersey, and another incident in Trumbull, Connecticut. Apple and SIS again told authorities that Bah was to blame.
Also that month, the impostor is said to have hit an Apple Store in Staten Island, New York. A New York police detective, it’s claimed, published details of the crime and a store video screenshot to a reporting service used by the NYPD called MetrORCA.
The detective subsequently submitted an information request “to the NYPD’s Facial Identification Section (FIS), which identified the photograph as potentially depicting two people, one of whom was purportedly Ousmane Bah — and the other was the actual thief, Mamadou Barrie.”
The complaint further notes that FIS policy is that automated identification is not sufficient to provide the probable cause necessary to make an arrest. Shortly thereafter, an SIS employee saw the MetrORCA bulletin and emailed the NYPD detective to tell him that Apple and SIS had identified Bah as the Staten Island thief.
Around 0400 ET, on November 29, 2018, Paramus Police Department, under a warrant obtained by NYPD, arrested Bah for the New York thefts.
“The warrant issued for Bah’s arrest contained the photo of the impostor (now known to be Mamadou Barrie),” the complaint says, adding that “Barrie in no way physically resembles the Plaintiff, other than being Black.”
Despite the inconsistency noted at the time of the arrest, police took him into custody. This was while Bah was still being wrongfully prosecuted in Boston.
At the New York precinct, police recognized that Bah was not the individual in Apple’s images and charges were dropped.
Two days later, on December 1, 2018, SIS employees apprehended the impostor trying to steal merchandise from an Apple Store in Holyoke, Massachusetts. Holyoke police forwarded the suspect’s fingerprints to the FBI’s National Criminal Identification Center, and they were identified as belonging to Mamadou Barrie.
Yet two weeks later, Bah received a mailed notice of a warrant from the Freehold County District Court for his arrest for the Freehold theft based on the information provided by Apple and SIS.
Around that time, with an SIS employee appearing in a New Jersey court to press charges over the Cherry Hill, New Jersey thefts, a different individual with the same name, “Ousmane Bah,” this one a resident of Willingboro, New Jersey, showed up to answer the summons. He was not the thief, the complaint says, and the charges against the Ousmane Bah from New York were dropped.
Nonetheless, prosecution against Bah continued in multiple states through June 2019.
Presently, the attorneys representing Bah, Daniel Malis and Subhan Tariq, are pursuing lawsuits against Apple and SIS in New York, New Jersey, and now Massachusetts.
Neither Apple nor SIS responded to requests for comment. ®
The Stockfish project strongly believes in free and open-source software and data. Collaboration is what made this engine the strongest chess engine in the world. We license our software under the GNU General Public License, Version 3 (GPL), with the intent to guarantee all chess enthusiasts the freedom to use, share and change all versions of the program.
Unfortunately, not everybody shares this vision of openness. We have come to realize that ChessBase concealed from their customers that Stockfish is the true origin of key parts of their products (see also earlier blog posts by us and the joint Lichess, Leela Chess Zero, and Stockfish teams). Indeed, few customers know they obtained a modified version of Stockfish when they paid for Fat Fritz 2 or Houdini 6 - both Stockfish derivatives - and they thus have good reason to be upset. ChessBase repeatedly violated central obligations of the GPL, which ensures that users of the software are informed of their rights. These rights are explicit in the license and include access to the corresponding sources and the right to reproduce, modify and distribute GPLed programs royalty-free.
In the past four months, we, supported by a certified copyright and media law attorney in Germany, went through a long process to enforce our license. Even though we had our first successes, leading to a recall of the Fat Fritz 2 DVD and the termination of sales of Houdini 6, we were unable to finalize our dispute out of court. Due to ChessBase’s repeated license violations, leading developers of Stockfish have terminated their GPL license with ChessBase permanently. However, ChessBase is ignoring the fact that they no longer have the right to distribute Stockfish, modified or unmodified, as part of their products.
Thus, to enforce the consequences of the license termination, we have filed a lawsuit. This lawsuit is broadly supported by the team of maintainers and developers of Stockfish. We believe we have the evidence, the financial means and the determination to bring this lawsuit to a successful end. We will provide an update to this statement once significant progress has been made.
We would like to thank our fans for their support, and encourage them to download and use the official version of Stockfish that we enjoy developing and sharing freely.
Claiming that “right-wing voices are being censored,” Republican-led legislatures in Florida and Texas have introduced legislation to “end Big Tech censorship.” They say that the dominant tech platforms block legitimate speech without ever articulating their moderation policies, that they are slow to admit their mistakes, and that there is no meaningful due process for people who think the platforms got it wrong.
But it’s not just conservatives who have their political speech blocked by social media giants. It’s Palestinians and other critics of Israel, including many Israelis. And it’s queer people, of course. We have a whole project tracking people who’ve been censored, blocked, downranked, suspended and terminated for their legitimate speech, from punk musicians to peanuts fans, historians to war crimes investigators, sex educators to Christian ministries.
Content moderation is hard at any scale, but even so, the catalog of big platforms’ unforced errors makes for sorry reading. Experts who care about political diversity, harassment and inclusion came together in 2018 to draft the Santa Clara Principles on Transparency and Accountability in Content Moderation but the biggest platforms are still just winging it for the most part.
The situation is especially grim when it comes to political speech, particularly when platforms are told they have a duty to remove “extremism.”
The Florida and Texas social media laws are deeply misguided and nakedly unconstitutional, but we get why people are fed up with Big Tech’s ongoing goat-rodeo of content moderation gaffes.
Let’s start with talking about why platform censorship matters. In theory, if you don’t like the moderation policies at Facebook, you can quit and go to a rival, or start your own. In practice, it’s not that simple.
First of all, the internet’s “marketplace of ideas” is severely lopsided at the platform level, consisting of a single gargantuan service (Facebook), a handful of massive services (YouTube, Twitter, Reddit, TikTok, etc) and a constellation of plucky, struggling, endangered indieweb alternatives.
If none of the big platforms want you, you can try to strike out on your own. Setting up your own rival platform requires that you get cloud services, anti-DDoS, domain registration and DNS, payment processing and other essential infrastructure. Unfortunately, every one of these sectors has grown increasingly concentrated, and with just a handful of companies dominating every layer of the stack, there are plenty of weak links in the chain and if just one breaks, your service is at risk.
But even if you can set up your own service, you’ve still got a problem: everyone you want to talk about your disfavored ideas with is stuck in one of the Big Tech silos. Economists call this the “network effect,” when a service gets more valuable as more users join it. You join Facebook because your friends are there, and once you’re there, more of your friends join so they can talk to you.
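The "more valuable as more users join" dynamic is often approximated by Metcalfe's law, which values a network by its number of possible user-to-user connections. This is an illustrative sketch, not something the article itself invokes, and the function name is my own:

```python
# Toy model of the network effect (Metcalfe's-law-style assumption):
# a network of n users has n * (n - 1) / 2 distinct possible links,
# so perceived value grows roughly with the square of the user base.

def network_value(n: int) -> int:
    """Number of distinct user-to-user links in a network of n users."""
    return n * (n - 1) // 2

# Doubling the user base roughly quadruples the link count, which is
# one crude way to see why each new signup deepens the incumbent's moat.
print(network_value(1_000))  # 499500
print(network_value(2_000))  # 1999000
```

Under this rough model, a rival with a tenth of the incumbent's users offers around a hundredth of the connections, which is the uphill battle any "just start your own platform" advice glosses over.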
Setting up your own service might get you a more nuanced and welcoming moderation environment, but it’s not going to do you much good if your people aren’t willing to give up access to all their friends, customers and communities by quitting Facebook and joining your nascent alternative, not least because there’s a limit to how many services you can be active on.
If all you think about is network effects, then you might be tempted to think that we’ve arrived at the end of history, and that the internet was doomed to be a winner-take-all world of five giant websites filled with screenshots of text from the other four.
But network effects aren’t the only idea from economics we need to pay attention to when it comes to the internet and free speech. Just as important is the idea of “switching costs,” the things you have to give up when you switch away from one of the big services - if you resign from Facebook, you lose access to everyone who isn’t willing to follow you to a better place.
Switching costs aren’t an inevitable feature of large communications systems. You can switch email providers and still connect with your friends; you can change cellular carriers without even having to tell your friends because you get to keep your phone number.
The high switching costs of Big Tech are there by design. Social media may make signing up as easy as a greased slide, but leaving is another story. It’s like a roach motel: users check in but they’re not supposed to check out.
Enter interoperability, the practice of designing new technologies that connect to existing ones. Interoperability is why you can access any website with any browser, and read Microsoft Office files using free/open software like LibreOffice, cloud software like Google Office, or desktop software like Apple iWorks.
An interoperable social media giant - one that allowed new services to connect to it - would bust open that roach motel. If you could leave Facebook but continue to connect with the friends, communities and customers who stayed behind, the decision to leave would be much simpler. If you don’t like Facebook’s rules (and who does?) you could go somewhere else and still reach the people that matter to you, without having to convince them that it’s time to make a move.
That’s where laws like the proposed ACCESS Act come in. While not perfect, this proposal to force the Big Tech platforms to open up their walled gardens to privacy-respecting, consent-seeking third parties is a way forward for anyone who chafes against Big Tech’s moderation policies and their uneven, high-handed application.
Some tech platforms are already moving in that direction. Twitter says it wants to create an “app store for moderation,” with multiple services connecting to it, each offering different moderation options. We wish it well! Twitter is well-positioned to do this - it’s one tenth the size of Facebook and needs to find ways to grow.
But the biggest tech companies show no sign of voluntarily reducing their switching costs. The ACCESS Act is the most important interoperability proposal in the world, and it could be a game-changer for all internet users.
Unfortunately for all of us, many of the people who don’t like Big Tech’s moderation think the way to fix it is to eliminate Section 230, a law that promotes users’ free speech. Section 230 is a rule that says you sue the person who caused the harm, while organizations that host expressive speech remain free to remove offensive, harassing or otherwise objectionable content.
That means that conservative Twitter alternatives can delete floods of pornographic memes without being sued by their users. It means that online forums can allow survivors of workplace harassment to name their abusers without worrying about libel suits.
If hosting speech makes you liable for what your users say, then only the very biggest platforms can afford to operate, and then only by resorting to shoot-first/ask-questions-later automated takedown systems.
There’s not much that the political left and right agree on these days, but there’s one subject that reliably crosses the political divide: frustration with monopolists’ clumsy handling of online speech.
For the first time, there’s a law before Congress that could make Big Tech more accountable and give internet users more control over speech and moderation policies. The promise of the ACCESS Act is an internet where, if you don’t like a big platform’s moderation policies, if you think they’re too tolerant of abusers or too quick to kick someone off for getting too passionate during a debate, you can leave, and still stay connected to the people who matter to you.
Killing CDA 230 won’t fix Big Tech (if that were the case, Mark Zuckerberg wouldn’t be calling for CDA 230 reform). The ACCESS Act won’t either, by itself — but by making Big Tech open up to new services that are accountable to their users, the ACCESS Act takes several steps in the right direction.