10 interesting stories served every morning and every evening.
I’m a diving instructor. I’m also a platform engineer who spends lots of his time thinking about and implementing infrastructure security. Sometimes those two worlds collide in unexpected ways.
A red-footed booby (Sula sula) and a dive flag on the actual boat where I found the vulnerability - somewhere off Cocos Island.
While on a 14-day dive trip around Cocos Island in Costa Rica, I stumbled across a vulnerability in the member portal of a major diving insurer - one that I’m personally insured through. What I found was so trivial, so fundamentally broken, that I genuinely couldn’t believe it hadn’t been exploited already.
I disclosed this vulnerability on April 28, 2025 with a standard 30-day embargo period. That embargo expired on May 28, 2025 - over eight months ago. I waited this long to publish because I wanted to give the organization every reasonable opportunity to fully remediate the issue and notify affected users. The vulnerability has since been addressed, but to my knowledge, I have not received confirmation that affected users were notified. I have reached out to the organization to ask for clarification on this matter.
This is the story of what happened when I tried to do the right thing.
To understand why this is so bad, you need to know how the registration process works. As a diving instructor, I register my students (to get them insured) through my account on the portal. I enter their personal information with their consent - name, date of birth, address, phone number, email - and the system creates an account for them. The student then receives an email with their new account credentials: a numeric user ID and a default password. They might log in to complete additional information, or they might never touch the portal again.
When I registered three students in quick succession, they were sitting right next to me and checked their welcome emails. The user IDs were nearly identical - sequential numbers, one after the other. That’s when it clicked that something really bad was going on.
Now here’s the problem: the portal used incrementing numeric user IDs for login. User XXXXXX0, XXXXXX1, XXXXXX2, and so on. That alone is a red flag, but it gets worse: every account was provisioned with a static default password that was never enforced to be changed on first login. And many users - especially students who had their accounts created for them by their instructors - never changed it.
So the “authentication” to access a user’s full profile - name, address, phone number, email, date of birth - was:
Enter any numeric user ID (they’re sequential, so just count up).
Type the same default password that every account shares on account creation.
There’s a good chance you get in.
That’s it. No rate limiting. No account lockout. No MFA. Just an incrementing integer and a password that might as well have been password123.
I verified the issue with the minimum access necessary to confirm the scope - and stopped immediately after.
I did everything by the book. I contacted CSIRT Malta (MaltaCIP) first - since the organization is registered in Malta, this is the competent national authority. The Maltese National Coordinated Vulnerability Disclosure Policy (NCVDP) explicitly requires that confirmed vulnerabilities be reported to both the responsible organization and CSIRTMalta.
As a fellow diving instructor insured through [the organization] and a full-time Linux Platform Engineer, I am contacting you to responsibly disclose a critical vulnerability I identified within the [the organization]’s user account system.
During recent testing, I discovered that user accounts - including those of underage students - are accessible through a combination of predictable User ID enumeration (incrementing user IDs) and the use of a static default password that is not enforced to be changed upon first login. This misconfiguration currently exposes sensitive personal data (e.g., names, addresses, contact information including phone numbers and emails, dates of birth) and represents multiple GDPR violations.
Exposure of sensitive and underage user data without adequate safeguards
For initial confirmation, I am attaching a screenshot from Member ID XXXXXXX showing the exposed data, partly redacted for privacy reasons.
Additionally, for transparency and validation, I have shared my proof-of-concept code securely via an encrypted paste service: [link redacted]
In the spirit of responsible disclosure, I have already informed CSIRT Malta (in CC) to officially initiate a reporting process, given [the organization]’s operational presence in Malta.
I kindly request that [the organization] acknowledges receipt of this disclosure within 7 days.
I am offering a window of 30 days from today the 28th of April 2025 for [the organization] to mitigate or resolve the vulnerability before I consider any public disclosure.
Please note that I am fully available to assist your IT team with technical details, verification steps and recommendations from a security perspective.
I strongly recommend assigning an IT-Security Point of Contact (PoC) for direct collaboration on this issue.
Thank you very much for your attention to this critical matter. I am looking forward to working with you towards a secure resolution.
Both of these timelines are standard - if anything, generous - in responsible disclosure frameworks.
Two days later, I got a reply. Not from their IT team. From their Data Privacy Officer’s (DPO) law firm.
The letter opened politely enough - they acknowledged the issue and said they’d launched an investigation. They even mentioned they were resetting default passwords and planning to roll out 2FA. Good.
But then the tone shifted:
While we genuinely appreciate your seemingly good intentions and transparency in highlighting this matter to our attention, we must respectfully note that notifying the authorities prior to contacting the Group creates additional complexities in how the matter is perceived and addressed and also exposes us to unfair liability.
Let me translate: “We wish you hadn’t told the government about our security issue.”
It got better:
We also do not appreciate your threat to make this matter public […] and remind you that you may be held accountable for any damage we, or the data subjects, may suffer as a result of your own actions, which actions likely constitute a criminal offence under Maltese law.
So, to be clear: their portal had a default password on every account, exposing personal data including that of children, and I’m the one who “likely” committed a criminal offence by finding it and telling them.
They also sent a declaration they wanted me to sign - while requesting my passport ID - confirming I’d deleted all data, wouldn’t disclose anything, and would keep the entire matter “strictly confidential.” The deadline? End of business the same day they sent it.
This declaration included the following gem:
I also declare that I shall keep the content of this declaration strictly confidential.
That’s an NDA with extra steps: I was being asked to sign away my right to discuss the disclosure process itself - including the fact that I found a vulnerability in their system - under threat of legal action.
Then came the reminders. One “friendly” reminder. Then an “urgent” one. Sign the declaration. De-escalate. Move on. Quietly.
I generally refuse to sign confidentiality clauses in cases involving exposure of sensitive information, and I did so here as well. Coordinated disclosure depends on transparency and trust between researchers and organizations: trust that affected users will be informed, and trust that a report leads to real remediation.
Given that the organization in question had already breached that trust by exposing personal data through weak controls, I wasn’t willing to grant blanket confidentiality that could be used to keep the incident out of public scrutiny. And by trying to silence me through legal threats, they had already made it clear that their priority was reputation management over user data protection. So I stood my ground.
Instead, I offered to sign a modified declaration confirming data deletion. I had no interest in retaining anyone’s personal data, but I was not going to agree to silence about the disclosure process itself.
I also pointed out that, under Malta’s NCVDP, involving CSIRT Malta is part of the expected reporting path - not a hostile act - and that publishing post-remediation analyses is standard practice in the security community.
Their response doubled down. They cited Article 337E of the Maltese Criminal Code - computer misuse - and helpfully reminded me that:
Art. 337E of the Criminal Code also provides that “If any act is committed outside Malta which, had it been committed in Malta, would have constituted an offence […] it shall […] be deemed to have been committed in Malta.” Meaning that your actions would be deemed a criminal offence in Malta, even if committed in another country.
They also made their position on disclosure crystal clear, after I reiterated my refusal to sign their NDA:
We object strongly to the use of [the organization’s name] in any such blogs or conferences you may write/attend as this would be a disproportionate harm to [the organization’s] reputation […]. We reserve our rights at law to hold you responsible for any damages [the organization] may suffer as a result of any such public disclosures you may make.
That’s fine by me. Because here’s the thing: The vulnerability has been fixed. Default passwords have been reset. 2FA is being rolled out. I feel sorry for the developer(s) who had to clean up this mess, but at least the issue is no longer exploitable. Sure, it would have been better if the organization had thanked me and taken responsibility for notifying affected users. If the incident qualified as a personal data breach (which it does) and was likely to result in a (high) risk to individuals - especially given minors were involved - GDPR Articles 33 and 34 generally require notification to the supervisory authority and communication to affected data subjects.
GDPR Article 34(1) When the personal data breach is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall communicate the personal data breach to the data subject without undue delay.
GDPR Article 34(2) The communication to the data subject referred to in paragraph 1 of this Article shall describe in clear and plain language the nature of the personal data breach and contain at least the information and measures referred to in points (b), (c) and (d) of Article 33(3).
I have not received confirmation that those notifications were ever carried out.
My favourite part was the organization’s position on whose fault this actually was:
We contend that it is the responsibility of users to change their own password (after we allocate a default one).
Read that again. A company that assigned the same default password to every account, never forced a password change, and used incrementing numeric IDs as usernames is blaming the users for not securing their own accounts. Accounts that include those of minors.
GDPR Article 5(1)(f) (integrity and confidentiality): Personal data shall be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures.
Under GDPR, the data controller (namely: the organization) is responsible for implementing appropriate technical and organizational measures to ensure data security. A static default password on an IDOR-vulnerable portal is not an “appropriate measure” by any definition.
GDPR Article 24(1) (controller responsibility): Taking into account the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons, the controller shall implement appropriate technical and organisational measures to ensure and to be able to demonstrate that processing is performed in accordance with this Regulation. Those measures shall be reviewed and updated where necessary.
This isn’t an isolated case. The security research community has been dealing with this pattern for decades: find a vulnerability, report it responsibly, get threatened with legal action. It’s so common it has a name - the chilling effect.
Organizations that respond to disclosure with lawyers instead of engineers are telling the world something important: they care more about their reputation than about the data they’re supposed to protect.
And the real irony? The legal threats are the reputation damage. Not the vulnerability itself - vulnerabilities happen to everyone. It’s the response that tells you everything about an organization’s security culture.
What Should Have Happened
Acknowledge the report - they did this, to be fair.
Fix the vulnerability - they started on this too.
Thank the researcher - instead of threatening them with criminal prosecution.
Have a CVD policy - so researchers know how to report issues and what to expect.
Notify affected users - especially the parents of underage members whose data was exposed.
Not try to silence the researcher with NDAs disguised as “declarations.”
What You Can Do
Publish a Coordinated Vulnerability Disclosure policy. It doesn’t have to be complex - maybe begin with a security.txt file (see the sample after this list) and a clear process that favors transparency.
Thank researchers for helping you improve your security posture.
Don’t shoot the messenger. The person reporting the bug is not your enemy. The bug is.
Don’t blame your users for security failures that are your responsibility as a data controller.
Always involve your national CSIRT. It protects you and creates an official record.
Document everything. Every email, every timestamp, every response.
Don’t sign NDAs that prevent you from discussing the disclosure process. But you can agree to delete data (and MUST do so!) without agreeing to silence.
Know your rights. Many jurisdictions have legal protections for good-faith security research. The EU’s NIS2 Directive encourages coordinated vulnerability disclosure.
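For the CVD-policy point above, a minimal security.txt (per RFC 9116, served from /.well-known/security.txt) is enough to tell researchers where to report; every value below is a placeholder, not a real contact:

Contact: mailto:security@example.com
Expires: 2027-01-01T00:00:00.000Z
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en
Policy: https://example.com/security/disclosure-policy
Canonical: https://example.com/.well-known/security.txt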
Because right now, in 2026, reporting a trivial vulnerability exposing personal data - including that of children - still gets met with legal threats instead of gratitude. And that’s a problem for all of us.
...
Read the original on dixken.de »
Dependabot is a noise machine. It makes you feel like you’re doing work, but you’re actually discouraging more useful work. This is especially true for security alerts in the Go ecosystem.
I recommend turning it off and replacing it with a pair of scheduled GitHub Actions, one running govulncheck, and the other running your test suite against the latest version of your dependencies.
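A minimal sketch of what that pair of scheduled workflows could look like follows; the schedule, job names, and action versions here are illustrative, not necessarily the author’s exact setup:

name: scheduled-dependency-checks
on:
  schedule:
    - cron: "0 9 * * 1"  # e.g. once a week
  workflow_dispatch: {}
jobs:
  govulncheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - run: go install golang.org/x/vuln/cmd/govulncheck@latest
      - run: govulncheck ./...
  test-latest-deps:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - run: go get -u ./... && go mod tidy
      - run: go test ./...

The first job only bothers you when a vulnerable symbol is actually reachable; the second tells you whether updating dependencies would break anything before you spend time on it.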
On Tuesday, I published a security fix for filippo.io/edwards25519. The (*Point).MultiScalarMult method would produce invalid results if the receiver was not the identity point.
A lot of the Go ecosystem depends on filippo.io/edwards25519, mostly through github.com/go-sql-driver/mysql (228k dependents only on GitHub). Essentially no one uses (*Point).MultiScalarMult.
Yesterday, Dependabot opened thousands of PRs against unaffected repositories to update filippo.io/edwards25519. These PRs were accompanied by a security alert with a nonsensical, made up CVSS v4 score and by a worrying 73% compatibility score, allegedly based on the breakage the update is causing in the ecosystem. Note that the diff between v1.1.0 and v1.1.1 is one line in the method no one uses.
We even got one of these alerts for the Wycheproof repository, which does not import the affected filippo.io/edwards25519 package at all. Instead, it only imports the unaffected filippo.io/edwards25519/field package.
$ go mod why -m filippo.io/edwards25519
github.com/c2sp/wycheproof/tools/twistcheck
filippo.io/edwards25519/field
We have turned Dependabot off.
But isn’t this toil unavoidable, to prevent attackers from exploiting old vulnerabilities in your dependencies? Absolutely not!
Computers are perfectly capable of doing the work of filtering out these irrelevant alerts for you. The Go Vulnerability Database has rich version, package, and symbol metadata for all Go vulnerabilities.
Here’s the entry for the filippo.io/edwards25519 vulnerability, also available in standard OSV format.
modules:
- module: filippo.io/edwards25519
versions:
- fixed: 1.1.1
vulnerable_at: 1.1.0
packages:
- package: filippo.io/edwards25519
symbols:
- Point.MultiScalarMult
summary: Invalid result or undefined behavior in filippo.io/edwards25519
description: |-
Previously, if MultiScalarMult was invoked on an
initialized point who was not the identity point, MultiScalarMult
produced an incorrect result. If called on an
uninitialized point, MultiScalarMult exhibited undefined behavior.
cves:
- CVE-2026-26958
credits:
- shaharcohen1
- WeebDataHoarder
references:
- advisory: https://github.com/FiloSottile/edwards25519/security/advisories/GHSA-fw7p-63qq-7hpr
source:
id: go-security-team
created: 2026-02-17T14:45:04.271552-05:00
review_status: REVIEWED
Any decent vulnerability scanner will at the very least filter based on the package, which requires a simple go list -deps ./...; this already silences a lot of noise, because it’s common and good practice for modules to separate functionality relevant to different dependents into different sub-packages. For example, it would have avoided the false alert against the Wycheproof repository.
If you use a third-party vulnerability scanner, you should demand at least package-level filtering.
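As a rough manual approximation of package-level filtering (assuming a Unix shell), you can check whether the vulnerable import path appears in your build at all:

$ go list -deps ./... | grep -x filippo.io/edwards25519

In the Wycheproof case above, this prints nothing, because only the unaffected filippo.io/edwards25519/field package is in the dependency graph.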
Good vulnerability scanners will go further, though, and filter based on the reachability of the vulnerable symbol using static analysis. That’s what govulncheck does!
$ go mod why -m filippo.io/edwards25519
filippo.io/sunlight/internal/ctlog
github.com/google/certificate-transparency-go/trillian/ctfe
github.com/go-sql-driver/mysql
$ govulncheck ./...
=== Symbol Results ===
No vulnerabilities found.
Your code is affected by 0 vulnerabilities.
This scan also found 1 vulnerability in packages you import and 2
vulnerabilities in modules you require, but your code doesn’t appear to call
these vulnerabilities.
Use ‘-show verbose’ for more details.
govulncheck noticed that my project indirectly depends on filippo.io/edwards25519 through github.com/go-sql-driver/mysql, which does not make the vulnerable symbol reachable, so it chose not to notify me.
If you want, you can tell it to show the package- and module-level matches.
$ govulncheck -show verbose,color ./...
Fetching vulnerabilities from the database…
Checking the code against the vulnerabilities…
The package pattern matched the following 16 root packages:
filippo.io/sunlight/internal/stdlog
Govulncheck scanned the following 54 modules and the go1.26.0 standard library:
crawshaw.io/sqlite@v0.3.3-0.20220618202545-d1964889ea3c
filippo.io/edwards25519@v1.1.0
filippo.io/keygen@v0.0.0-20240718133620-7f162efbbd87
=== Symbol Results ===
No vulnerabilities found.
=== Package Results ===
Vulnerability #1: GO-2026-4503
Invalid result or undefined behavior in filippo.io/edwards25519
More info: https://pkg.go.dev/vuln/GO-2026-4503
Module: filippo.io/edwards25519
Found in: filippo.io/edwards25519@v1.1.0
Fixed in: filippo.io/edwards25519@v1.1.1
=== Module Results ===
Vulnerability #1: GO-2025-4135
Malformed constraint may cause denial of service in
golang.org/x/crypto/ssh/agent
More info: https://pkg.go.dev/vuln/GO-2025-4135
Module: golang.org/x/crypto
Found in: golang.org/x/crypto@v0.44.0
Fixed in: golang.org/x/crypto@v0.45.0
Vulnerability #2: GO-2025-4134
Unbounded memory consumption in golang.org/x/crypto/ssh
More info: https://pkg.go.dev/vuln/GO-2025-4134
Module: golang.org/x/crypto
Found in: golang.org/x/crypto@v0.44.0
Fixed in: golang.org/x/crypto@v0.45.0
Your code is affected by 0 vulnerabilities.
This scan also found 1 vulnerability in packages you import and 2
vulnerabilities in modules you require, but your code doesn’t appear to call
these vulnerabilities.
...
Read the original on words.filippo.io »
The English-language edition of Wikipedia is blacklisting Archive.today after the controversial archive site was used to direct a distributed denial of service (DDoS) attack against a blog.
In the course of discussing whether Archive.today should be deprecated because of the DDoS, Wikipedia editors discovered that the archive site altered snapshots of webpages to insert the name of the blogger who was targeted by the DDoS. The alterations were apparently fueled by a grudge against the blogger over a post that described how the Archive.today maintainer hid their identity behind several aliases.
“There is consensus to immediately deprecate archive.today, and, as soon as practicable, add it to the spam blacklist (or create an edit filter that blocks adding new links), and remove all links to it,” stated an update today on Wikipedia’s Archive.today discussion. “There is a strong consensus that Wikipedia should not direct its readers towards a website that hijacks users’ computers to run a DDoS attack (see WP:ELNO#3). Additionally, evidence has been presented that archive.today’s operators have altered the content of archived pages, rendering it unreliable.”
More than 695,000 links to Archive.today are distributed across 400,000 or so Wikipedia pages. The archive site is commonly used to bypass news paywalls, and the FBI has sought information on the site operator’s identity with a subpoena to domain registrar Tucows.
“Those in favor of maintaining the status quo rested their arguments primarily on the utility of archive.today for verifiability,” said today’s Wikipedia update. “However, an analysis of existing links has shown that most of its uses can be replaced. Several editors started to work out implementation details during this RfC [request for comment] and the community should figure out how to efficiently remove links to archive.today.”
Guidance published as a result of the decision asked editors to help remove and replace links to the following domain names used by the archive site: archive.today, archive.is, archive.ph, archive.fo, archive.li, archive.md, and archive.vn. The guidance says editors can remove Archive.today links when the original source is still online and has identical content; replace the archive link so it points to a different archive site, like the Internet Archive, Ghostarchive, or Megalodon; or “change the original source to something that doesn’t need an archive (e.g., a source that was printed on paper), or for which a link to an archive is only a matter of convenience.”
...
Read the original on arstechnica.com »
Silicon Valley is tightening its ties with Trumpworld, the surveillance state is rapidly expanding, and big tech’s AI data center buildout is booming. Civilians are pushing back.
In today’s edition of Blood in the Machine:
* Across the nation, people are dismantling and destroying Flock cameras that conduct warrantless vehicle surveillance, and whose data is shared with ICE.
* An Oklahoma man airing his concerns about a local data center project at a public hearing is arrested after he exceeded his allotted time by a couple seconds.
* Uber and Lyft drivers deliver a petition signed by 10,000 gig workers demanding that stolen wages be returned to them.
* PLUS: A climate researcher has a new report that unravels the ‘AI will solve climate change’ mythos, Tesla’s Robotaxis are crashing 4 times as often as humans, and AI-generated public comments helped kill a vote on air quality.
A brief note that this reporting, research, and writing takes a lot of time, resources, and energy. I can only do it thanks to the paid subscribers who chip in a few bucks each month; if you’re able, and you find value in this work, please consider upgrading to a paid subscription so I can continue on. Many thanks, hammers up, and onwards.
Last week, in La Mesa, a small city just east of San Diego, California, observers happened upon a pair of destroyed Flock cameras. One had been smashed and left on the median, the other had key parts removed. The destruction was obviously intentional, and appears perhaps even staged to leave a message: It came just weeks after the city decided, in the face of public protest, to continue its contracts with the surveillance company.
Flock cameras are typically mounted on 8 to 12 foot poles and powered by a solar panel. The smashed remains of all of the above in La Mesa are the latest examples of a widening anti-Flock backlash. In recent months, people have been smashing and dismantling the surveillance devices, in incidents reported in at least five states, from coast to coast.
Bill Paul, who runs the local news outlet San Diego Slackers, and who first reported on the smashed Flock equipment, tells me that the sabotage comes just a month or two after San Diego held a raucous city council meeting over whether to keep operating the Flock cameras. A clear majority of public attendees present were in favor of shutting them down.
There was “a huge turnout against them,” he tells me, “but the council approved continuation of the contract.”
The tenor of the meeting reflects a growing anger and concern over the surveillance technology that’s gone nationwide: Flock, which is based in Atlanta and is currently valued at $7.5 billion, operates automatic license plate readers (ALPR) that have now been installed in some 6,000 US communities. They gather not just license plate images, but other identifying data used to ‘fingerprint’ vehicles, their owners, and their movements. This data can be collected, stored, and accessed without a warrant, making it a popular workaround for law enforcement. Perhaps most controversially, Flock’s vehicle data is routinely accessed by ICE.
If you’ve heard Flock’s name come up recently, it’s likely as a result of their now-canceled partnership with Ring, made instantly famous by a particularly dystopian Super Bowl ad that promised to turn regular neighborhoods into a surveillance dragnet.
Meanwhile, abuses have been prevalent. A Georgia police chief was arrested and charged with using Flock data to stalk and harass private citizens. Flock data has been used to track citizens who cross state lines for abortions when the procedure is illegal in their state. And municipalities have found that federal agencies have accessed local Flock data without their knowledge or consent. Critics claim that this warrantless data collection is Orwellian and unconstitutional: a violation of the 4th Amendment. As a result, civilians from Oregon to Virginia to California and beyond are pushing their governments to abandon Flock contracts. In some cases, they’re succeeding. Cities like Santa Cruz, CA, and Eugene, OR, have cancelled their contracts with Flock.
In Oregon’s case, the public outcry was accompanied by a campaign of destruction against the surveillance devices: Last year, at least six Flock license plate readers mounted on poles located in Eugene and Springfield were cut down and destroyed, according to the Lookout Eugene-Springfield.
A note reading “Hahaha get wrecked ya surveilling fucks” was attached to one of the destroyed poles, and somewhat incredibly, broadcast on the local news.
In Greenview, Illinois, a Flock camera pole was severed at the base and the device destroyed. In Lisbon, Connecticut, police are investigating another smashed Flock camera.
In Virginia, last December, a man was arrested for dismantling and destroying 13 Flock cameras throughout the state over the course of the year. He’s apparently already admitted to doing so, according to local news:
Jefferey S. Sovern, 41, was arrested in October after detectives say he “intentionally destroyed” 13 Flock Safety cameras between April and October of this year. He was charged with 13 counts of destruction of property, six counts of petit larceny and six counts of possession of burglary tools. Sovern admitted to the crimes, according to a criminal complaint filed in Suffolk General District Court, going as far as to say he used vice grips to help him disassemble the two-piece poles. He also admitted to keeping some of the wiring, batteries and solar panels taken from the cameras. Some of the items were recovered by police after they searched the property.
After his arrest, Sovern created a GoFundMe to help cover his legal costs, in which he sheds a little light on his intentions:
My name is Jeff and I appreciate my privacy. I appreciate everyone’s right to privacy, enshrined in the fourth amendment. With the local news outlets finding my legal issues and creating a story that is starting to grow, there has been community support for me that I humbly welcome.
Sovern points his GoFundMe contributors to DeFlock, a website aimed at tracking and countering the rise of Flock cameras in US communities. It counts 46 cities that have officially rejected Flock and other ALPRs since its campaign began.
In fact, it’s hard to think of a tech product or project this side of generative AI that is more roundly opposed and reviled, on a bipartisan level, than Flock, and resistance takes many forms and stripes. Here’s the YouTuber Benn Jordan, showing his viewers how to Flock-proof their license plates and render their vehicles illegible to the company’s data ingestion systems:
In response to such Flock counter-tactics, Florida passed a law last year making it illegal to cover or alter your license plate.
In his GoFundMe, Sovern also mentioned the support for him he’d seen on forums online, so I went over to Reddit to get a sense for how his actions were being received online. Here was the page that shared news of his arrest for destroying the Flock cameras:
There was, in other words, nearly universal support for Sovern’s Flock dismantling campaign. Bear in mind that this is r/Norfolk, and while it’s still reddit users we’re talking about, it’s not like this is r/anarchism here:
The San Diego reddit threads carrying news of the destroyed Flock equipment told a similar story:
There were plenty of outright endorsements of the sabotage:
Off the message boards and in real civic life, Bill Paul, the reporter with the San Diego Slacker, says anger is boiling over, too. He points again to that heated December 2025 city council meeting, in which public outrage was left unaddressed. The city, perhaps aware of the stigma Flock now carries, apparently tried to highlight that their focus was on the “smart streetlights” made by another company, while downplaying the fact that those streetlights run on Flock software.
“San Diego gets to hide behind a slight facade in that their contract is with Ubicquia,” the smart streetlight manufacturer, Paul says, “but the software layer is Flock. You can easily see Flock hardware on retail properties, looking at the same citizens, with zero oversight, and SDPD can claim they have clean hands.”
Weeks later, pieces of smashed Flock cameras littered the ground.
Across the country, in other words, municipal governments are overriding public will to make deals with a profiteering tech company to surveil their citizens and to collaborate with federal agencies like ICE. It might be taken as a sign of the times that in states and cities across the US, thousands of miles apart, those opposed to the technology are refusing to countenance what they view as violations of privacy and civil liberty, and are instead taking up vice grips and metal cutters. And in many cases, they’re getting hailed by their peers as heroes.
If you’ve heard stories of smashed Flock cameras or dismantled surveillance equipment in your neighborhood, please share—drop a link in the comments, or contact me on Signal or at briancmerchant@proton.me.
Thanks to Lilly Irani for the tip on the smashed Flock cams in San Diego.
In case you missed it, I shared my five takeaways on the most recent round of ultraheated AI discourse here:
The exchange was filmed and recorded on YouTube:
Police in Claremore, Oklahoma arrested a local man after he went slightly over his time giving public remarks during a city council meeting opposing a proposed data center. Darren Blanchard showed up at a Claremore City Council meeting on Tuesday to talk about public records and the data center. When he went over his allotted 3 minutes by a few seconds, the city had him arrested and charged with trespassing.

The subject of the city council meeting was Project Mustang, a proposed data center that would be located within a local industrial park. In a mirror of fights playing out across the United States, developer Beale Infrastructure is attempting to build a large data center in a small town and the residents are concerned about water rights, spiking electricity bills, and noise.

The public hearing was a chance for the city council to address some of these concerns, and all residents were given a strict three-minute time limit. The entire event was livestreamed and an archive of it is on YouTube. Blanchard was warned, barely, to “respect the process” by one of the council members, but he was clearly finishing reading from the papers he had brought, was not belligerent, and went over time by just a few seconds. Anyone who has ever attended or watched a city council meeting anywhere will know that people go over their time at essentially any meeting that includes public comment.

Blanchard arrived with documents in hand and questions about public records requests he’d made. During his remarks, people clapped and cheered and he asked that this not be counted against his three minutes. “There are major concerns about the public process in Claremore,” Blanchard said, referencing compliance documents and irregularities he’d uncovered in public records.
Blanchard was then arrested as the crowd jeered in disbelief. Also disconcerting was the way the local news framed the event, with a local anchor defending authorities by claiming he was “warned multiple times.” Seems like a pretty surefire way to make people hate data centers and the governments protecting them even more!
On Wednesday, I headed to Pershing Square in downtown Los Angeles, where dozens of gig workers and organizers with Rideshare Drivers United had assembled to deliver a petition to the California Labor Commission signed by thousands of workers, calling on the body to deliver a settlement on their behalf. Organizers made short speeches on the steps of the square while local radio and TV stations captured the moment.
The Labor Commission is suing the gig companies on drivers’ behalf, alleging that Uber and Lyft stole billions of dollars worth of wages from drivers before Prop 22 was enacted in 2020. The commission is believed to be in negotiations with the gig companies right now that will determine a settlement.
I spoke with one driver, Karen, who had traveled from San Diego to join the demonstration, and asked her why she came. “It’s important we build driver power” she said. “Without driver power, we won’t get what we need, and we just want fairness.” She said she was hoping to claim at least $20,000 in stolen wages.
“We’re fighting for wages that were stolen from us and continue to be stolen from us every single day by these app companies from hell,” RDU organizer Nicole Moore told me. “So we’re marching in downtown L.A. to deliver 10,000 signatures of drivers demanding that the state fight hard for us, and don’t let these companies rip us off.”
According to Tesla’s own numbers, its new RoboTaxis in Austin are crashing at a rate 4 times higher than human drivers. The EV trade publication Electrek reports:
With 14 crashes now on the books, Tesla’s “Robotaxi” crash rate in Austin continues to deteriorate. Extrapolating from Tesla’s Q4 2025 earnings mileage data, which showed roughly 700,000 cumulative paid miles through November, the fleet likely reached around 800,000 miles by mid-January 2026. That works out to one crash every 57,000 miles. The irony is that Tesla’s own numbers condemn it. Tesla’s Vehicle Safety Report claims the average American driver experiences a minor collision every 229,000 miles and a major collision every 699,000 miles. By Tesla’s own benchmark, its “Robotaxi” fleet is crashing nearly 4 times more often than what the company says is normal for a regular human driver in a minor collision, and virtually every single one of these miles was driven with a trained safety monitor in the vehicle who could intervene at any moment, which means they likely prevented more crashes that Tesla’s system wouldn’t have avoided.

Using NHTSA’s broader police-reported crash average of roughly one per 500,000 miles, the picture is even worse: Tesla’s fleet is crashing at approximately 8 times the human rate.
- “The Left Doesn’t Hate Technology, We Hate Being Exploited,” by Gita Jackson at Aftermath.
- “Meta drops $65 million into super PACs to boost tech-friendly state candidates,” by Christine Mui in Politico.
- A great new report from climate researcher Ketan Joshi, “The AI Climate Hoax: Behind the Curtain of How Big Tech Greenwashes Impacts,” has been making headlines and is well worth a read. Perhaps we’ll dig deeper into it in a future issue.
- The LA Times reports that the Southern California air board rejected new pollution rules after an AI-generated flood of made-up comments. Here’s UCLA’s Evan George on how AI poses a unique threat to the civic process.
Okay okay, that’s it for this week. Thanks as always for reading. Hammers up.
...
Read the original on www.bloodinthemachine.com »
On January 16, OpenAI quietly announced that ChatGPT would begin showing advertisements. By February 9th, ads were live. Eight months earlier, OpenAI spent $6.5 billion to acquire Jony Ive’s hardware startup io. They’re building a pocket-sized, screenless device with built-in cameras and microphones — “contextually aware,” designed to replace your phone.
But this isn’t a post about OpenAI. They’re just the latest. The problem is structural.
Every single company building AI assistants is now funded by advertising. (We can quibble about Apple.)
And every one of them is building hardware designed to see and hear everything around you, all day, every day. These two facts are on a collision course, and local on-device inference is the only way off the track.
Before we talk about who’s building it, let’s be clear about what’s being built.
Every mainstream voice assistant today works behind a gate. You say a magic word — “Hey Siri,” “OK Google,” “Alexa” — and only then does the system listen. Everything before the wake word is theoretically discarded.
This was a reasonable design in 2014. It is a dead end for where AI assistance needs to go.
Here’s what happens in a real kitchen at 6:30am (anonymized from one of our test homes; the real version was messier and included a toddler screaming about Cheerios):
Nobody is going to preface that with a wake word. The information is woven into natural speech between two flustered parents getting the family ready to leave the house. The moment you require a trigger, you lose the most valuable interactions — the ones that happen while people are living their lives, not thinking of how to give context to an AI assistant.
You cannot build proactive assistance behind a wake word. The AI has to be present in the room, continuously, accumulating context over days and weeks and months, to build the understanding that makes proactive help possible.
This is where every major AI company is heading. Not just audio — vision, presence detection, wearables, multi-room awareness. The next generation of AI assistants will hear and see everything. Some will be on your face or in your ears all day. They will be always on, always sensing, always building a model of your life.
The question is not whether always-on AI will happen. It’s who
controls the data it collects. And right now, the answer to that
question is: advertising companies.
Here’s where the industry’s response gets predictable. “We encrypt the data in transit.” “We delete it after processing.” “We anonymize everything.” “Ads don’t influence the AI’s answers.” “Read our privacy policy.”

With cloud processing, every user is trusting:
• The company’s current privacy policy
• Every employee with production access
• Every third-party vendor in the processing pipeline
• Every government that can issue a subpoena or national security
letter
• Every advertiser partnership that hasn’t been announced yet
• The company’s future privacy policy
OpenAI’s own ad announcement includes this language: “OpenAI keeps conversations with ChatGPT private from advertisers, and never sells data to advertisers.” It sounds reassuring. But Google scanned every Gmail for ad targeting for thirteen years
before quietly stopping in 2017. Policies change. Architectures don’t.
When a device processes data locally, the data physically cannot leave the network. There is no API endpoint to call. There is no telemetry pipeline. There is no “anonymized usage data” that somehow still contains enough signal to be useful for ad targeting. The inference hardware sits inside the device or in the user’s home, on their network.
Your email is sensitive. A continuous audio and visual feed of your home is something else entirely. It captures arguments, breakdowns, medical conversations, financial discussions, intimate moments, parenting at its worst, the completely unguarded version of people that exists only when they believe nobody is watching. We wrote a deep dive on our memory system in
Building Memory for an Always-On AI That Listens to Your Kitchen.
Amazon already showed us what happens. They eliminated local voice processing.
They planned to feed Alexa conversations to advertisers.
They partnered Ring with a surveillance network that had federal law
enforcement access.
What happens when those same economic incentives are applied to devices that capture everything?
The counterargument is always the same: “Local models aren’t good enough.” Three years ago, that was true. It is no longer true.
You can run a complete ambient AI pipeline today — real-time speech-to-text, semantic memory, conversational reasoning, text-to-speech, etc — on a device that fits next to a cable box (remember those?). No fan noise. A one-time hardware purchase with no per-query fee and no data leaving the building. New model architectures, better compression, and open-source inference engines have converged to make this possible, and the silicon roadmap points in one direction: more capability per watt, every year. We’ve been running always-on prototypes in five homes. The complaints
we get are about the AI misunderstanding context, not about raw model
capability. That’s a memory architecture problem, not a model size
problem.
Are local models as capable as the best cloud models? No. But we’re usually not asking our smart speaker to re-derive the Planck constant.
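To make the shape of that pipeline concrete, here is a minimal sketch of an always-on loop in which nothing ever touches the network; every helper below is a hypothetical stand-in for whatever on-device speech, memory, and language models you run, not Juno’s actual code:

import queue

# Hypothetical on-device components (placeholders, not real libraries):
def transcribe_locally(chunk): ...      # audio bytes -> text, local speech-to-text
def recall(text): ...                   # local semantic-memory lookup
def remember(text): ...                 # append to the local memory index
def needs_response(text): ...           # most ambient speech needs no reply at all
def generate_reply(text, context): ...  # local LLM inference
def speak(reply): ...                   # on-device text-to-speech

audio_chunks = queue.Queue()  # filled by a local microphone-capture thread

def ambient_loop():
    # No sockets, no API keys: audio, transcripts, and memory stay on the device.
    while True:
        chunk = audio_chunks.get()
        text = transcribe_locally(chunk)
        if not text:
            continue
        context = recall(text)
        remember(text)
        if needs_response(text):
            speak(generate_reply(text, context))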
Hardware that runs inference on-device. Models that process audio and video locally and never transmit it. There needs to be a business model based on selling the hardware and
software, not the data the hardware collects. An architecture where the
company that makes the device literally cannot access the data
it processes, because there is no connection to access it
through.
The most helpful AI will also be the most intimate technology ever built. It will hear everything. See everything. Know everything about the family. The only architecture that keeps that technology safe is one where it is structurally incapable of betraying that knowledge. Not policy. Not promises. Not a privacy setting that can be quietly removed in a March software update.
Choose local. Choose edge. Build the AI that knows everything but phones home nothing.
...
Read the original on juno-labs.com »
Andrej Karpathy talks about “Claws”. Andrej Karpathy tweeted a mini-essay about buying a Mac Mini (“The apple store person told me they are selling like hotcakes and everyone is confused”) to tinker with Claws:
I’m definitely a bit sus’d to run OpenClaw specifically […] But I do love the concept and I think that just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls and a kind of persistence to a next level.
Looking around, and given that the high level idea is clear, there are a lot of smaller Claws starting to pop out. For example, on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code (fits into both my head and that of AI agents, so it feels manageable, auditable, flexible, etc.) and runs everything in containers by default. […]
Anyway there are many others - e.g. nanobot, zeroclaw, ironclaw, picoclaw (lol @ prefixes). […]
Not 100% sure what my setup ends up looking like just yet but Claws are an awesome, exciting new layer of the AI stack.
...
Read the original on simonwillison.net »
In December 1990, an application called WorldWideWeb was developed on a NeXT machine at The European Organization for Nuclear Research (known as CERN) just outside of Geneva. This program – WorldWideWeb — is the antecedent of most of what we consider or know of as “the web” today.
In February 2019, in celebration of the thirtieth anniversary of the development of WorldWideWeb, a group of developers and designers convened at CERN to rebuild the original browser within a contemporary browser, allowing users around the world to experience the rather humble origins of this transformative technology.
This project was supported by the US Mission in Geneva through the CERN & Society Foundation.
Ready to browse the World Wide Web using WorldWideWeb?
Select “Document” from the menu on the side.
Click here to jump in (and remember you need to double-click on links):
* History — a brief history of the application which was built in 1989 as a progenitor to what we know as “the web” today.
* Timeline — a timeline of the thirty years of influences leading up to (and the thirty years of influence leading out from) the publication of the memo that led to the development of the first web browser.
* The Browser — instructions for using the recreated WorldWideWeb browser, and a collection of its interface patterns.
* Typography — details of the NeXT computer’s fonts used by the WorldWideWeb browser.
* Inside the Code — a look at some of the original code of WorldWideWeb.
* Production Process — a behind the scenes look at how the WorldWideWeb browser was rebuilt for today.
* Related Links — links to additional historical and technical resources around the production of WorldWideWeb.
* Colophon — a bit of info about the folks behind the project.
...
Read the original on worldwideweb.cern.ch »
DISCOUNTS: Instead of random discounts we prefer keeping the prices stable (already since early 2022)
US Shipping - Now all taxes and fees are included in the shipping cost at checkout
...
Read the original on openscan.eu »
A new law to ensure that batteries are collected, reused and recycled in Europe is entering into force today. The new Batteries Regulation will ensure that, in the future, batteries have a low carbon footprint, use minimal harmful substances, need less raw materials from non-EU countries, and are collected, reused and recycled to a high degree in Europe. This will support the shift to a circular economy, increase security of supply for raw materials and energy, and enhance the EU’s strategic autonomy.
In line with the circularity ambitions of the European Green Deal, the Batteries Regulation is the first piece of European legislation taking a full life-cycle approach in which sourcing, manufacturing, use and recycling are addressed and enshrined in a single law.
Batteries are a key technology to drive the green transition, support sustainable mobility and contribute to climate neutrality by 2050. To that end, starting from 2025, the Regulation will gradually introduce declaration requirements, performance classes and maximum limits on the carbon footprint of electric vehicles, light means of transport (such as e-bikes and scooters) and rechargeable industrial batteries.
The Batteries Regulation will ensure that batteries placed on the EU single market will only be allowed to contain a restricted amount of harmful substances that are necessary. Substances of concern used in batteries will be regularly reviewed.
Targets for recycling efficiency, material recovery and recycled content will be introduced gradually from 2025 onwards. All collected waste batteries will have to be recycled and high levels of recovery will have to be achieved, in particular of critical raw materials such as cobalt, lithium and nickel. This will guarantee that valuable materials are recovered at the end of their useful life and brought back in the economy by adopting stricter targets for recycling efficiency and material recovery over time.
Starting in 2027, consumers will be able to remove and replace the portable batteries in their electronic products at any time of the life cycle. This will extend the life of these products before their final disposal, will encourage re-use and will contribute to the reduction of post-consumer waste.
To help consumers make informed decisions on which batteries to purchase, key data will be provided on a label. A QR code will provide access to a digital passport with detailed information on each battery that will help consumers and especially professionals along the value chain in their efforts to make the circular economy a reality for batteries.
Under the new law’s due diligence obligations, companies must identify, prevent and address social and environmental risks linked to the sourcing, processing and trading of raw materials such as lithium, cobalt, nickel and natural graphite contained in their batteries. The expected massive increase in demand for batteries in the EU should not contribute to an increase of such environmental and social risks.
Work will now focus on the application of the law in the Member States, and the redaction of secondary legislation (implementing and delegated acts) providing more detailed rules.
Since 2006, batteries and waste batteries have been regulated at EU level under the Batteries Directive. The Commission proposed to revise this Directive in December 2020 due to new socioeconomic conditions, technological developments, markets, and battery uses.
Demand for batteries is increasing rapidly. It is set to increase 14-fold globally by 2030 and the EU could account for 17% of that demand. This is mostly driven by the electrification of transport. Such exponential growth in demand for batteries will lead to an equivalent increase in demand for raw materials, hence the need to minimise their environmental impact.
In 2017, the Commission launched the European Battery Alliance to build an innovative, sustainable and globally competitive battery value chain in Europe, and ensure supply of batteries needed for decarbonising the transport and energy sectors.
...
Read the original on environment.ec.europa.eu »
I desperately need a Matt Levine style explanation of how OAuth works. What is the historical cascade of requirements that got us to this place?
There are plenty of explanations of the inner mechanical workings of OAuth, and lots of explanations about how various flows etc work, but Geoffrey is asking a different question:
What I need is to understand why it is designed this way, and concrete examples of use cases that motivate the design
In the 19 years (!) since I wrote the first sketch of an OAuth specification, there has been a lot of minutiae and cruft added, but the core idea remains the same. Thankfully, it’s a very simple core. Geoffrey’s a very smart guy, and the fact that he’s asking this question made me think it’s time to write down an answer to this.
It’s maybe easiest to start with the Sign-In use-case, which is a much more complicated specification (OpenID Connect) than core OAuth. OIDC uses OAuth under the hood, but helps us get to the heart of what’s actually happening.
We send a secret to a place that only the person trying to identify themselves can access, and they prove that they can access that place by showing us the secret.
The rest is just accumulated consensus, in part bikeshedding (agreeing on vocabulary, etc), part UX, and part making sure that all the specific mechanisms are secure.
There’s also an historical reason to start with OIDC to explain how all this works: in late 2006, I was working on Twitter, and we wanted to support OpenID (then 1.0) so that ahem Twitter wouldn’t become a centralized holder of online identities. After chatting with the OpenID folks, we quickly realized that as it was constructed, we wouldn’t be able to support both desktop clients and web sign-in, since our users wouldn’t have passwords anymore! (mobile apps didn’t exist yet, but weren’t far out). So, in order to allow OpenID sign-in, we needed a way for folks using Twitter via alternative clients to sign in without a password.
There were plenty of solutions for this; Flickr had an approach, AWS had one, delicious had one, lots of sites just let random other apps sign-in to your account with your password, etc, but virtually every site in the “Web 2.0” cohort needed a way to do this. They were all insecure and all fully custom.
Rather than building TwitterAuth, I figured it was time to have a standard. Insert XKCD 927:
Fortunately, the charging one has been solved now that we've all standardized on mini-USB. Or is it micro-USB? Shit.
Thankfully, against all odds, we now have one standard for delegated auth. What it does is very simple:
At its core, OAuth for delegation is a standard way to do the following:
* The first half exists to send, with consent, a multi-use secret to a known delegate.
* The other half of OAuth details how the delegate can use that secret to make subsequent requests on behalf of the person that gave the consent in the first place.
That’s it. The rest is (sadly, mostly necessary) noise.
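As a concrete sketch of those two halves, here is roughly what the delegate’s side looks like in the modern OAuth 2.0 authorization-code flow (the post is about OAuth generally, which began as OAuth 1.0); every endpoint, client credential, and scope below is a placeholder rather than any particular provider’s API:

import urllib.parse
import requests

AUTHORIZE_URL = "https://provider.example/oauth/authorize"  # placeholder
TOKEN_URL = "https://provider.example/oauth/token"          # placeholder
CLIENT_ID = "my-client-id"                                  # placeholder
CLIENT_SECRET = "my-client-secret"                          # placeholder
REDIRECT_URI = "https://delegate.example/callback"          # placeholder

# First half: send the user to the provider to consent. The provider redirects
# back to REDIRECT_URI with a one-time code, which the delegate exchanges for a
# multi-use secret (the access token).
def consent_url(state):
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "read",
        "state": state,  # anti-CSRF value checked when the user comes back
    }
    return AUTHORIZE_URL + "?" + urllib.parse.urlencode(params)

def exchange_code(code):
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

# Second half: use the secret to make requests on the user's behalf.
def fetch_profile(access_token):
    resp = requests.get("https://provider.example/api/me",  # placeholder
                        headers={"Authorization": "Bearer " + access_token})
    resp.raise_for_status()
    return resp.json()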
Obviously, the above elides absolute volumes of detail about how this is done securely and in a consistent interoperable way. This is the unenviable work of standards bodies. I have plenty of opinions on the pros and cons of our current standards bodies, but that’s for another time.
There are very credible arguments that the-set-of-IETF-standards-that-describe-OAuth are less a standard than a framework. I’m not sure that’s a bad thing, though. HTML is a framework, too – not all browsers need to implement all features, by design.
OIDC itself is an interesting thing — immediately after creating OAuth, we realized that we could compose OpenID’s behaviour out of OAuth, even though it was impossible to use OpenID to do what OAuth did. For various social, political, technical, and operational reasons it took the better part of a decade to write down the bits to make that insight a thing that was true in the world. I consider it one of my biggest successes with OAuth that I was in no way involved in that work. I don’t have children, but know all the remarkable and complicated feelings of having created something that takes on a life of its own.
More generally, though, authentication and authorization are complicated, situated beasts, impossible to separate from the UX and architectural concerns of the systems that incorporate them.
The important thing when implementing a standard like OAuth is to understand first what you’re trying to do and why. Once that’s in place, the how is usually a “simple” question of mechanics with fairly constrained requirements. I think that’s what makes Geoffrey’s question so powerful – it digs into the core of the reason why OAuth is often so inscrutable to so many: the complicated machinery of the standard means that the actual goals it encodes are lost.
Hopefully, this post helps clear that up!
...
Read the original on leaflet.pub »