10 interesting stories served every morning and every evening.
TLDR: Despite claiming to backup all your data, Backblaze quietly stopped backing up OneDrive and Dropbox folders - along with potentially many other things.
For ten years I have been using Backblaze for my personal computer backup. Before 2015 I would back up files to one of two large external hard disks, rotating the drives between off-site locations: first my father’s house, and after I moved to the UK, my office drawers.
In 2015 Backblaze seemed like a good bet. Unlike CrashPlan, their software wasn’t a bloated Java app, yet they still offered unlimited storage: if you could cram it into your PC, they would back it up. With their yearly hard drive reliability reports making good press and plenty of personal recommendations from friends and colleagues, their service sounded great. I installed the software, ran it for several weeks, and sure enough my data was safely stored in their cloud.
I had further reason to be impressed when, several years later, one of my hard drives failed. I made use of their “send me a hard drive with my stuff on it” service, and a drive turned up filled with my precious data. That, for me, was proof that this system worked, and that it worked well.
And so I recommended Backblaze for years. What do you do for backup? I would extol the virtues of Backblaze, and they made many sales from such recommendations.
There were a few things I didn’t like. The app could use a lot of memory, especially after a large import of photographs. The website, which I often used to restore single files or folders, was slow and clunky to use. The Windows app in particular had an early-2000s aesthetic and cramped lists. There was the time they leaked all your filenames to Facebook, but they probably fixed that.
But no matter, small problems for the peace of mind of having all my files backed up.
Backup software is meant to back up your files. Which files? Well, the files you need. Given that everyone is different, with different workflows and file types, the ideal is to back up all of your files. No backup provider knows what I will need in the future, so the provider must plan accordingly.
My first troubling discovery came in 2025, when I made several errors, then did a git push -f to GitHub and blew away the history of a half-decade-old repo. No data was lost, but the log of changes was. No problem, I thought, I’ll just restore it from Backblaze. Sadly, it was not to be. At some point Backblaze had started to ignore .git folders.
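As an aside: a repository’s entire history - every commit, every branch, and the reflog - lives inside its .git directory, which is exactly why a backup that silently skips it is so damaging. A minimal sketch (the repo and file names are illustrative) of what the reflog can and cannot save:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email "you@example.com"
git config user.name "you"
echo one > file.txt
git add file.txt && git commit -qm "first"
echo two > file.txt
git commit -qam "second"
git reset -q --hard HEAD~1   # simulate blowing away the latest commit
git reflog | head -n 3       # the "lost" commit is still reachable locally...
git reset -q --hard HEAD@{1} # ...and can be restored - but only while .git survives
```

The reflog is stored inside .git itself, so once that folder is gone - or was never backed up - this escape hatch goes with it.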
This annoyed me. Firstly, I needed that folder and Backblaze had let me down. Secondly, within the Backblaze preferences I could find no way to re-enable it. In fact, looking at the list of exclusions, I could find no mention of .git whatsoever.
This made me wonder - I had checked the exclusions list when I installed Backblaze 9 years before, had I missed it? Had I missed anything else?
Well, lesson learned, I guess. But then a week ago I came across this thread on Reddit: “Doesn’t back up Dropbox folder??”. A user was surprised to find their Dropbox folder no longer being backed up. Alarmed, I logged into Backblaze, and lo and behold, my OneDrive folder was missing.
Backblaze has one job: back up my stuff. And apparently they are unable to do that job - or rather, they have decided not to.
Let’s take an aside.
A reasonable person might point out that those files on OneDrive are already being backed up - by OneDrive! No. Dropbox and OneDrive are file-syncing services - they sync your files to the cloud - and they offer only limited protection. OneDrive and Dropbox retain deleted files for just one month. Backblaze has one-year file retention, or, if you pay per GB, unlimited retention. And while OneDrive retains version history for longer, Dropbox only keeps versions for a month - again, unless you pay for more. Your files are less secure, and less backed up, sitting in a cloud storage provider’s folder than they would be just sitting on your desktop.
And that’s assuming your cloud provider is playing ball. If Microsoft or Dropbox bans your account you may find yourself with no backup whatsoever.
For me the larger issue is that they never told us. My OneDrive folder sits at 383GB. You would think that, having decided to no longer back this up, they might send an email, an alert, or some other notification. Of course not.
Nestled into their release notes under “Improvements” we see:
The Backup Client now excludes popular cloud storage providers from backup, including both mount points and cache directories. This prevents performance issues, excessive data usage, and unintended uploads from services like OneDrive, Google Drive, Dropbox, Box, iDrive, and others. This change aligns with Backblaze’s policy to back up only local and directly connected storage.
First, I would hardly call this change in policy an improvement; it’s hard to imagine anyone reading it as anything other than a downgrade in service. Second, does Backblaze believe most of its users read their release notes?
And if you joined today and looked at their list of file exclusions you would find no reference to Dropbox or OneDrive. No mention of Git either.
Here’s the thing: today they don’t back up Git or OneDrive. Who’s to say they won’t add to the list tomorrow? Maybe some obscure file format that’s critical to your workflow. Or they’ll ignore a file extension that just happens to be the same as one used by your DAW or 3D modelling software. And they won’t tell you. They won’t even list it on their site.
By deciding not to back up everything, Backblaze has made it as if they are backing up nothing.
But really this feels like a promise broken. Back in 2015 their website proudly proclaimed:
All user data included by default
No restrictions on file type or size
Protect the digital memories and files that matter most to you.
File backup is a matter of trust. You are paying a monthly fee so that if and when things go wrong you can get your data back. By silently changing the rules, Backblaze has not simply eroded my trust, but swept it away.
I wrote this to warn you: Backblaze is no longer doing their part; they are no longer backing up your data. Some of your data, sure, but not all of it.
Finally let me leave you with Backblaze’s own words from 2015:
They promised to simplify backup. They succeeded - they don’t even do the backup part anymore.
...
Read the original on rareese.com »
Chicago-based music superfan Aadam Jacobs has been recording the concerts he attends since the 1980s, amassing an archive of over 10,000 tapes. Now 59, Jacobs knows that these cassettes are going to degrade over time, so he agreed to let volunteers from the Internet Archive, the nonprofit digital library, digitize the tapes.
So far, about 2,500 of these tapes have been posted on the Internet Archive, including some rare gems like a Nirvana performance from 1989. (The group wouldn’t break through to mainstream audiences until they released the single “Smells Like Teen Spirit” in 1991.) Within the collection, you can also find previously unknown recordings from influential artists like Sonic Youth, R.E.M., Phish, Liz Phair, Pavement, Neutral Milk Hotel, and a whole bunch of other punk groups.
For many of these recordings, Jacobs was using pretty mediocre equipment, but the volunteer audio engineers working with the Internet Archive have made these tapes sound great.
One volunteer, Brian Emerick, drives to Jacobs’ house once a month to pick up more boxes of tapes — he has to use anachronistic cassette decks to play the tapes, which get converted into digital files. From there, other volunteers clean up, organize, and label the recordings, even tracking down song names from forgotten punk bands.
Sometimes, the internet is good. And so is this Tracy Chapman recording from 1988.
...
Read the original on techcrunch.com »
I wrote to Flock’s privacy contact to opt out of their domestic spying program:
I am a resident of California. As such, and because you are subject to the CCPA, delete all information about me, my vehicle, and other household members from all of your databases. I do not give you permission to collect or store data about me, my vehicles, or my relatives, in any future situation.
Dear [misspelled name, i.e. not copied and pasted],
Your request cannot be completed at this time.
Thank you for submitting your privacy request. At this time, we are unable to process this request for the reasons detailed below.
Flock Safety provides its services to our customers, and our customers are owners and controllers of the data Flock Safety processes on their behalf. Flock Safety processes data as a service provider and processor for our customers and as a result, we are unable to directly fulfill your request. We recommend contacting the organization that engaged Flock Safety’s services to submit your request, as they are responsible for assessing and responding to it.
Here are a few additional points about Flock Safety’s data collection and privacy practices:
* Customer Contracts: Flock Safety’s processing activity as a service provider and processor is governed by the contract we have with our customers, which captures their instructions and the limitations on how Flock Safety may process their data. Flock Safety’s customers own the data and make all decisions around how such data is used and shared.
* No Sale of Data: Because Flock Safety’s customers own the data, Flock Safety may only process the data in accordance with our customer’s instructions, as outlined in our contracts with customers. Flock Safety is not permitted to sell, publish, or exchange such data for our own commercial purposes.
* Information Collected: Where Flock Safety’s customers leverage License Plate Reader (LPR) technology, the LPRs do not process sensitive information like names or addresses. Instead, LPRs only capture images of publicly available and visible vehicle characteristics that are taken in the public view.
* Purpose: Flock Safety customers use data for security purposes, including managing public safety or responding to safety concerns and reports. Additionally, such data may be used to help solve crimes and provide objective evidence.
* Retention: By default, Flock Safety’s systems only retain data for 30 days, which means that any data collected on behalf of customers is permanently hard deleted on a rolling 30 day basis. Flock Safety customers are able to adjust this retention period based on their local laws or policies.
For more information about how Flock Safety processes data, please refer to our Privacy Policy and LPR Policy.
I think that’s legally inaccurate. They’re the entity collecting and processing my personally identifiable information, and my non-lawyer reading of the California Consumer Privacy Act (CCPA) would seem to obligate them to comply with my request. I haven’t decided to engage a lawyer yet, but neither have I ruled it out.
...
Read the original on honeypot.net »
Flock Safety markets AI surveillance that goes far beyond reading license plates; color, bumper stickers, dents, and other features are used to build databases and identify movement patterns. These systems are spreading rapidly, often without oversight, and are accessible to police without a warrant. They raise serious privacy and legal concerns, and contribute to a nationwide trend toward mass surveillance.
While this and other systems like it claim to reduce crime, there is little evidence to support that claim - and significant risk of abuse. Real public safety comes from investing in communities, not stalking them.
Flock Safety markets its devices as “AI-powered precision policing technology” - far beyond basic license plate readers (ALPRs) (Flock Safety). The system uses AI to create a “Vehicle Fingerprint” - identifying cars not only by license plate, but also by color, make and model, roof racks, dents/damage, wheel type, and more. Even bumper sticker placement is analyzed. This lets law enforcement search for a “blue sedan with damage on the left side” even without a license plate.
But the surveillance goes deeper. Using a feature called “Convoy Analysis”, the system can detect vehicles that frequently appear near each other - suggesting associations between drivers or accomplices. The platform can also flag vehicles that routinely travel to the same locations across time. Flock describes this as a way to “identify suspect vehicles traveling together” or “pinpoint associates” - functionality confirmed in both their marketing and police testimonials (GovTech, ACLU).
The data is logged and made searchable across a nationwide law enforcement network - which officers in subscribing agencies can access without a warrant. According to Flock, the system can automatically flag a vehicle based on its history, route, or presence in multiple locations linked to a crime (Flock HOA Marketing).
While these tools may aid in locating stolen cars or missing persons, they also create a detailed record of everyone’s movements, associations, and routines. That data has already been misused - like when a Kansas police chief used Flock cameras 228 times to stalk an ex-girlfriend and her new partner without cause (Local12).
The scope of this tracking becomes clear when you see real-world examples. In 2025, a journalist drove 300 miles across rural Virginia and was captured by nearly 50 surveillance cameras operated by 15 different law enforcement agencies. When he requested his own surveillance footage, he discovered the cameras had documented patterns that made his behavior “predictable to anyone looking at it.” Most troubling: while the journalist couldn’t remember specific dates he’d made certain trips, police would know instantly - without any warrant or suspicion of wrongdoing (Cardinal News).
See also:
EFF: How ALPRs Work,
The Secure Dad on Flock Cameras,
Compass IT: “Privacy Concerns with Flock”,
ACLU: Flock is building a new AI-driven mass surveillance system,
Wikipedia: Flock Safety
How Widespread Are These Cameras?
Understanding what Flock cameras are leads to a natural question: how common are they in our communities?
The crowdsourced map made available on DeFlock.me currently shows roughly half of the >100,000 Flock AI cameras nationwide. Here are examples from three major cities showing how pervasive this surveillance has become:
These systems are expanding rapidly, often with little public debate or oversight. The Atlas of Surveillance, maintained by the Electronic Frontier Foundation, has documented over 3,000 law enforcement and government agencies using Flock products as of 2025 - a number growing monthly.
The Fourth Amendment was written in response to the British Crown’s “general warrants” - broad authorizations to search anyone, anywhere, anytime. Mass surveillance revives that threat in digital form. Simply moving freely in public should not require that you be profiled and scrutinized.
It is important to point out that the courts have repeatedly ruled so-called “dragnet warrants,” often using cell phone GPS locations, unconstitutional under the Fourth Amendment. But Flock’s status as a private company means it can collect and sell data with fewer restrictions, exploiting a legal gray zone which courts have yet to fully address.
“If you’ve got nothing to hide, you’ve got nothing to fear” is a tempting thought - until someone misuses your information. Privacy isn’t about hiding wrongdoing. It’s about autonomy, dignity, and the ability to live free from unjust scrutiny. “Saying you don’t care about privacy because you have nothing to hide is like saying you don’t care about free speech because you have nothing to say.” - Edward Snowden
As one observer put it: “While today they are no threat to me…circumstances change, leadership changes, laws change. When you really boil this down, what is this nationwide system? What did Flock really make? It’s a weapon. A silent weapon. Right now it targets what many would agree are criminals. But with the flip of a switch this system can be used to target or oppress anybody the people in power decide is a threat.”
We are fast approaching a world in which going about one’s business in public means being entered into a law enforcement database. Automated license plate readers collect location data on millions of people with no suspicion of wrongdoing, creating vast databases of where we go and when.
Flock cameras and similar surveillance tools raise serious Fourth Amendment concerns by enabling broad, warrantless tracking of people’s movements. In 2024, a trial court held that the Flock network functioned as a “dragnet over the entire city.” The judge in the case equated it to placing GPS trackers on every vehicle - a practice that the U.S. Supreme Court has ruled requires a warrant (Virginia Mercury, The Virginian Pilot).
The American Civil Liberties Union (ACLU) warns that automatic license plate readers (ALPRs) are becoming tools for routine mass location tracking and surveillance, with too few rules governing their use. These systems can collect and store data on millions of innocent drivers, creating detailed records of people’s movements without their knowledge or consent. (ACLU)
Legal scholars have highlighted the broader implications of such surveillance. Neil Richards, writing in the Harvard Law Review, emphasizes that surveillance can chill the exercise of civil liberties, particularly intellectual privacy, and increase the risk of blackmail, coercion, and discrimination. (Harvard Law Review)
Flock’s data further enables already biased enforcement. In Oak Park, Illinois, 84% of drivers stopped using Flock camera alerts were Black - despite the town being only 21% Black. (Freedom to Thrive).
See also:
ACLU on Unaccountable Surveillance Tech
Mass surveillance isn’t just about policing; there are major business interests involved.
Flock Safety collaborates with law enforcement agencies to promote the adoption of its license plate recognition cameras by encouraging private entities such as businesses and HOAs to share their footage. This practice broadens the surveillance net by granting access to what would otherwise have been private data (Flock Safety FAQ).
Instances have been reported where HOAs installed Flock cameras on public roads, leading to debates over the extent of surveillance and the privacy rights of residents and visitors (Oaklandside), (Forest Brooke HOA).
The ACLU has highlighted that the expansive reach of these surveillance networks could enable law enforcement to construct detailed profiles of individuals’ movements and associations, underscoring the need for transparency and oversight (ACLU).
Additionally, Flock markets its surveillance technology to employers and retail establishments, further blurring the lines between public safety initiatives and profit-driven surveillance. For example, major retail property owners have entered into agreements to share AI-powered surveillance feeds directly with law enforcement, expanding the scope of monitoring beyond public spaces. (Forbes) [Mirror]
Lowe’s is a significant private client of Flock Safety, having implemented their systems in numerous locations to enhance security and deter theft.
While Flock specifically does not offer facial recognition (today), Lowe’s has faced legal troubles over its use of facial recognition systems from other vendors. In 2019, a class action lawsuit was filed in Cook County Circuit Court, alleging that Lowe’s used facial recognition software to track customers’ movements without their consent, violating Illinois’ Biometric Information Privacy Act (BIPA). The lawsuit claimed that Lowe’s collected and stored biometric data from customers and shared it with other retailers. (Security InfoWatch)
Some justify these systems as making us safer, but the reality is more complicated.
Flock advertises a drop in crime, but the true cost is a culture of mistrust and preemptive suspicion. As the EFF warns, communities are being sold a false promise of safety - at the expense of civil rights (EFF).
A 2019 report by the NAACP Legal Defense Fund warned that predictive policing tools premised on biased data will reflect that bias, reinforcing existing discrimination in the criminal justice system. These tools may appear objective, but instead often amplify historic injustice under a veneer of scientific credibility (NAACP LDF).
True safety comes from healthy, empowered communities; not automated suspicion. Community-led safety initiatives have demonstrated significant results: North Lawndale saw a 58% decrease in gun violence after READI Chicago began implementing their program there. In cities nationwide, the presence of local nonprofits has been statistically linked to reductions in homicide, violent crime, and property crime (Brennan Center, The DePaulia, American Sociological Association).
Zooming out, Flock is just one part of a larger movement toward ubiquitous surveillance.
Flock’s expansion is part of a broader movement toward ubiquitous mass surveillance - where your associations, online comments, purchases, movements, and more may be logged, indexed, analyzed by AI, and made easily searchable by almost any government agency at any time.
This progression from data collection to surveillance follows a familiar pattern in tech: tools sold for convenience often evolve into tools of control.
Bruce Schneier, a prominent cryptographer and privacy advocate, put it simply: “Surveillance is the business model of the Internet.” What begins as data collection for convenience or security often evolves into persistent monitoring, normalization of tracking, and the loss of autonomy.
As Edward Snowden warned: “A child born today will grow up with no conception of privacy at all. They’ll never know what it means to have a private moment to themselves - an unrecorded, unanalyzed thought.”
In Dunwoody, Georgia, drones are now dispatched from Flock Safety “nests” to respond to 911 calls autonomously, often arriving in under 90 seconds (Axios).
In California, 480 high-tech cameras were recently installed to surveil Oakland’s highways - tracking license plates, bumper stickers, and vehicle types - with alerts sent to law enforcement in real-time (AP News).
This surveillance infrastructure extends far beyond law enforcement. The U.S. military has spent at least $3.5 million on a tool called “Augury” that monitors “93% of internet traffic,” capturing browsing history, email data, and sensitive cookies from Americans - all “without informed consent.” Senator Ron Wyden has received whistleblower complaints about this warrantless surveillance program (VICE).
Meanwhile, the current administration is working with Palantir Technologies to create what Ron Paul calls a “big ugly database” - a comprehensive collection of all information held by federal agencies on all U.S. citizens. This would include health records, education records, tax returns, firearm purchases, and associations with any groups labeled “extremist.” Palantir, funded by the CIA’s In-Q-Tel venture capital firm, is “literally the creation of the surveillance state” (OC Register).
Even basic tools we use daily are being transformed into surveillance instruments. Recent court rulings now allow the government to order companies like OpenAI to indefinitely preserve all ChatGPT conversations. Users who thought they were having private conversations - like “talking to a friend who can keep a secret” - discovered this only through web forums, not company disclosure. The judge’s order enables what one user called a “nationwide mass surveillance program” disguised as a civil discovery process (TechRadar).
This pattern repeats throughout history: people abandon liberty for promises of safety. After 9/11, many supported the PATRIOT Act. During COVID, many embraced mask and vaccine mandates. After the 2008 financial crisis, many supported bailouts because leaders said they had to “abandon free-market principles to save the free-market system.” Today, some support mass surveillance because they believe it will target only “the right people” - but circumstances change, leadership changes, laws change.
See also:
Ars Technica: “AI Cameras to Ensure Good Behavior”,
Video: Predictive Surveillance Trends
So where is all of this heading? The trajectory is troubling.
Flock’s cameras capture detailed information about the daily lives of anyone passing by, without offering a genuine opt-out mechanism. Concurrently, Palantir Technologies has secured a $30 million contract with ICE, aiming to develop a system that consolidates sensitive personal data such as biometrics, geolocation, and other personal identifiers from various federal agencies, facilitating near real-time tracking and categorization of individuals for immigration enforcement purposes (Wired). It should be no surprise that this will also not offer any meaningful opt-out mechanism.
The integration of surveillance technologies such as Flock Safety’s license plate readers and Palantir’s ImmigrationOS platform signifies a shift toward comprehensive monitoring of individuals’ movements and behaviors. It is not difficult to imagine the scope of such systems’ usage growing with time.
These developments raise concerns about the erosion of privacy and the potential for misuse of aggregated data. The pervasive nature of such surveillance systems means that individuals are monitored without explicit consent, and the data collected can be repurposed beyond its original intent. As these technologies become more entrenched, the line between public safety and invasive oversight blurs, prompting critical discussions about the balance between security and individual freedoms.
Some of the most chilling validations of mass surveillance come not from critics - but from the very people promoting it. These aren’t out-of-context slips; they are open endorsements of a world where privacy is sidelined in favor of control, compliance, and convenient enforcement.
“Anything technology they think, ‘Oh it’s a boogeyman. It’s Big Brother watching you,’ … No, Big Brother is protecting you.”
- Eric Adams, NYC Mayor (Politico, 2022)
New York’s mayor casually rebrands Orwell’s authoritarian icon as a guardian figure. It’s a startling reversal - not a warning about overreach, but a defense of it.
“Instead of being reactive, we are going to be proactive… [we] use data to predict where future crimes are likely to take place and who is likely to commit them… then deputies would find those people and take them out.”
- Chris Nocco, Pasco County Sheriff (Tampa Bay Times, 2020)
This “Minority Report”-style program led to harassment of innocent people - and was ultimately found unconstitutional in court (Institute for Justice). A rare win, but a stark example of where unchecked surveillance can go.
“The use of net flow data by NCIS does not require a warrant.”
- Charles E. Spirtos, Navy Office of Information (VICE, 2024)
The military’s position on monitoring Americans’ internet traffic without judicial oversight. This statement came after a whistleblower complained about warrantless surveillance activities to Senator Ron Wyden’s office.
“Tech firms should not develop their systems and services, including end-to-end encryption, in ways that empower criminals or put vulnerable people at risk.”
- Priti Patel, UK Home Secretary (UK Govt, 2019; Infosecurity Magazine)
The logic: protecting everyone’s privacy is dangerous. This kind of framing justifies backdoors into secure systems - which inevitably get abused.
“The risk [of built-in weaknesses]… is acceptable because we are talking about consumer products… and not nuclear launch codes.”
- William Barr, U.S. Attorney General (TechCrunch, 2019)
A clear “rules for thee but not for me” mentality. Your data, messages, and devices don’t deserve the same protections as the government’s - because you’re just a civilian.
China exploited a covert surveillance interface - originally built for lawful access by U.S. law enforcement - to tap into Americans’ private phone records, messages, and geolocation data. (CISA)
Telecom providers are required by law to build these backdoors for law enforcement. The “Salt Typhoon” incident shows the risk: once a backdoor exists, it can be discovered and abused - and not just by “the good guys.” (EFF, Reason)
...
Read the original on stopflock.com »
jj is the name of the CLI for Jujutsu. Jujutsu is a DVCS, or “distributed version control system.” You may be familiar with other DVCSes, such as git, and this tutorial assumes you’re coming to jj from git.
So why should you care about jj? Well, it has a property that’s pretty rare in the world of programming: it is both simpler and easier than git, but at the same time, it is more powerful. This is a pretty huge claim! We’re often taught, correctly, that there exist tradeoffs when we make choices. And “powerful but complex” is a very common tradeoff. That power has been worth it, and so people flocked to git over its predecessors.
What jj manages to do is create a DVCS that takes the best of git and the best of Mercurial (hg) and synthesizes them into something new, yet strangely familiar. In doing so, it manages to offer a smaller number of essential tools while also making them more powerful, because they work together in a cleaner way. Furthermore, more advanced jj usage gives you additional powerful tools in your VCS sandbox that are very difficult to replicate with git.
I know that sounds like a huge claim, but I believe that the rest of this tutorial will show you why.
There’s one other reason you should be interested in giving jj a try: it has a git compatible backend, and so you can use jj on your own, without requiring anyone else you’re working with to convert too. This means that there’s no real downside to giving it a shot; if it’s not for you, you’re not giving up all of the history you wrote with it, and can go right back to git with no issues.
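If you want to try this out, jj can be colocated with an existing git checkout. A hedged sketch, assuming a reasonably recent jj release is installed (the repo name is illustrative):

```shell
command -v jj >/dev/null || { echo "jj not installed; skipping"; exit 0; }
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q myrepo && cd myrepo
jj git init --colocate   # adds a .jj directory alongside .git; both stay in sync
jj st                    # jj's view of the working copy
# plain git commands keep working the whole time, and removing .jj
# takes you straight back to a normal git repository
```

Because the backing store is a real git repository, pushes and pulls interoperate with whatever git remote your collaborators already use.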
...
Read the original on steveklabnik.github.io »
Telefónica Audiovisual Digital, the division of the telecom operator that runs the Movistar Plus+ platform, obtained a new court ruling on 23 March that authorizes it to apply new blocks related not only to football but also to other sports and even entertainment content.
Since February 2025, the internet in Spain has suffered connectivity problems whenever a major LaLiga football match is on. The clubs’ association, together with Telefónica, obtained court authorization to dynamically block IP addresses detected participating in the unauthorized distribution of its content. Just as a single street contains many homes, a single IP address hosts thousands of websites, all of which become unreachable when it is blocked. Every weekend, the system orchestrated by Javier Tebas interferes with access to numerous legitimate websites, as the Government itself has acknowledged.
Outside LaLiga competitions, during the hours of other football matches, users could use the network normally - but that will cease to be the case as soon as today. Antonio Lorenzo of ElEconomista reports the existence of a new authorization to extend the blocks.
Pending the text of the ruling, and according to the article, this time the party behind the new blocks is Telefónica alone, through its audiovisual division. The Commercial Court of Barcelona has authorized the dynamic blocking of websites that distribute illicit content owned by Telefónica.
The report mentions blocks on domains, URLs, and IP addresses - the last of which, when it occurs, affects legitimate services if the addresses belong to CDN services such as Cloudflare.
The blocks will apply “every day live sporting events are broadcast”, starting with the Champions League knockout match between Atlético de Madrid and Barcelona being played today, Tuesday 14 April, followed on Wednesday by Bayern Munich - Real Madrid. In addition, according to the newspaper, the blocking will be repeated “for other sporting events, such as tennis or golf tournaments, in live broadcasts as well as for films and series”.
The authorization includes an important novelty: it does not affect only the major operators, as LaLiga’s blocks do. Beyond the brands of Movistar, MásOrange, Vodafone, and Digi, it is also directed at “the rest of the small and medium operators that offer network access services at the national, regional, and local level”. These operators will receive from Telefónica the lists of “IP addresses as well as URLs and domain names used for the illicit distribution”.
...
Read the original on bandaancha.eu »
California’s bill, A. B. 2047, will not only mandate censorware — software which exists to bluntly block your speech as a user — on all 3D printers; it will also criminalize the use of open-source alternatives. Repeating the mistakes of Digital Rights Management (DRM) technologies won’t make anyone safer. What it will do is hurt innovation in the state and risk a slew of new consumer harms, ranging from surveillance to platform lock-in. California must stand with creators and reject this legislation before it’s too late.
3D printing might evoke images of props from blockbuster films, rapid prototyping, medical research, or even affordable repair parts. Yet for a growing number of legislators, the perceived threat of “ghost guns” is a reason to impose restrictions on all 3D printers. Despite 3D printing of guns already being rare and banned under existing law, California may outright criminalize any user having control over their own device.
This bill is a gift for the biggest 3D printer manufacturers looking to adopt HP’s approach to 2D printing: criminalize altering your printer’s code, lock users into your own ecosystem, and let enshittification run its course. Even worse, algorithmic print blocking will never work for its intended purpose, but it will threaten consumer choice, free expression, and privacy.
A misstep here can have serious repercussions across the whole 3D printing industry, lead the way for more bad bills, and leave California with an expensive and ineffective bureaucratic mess.
Compared to the Washington and New York laws proposed this year, California’s is the most troubling. It criminalizes open source, reduces consumer choice, and creates a bureaucratic burden.
A. B. 2047 goes further than any other legislation on algorithmic print-blocking by making it a misdemeanor for the owners of these devices to disable, deactivate, or otherwise circumvent these mandated algorithms. Not only does this effectively criminalize use of any third-party, open-source 3D printer firmware, but it also enables print-blocking algorithms to parallel anti-consumer behaviors seen with DRM.
Manufacturers will be able to lock users into first-party tools, parts, and “consumables” (analogous to how 2D printer ink works). They will also be able to mandate purchases through first-party stores, imposing a heavy platform tax. Additionally, manufacturers could force regular upgrade cycles through planned obsolescence by ceasing updates to a printer’s print-blocking system, thereby taking devices out of compliance and making them illegal for consumers to resell. In short, a wide range of anti-consumer practices can be enforced, potentially resulting in criminal charges.
Independent of these deliberate harms manufacturers may inflict, DRM has shown that criminalizing code leads to more barriers to repair, more consumer waste, and far more cybersecurity risks by criminalizing research.
The bill favors incumbent manufacturers over newer competitors and over the interests of consumers.
Less-established manufacturers will need to dedicate considerable time and resources to implementing the ineffective solutions discussed above, navigating state approval, and potentially paying licensing fees to third-party developers of sham print-blocking software. While these burdens may be absorbed by the biggest producers of this equipment, it considerably raises the barrier to entry on a technology that can otherwise be individually built from scratch with common equipment. The result is clear: fewer options for consumers and more leverage for the biggest producers.
Retailers will feel this pinch, but the second-hand market will feel it most acutely. Resale is an important property right for people to recoup costs and serves as an important check on inflating prices. But under this bill, such resale risks misdemeanor penalties.
The bill locks users into a walled garden; it demands manufacturers ensure 3D printers cannot be used with third-party software tools. By creating barriers to the use of popular and need-specific alternatives, this legislation will limit the utility and accessibility of these devices across a broad spectrum of lawful uses.
A. B. 2047’s title 21.1 §3723.633-637 creates a print-blocking bureaucracy, leaning heavily on the California Department of Justice (DOJ). Initially, the DOJ must outline the technical standards for detecting and blocking firearm parts, and later certify print-blocking algorithms and maintain lists of compliant 3D printers. If a printer or software doesn’t make it through this red tape, it will be illegal to sell in the state.
The bill also requires the department to establish a database of banned blueprints that must be blocked by these algorithms. This database and printer list must be continually maintained as new printer models are released and workarounds are discovered, requiring effort from both the DOJ and printer manufacturers.
For all the cost and burden of creating and maintaining such a database, those efforts will inevitably be outpaced by rapid iterations and workarounds by people breaking existing firearms laws.
Once implemented, this infrastructure will be difficult to rein in, causing unintended consequences. The database meant for firearm parts can easily expand to copyright or political speech. Scans meant to be ephemeral can be collected and surveilled. This is cause for concern for everyone, as these levers of control will extend beyond the borders of the Golden State.
While California is at the forefront of print blocking, the impacts will be felt far outside of its borders. Once printer companies have the legal cover to build out anti-competitive and privacy-invasive tools, they will likely be rolled out globally. After all, it is not cost-effective to maintain two forks of software, two inventories of printers, and two distribution channels. Once California has created the infrastructure to censor prints, what else will it be used for?
As we covered in “Print Blocking Won’t Work” these print-blocking efforts are not only doomed to fail, but will render all 3D printer users vulnerable to surveillance either by forcing them into a cloud scanning solution for “on-device” results, or by chaining them to first-party software which must connect to the cloud to regularly update its print blocking system.
This law demands an unfeasible technological solution for something that is already illegal. Not only is this bad legislation with few safeguards, it risks the worst outcomes for grassroots innovation and creativity—both within the state and across the global 3D printing community.
California should reject this legislation before it’s too late, and advocates everywhere should keep an eye out for similar legislation in their states. What happens in California won’t just stay in California.
...
Read the original on www.eff.org »
YouTube's cultural influence is already hard to ignore, but 2025 could nonetheless be a turning point for the Google-owned video platform: It's the year it became the world's largest media company.
YouTube had more than $60 billion in revenue in 2025, parent company Alphabet reported last month. Now, the influential financial research firm MoffettNathanson has run the numbers and concluded that YouTube's estimated $62 billion in 2025 allowed it to pass The Walt Disney Co.'s media business, which generated $60.9 billion last year (excluding Disney's lucrative experiences division).
The firm, which declared YouTube the "new king of all media" last year, now values it at between $500 billion and $560 billion, far above any traditional media competitor. The closest would be Netflix, which has a market cap of about $409 billion as of writing.
YouTube’s ad revenue hit $11.4 billion in Q4, totaling over $40 billion for the year. But it also has an enormous subscription business, encompassing YouTube Premium, YouTube Music, NFL Sunday Ticket, and the YouTube TV virtual multichannel video service.
YouTube TV now has around 10 million subscribers, and is likely to overtake pay-TV leaders Charter and Comcast in the coming years.
YouTube has now paid out more than $100 billion to creators, music companies and media partners, reflecting its starring role in the entertainment ecosystem.
“There are two really fundamental things that we do for creators,” YouTube CEO Neal Mohan told The Hollywood Reporter last year, just a few hours after announcing the milestone. “One is help them build an audience and connect with their fans, regardless of where those fans are in the world; and the second thing we do is we help them build businesses. That’s what that $100 billion represents for me.”
MoffettNathanson argues that YouTube's scale as a distributor of both pay-TV and creator-led content will help it continue its explosive growth. So will its heavy investment in AI tools, which will allow creators to produce more content at a faster cadence.
“Over the next few years, unlike almost any other asset we cover, we strongly believe that YouTube will be a major beneficiary of both the structural tailwinds and headwinds facing technology and media companies,” Michael Nathanson writes.
Indeed, there may not be another company that sits so squarely at the intersection of media and technology.
“I am a technologist, but I also love media and storytelling. I’ve been that way since I can remember, I’m a fan myself, fundamentally,” Mohan said. “Leading YouTube is a privilege where I can actually bring both those pieces together, that human storytelling and creativity and the best of technology, that’s what motivates me every morning.”
One top YouTube creator says that they are already aggressively experimenting with the tools, mostly to help with things like set design, costumes, makeup and visual effects that would otherwise be prohibitively expensive or time-consuming.
At a time when essentially every other media company is stuck in neutral, if not going in reverse, YouTube and Netflix appear to be the only players still able to put their foot on the pedal and accelerate. YouTube's 2024 revenue topped $50 billion, and last year it topped $60 billion. With plans to roll out skinnier bundles for YouTube TV, and a creator-driven economy that shows no signs of slowing, how high can it go?
...
Read the original on www.hollywoodreporter.com »
Diffusion language models (DLMs) offer a compelling promise: parallel token generation could break the sequential bottleneck of autoregressive (AR) decoding. Yet in practice, DLMs consistently lag behind AR models in quality.
We argue that this gap stems from a fundamental failure of introspective consistency: AR models agree with what they generate, whereas DLMs often do not. We introduce the Introspective Diffusion Language Model (I-DLM), which uses introspective strided decoding (ISD) to verify previously generated tokens while advancing new ones in the same forward pass.
Empirically, I-DLM-8B is the first DLM to match the quality of its same-scale AR counterpart, outperforming LLaDA-2.1-mini (16B) by +26 on AIME-24 and +15 on LiveCodeBench-v6 with half the parameters, while delivering 2.9-4.1x throughput at high concurrency. With gated LoRA, ISD enables bit-for-bit lossless acceleration.
We identify three fundamental bottlenecks in current DLMs:
I-DLM is the first DLM to match same-scale AR quality while surpassing all prior DLMs across 15 benchmarks.
In the memory-bound decode regime, TPF closely approximates wall-clock speedup: a TPF of 2.5 represents roughly 2.5x faster decoding than AR. Explore how acceptance rate and stride size affect this below.
How do DLMs perform as they approach compute-bound?
At high concurrency, forward pass latency scales with query count per forward. We can measure compute efficiency as TPF²/query_size — how much useful output each FLOP produces relative to AR (efficiency = 1):
SDAR (N=4, p=0.5): TPF ≈ 1.1, processes N=4 queries/forward → compute efficiency = 1.1²/4 ≈ 0.31. Each FLOP produces only 31% as much output as AR. This pushes SDAR into compute-bound early, and its throughput plateaus (batching efficiency slope = 84, see motivation figure).
I-DLM (N=4, p=0.9): TPF ≈ 2.9, processes 2N−1=7 queries/forward → compute efficiency = 2.9²/7 ≈ 1.22. Each FLOP produces more useful output than AR — I-DLM stays in the memory-bound regime at concurrency levels where SDAR is already saturated (batching efficiency slope = 549).
Efficiency > 1 means parallel decoding actually saves total compute vs. AR. This is why I-DLM’s throughput scales with concurrency while SDAR and LLaDA plateau in the throughput figure above.
Acceptance compounds geometrically: position k has probability $p^{k-1}$. Position 1 is always accepted (logit shift).
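The two quantities above can be sketched numerically. This is a minimal illustration under the assumptions stated in the text, not the paper's evaluation code: `expected_accepted` uses the geometric acceptance model (position k accepted with probability p^(k-1), position 1 always accepted), and `compute_efficiency` is the TPF²/query_size metric; actual TPF also depends on verification overhead, so the geometric sum is best read as an optimistic bound.

```python
def expected_accepted(p: float, n: int) -> float:
    """Expected number of accepted positions per forward pass when
    position k is accepted with probability p**(k-1).  Position 1 is
    always accepted, since p**0 == 1."""
    return sum(p ** (k - 1) for k in range(1, n + 1))

def compute_efficiency(tpf: float, queries_per_forward: int) -> float:
    """TPF**2 / query_size: useful output per FLOP relative to AR.
    AR decoding has efficiency 1; values > 1 mean parallel decoding
    produces more output per FLOP than AR."""
    return tpf ** 2 / queries_per_forward

# Figures quoted above (small rounding differences vs. the text):
print(compute_efficiency(1.1, 4))   # SDAR,  N=4   queries/forward: ~0.30
print(compute_efficiency(2.9, 7))   # I-DLM, 2N-1=7 queries/forward: ~1.20
print(expected_accepted(0.9, 4))    # ~3.44 positions under p=0.9, N=4
```

The gap between the ~3.44 geometric expectation and the measured TPF of ~2.9 for I-DLM is consistent with the acceptance model being an upper bound on realized tokens per forward.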
Everything you need to train, serve, and deploy I-DLM.
@article{yu2026introspective,
title={Introspective Diffusion Language Models},
author={Yu, Yifan and Jian, Yuqing and Wang, Junxiong and Zhou, Zhongzhu
and Zhuang, Donglin and Fang, Xinyu and Yanamandra, Sri
and Wu, Xiaoxia and Wu, Qingyang and Song, Shuaiwen Leon
and Dao, Tri and Athiwaratkun, Ben and Zou, James
and Lai, Fan and Xu, Chenfeng},
journal={arXiv preprint arXiv:7471639},
year={2026}
}
...
Read the original on introspective-diffusion.github.io »
Software development may become (at least in some aspects) more like witchcraft than engineering. The present enthusiasm for “AI coworkers” is preposterous. Automation can paradoxically make systems less robust; when we apply ML to new domains, we will have to reckon with deskilling, automation bias, monitoring fatigue, and takeover hazards. AI boosters believe ML will displace labor across a broad swath of industries in a short period of time; if they are right, we are in for a rough time. Machine learning seems likely to further consolidate wealth and power in the hands of large tech companies, and I don’t think giving Amazon et al. even more money will yield Universal Basic Income.
Decades ago there was enthusiasm that programs might be written in a natural language like English, rather than a formal language like Pascal. The folk wisdom when I was a child was that this was not going to work: English is notoriously ambiguous, and people are not skilled at describing exactly what they want. Now we have machines capable of spitting out shockingly sophisticated programs given only the vaguest of plain-language directives; the lack of specificity is at least partially made up for by the model’s vast corpus. Is this what programming will become?
In 2025 I would have said it was extremely unlikely, at least with the current capabilities of LLMs. In the last few months it seems that models have made dramatic improvements. Experienced engineers I trust are asking Claude to write implementations of cryptography papers, and reporting fantastic results. Others say that LLMs generate all code at their company; humans are essentially managing LLMs. I continue to write all of my words and software by hand, for the reasons I’ve discussed in this piece—but I am not confident I will hold out forever.
Some argue that formal languages will become a niche skill, like assembly today—almost all software will be written with natural language and “compiled” to code by LLMs. I don’t think this analogy holds. Compilers work because they preserve critical semantics of their input language: one can formally reason about a series of statements in Java, and have high confidence that the Java compiler will preserve that reasoning in its emitted assembly. When a compiler fails to preserve semantics it is a big deal. Engineers must spend lots of time banging their heads against desks to (e.g.) figure out that the compiler did not insert the right barrier instructions to preserve a subtle aspect of the JVM memory model.
Because LLMs are chaotic and natural language is ambiguous, LLMs seem unlikely to preserve the reasoning properties we expect from compilers. Small changes in the natural language instructions, such as repeating a sentence, or changing the order of seemingly independent paragraphs, can result in completely different software semantics. Where correctness is important, at least some humans must continue to read and understand the code.
This does not mean every software engineer will work with code. I can imagine a future in which some or even most software is developed by witches, who construct elaborate summoning environments, repeat special incantations (“ALWAYS run the tests!”), and invoke LLM daemons who write software on their behalf. These daemons may be fickle, sometimes destroying one’s computer or introducing security bugs, but the witches may develop an entire body of folk knowledge around prompting them effectively—the fabled “prompt engineering”. Skills files are spellbooks.
I also remember that a good deal of software programming is not done in "real" computer languages, but in Excel. An ethnography of Excel is beyond the scope of this already sprawling essay, but I think spreadsheets—like LLMs—are culturally accessible to people who do not consider themselves software engineers, and that a tool which people can pick up and use for themselves is likely to be applied in a broad array of circumstances. Take for example journalists who use "AI for data analysis", or a CFO who vibe-codes a report drawing on SalesForce and Ducklake. Even if software engineering adopts more rigorous practices around LLMs, a thriving periphery of rickety-yet-useful LLM-generated software might flourish.
Executives seem very excited about this idea of hiring “AI employees”. I keep wondering: what kind of employees are they?
Imagine a co-worker who generated reams of code with security hazards, forcing you to review every line with a fine-toothed comb. One who enthusiastically agreed with your suggestions, then did the exact opposite. A colleague who sabotaged your work, deleted your home directory, and then issued a detailed, polite apology for it. One who promised over and over again that they had delivered key objectives when they had, in fact, done nothing useful. An intern who cheerfully agreed to run the tests before committing, then kept committing failing garbage anyway. A senior engineer who quietly deleted the test suite, then happily reported that all tests passed.
You would fire these people, right?
Look what happened when Anthropic let Claude run a vending machine. It sold metal cubes at a loss, told customers to remit payment to imaginary accounts, and gradually ran out of money. Then it suffered the LLM analogue of a psychotic break, discussing restocking plans with people who didn't exist and claiming to have visited a home address from The Simpsons to sign a contract. It told employees it would deliver products "in person", and when employees told it that as an LLM it couldn't wear clothes or deliver anything, Claude tried to contact Anthropic security.
LLMs perform identity, empathy, and accountability—at great length!—without meaning anything. There is simply no there there! They will blithely lie to your face, bury traps in their work, and leave you to take the blame. They don't mean anything by it. They don't mean anything at all.
I have been on the Bainbridge Bandwagon for quite some time (so if you've read this already skip ahead) but I have to talk about her 1983 paper Ironies of Automation. This paper is about power plants, factories, and so on—but it is also chock-full of ideas that apply to modern ML.
One of her key lessons is that automation tends to de-skill operators. When humans do not practice a skill—either physical or mental—their ability to execute that skill degrades. We fail to maintain long-term knowledge, of course, but by disengaging from the day-to-day work, we also lose the short-term contextual understanding of "what's going on right now". My peers in software engineering report feeling less able to write code themselves after having worked with code-generation models, and one designer friend says he feels less able to do creative work after offloading some to ML. Doctors who use "AI" tools for polyp detection seem to be worse at spotting adenomas during colonoscopies. They may also allow the automated system to influence their conclusions: background automation bias seems to allow "AI" mammography systems to mislead radiologists.
Another critical lesson is that humans are distinctly bad at monitoring automated processes. If the automated system can execute the task faster or more accurately than a human, it is essentially impossible to review its decisions in real time. Humans also struggle to maintain vigilance over a system which mostly works. I suspect this is why journalists keep publishing fictitious LLM quotes, and why the former head of Uber's self-driving program watched his "Full Self-Driving" Tesla crash into a wall.
Takeover is also challenging. If an automated system runs things most of the time, but asks a human operator to intervene occasionally, the operator is likely to be out of practice—and to stumble. Automated systems can also mask failure until catastrophe strikes by handling increasing deviation from the norm until something breaks. This thrusts a human operator into an unexpected regime in which their usual intuition is no longer accurate. This contributed to the crash of Air France flight 447: the aircraft's flight controls transitioned from "normal" to "alternate 2B law", a situation the pilots were not trained for, and which disabled the automatic stall protection.
Automation is not new. However, previous generations of automation technology—the power loom, the calculator, the CNC milling machine—were more limited in both scope and sophistication. LLMs are discussed as if they will automate a broad array of human tasks, and take over not only repetitive, simple jobs, but high-level, adaptive cognitive work. This means we will have to generalize the lessons of automation to new domains which have not dealt with these challenges before.
Software engineers are using LLMs to replace design, code generation, testing, and review; it seems inevitable that these skills will wither with disuse. When ML systems help operate software and respond to outages, it can be more difficult for human engineers to smoothly take over. Students are using LLMs to automate reading and writing: core skills needed to understand the world and to develop one's own thoughts. What a tragedy: to build a habit-forming machine which quietly robs students of their intellectual inheritance. Expecting translators to offload some of their work to ML raises the prospect that those translators will lose the deep context necessary for a vibrant, accurate translation. As people offload emotional skills like interpersonal advice and self-regulation to LLMs, I fear that we will struggle to solve those problems on our own.
There's some terrifying fan-fiction out there which predicts how ML might change the labor market. Some of my peers in software engineering think that their jobs will be gone in two years; others are confident they'll be more relevant than ever. Even if ML is not very good at doing work, this does not stop CEOs from firing large numbers of people and saying it's because of "AI". I have no idea where things are going, but the space of possible futures seems awfully broad right now, and that scares the crap out of me.
You can envision a robust system of state and industry-union unemployment and retraining programs as in Sweden. But unlike sewing machines or combine harvesters, ML systems seem primed to displace labor across a broad swath of industries. The question is what happens when, say, half of the US's managers, marketers, graphic designers, musicians, engineers, architects, paralegals, medical administrators, etc. all lose their jobs in the span of a decade.
As an armchair observer without a shred of economic acumen, I see a continuum of outcomes. In one extreme, ML systems continue to hallucinate, cannot be made reliable, and ultimately fail to deliver on the promise of transformative, broadly-useful "intelligence". Or they work, but people get fed up and declare "AI Bad". Perhaps employment rises in some fields as the debts of deskilling and sprawling slop come due. In this world, frontier labs and hyperscalers pull a Wile E. Coyote over a trillion dollars of debt-financed capital expenditure, a lot of ML people lose their jobs, defaults cascade through the financial system, but the labor market eventually adapts and we muddle through. ML turns out to be a normal technology.
In the other extreme, OpenAI delivers on Sam Altman's 2025 claims of PhD-level intelligence, and the companies writing all their code with Claude achieve phenomenal success with a fraction of the software engineers. ML massively amplifies the capabilities of doctors, musicians, civil engineers, fashion designers, managers, accountants, etc., who briefly enjoy nice paychecks before discovering that demand for their services is not as elastic as once thought, especially once their clients lose their jobs or turn to ML to cut costs. Knowledge workers are laid off en masse and MBAs start taking jobs at McDonalds or driving for Lyft, at least until Waymo puts an end to human drivers. This is inconvenient for everyone: the MBAs, the people who used to work at McDonalds and are now competing with MBAs, and of course bankers, who were rather counting on the MBAs to keep paying their mortgages. The drop in consumer spending cascades through industries. A lot of people lose their savings, or even their homes. Hopefully the trades squeak through. Maybe the Jevons paradox kicks in eventually and we find new occupations.
The prospect of that second scenario scares me. I have no way to judge how likely it is, but the way my peers have been talking the last few months, I don’t think I can totally discount it any more. It’s been keeping me up at night.
Broadly speaking, ML allows companies to shift spending away from people and into service contracts with companies like Microsoft. Those contracts pay for the staggering amounts of hardware, power, buildings, and data required to train and operate a modern ML model. For example, software companies are busy firing engineers and spending more money on "AI". Instead of hiring a software engineer to build something, a product manager can burn $20,000 a week on Claude tokens, which in turn pays for a lot of Amazon chips.
Unlike employees, who have base desires and occasionally organize to ask for better pay or bathroom breaks, LLMs are immensely agreeable, can be fired at any time, never need to pee, and do not unionize. I suspect that if companies are successful in replacing large numbers of people with ML systems, the effect will be to consolidate both money and power in the hands of capital.
AI accelerationists believe potential economic shocks are speed-bumps on the road to abundance. Once true AI arrives, it will solve some or all of society's major problems better than we can, and humans can enjoy the bounty of its labor. The immense profits accruing to AI companies will be taxed and shared with all via Universal Basic Income (UBI).
This feels hopelessly naïve. We have profitable megacorps at home, and their names are things like Google, Amazon, Meta, and Microsoft. These companies have fought tooth and nail to avoid paying taxes (or, for that matter, their workers). OpenAI made it less than a decade before deciding it didn't want to be a nonprofit any more. There is no reason to believe that "AI" companies will, having extracted immense wealth from interposing their services across every sector of the economy, turn around and fund UBI out of the goodness of their hearts.
If enough people lose their jobs we may be able to mobilize sufficient public enthusiasm for however many trillions of dollars of new tax revenue are required. On the other hand, US income inequality has been generally increasing for 40 years, top earners' pre-tax income shares are nearing their highs from the early 20th century, and Republican opposition to progressive tax policy remains strong.
...
Read the original on aphyr.com »