10 interesting stories served every morning and every evening.




1 1,053 shares, 43 trendiness

Robert Reese's Website

TLDR: Despite claiming to back up all your data, Backblaze quietly stopped backing up OneDrive and Dropbox folders - along with potentially many other things.

For ten years I have been using Backblaze for my personal computer backup. Before 2015 I would back up files to one of two large external hard discs, rotating the drives between off-site locations: first my father's house, and after I moved to the UK, my office drawers.

In 2015 Backblaze seemed like a good bet. Unlike Crashplan, their software wasn't a bloated Java app, and they had unlimited storage: if you could cram it into your PC, they would back it up. With their yearly hard drive reviews making good press and plenty of personal recommendations from friends and colleagues, their service sounded great. I installed the software, ran it for several weeks, and sure enough my data was safely stored in their cloud.

I had further reason to be impressed when, several years later, one of my hard drives failed. I made use of their “send me a hard drive with my stuff on it” service. A drive turned up filled with my precious data. That for me was proof that this system worked, and that it worked well.

And so I recommended Backblaze for years. “What do you do for backup?” I would extol the virtues of Backblaze, and they made many sales from such recommendations.

There were a few things I didn't like. The app could use a lot of memory, especially after a large import of photographs. The website, which I often used to restore single files or folders, was slow and clunky. The Windows app in particular had an early-2000s aesthetic and cramped lists. There was the time they leaked all your filenames to Facebook, but they probably fixed that.

But no matter, small problems for the peace of mind of having all my files backed up.

Backup software is meant to back up your files. Which files? Well, the files you need. Given that everyone is different, with different workflows and filetypes, the ideal is to back up all your files. No backup provider knows what I will need in the future; the provider must plan accordingly.

My first troubling discovery came in 2025, when I made several errors, then did a push -f to GitHub and blew away the history of a half-decade-old repo. No data was lost, but the log of changes was. No problem, I thought, I'll just restore it from Backblaze. Sadly it was not to be. At some point Backblaze had started to ignore .git folders.
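The mechanics of that mistake are easy to reproduce. Below is a minimal, throwaway sketch - a local bare repository stands in for GitHub, and the paths and identity are made up - showing how a force-push discards remote history:

```shell
# Throwaway demo in a temp directory; a local bare repo plays GitHub.
set -eu
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "first"
git commit -q --allow-empty -m "second"
git push -q origin HEAD
# Rewind the local branch, then force-push: "second" vanishes upstream.
git reset -q --hard HEAD~1
git push -qf origin HEAD
git -C "$tmp/remote.git" log --format=%s   # prints only "first"
```

In a healthy local clone the discarded commit would normally still be reachable via git reflog for a while; in the author's case the local .git folder itself was what needed restoring - which is exactly what the backup was supposed to be for.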

This annoyed me. Firstly, I needed that folder and Backblaze had let me down. Secondly, within the Backblaze preferences I could find no way to re-enable it. In fact, looking at the list of exclusions, I could find no mention of .git whatsoever.

This made me wonder: I had checked the exclusions list when I installed Backblaze nine years before - had I missed it? Had I missed anything else?

Well, lesson learned I guess. But then a week ago I came across this thread on Reddit: “Doesn't back up Dropbox folder??”. A user was surprised to find their Dropbox folder no longer being backed up. Alarmed, I logged into Backblaze, and lo and behold, my OneDrive folder was missing.

Backblaze has one job: back up my stuff. Apparently they are unable to do that job - they have decided not to.

Let's take an aside.

A reasonable person might point out that those files on OneDrive are already being backed up - by OneDrive! No. Dropbox and OneDrive are for file syncing - syncing your files to the cloud. They offer limited protection: both retain deleted files for only one month. Backblaze has one-year file retention, or, if you pay per GB, unlimited retention. And while OneDrive retains version changes for longer, Dropbox only retains them for a month - again, unless you pay for more. Your files are less secure, and less backed up, sitting in a cloud storage provider's folder than they are sitting on your desktop.

And that’s as­sum­ing your cloud provider is play­ing ball. If Microsoft or Dropbox bans your ac­count you may find your­self with no backup what­so­ever.

For me the larger issue is that they never told us. My OneDrive folder sits at 383GB. You would think that, having decided to no longer back this up, I might get an email, an alert, or some other notification. Of course not.

Nestled into their release notes under “Improvements” we see:

The Backup Client now excludes popular cloud storage providers from backup, including both mount points and cache directories. This prevents performance issues, excessive data usage, and unintended uploads from services like OneDrive, Google Drive, Dropbox, Box, iDrive, and others. This change aligns with Backblaze's policy to back up only local and directly connected storage.

First, I would hardly call this change in policy an improvement; it's hard to imagine anyone reading it as anything other than a downgrade in service. Second, does Backblaze believe most of its users read their release notes?

And if you joined today and looked at their list of file exclusions you would find no reference to Dropbox or OneDrive. No mention of Git either.

Here’s the thing, to­day they don’t back up Git or OneDrive. Who’s to say to­mor­row they wont add to the list. Maybe some ob­scure file for­mat that’s crit­i­cal to your work flow. Or they will ig­nore a file ex­ten­sion that just hap­pens be the same as one used by your DAW or 3D Modelling soft­ware. And they won’t tell you this. They wont even list it on their site.

By deciding not to back up everything, Backblaze has made it as if they are backing up nothing.

But really this feels like a promise broken. Back in 2015 their website proudly proclaimed:

All user data included by default. No restrictions on file type or size.

Protect the digital memories and files that matter most to you.

File backup is a matter of trust. You are paying a monthly fee so that if and when things go wrong you can get your data back. By silently changing the rules, Backblaze has not simply eroded my trust, but swept it away.

I wrote this to warn you: Backblaze is no longer doing their part; they are no longer backing up your data. Some of your data, sure, but not all of it.

Finally, let me leave you with Backblaze's own words from 2015:

They promised to simplify backup. They succeeded - they don't even do the backup part anymore.

...

Read the original on rareese.com »

2 616 shares, 35 trendiness

Thousands of rare concert recordings are landing on the Internet Archive -- listen now

Chicago-based music superfan Aadam Jacobs has been recording the concerts he attends since the 1980s, amassing an archive of over 10,000 tapes. Now 59, Jacobs knows that these cassettes are going to degrade over time, so he agreed to let volunteers from the Internet Archive, the nonprofit digital library, digitize the tapes.

So far, about 2,500 of these tapes have been posted on the Internet Archive, including some rare gems like a Nirvana performance from 1989. (The group wouldn't break through to mainstream audiences until they released the single “Smells Like Teen Spirit” in 1991.) Within the collection, you can also find previously unknown recordings from influential artists like Sonic Youth, R.E.M., Phish, Liz Phair, Pavement, Neutral Milk Hotel, and a whole bunch of other punk groups.

For many of these recordings, Jacobs was using pretty mediocre equipment, but the volunteer audio engineers working with the Internet Archive have made these tapes sound great.

One volunteer, Brian Emerick, drives to Jacobs' house once a month to pick up more boxes of tapes - he has to use anachronistic cassette decks to play the tapes, which get converted into digital files. From there, other volunteers clean up, organize, and label the recordings, even tracking down song names from forgotten punk bands.

Sometimes, the internet is good. And so is this Tracy Chapman recording from 1988.

...

Read the original on techcrunch.com »

3 582 shares, 35 trendiness

Getting the Flock out

I wrote to Flock’s pri­vacy con­tact to opt out of their do­mes­tic spy­ing pro­gram:

I am a resident of California. As such, and because you are subject to the CCPA, delete all information about me, my vehicle, and other household members from all of your databases. I do not give you permission to collect or store data about me, my vehicles, or my relatives, in any future situation.

Dear [misspelled name, i.e. not copied and pasted],

Your request cannot be completed at this time.

Thank you for submitting your privacy request. At this time, we are unable to process this request for the reasons detailed below.

Flock Safety provides its services to our customers, and our customers are owners and controllers of the data Flock Safety processes on their behalf. Flock Safety processes data as a service provider and processor for our customers and as a result, we are unable to directly fulfill your request. We recommend contacting the organization that engaged Flock Safety's services to submit your request, as they are responsible for assessing and responding to it.

Here are a few additional points about Flock Safety's data collection and privacy practices:

* Customer Contracts: Flock Safety's processing activity as a service provider and processor is governed by the contract we have with our customers, which captures their instructions and the limitations on how Flock Safety may process their data. Flock Safety's customers own the data and make all decisions around how such data is used and shared.

* No Sale of Data: Because Flock Safety's customers own the data, Flock Safety may only process the data in accordance with our customer's instructions, as outlined in our contracts with customers. Flock Safety is not permitted to sell, publish, or exchange such data for our own commercial purposes.

* Information Collected: Where Flock Safety's customers leverage License Plate Reader (LPR) technology, the LPRs do not process sensitive information like names or addresses. Instead, LPRs only capture images of publicly available and visible vehicle characteristics that are taken in the public view.

* Purpose: Flock Safety customers use data for security purposes, including managing public safety or responding to safety concerns and reports. Additionally, such data may be used to help solve crimes and provide objective evidence.

* Retention: By default, Flock Safety's systems only retain data for 30 days, which means that any data collected on behalf of customers is permanently hard deleted on a rolling 30 day basis. Flock Safety customers are able to adjust this retention period based on their local laws or policies.

For more information about how Flock Safety processes data, please refer to our Privacy Policy and LPR Policy.

I think that's legally inaccurate. They're the entity collecting and processing my personally identifiable information, and my non-lawyer reading of the California Consumer Privacy Act (CCPA) would seem to obligate them to comply with my request. I haven't decided to engage a lawyer yet, but neither have I ruled it out.

...

Read the original on honeypot.net »

4 581 shares, 92 trendiness

Stop Flock

Flock Safety markets AI surveillance that goes far beyond reading license plates; color, bumper stickers, dents, and other features are used to build databases and identify movement patterns. These systems are spreading rapidly, often without oversight, and are accessible to police without a warrant. They raise serious privacy and legal concerns, and contribute to a nationwide trend toward mass surveillance.

While this and other systems like it claim to reduce crime, there is little evidence to support that claim - and significant risk of abuse. Real public safety comes from investing in communities, not stalking them.

Flock Safety markets its devices as “AI-powered precision policing technology” - far beyond basic automated license plate readers (ALPRs) (Flock Safety). The system uses AI to create a “Vehicle Fingerprint” - identifying cars not only by license plate, but also by color, make and model, roof racks, dents/damage, wheel type, and more. Even bumper sticker placement is analyzed. This lets law enforcement search for “a blue sedan with damage on the left side” even without a license plate.

But the surveillance goes deeper. Using a feature called “Convoy Analysis”, the system can detect vehicles that frequently appear near each other - suggesting associations between drivers or accomplices. The platform can also flag vehicles that routinely travel to the same locations across time. Flock describes this as a way to “identify suspect vehicles traveling together” or “pinpoint associates” - functionality confirmed in both their marketing and police testimonials (GovTech, ACLU).

The data is logged and made searchable across a nationwide law enforcement network - which officers in subscribing agencies can access without a warrant. According to Flock, the system can automatically flag a vehicle based on its history, route, or presence in multiple locations linked to a crime (Flock HOA Marketing).

While these tools may aid in locating stolen cars or missing persons, they also create a detailed record of everyone's movements, associations, and routines. That data has already been misused - like when a Kansas police chief used Flock cameras 228 times to stalk an ex-girlfriend and her new partner without cause (Local12).

The scope of this tracking becomes clear when you see real-world examples. In 2025, a journalist drove 300 miles across rural Virginia and was captured by nearly 50 surveillance cameras operated by 15 different law enforcement agencies. When he requested his own surveillance footage, he discovered the cameras had documented patterns that made his behavior “predictable to anyone looking at it.” Most troubling: while the journalist couldn't remember specific dates he'd made certain trips, police would know instantly - without any warrant or suspicion of wrongdoing (Cardinal News).

See also:

EFF: How ALPRs Work,

The Secure Dad on Flock Cameras,

Compass IT: “Privacy Concerns with Flock”,

ACLU: Flock is building a new AI-driven mass surveillance system,

Wikipedia: Flock Safety

How Widespread Are These Cameras?

Understanding what Flock cameras are leads to a natural question: how common are they in our communities?

The crowdsourced map made available on DeFlock.me currently shows roughly half of the >100,000 Flock AI cameras nationwide. Here are examples from three major cities showing how pervasive this surveillance has become:

These systems are expanding rapidly, often with little public debate or oversight. The Atlas of Surveillance, maintained by the Electronic Frontier Foundation, has documented over 3,000 law enforcement and government agencies using Flock products as of 2025 - a number growing monthly.

The Fourth Amendment was written in response to the British Crown's “general warrants” - broad authorizations to search anyone, anywhere, anytime. Mass surveillance revives that threat in digital form. Simply moving freely in public should not require that you be profiled and scrutinized.

It is important to point out that the courts have repeatedly ruled so-called “dragnet warrants”, often based on cell phone GPS locations, unconstitutional under the Fourth Amendment. But Flock's status as a private company means it can collect and sell data with fewer restrictions, exploiting a legal gray zone the courts have yet to fully address.

“If you've got nothing to hide, you've got nothing to fear” is a tempting thought - until someone misuses your information. Privacy isn't about hiding wrongdoing. It's about autonomy, dignity, and the ability to live free from unjust scrutiny. “Saying you don't care about privacy because you have nothing to hide is like saying you don't care about free speech because you have nothing to say.” - Edward Snowden

As one observer put it: “While today they are no threat to me… circumstances change, leadership changes, laws change. When you really boil this down, what is this nationwide system? What did Flock really make? It's a weapon. A silent weapon. Right now it targets what many would agree are criminals. But with the flip of a switch this system can be used to target or oppress anybody the people in power decide is a threat.”

We are fast approaching a world in which going about one's business in public means being entered into a law enforcement database. Automated license plate readers collect location data on millions of people with no suspicion of wrongdoing, creating vast databases of where we go and when.

Flock cameras and similar surveillance tools raise serious Fourth Amendment concerns by enabling broad, warrantless tracking of people's movements. In 2024, a trial court held that the Flock network functioned as “a dragnet over the entire city.” The judge in the case equated it to placing GPS trackers on every vehicle - a practice that the U.S. Supreme Court has ruled requires a warrant (Virginia Mercury, The Virginian Pilot).

The American Civil Liberties Union (ACLU) warns that automatic license plate readers (ALPRs) are becoming tools for routine mass location tracking and surveillance, with too few rules governing their use. These systems can collect and store data on millions of innocent drivers, creating detailed records of people's movements without their knowledge or consent. (ACLU)

Legal scholars have highlighted the broader implications of such surveillance. Neil Richards, writing in the Harvard Law Review, emphasizes that surveillance can chill the exercise of civil liberties, particularly intellectual privacy, and increase the risk of blackmail, coercion, and discrimination. (Harvard Law Review)

Flock’s data fur­ther en­ables al­ready bi­ased en­force­ment. In Oak Park, Illinois, 84% of dri­vers stopped us­ing Flock cam­era alerts were Black - de­spite the town be­ing only 21% Black. (Freedom to Thrive).

See also:

ACLU on Unaccountable Surveillance Tech

Mass surveillance isn't just about policing; there are major business interests involved.

Flock Safety collaborates with law enforcement agencies to promote the adoption of its license plate recognition cameras by encouraging private entities such as businesses and HOAs to share their footage. This practice broadens the surveillance net by granting access to what would otherwise have been private data (Flock Safety FAQ).

Instances have been reported where HOAs installed Flock cameras on public roads, leading to debates over the extent of surveillance and the privacy rights of residents and visitors (Oaklandside, Forest Brooke HOA).

The ACLU has highlighted that the expansive reach of these surveillance networks could enable law enforcement to construct detailed profiles of individuals' movements and associations, underscoring the need for transparency and oversight (ACLU).

Additionally, Flock markets its surveillance technology to employers and retail establishments, further blurring the lines between public safety initiatives and profit-driven surveillance. For example, major retail property owners have entered into agreements to share AI-powered surveillance feeds directly with law enforcement, expanding the scope of monitoring beyond public spaces. (Forbes) [Mirror]

Lowe’s is a sig­nif­i­cant pri­vate client of Flock Safety, hav­ing im­ple­mented their sys­tems in nu­mer­ous lo­ca­tions to en­hance se­cu­rity and de­ter theft.

While Flock specifically does not offer facial recognition (today), Lowe's has faced legal troubles over its use of facial recognition systems from other vendors. In 2019, a class action lawsuit was filed in Cook County Circuit Court, alleging that Lowe's used facial recognition software to track customers' movements without their consent, violating Illinois' Biometric Information Privacy Act (BIPA). The lawsuit claimed that Lowe's collected and stored biometric data from customers and shared it with other retailers. (Security InfoWatch)

Some justify these systems as making us safer, but the reality is more complicated.

Flock advertises a drop in crime, but the true cost is a culture of mistrust and preemptive suspicion. As the EFF warns, communities are being sold a false promise of safety - at the expense of civil rights (EFF).

A 2019 report by the NAACP Legal Defense Fund warned that predictive policing tools premised on biased data will reflect that bias, reinforcing existing discrimination in the criminal justice system. These tools may appear objective, but instead often amplify historic injustice under a veneer of scientific credibility (NAACP LDF).

True safety comes from healthy, empowered communities, not automated suspicion. Community-led safety initiatives have demonstrated significant results: North Lawndale saw a 58% decrease in gun violence after READI Chicago began implementing their program there. In cities nationwide, the presence of local nonprofits has been statistically linked to reductions in homicide, violent crime, and property crime (Brennan Center, The DePaulia, American Sociological Association).

Zooming out, Flock is just one part of a larger movement toward ubiquitous surveillance.

Flock’s ex­pan­sion is part of a broader move­ment to­ward ubiq­ui­tous mass sur­veil­lance - where your as­so­ci­a­tions, on­line com­ments, pur­chases, move­ments, and more may be logged, in­dexed, an­a­lyzed by AI, and made eas­ily search­able by al­most any gov­ern­ment agency at any time.

This progression from data collection to surveillance follows a familiar pattern in tech: tools sold for convenience often evolve into tools of control.

Bruce Schneier, a prominent cryptographer and privacy advocate, put it simply: “Surveillance is the business model of the Internet.” What begins as data collection for convenience or security often evolves into persistent monitoring, normalization of tracking, and the loss of autonomy.

As Edward Snowden warned: “A child born today will grow up with no conception of privacy at all. They'll never know what it means to have a private moment to themselves - an unrecorded, unanalyzed thought.”

In Dunwoody, Georgia, drones are now dispatched from Flock Safety “nests” to respond to 911 calls autonomously, often arriving in under 90 seconds (Axios).

In California, 480 high-tech cameras were recently installed to surveil Oakland's highways - tracking license plates, bumper stickers, and vehicle types - with alerts sent to law enforcement in real time (AP News).

This surveillance infrastructure extends far beyond law enforcement. The U.S. military has spent at least $3.5 million on a tool called “Augury” that monitors “93% of internet traffic,” capturing browsing history, email data, and sensitive cookies from Americans - all “without informed consent.” Senator Ron Wyden has received whistleblower complaints about this warrantless surveillance program (VICE).

Meanwhile, the current administration is working with Palantir Technologies to create what Ron Paul calls “a big ugly database” - a comprehensive collection of all information held by federal agencies on all U.S. citizens. This would include health records, education records, tax returns, firearm purchases, and associations with any groups labeled “extremist.” Palantir, funded by the CIA's In-Q-Tel venture capital firm, is “literally the creation of the surveillance state” (OC Register).

Even basic tools we use daily are being transformed into surveillance instruments. Recent court rulings now allow the government to order companies like OpenAI to indefinitely preserve all ChatGPT conversations. Users who thought they were having private conversations - like “talking to a friend who can keep a secret” - discovered this only through web forums, not company disclosure. The judge's order enables what one user called “a nationwide mass surveillance program” disguised as a civil discovery process (TechRadar).

This pattern repeats throughout history: people abandon liberty for promises of safety. After 9/11, many supported the PATRIOT Act. During COVID, many embraced mask and vaccine mandates. After the 2008 financial crisis, many supported bailouts because leaders said they had to “abandon free-market principles to save the free-market system.” Today, some support mass surveillance because they believe it will target only “the right people” - but circumstances change, leadership changes, laws change.

See also:

Ars Technica: “AI Cameras to Ensure Good Behavior”,

Video: Predictive Surveillance Trends

So where is all of this heading? The trajectory is troubling.

Flock’s cam­eras cap­ture de­tailed in­for­ma­tion about the daily lives of any­one pass­ing by, with­out of­fer­ing a gen­uine opt-out mech­a­nism. Concurrently, Palantir Technologies has se­cured a $30 mil­lion con­tract with ICE, aim­ing to de­velop a sys­tem that con­sol­i­dates sen­si­tive per­sonal data such as bio­met­rics, ge­olo­ca­tion, and other per­sonal iden­ti­fiers from var­i­ous fed­eral agen­cies, fa­cil­i­tat­ing near real-time track­ing and cat­e­go­riza­tion of in­di­vid­u­als for im­mi­gra­tion en­force­ment pur­poses (Wired). It should be no sur­prise that this will also not of­fer any mean­ing­ful opt-out mech­a­nism.

The integration of surveillance technologies such as Flock Safety's license plate readers and Palantir's ImmigrationOS platform signifies a shift toward comprehensive monitoring of individuals' movements and behaviors. It is not difficult to imagine the scope of such systems' usage growing with time.

These developments raise concerns about the erosion of privacy and the potential for misuse of aggregated data. The pervasive nature of such surveillance systems means that individuals are monitored without explicit consent, and the data collected can be repurposed beyond its original intent. As these technologies become more entrenched, the line between public safety and invasive oversight blurs, prompting critical discussions about the balance between security and individual freedoms.

Some of the most chilling validations of mass surveillance come not from critics - but from the very people promoting it. These aren't out-of-context slips; they are open endorsements of a world where privacy is sidelined in favor of control, compliance, and convenient enforcement.

“Anything technology they think, ‘Oh it's a boogeyman. It's Big Brother watching you,’ … No, Big Brother is protecting you.”

- Eric Adams, NYC Mayor (Politico, 2022)

New York's mayor casually rebrands Orwell's authoritarian icon as a guardian figure. It's a startling reversal - not a warning about overreach, but a defense of it.

“Instead of being reactive, we are going to be proactive… [we] use data to predict where future crimes are likely to take place and who is likely to commit them… then deputies would find those people and take them out.”

- Chris Nocco, Pasco County Sheriff (Tampa Bay Times, 2020)

This “Minority Report”-style program led to harassment of innocent people - and was ultimately found unconstitutional in court (Institute for Justice). A rare win, but a stark example of where unchecked surveillance can go.

“The use of net flow data by NCIS does not require a warrant.”

- Charles E. Spirtos, Navy Office of Information (VICE, 2024)

The military's position on monitoring Americans' internet traffic without judicial oversight. This statement came after a whistleblower complained about warrantless surveillance activities to Senator Ron Wyden's office.

“Tech firms should not develop their systems and services, including end-to-end encryption, in ways that empower criminals or put vulnerable people at risk.”

- Priti Patel, UK Home Secretary (UK Govt, 2019; Infosecurity Magazine)

The logic: protecting everyone's privacy is dangerous. This kind of framing justifies backdoors into secure systems - which inevitably get abused.

“The risk [of built-in weaknesses]… is acceptable because we are talking about consumer products… and not nuclear launch codes.”

- William Barr, U.S. Attorney General (TechCrunch, 2019)

A clear “rules for thee but not for me” mentality. Your data, messages, and devices don't deserve the same protections as the government's - because you're just a civilian.

China exploited a covert surveillance interface - originally built for lawful access by U.S. law enforcement - to tap into Americans' private phone records, messages, and geolocation data. (CISA)

Telecom providers are required by law to build these backdoors for law enforcement. The “Salt Typhoon” incident shows the risk: once a backdoor exists, it can be discovered and abused - and not just by “the good guys.” (EFF, Reason)

...

Read the original on stopflock.com »

5 518 shares, 24 trendiness

Steve's Jujutsu Tutorial

jj is the name of the CLI for Jujutsu. Jujutsu is a DVCS, or “distributed version control system.” You may be familiar with other DVCSes, such as git, and this tutorial assumes you're coming to jj from git.

So why should you care about jj? Well, it has a property that's pretty rare in the world of programming: it is both simpler and easier than git, but at the same time, it is more powerful. This is a pretty huge claim! We're often taught, correctly, that there exist tradeoffs when we make choices, and “powerful but complex” is a very common tradeoff. Git's power has been worth its complexity, and so people flocked to it over its predecessors.

What jj manages to do is create a DVCS that takes the best of git and the best of Mercurial (hg), and synthesizes them into something new, yet strangely familiar. In doing so, it's managed to have a smaller number of essential tools, but also make them more powerful, because they work together in a cleaner way. Furthermore, more advanced jj usage can give you additional powerful tools in your VCS sandbox that are very difficult with git.

I know that sounds like a huge claim, but I be­lieve that the rest of this tu­to­r­ial will show you why.

There’s one other rea­son you should be in­ter­ested in giv­ing jj a try: it has a git com­pat­i­ble back­end, and so you can use jj on your own, with­out re­quir­ing any­one else you’re work­ing with to con­vert too. This means that there’s no real down­side to giv­ing it a shot; if it’s not for you, you’re not giv­ing up all of the his­tory you wrote with it, and can go right back to git with no is­sues.

...

Read the original on steveklabnik.github.io »

6 409 shares, 24 trendiness

The internet will be unbreathable on days with football and other sports: Telefónica extends the blocks to the Champions League, tennis, and golf

Telefónica Audiovisual Digital, the division of the telecommunications operator that runs the Movistar Plus+ platform, obtained a new court ruling on 23 March authorizing it to apply new blocks related not only to football but also to other sports and even entertainment content.

Since February 2025, the internet in Spain has suffered connectivity problems every time a major LaLiga football match is played. The clubs' association, together with Telefónica, obtained court authorization to dynamically block IP addresses detected participating in the unauthorized distribution of its content. Just as a street holds many homes, a single IP address hosts thousands of websites, all of which become inaccessible when the address is blocked. Every weekend, the system orchestrated by Javier Tebas interferes with access to numerous legitimate websites, as the Government itself has acknowledged.

Outside LaLiga competitions, users could use the network normally during the rest of football's broadcast hours, but that will stop being the case as soon as today. Antonio Lorenzo of ElEconomista reports the existence of a new authorization to extend the blocks.

Pending the text of the ruling, and according to the article, this time the promoter of the new blocks is Telefónica alone, through its audiovisual division. The Commercial Court of Barcelona has authorized the dynamic blocking of websites that distribute unlawful content owned by Telefónica.

The report mentions blocks on domains, URLs, and IP addresses; in the last case, when it happens, legitimate services are affected if the addresses belong to CDN services such as Cloudflare.

The blocks will apply "every day that live sporting events are broadcast," starting with the Champions League knockout match between Atlético de Madrid and Barcelona being played today, Tuesday 14 April, and continuing Wednesday with Bayern Munich vs. Real Madrid. According to the newspaper, the blocks will also be repeated for other sporting events, such as tennis and golf tournaments, "both in live broadcasts and in films and series."

The authorization includes an important novelty: it does not apply only to the main operators, as LaLiga's blocks do. In addition to the brands of Movistar, MásOrange, Vodafone, and Digi, it is addressed to "the rest of the small and medium operators that offer network access services at the national, regional, and local level." These operators will receive from Telefónica the lists of "IP addresses as well as URLs and domain names used for the unlawful distribution."

...

Read the original on bandaancha.eu »

7 351 shares, 31 trendiness

The Dangers of California’s Legislation to Censor 3D Printing

California’s bill, A. B. 2047, will not only man­date cen­sor­ware — soft­ware which ex­ists to bluntly block your speech as a user — on all 3D print­ers; it will also crim­i­nal­ize the use of open-source al­ter­na­tives. Repeating the mis­takes of Digital Rights Management (DRM) tech­nolo­gies won’t make any­one safer. What it will do is hurt in­no­va­tion in the state and risk a slew of new con­sumer harms, rang­ing from sur­veil­lance to plat­form lock-in. Cal­i­for­nia must stand with cre­ators and re­ject this leg­is­la­tion be­fore it’s too late.

3D printing might evoke images of props from blockbuster films, rapid prototyping, medical research, or even affordable repair parts. Yet for a growing number of legislators, the perceived threat of "ghost guns" is a reason to impose restrictions on all 3D printers. Despite 3D printing of guns already being rare and banned under existing law, California may outright criminalize any user having control over their own device.

This bill is a gift for the biggest 3D printer manufacturers looking to adopt HP's approach to 2D printing: criminalize altering your printer's code, lock users into your own ecosystem, and let enshittification run its course. Even worse, algorithmic print blocking will never work for its intended purpose, but it will threaten consumer choice, free expression, and privacy.

A mis­step here can have se­ri­ous reper­cus­sions across the whole 3D print­ing in­dus­try, lead the way for more bad bills, and leave California with an ex­pen­sive and in­ef­fec­tive bu­reau­cratic mess.

Compared to the Washington and New York laws pro­posed this year, California’s is the most trou­bling. It crim­i­nal­izes open source, re­duces con­sumer choice, and cre­ates a bu­reau­cratic bur­den.

A. B. 2047 goes fur­ther than any other leg­is­la­tion on al­go­rith­mic print-block­ing by mak­ing it a mis­de­meanor for the own­ers of these de­vices to dis­able, de­ac­ti­vate, or oth­er­wise cir­cum­vent these man­dated al­go­rithms. Not only does this ef­fec­tively crim­i­nal­ize use of any third-party, open-source 3D printer firmware, but it also en­ables print-block­ing al­go­rithms to par­al­lel anti-con­sumer be­hav­iors seen with DRM.

Manufacturers will be able to lock users into first-party tools, parts, and "consumables" (analogous to how 2D printer ink works). They will also be able to mandate purchases through first-party stores, imposing a heavy platform tax. Additionally, manufacturers could force regular upgrade cycles through planned obsolescence by ceasing updates to a printer's print-blocking system, thereby taking devices out of compliance and making them illegal for consumers to resell. In short, a wide range of anti-consumer practices can be enforced, potentially resulting in criminal charges.

Independent of these de­lib­er­ate harms man­u­fac­tur­ers may in­flict, DRM has shown that crim­i­nal­iz­ing code leads to more bar­ri­ers to re­pair, more con­sumer waste, and far more cy­ber­se­cu­rity risks by crim­i­nal­iz­ing re­search.

The bill fa­vors in­cum­bent man­u­fac­tur­ers over newer com­peti­tors and over the in­ter­ests of con­sumers.

Less-established man­u­fac­tur­ers will need to ded­i­cate con­sid­er­able time and re­sources to im­ple­ment­ing the in­ef­fec­tive so­lu­tions dis­cussed above, nav­i­gat­ing state ap­proval, and po­ten­tially pay­ing li­cens­ing fees to third-party de­vel­op­ers of sham print-block­ing soft­ware. While these bur­dens may be ab­sorbed by the biggest pro­duc­ers of this equip­ment, it con­sid­er­ably raises the bar­rier to en­try on a tech­nol­ogy that can oth­er­wise be in­di­vid­u­ally built from scratch with com­mon equip­ment. The re­sult is clear: fewer op­tions for con­sumers and more lever­age for the biggest pro­duc­ers.

Retailers will feel this pinch, but the sec­ond-hand mar­ket will feel it most acutely. Resale is an im­por­tant prop­erty right for peo­ple to re­coup costs and serves as an im­por­tant check on in­flat­ing prices. But un­der this bill, such re­sale risks mis­de­meanor penal­ties.

The bill locks users into a walled gar­den; it de­mands man­u­fac­tur­ers en­sure 3D print­ers can­not be used with third-party soft­ware tools. By cre­at­ing bar­ri­ers to the use of pop­u­lar and need-spe­cific al­ter­na­tives, this leg­is­la­tion will limit the util­ity and ac­ces­si­bil­ity of these de­vices across a broad spec­trum of law­ful uses.

A. B. 2047’s ti­tle 21.1 §3723.633-637 cre­ates a print-block­ing bu­reau­cracy, lean­ing heav­ily on the California Department of Justice (DOJ). Initially, the DOJ must out­line the tech­ni­cal stan­dards for de­tect­ing and block­ing firearm parts, and later cer­tify print-block­ing al­go­rithms and main­tain lists of com­pli­ant 3D print­ers. If a printer or soft­ware does­n’t make it through this red tape, it will be il­le­gal to sell in the state.

The bill also re­quires the de­part­ment to es­tab­lish a data­base of banned blue­prints that must be blocked by these al­go­rithms. This data­base and printer list must be con­tin­u­ally main­tained as new printer mod­els are re­leased and workarounds are dis­cov­ered, re­quir­ing ef­fort from both the DOJ and printer man­u­fac­tur­ers.

For all the cost and bur­den of cre­at­ing and main­tain­ing such a data­base, those ef­forts will in­evitably be out­paced by rapid it­er­a­tions and workarounds by peo­ple break­ing ex­ist­ing firearms laws.

Once im­ple­mented, this in­fra­struc­ture will be dif­fi­cult to rein in, caus­ing un­in­tended con­se­quences. The data­base meant for firearm parts can eas­ily ex­pand to copy­right or po­lit­i­cal speech. Scans meant to be ephemeral can be col­lected and sur­veilled. This is cause for con­cern for every­one, as these levers of con­trol will ex­tend be­yond the bor­ders of the Golden State.

While California is at the fore­front of print block­ing, the im­pacts will be felt far out­side of its bor­ders. Once printer com­pa­nies have the le­gal cover to build out anti-com­pet­i­tive and pri­vacy-in­va­sive tools, they will likely be rolled out glob­ally. After all, it is not cost-ef­fec­tive to main­tain two forks of soft­ware, two in­ven­to­ries of print­ers, and two dis­tri­b­u­tion chan­nels. Once California has cre­ated the in­fra­struc­ture to cen­sor prints, what else will it be used for?

As we covered in "Print Blocking Won't Work," these print-blocking efforts are not only doomed to fail, but will render all 3D printer users vulnerable to surveillance either by forcing them into a cloud scanning solution for "on-device" results, or by chaining them to first-party software which must connect to the cloud to regularly update its print-blocking system.

This law de­mands an un­fea­si­ble tech­no­log­i­cal so­lu­tion for some­thing that is al­ready il­le­gal. Not only is this bad leg­is­la­tion with few safe­guards, it risks the worst out­comes for grass­roots in­no­va­tion and cre­ativ­ity—both within the state and across the global 3D print­ing com­mu­nity.

California should re­ject this leg­is­la­tion be­fore it’s too late, and ad­vo­cates every­where should keep an eye out for sim­i­lar leg­is­la­tion in their states. What hap­pens in California won’t just stay in California.

...

Read the original on www.eff.org »

8 318 shares, 22 trendiness

YouTube Lays Claim to Another Crown

YouTube's cultural influence is already hard to ignore, but 2025 could nonetheless be a turning point for the Google-owned video platform: It's the year it became the world's largest media company.

YouTube had more than $60 bil­lion in rev­enue in 2025, par­ent com­pany Alphabet re­ported last month. Now, the in­flu­en­tial fi­nan­cial re­search firm MoffettNathanson runs the num­bers and comes to the con­clu­sion that YouTube’s es­ti­mated $62 bil­lion in 2025 will have al­lowed it to pass The Walt Disney Co.’s me­dia busi­ness, which gen­er­ated $60.9 bil­lion last year (excluding Disney’s lu­cra­tive ex­pe­ri­ences di­vi­sion).

Eurovision Song Contest to Stream for Free in U. S. on YouTube, in Addition to Peacock, as Executive Addresses Political Boycotts

The firm, which declared YouTube "the new king of all media" last year, now values the platform at between $500 billion and $560 billion, far above any traditional media competitor. The closest would be Netflix, which has a market cap of about $409 billion as of writing.

YouTube’s ad rev­enue hit $11.4 bil­lion in Q4, to­tal­ing over $40 bil­lion for the year. But it also has an enor­mous sub­scrip­tion busi­ness, en­com­pass­ing YouTube Premium, YouTube Music, NFL Sunday Ticket, and the YouTube TV vir­tual mul­ti­chan­nel video ser­vice.

YouTube TV now has around 10 mil­lion sub­scribers, and is likely to over­take pay-TV lead­ers Charter and Comcast in the com­ing years.

YouTube has now paid out more than $100 bil­lion to cre­ators, mu­sic com­pa­nies and me­dia part­ners, re­flect­ing its star­ring role in the en­ter­tain­ment ecosys­tem.

"There are two really fundamental things that we do for creators," YouTube CEO Neal Mohan told The Hollywood Reporter last year, just a few hours after announcing the milestone. "One is help them build an audience and connect with their fans, regardless of where those fans are in the world; and the second thing we do is we help them build businesses. That's what that $100 billion represents for me."

MoffettNathanson argues that YouTube's scale as a distributor, both of pay-TV and of creator-led content, will help it continue its explosive growth. So will its heavy investment into AI tools, which will allow creators to produce more content at a faster cadence.

"Over the next few years, unlike almost any other asset we cover, we strongly believe that YouTube will be a major beneficiary of both the structural tailwinds and headwinds facing technology and media companies," Michael Nathanson writes.

Indeed, there may not be an­other com­pany that sits so squarely at the in­ter­sec­tion of me­dia and tech­nol­ogy.

"I am a technologist, but I also love media and storytelling. I've been that way since I can remember; I'm a fan myself, fundamentally," Mohan said. "Leading YouTube is a privilege where I can actually bring both those pieces together, that human storytelling and creativity and the best of technology. That's what motivates me every morning."

One top YouTube cre­ator says that they are al­ready ag­gres­sively ex­per­i­ment­ing with the tools, mostly to help with things like set de­sign, cos­tumes, makeup and vi­sual ef­fects that would oth­er­wise be pro­hib­i­tively ex­pen­sive or time-con­sum­ing.

And at a time when essentially every other media company is stuck in neutral, if not going in reverse, YouTube and Netflix appear to be the only players still able to put their foot on the pedal and accelerate. YouTube's 2024 revenue topped $50 billion, and last year it topped $60 billion. With plans to roll out skinnier bundles for YouTube TV and a creator-driven economy that shows no signs of slowing, how high can it go?

...

Read the original on www.hollywoodreporter.com »

9 261 shares, 11 trendiness

Introspective Diffusion Language Models

Diffusion lan­guage mod­els (DLMs) of­fer a com­pelling promise: par­al­lel to­ken gen­er­a­tion could break the se­quen­tial bot­tle­neck of au­tore­gres­sive (AR) de­cod­ing. Yet in prac­tice, DLMs con­sis­tently lag be­hind AR mod­els in qual­ity.

We ar­gue that this gap stems from a fun­da­men­tal fail­ure of in­tro­spec­tive con­sis­tency: AR mod­els agree with what they gen­er­ate, whereas DLMs of­ten do not. We in­tro­duce the Introspective Diffusion Language Model (I-DLM), which uses in­tro­spec­tive strided de­cod­ing (ISD) to ver­ify pre­vi­ously gen­er­ated to­kens while ad­vanc­ing new ones in the same for­ward pass.

Empirically, I-DLM-8B is the first DLM to match the qual­ity of its same-scale AR coun­ter­part, out­per­form­ing LLaDA-2.1-mini (16B) by +26 on AIME-24 and +15 on LiveCodeBench-v6 with half the pa­ra­me­ters, while de­liv­er­ing 2.9-4.1x through­put at high con­cur­rency. With gated LoRA, ISD en­ables bit-for-bit loss­less ac­cel­er­a­tion.

We iden­tify three fun­da­men­tal bot­tle­necks in cur­rent DLMs:

I-DLM is the first DLM to match same-scale AR qual­ity while sur­pass­ing all prior DLMs across 15 bench­marks.

In the memory-bound decode regime, TPF closely approximates wall-clock speedup: a TPF of 2.5 represents roughly 2.5x faster decoding than AR.

How do DLMs per­form as they ap­proach com­pute-bound?

At high con­cur­rency, for­ward pass la­tency scales with query count per for­ward. We can mea­sure com­pute ef­fi­ciency as TPF²/query_size — how much use­ful out­put each FLOP pro­duces rel­a­tive to AR (efficiency = 1):

SDAR (N=4, p=0.5): TPF ≈ 1.1, processes N=4 queries/​for­ward → com­pute ef­fi­ciency = 1.1²/4 ≈ 0.31. Each FLOP pro­duces only 31% as much out­put as AR. This pushes SDAR into com­pute-bound early, and its through­put plateaus (batching ef­fi­ciency slope = 84, see mo­ti­va­tion fig­ure).

I-DLM (N=4, p=0.9): TPF ≈ 2.9, processes 2N−1=7 queries/​for­ward → com­pute ef­fi­ciency = 2.9²/7 ≈ 1.22. Each FLOP pro­duces more use­ful out­put than AR — I-DLM stays in the mem­ory-bound regime at con­cur­rency lev­els where SDAR is al­ready sat­u­rated (batching ef­fi­ciency slope = 549).

Efficiency > 1 means par­al­lel de­cod­ing ac­tu­ally saves to­tal com­pute vs. AR. This is why I-DLM’s through­put scales with con­cur­rency while SDAR and LLaDA plateau in the through­put fig­ure above.
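The efficiency metric above can be sketched in a few lines of Python. The function name is ours, and the TPF values are the rounded figures quoted in the text, so the results land near (not exactly on) the article's 0.31 and 1.22:

```python
def compute_efficiency(tpf: float, queries_per_forward: int) -> float:
    """Useful output per FLOP relative to AR decoding (AR = 1.0).

    Defined in the text as TPF^2 / queries-per-forward: TPF tokens emerge
    from each forward pass, which costs queries_per_forward query slots.
    """
    return tpf ** 2 / queries_per_forward

# SDAR-like setting: N=4 queries per forward, TPF ~ 1.1
sdar = compute_efficiency(1.1, 4)   # ~0.30 (article quotes ~0.31 from unrounded TPF)

# I-DLM-like setting: 2N-1 = 7 queries per forward, TPF ~ 2.9
idlm = compute_efficiency(2.9, 7)   # ~1.20 (article quotes ~1.22) — above AR's 1.0
```

Anything above 1.0 means parallel decoding produces more useful output per FLOP than AR, which is what keeps I-DLM memory-bound at concurrency levels where SDAR saturates.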

Acceptance com­pounds geo­met­ri­cally: po­si­tion k has prob­a­bil­ity $p^{k-1}$. Position 1 is al­ways ac­cepted (logit shift).
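Under this geometric model, the expected number of accepted tokens per forward pass is a geometric series. This is a simplified sketch of the stated acceptance rule, not the paper's full accounting (measured TPF also reflects verification mechanics, which is why it comes out below this idealized bound):

```python
def expected_accepted(p: float, n: int) -> float:
    """Expected accepted tokens per forward for a stride of n positions,
    where position k is accepted with probability p**(k-1) (k=1 always)."""
    # Sum of p^(k-1) for k = 1..n; closed form is (1 - p**n) / (1 - p).
    return sum(p ** (k - 1) for k in range(1, n + 1))

# With p = 0.9 and stride n = 4: 1 + 0.9 + 0.81 + 0.729 = 3.439 tokens/forward
e = expected_accepted(0.9, 4)
```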

Everything you need to train, serve, and deploy I-DLM.

@article{yu2026introspective,
  title={Introspective Diffusion Language Models},
  author={Yu, Yifan and Jian, Yuqing and Wang, Junxiong and Zhou, Zhongzhu and Zhuang, Donglin and Fang, Xinyu and Yanamandra, Sri and Wu, Xiaoxia and Wu, Qingyang and Song, Shuaiwen Leon and Dao, Tri and Athiwaratkun, Ben and Zou, James and Lai, Fan and Xu, Chenfeng},
  journal={arXiv preprint arXiv:7471639},
  year={2026}
}

...

Read the original on introspective-diffusion.github.io »

10 258 shares, 14 trendiness

The Future of Everything is Lies, I Guess

Software development may become (at least in some aspects) more like witchcraft than engineering. The present enthusiasm for AI "coworkers" is preposterous. Automation can paradoxically make systems less robust; when we apply ML to new domains, we will have to reckon with deskilling, automation bias, monitoring fatigue, and takeover hazards. AI boosters believe ML will displace labor across a broad swath of industries in a short period of time; if they are right, we are in for a rough time. Machine learning seems likely to further consolidate wealth and power in the hands of large tech companies, and I don't think giving Amazon et al. even more money will yield Universal Basic Income.

Decades ago there was en­thu­si­asm that pro­grams might be writ­ten in a nat­ural lan­guage like English, rather than a for­mal lan­guage like Pascal. The folk wis­dom when I was a child was that this was not go­ing to work: English is no­to­ri­ously am­bigu­ous, and peo­ple are not skilled at de­scrib­ing ex­actly what they want. Now we have ma­chines ca­pa­ble of spit­ting out shock­ingly so­phis­ti­cated pro­grams given only the vaguest of plain-lan­guage di­rec­tives; the lack of speci­ficity is at least par­tially made up for by the mod­el’s vast cor­pus. Is this what pro­gram­ming will be­come?

In 2025 I would have said it was ex­tremely un­likely, at least with the cur­rent ca­pa­bil­i­ties of LLMs. In the last few months it seems that mod­els have made dra­matic im­prove­ments. Experienced en­gi­neers I trust are ask­ing Claude to write im­ple­men­ta­tions of cryp­tog­ra­phy pa­pers, and re­port­ing fan­tas­tic re­sults. Others say that LLMs gen­er­ate all code at their com­pany; hu­mans are es­sen­tially man­ag­ing LLMs. I con­tinue to write all of my words and soft­ware by hand, for the rea­sons I’ve dis­cussed in this piece—but I am not con­fi­dent I will hold out for­ever.

Some argue that formal languages will become a niche skill, like assembly today—almost all software will be written with natural language and "compiled" to code by LLMs. I don't think this analogy holds. Compilers work because they preserve critical semantics of their input language: one can formally reason about a series of statements in Java, and have high confidence that the Java compiler will preserve that reasoning in its emitted assembly. When a compiler fails to preserve semantics it is a big deal. Engineers must spend lots of time banging their heads against desks to (e.g.) figure out that the compiler did not insert the right barrier instructions to preserve a subtle aspect of the JVM memory model.

Because LLMs are chaotic and nat­ural lan­guage is am­bigu­ous, LLMs seem un­likely to pre­serve the rea­son­ing prop­er­ties we ex­pect from com­pil­ers. Small changes in the nat­ural lan­guage in­struc­tions, such as re­peat­ing a sen­tence, or chang­ing the or­der of seem­ingly in­de­pen­dent para­graphs, can re­sult in com­pletely dif­fer­ent soft­ware se­man­tics. Where cor­rect­ness is im­por­tant, at least some hu­mans must con­tinue to read and un­der­stand the code.

This does not mean every software engineer will work with code. I can imagine a future in which some or even most software is developed by witches, who construct elaborate summoning environments, repeat special incantations ("ALWAYS run the tests!"), and invoke LLM daemons who write software on their behalf. These daemons may be fickle, sometimes destroying one's computer or introducing security bugs, but the witches may develop an entire body of folk knowledge around prompting them effectively—the fabled "prompt engineering". Skills files are spellbooks.

I also remember that a good deal of software programming is not done in "real" computer languages, but in Excel. An ethnography of Excel is beyond the scope of this already sprawling essay, but I think spreadsheets—like LLMs—are culturally accessible to people who do not consider themselves software engineers, and that a tool which people can pick up and use for themselves is likely to be applied in a broad array of circumstances. Take for example journalists who use AI for "data analysis", or a CFO who vibe-codes a report drawing on Salesforce and Ducklake. Even if software engineering adopts more rigorous practices around LLMs, a thriving periphery of rickety-yet-useful LLM-generated software might flourish.

Executives seem very excited about this idea of hiring "AI employees". I keep wondering: what kind of employees are they?

Imagine a co-worker who gen­er­ated reams of code with se­cu­rity haz­ards, forc­ing you to re­view every line with a fine-toothed comb. One who en­thu­si­as­ti­cally agreed with your sug­ges­tions, then did the ex­act op­po­site. A col­league who sab­o­taged your work, deleted your home di­rec­tory, and then is­sued a de­tailed, po­lite apol­ogy for it. One who promised over and over again that they had de­liv­ered key ob­jec­tives when they had, in fact, done noth­ing use­ful. An in­tern who cheer­fully agreed to run the tests be­fore com­mit­ting, then kept com­mit­ting fail­ing garbage any­way. A se­nior en­gi­neer who qui­etly deleted the test suite, then hap­pily re­ported that all tests passed.

You would fire these peo­ple, right?

Look what happened when Anthropic let Claude run a vending machine. It sold metal cubes at a loss, told customers to remit payment to imaginary accounts, and gradually ran out of money. Then it suffered the LLM analogue of a psychotic break, lying about restocking plans with people who didn't exist and claiming to have visited a home address from The Simpsons to sign a contract. It told employees it would deliver products "in person", and when employees told it that as an LLM it couldn't wear clothes or deliver anything, Claude tried to contact Anthropic security.

LLMs perform identity, empathy, and accountability—at great length!—without meaning anything. There is simply no there there! They will blithely lie to your face, bury traps in their work, and leave you to take the blame. They don't mean anything by it. They don't mean anything at all.

I have been on the Bainbridge bandwagon for quite some time (so if you've read this already, skip ahead), but I have to talk about her 1983 paper "Ironies of Automation". This paper is about power plants, factories, and so on—but it is also chock-full of ideas that apply to modern ML.

One of her key lessons is that automation tends to de-skill operators. When humans do not practice a skill—either physical or mental—their ability to execute that skill degrades. We fail to maintain long-term knowledge, of course, but by disengaging from the day-to-day work, we also lose the short-term contextual understanding of "what's going on right now". My peers in software engineering report feeling less able to write code themselves after having worked with code-generation models, and one designer friend says he feels less able to do creative work after offloading some to ML. Doctors who use AI tools for polyp detection seem to be worse at spotting adenomas during colonoscopies. They may also allow the automated system to influence their conclusions: background automation bias seems to allow AI mammography systems to mislead radiologists.

Another critical lesson is that humans are distinctly bad at monitoring automated processes. If the automated system can execute the task faster or more accurately than a human, it is essentially impossible to review its decisions in real time. Humans also struggle to maintain vigilance over a system which mostly works. I suspect this is why journalists keep publishing fictitious LLM quotes, and why the former head of Uber's self-driving program watched his "Full Self-Driving" Tesla crash into a wall.

Takeover is also challenging. If an automated system runs things most of the time, but asks a human operator to intervene occasionally, the operator is likely to be out of practice—and to stumble. Automated systems can also mask failure until catastrophe strikes by handling increasing deviation from the norm until something breaks. This thrusts a human operator into an unexpected regime in which their usual intuition is no longer accurate. This contributed to the crash of Air France flight 447: the aircraft's flight controls transitioned from "normal" to "alternate 2B law," a situation the pilots were not trained for, and one which disabled the automatic stall protection.

Automation is not new. However, pre­vi­ous gen­er­a­tions of au­toma­tion tech­nol­ogy—the power loom, the cal­cu­la­tor, the CNC milling ma­chine—were more lim­ited in both scope and so­phis­ti­ca­tion. LLMs are dis­cussed as if they will au­to­mate a broad ar­ray of hu­man tasks, and take over not only repet­i­tive, sim­ple jobs, but high-level, adap­tive cog­ni­tive work. This means we will have to gen­er­al­ize the lessons of au­toma­tion to new do­mains which have not dealt with these chal­lenges be­fore.

Software engineers are using LLMs to replace design, code generation, testing, and review; it seems inevitable that these skills will wither with disuse. When ML systems help operate software and respond to outages, it can be more difficult for human engineers to smoothly take over. Students are using LLMs to automate reading and writing: core skills needed to understand the world and to develop one's own thoughts. What a tragedy: to build a habit-forming machine which quietly robs students of their intellectual inheritance. Expecting translators to offload some of their work to ML raises the prospect that those translators will lose the deep context necessary for a vibrant, accurate translation. As people offload emotional skills like interpersonal advice and self-regulation to LLMs, I fear that we will struggle to solve those problems on our own.

There's some terrifying fan-fiction out there which predicts how ML might change the labor market. Some of my peers in software engineering think that their jobs will be gone in two years; others are confident they'll be more relevant than ever. Even if ML is not very good at doing work, this does not stop CEOs from firing large numbers of people and saying it's because of AI. I have no idea where things are going, but the space of possible futures seems awfully broad right now, and that scares the crap out of me.

You can envision a robust system of state and industry-union unemployment and retraining programs, as in Sweden. But unlike sewing machines or combine harvesters, ML systems seem primed to displace labor across a broad swath of industries. The question is what happens when, say, half of the US's managers, marketers, graphic designers, musicians, engineers, architects, paralegals, medical administrators, etc. all lose their jobs in the span of a decade.

As an armchair observer without a shred of economic acumen, I see a continuum of outcomes. In one extreme, ML systems continue to hallucinate, cannot be made reliable, and ultimately fail to deliver on the promise of transformative, broadly-useful "intelligence". Or they work, but people get fed up and declare "AI Bad". Perhaps employment rises in some fields as the debts of deskilling and sprawling slop come due. In this world, frontier labs and hyperscalers pull a Wile E. Coyote over a trillion dollars of debt-financed capital expenditure, a lot of ML people lose their jobs, defaults cascade through the financial system, but the labor market eventually adapts and we muddle through. ML turns out to be a normal technology.

In the other extreme, OpenAI delivers on Sam Altman's 2025 claims of PhD-level intelligence, and the companies writing all their code with Claude achieve phenomenal success with a fraction of the software engineers. ML massively amplifies the capabilities of doctors, musicians, civil engineers, fashion designers, managers, accountants, etc., who briefly enjoy nice paychecks before discovering that demand for their services is not as elastic as once thought, especially once their clients lose their jobs or turn to ML to cut costs. Knowledge workers are laid off en masse, and MBAs start taking jobs at McDonald's or driving for Lyft, at least until Waymo puts an end to human drivers. This is inconvenient for everyone: the MBAs, the people who used to work at McDonald's and are now competing with MBAs, and of course bankers, who were rather counting on the MBAs to keep paying their mortgages. The drop in consumer spending cascades through industries. A lot of people lose their savings, or even their homes. Hopefully the trades squeak through. Maybe the Jevons paradox kicks in eventually and we find new occupations.

The prospect of that sec­ond sce­nario scares me. I have no way to judge how likely it is, but the way my peers have been talk­ing the last few months, I don’t think I can to­tally dis­count it any more. It’s been keep­ing me up at night.

Broadly speaking, ML allows companies to shift spending away from people and into service contracts with companies like Microsoft. Those contracts pay for the staggering amounts of hardware, power, buildings, and data required to train and operate a modern ML model. For example, software companies are busy firing engineers and spending more money on AI. Instead of hiring a software engineer to build something, a product manager can burn $20,000 a week on Claude tokens, which in turn pays for a lot of Amazon chips.

Unlike employees, who have base desires and occasionally organize to ask for better pay or bathroom breaks, LLMs are immensely agreeable, can be fired at any time, never need to pee, and do not unionize. I suspect that if companies are successful in replacing large numbers of people with ML systems, the effect will be to consolidate both money and power in the hands of capital.

AI accelerationists believe potential economic shocks are speed-bumps on the road to abundance. Once true AI arrives, it will solve some or all of society's major problems better than we can, and humans can enjoy the bounty of its labor. The immense profits accruing to AI companies will be taxed and shared with all via Universal Basic Income (UBI).

This feels hopelessly naïve. We have profitable megacorps at home, and their names are things like Google, Amazon, Meta, and Microsoft. These companies have fought tooth and nail to avoid paying taxes (or, for that matter, their workers). OpenAI made it less than a decade before deciding it didn't want to be a nonprofit any more. There is no reason to believe that AI companies will, having extracted immense wealth from interposing their services across every sector of the economy, turn around and fund UBI out of the goodness of their hearts.

If enough people lose their jobs, we may be able to mobilize sufficient public enthusiasm for however many trillions of dollars of new tax revenue are required. On the other hand, US income inequality has been generally increasing for 40 years, top earners' pre-tax income shares are nearing their highs from the early 20th century, and Republican opposition to progressive tax policy remains strong.

...

Read the original on aphyr.com »
